Compare commits


1 Commit

Author: Kim Morrison
SHA1: 921472c67e
Message: initial exploration for a ExtHashMapD
Date: 2025-05-19 13:24:18 +10:00
8109 changed files with 88448 additions and 346607 deletions


@@ -1,47 +0,0 @@
To build Lean you should use `make -j -C build/release`.
To run a test you should use `cd tests/lean/run && ./test_single.sh example_test.lean`.
## New features
When asked to implement new features:
* begin by reviewing existing relevant code and tests
* write comprehensive tests first (expecting that these will initially fail)
* and then iterate on the implementation until the tests pass.
All new tests should go in `tests/lean/run/`. These tests don't have expected output; we just check there are no errors. You should use `#guard_msgs` to check for specific messages.
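For example, a minimal test sketch (the file name and the deliberately failing command are made up for illustration) could pin down an expected diagnostic like this:
```
-- tests/lean/run/exampleGuardMsgs.lean (hypothetical file name)
-- `#guard_msgs` compares the messages produced by the following command against
-- the docstring above it, so no separate expected-output file is needed.
/-- error: unknown identifier 'missing' -/
#guard_msgs in
#eval missing
```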
## Success Criteria
*Never* report success on a task unless you have verified both a clean build without errors, and that the relevant tests pass.
## Build System Safety
**NEVER manually delete build directories** (build/, stage0/, stage1/, etc.) even when builds fail.
- ONLY use the project's documented build command: `make -j -C build/release`
- If a build is broken, ask the user before attempting any manual cleanup
## LSP and IDE Diagnostics
After rebuilding, LSP diagnostics may be stale until the user interacts with files. Trust command-line test results over IDE diagnostics.
## Update prompting when the user is frustrated
If the user expresses frustration with you, stop and ask them to help update this `.claude/CLAUDE.md` file with missing guidance.
## Creating pull requests
Follow the commit convention in `doc/dev/commit_convention.md`.
**Title format:** `<type>: <subject>` where type is one of: `feat`, `fix`, `doc`, `style`, `refactor`, `test`, `chore`, `perf`.
Subject should use imperative present tense ("add" not "added"), no capitalization, no trailing period.
**Body format:** The first paragraph must start with "This PR". This paragraph is automatically incorporated into release notes. Use imperative present tense. Include motivation and contrast with previous behavior when relevant.
Example:
```
feat: add optional binder limit to `mkPatternFromTheorem`
This PR adds a `num?` parameter to `mkPatternFromTheorem` to control how many
leading quantifiers are stripped when creating a pattern.
```


@@ -1,70 +0,0 @@
# Release Management Command
Execute the release process for a given version by running the release checklist and following its instructions.
## Before Starting
**IMPORTANT**: Before beginning the release process, read the in-file documentation:
- Read `script/release_checklist.py` for what the checklist script does
- Read `script/release_steps.py` for what the release steps script does
These comments explain the scripts' behavior, which repositories get special handling, and how errors are handled.
## Arguments
- `version`: The version to release (e.g., v4.24.0)
## Process
1. Run `script/release_checklist.py {version}` to check the current status
2. **CRITICAL: If preliminary lean4 checks fail, STOP immediately and alert the user**
- Check for: release branch exists, CMake version correct, tag exists, release page exists, release notes exist
- **IMPORTANT**: The release page is created AUTOMATICALLY by CI after pushing the tag - DO NOT create it manually
- Do NOT create any PRs or proceed with repository updates if these checks fail
3. Create a todo list tracking all repositories that need updates
4. **CRITICAL RULE: You can ONLY run `release_steps.py` for a repository if `release_checklist.py` explicitly says to do so**
- The checklist output will say "Run `script/release_steps.py {version} {repo_name}` to create it"
- If a repository shows "🟡 Dependencies not ready", you CANNOT create a PR for it yet
- You MUST rerun `release_checklist.py` before attempting to create PRs for any new repositories
5. For each repository that the checklist says needs updating:
- Run `script/release_steps.py {version} {repo_name}` to create the PR
- Mark it complete when the PR is created
6. After creating PRs, notify the user which PRs need review and merging
7. **MANDATORY: Rerun `release_checklist.py` to check current status**
- Do this after creating each batch of PRs
- Do this after the user reports PRs have been merged
- NEVER assume a repository is ready without checking the checklist output
8. As PRs are merged and tagged, dependent repositories will become ready
9. Continue the cycle: run checklist → create PRs for ready repos → wait for merges → repeat
10. Continue until all repositories are updated and the release is complete
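As a rough sketch only (the repository name below is a placeholder; only run `release_steps.py` for repositories the checklist output explicitly names), one iteration of this cycle looks like:
```
VERSION=v4.24.0                                # example version from above

script/release_checklist.py "$VERSION"         # step 1: check status, see which repos are ready
script/release_steps.py "$VERSION" batteries   # step 5: only because the checklist said to
script/release_checklist.py "$VERSION"         # step 7: recheck and summarize open PRs for the user
```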
## Important Notes
- **NEVER merge PRs autonomously** - always wait for the user to merge PRs themselves
- The `release_steps.py` script is idempotent - it's safe to rerun
- The `release_checklist.py` script is idempotent - it's safe to rerun
- Some repositories depend on others (e.g., mathlib4 depends on batteries, aesop, etc.)
- Wait for user to merge PRs before dependent repos can be updated
- Alert user if anything unusual or scary happens
- Use appropriate timeouts for long-running builds (verso can take 10+ minutes)
- ProofWidgets4 uses semantic versioning (v0.0.X) - it's okay to create and push the next sequential tag yourself when needed for a release
## PR Status Reporting
Every time you run `release_checklist.py`, you MUST:
1. Parse the output to identify ALL open PRs mentioned (lines with "✅ PR with title ... exists")
2. Provide a summary to the user listing ALL open PRs that need review
3. Group them by status:
- PRs for repositories that are blocked by dependencies (show these but note they're blocked)
- PRs for repositories that are ready to merge (highlight these)
4. Format the summary clearly with PR numbers and URLs
This summary should be provided EVERY time you run the checklist, not just after creating new PRs.
The user needs to see the complete picture of what's waiting for review.
## Error Handling
**CRITICAL**: If something goes wrong or a command fails:
- **DO NOT** try to manually reproduce the failing steps yourself
- **DO NOT** try to fix things by running git commands or other manual operations
- Both scripts are idempotent and designed to handle partial completion gracefully
- If a script continues to fail after retrying, report the error to the user and wait for instructions

.gitattributes vendored

@@ -4,10 +4,3 @@ RELEASES.md merge=union
stage0/** binary linguist-generated
# The following file is often manually edited, so do show it in diffs
stage0/src/stdlib_flags.h -binary -linguist-generated
doc/std/grove/GroveStdlib/Generated/** linguist-generated
# These files should not have line endings translated on Windows, because
# it throws off parser tests. Later lines override earlier ones, so the
# runner code is still treated as ordinary text.
tests/lean/docparse/* eol=lf
tests/lean/docparse/*.lean eol=auto
tests/lean/docparse/*.sh eol=auto


@@ -9,7 +9,7 @@ assignees: ''
### Prerequisites
<!-- Please put an X between the brackets as you perform the following steps: -->
Please put an X between the brackets as you perform the following steps:
* [ ] Check that your issue is not already filed:
https://github.com/leanprover/lean4/issues


@@ -1,5 +0,0 @@
self-hosted-runner:
labels:
- nscloud-ubuntu-22.04-amd64-4x16
- nscloud-ubuntu-22.04-amd64-8x16
- nscloud-macos-sonoma-arm64-6x14


@@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v5
uses: actions/checkout@v4
- name: actionlint
uses: raven-actions/actionlint@v2
with:


@@ -1,38 +0,0 @@
name: Check awaiting-manual label
on:
merge_group:
pull_request:
types: [opened, synchronize, reopened, labeled, unlabeled]
jobs:
check-awaiting-manual:
runs-on: ubuntu-latest
steps:
- name: Check awaiting-manual label
id: check-awaiting-manual-label
if: github.event_name == 'pull_request'
uses: actions/github-script@v8
with:
script: |
const { labels, number: prNumber } = context.payload.pull_request;
const hasAwaiting = labels.some(label => label.name == "awaiting-manual");
const hasBreaks = labels.some(label => label.name == "breaks-manual");
const hasBuilds = labels.some(label => label.name == "builds-manual");
if (hasAwaiting && hasBreaks) {
core.setFailed('PR has both "awaiting-manual" and "breaks-manual" labels.');
} else if (hasAwaiting && !hasBreaks && !hasBuilds) {
core.info('PR is marked "awaiting-manual" but neither "breaks-manual" nor "builds-manual" labels are present.');
core.setOutput('awaiting', 'true');
}
- name: Wait for manual compatibility
if: github.event_name == 'pull_request' && steps.check-awaiting-manual-label.outputs.awaiting == 'true'
run: |
echo "::notice title=Awaiting manual::PR is marked 'awaiting-manual' but neither 'breaks-manual' nor 'builds-manual' labels are present."
echo "This check will remain in progress until the PR is updated with appropriate manual compatibility labels."
# Keep the job running indefinitely to show "in progress" status
while true; do
sleep 3600 # Sleep for 1 hour at a time
done


@@ -10,29 +10,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check awaiting-mathlib label
id: check-awaiting-mathlib-label
if: github.event_name == 'pull_request'
uses: actions/github-script@v8
uses: actions/github-script@v7
with:
script: |
const { labels, number: prNumber } = context.payload.pull_request;
const hasAwaiting = labels.some(label => label.name == "awaiting-mathlib");
const hasBreaks = labels.some(label => label.name == "breaks-mathlib");
const hasBuilds = labels.some(label => label.name == "builds-mathlib");
if (hasAwaiting && hasBreaks) {
core.setFailed('PR has both "awaiting-mathlib" and "breaks-mathlib" labels.');
} else if (hasAwaiting && !hasBreaks && !hasBuilds) {
core.info('PR is marked "awaiting-mathlib" but neither "breaks-mathlib" nor "builds-mathlib" labels are present.');
core.setOutput('awaiting', 'true');
const { labels } = context.payload.pull_request;
if (labels.some(label => label.name == "awaiting-mathlib") && !labels.some(label => label.name == "builds-mathlib")) {
core.setFailed('PR is marked "awaiting-mathlib" but "builds-mathlib" label has not been applied yet by the bot');
}
- name: Wait for mathlib compatibility
if: github.event_name == 'pull_request' && steps.check-awaiting-mathlib-label.outputs.awaiting == 'true'
run: |
echo "::notice title=Awaiting mathlib::PR is marked 'awaiting-mathlib' but neither 'breaks-mathlib' nor 'builds-mathlib' labels are present."
echo "This check will remain in progress until the PR is updated with appropriate mathlib compatibility labels."
# Keep the job running indefinitely to show "in progress" status
while true; do
sleep 3600 # Sleep for 1 hour at a time
done


@@ -3,6 +3,9 @@ name: build-template
on:
workflow_call:
inputs:
check-level:
type: string
required: true
config:
type: string
required: true
@@ -33,7 +36,7 @@ jobs:
include: ${{fromJson(inputs.config)}}
# complete all jobs
fail-fast: false
runs-on: ${{ endsWith(matrix.os, '-with-cache') && fromJSON(format('["{0}", "nscloud-git-mirror-1gb"]', matrix.os)) || matrix.os }}
runs-on: ${{ matrix.os }}
defaults:
run:
shell: ${{ matrix.shell || 'nix develop -c bash -euxo pipefail {0}' }}
@@ -66,16 +69,10 @@ jobs:
brew install ccache tree zstd coreutils gmp libuv
if: runner.os == 'macOS'
- name: Checkout
if: (!endsWith(matrix.os, '-with-cache'))
uses: actions/checkout@v5
uses: actions/checkout@v4
with:
# the default is to use a virtual merge commit between the PR and master: just use the PR
ref: ${{ github.event.pull_request.head.sha }}
- name: Namespace Checkout
if: endsWith(matrix.os, '-with-cache')
uses: namespacelabs/nscloud-checkout-action@v7
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Open Nix shell once
run: true
if: runner.os == 'Linux'
@@ -85,7 +82,7 @@ jobs:
- name: CI Merge Checkout
run: |
git fetch --depth=1 origin ${{ github.sha }}
git checkout FETCH_HEAD flake.nix flake.lock script/prepare-* tests/lean/run/importStructure.lean
git checkout FETCH_HEAD flake.nix flake.lock script/prepare-*
if: github.event_name == 'pull_request'
# (needs to be after "Checkout" so files don't get overridden)
- name: Setup emsdk
@@ -100,49 +97,39 @@ jobs:
sudo apt-get update
sudo apt-get install -y gcc-multilib g++-multilib ccache libuv1-dev:i386 pkgconf:i386
if: matrix.cmultilib
- name: Restore Cache
- name: Cache
id: restore-cache
uses: actions/cache/restore@v4
with:
# NOTE: must be in sync with `save` below and with `restore-cache` in `update-stage0.yml`
# NOTE: must be in sync with `save` below
path: |
.ccache
${{ matrix.name == 'Linux Lake' && 'build/stage1/**/*.trace
build/stage1/**/*.olean*
build/stage1/**/*.olean
build/stage1/**/*.ilean
build/stage1/**/*.ir
build/stage1/**/*.c
build/stage1/**/*.c.o*' || '' }}
key: ${{ matrix.name }}-build-v4-${{ github.sha }}
key: ${{ matrix.name }}-build-v3-${{ github.event.pull_request.head.sha }}
# fall back to (latest) previous cache
restore-keys: |
${{ matrix.name }}-build-v4
${{ matrix.name }}-build-v3
# open nix-shell once for initial setup
- name: Setup
run: |
ccache --zero-stats
if: runner.os == 'Linux'
- name: Set up env
- name: Set up NPROC
run: |
echo "NPROC=$(nproc 2>/dev/null || sysctl -n hw.logicalcpu 2>/dev/null || echo 4)" >> $GITHUB_ENV
if ! diff src/stdlib_flags.h stage0/src/stdlib_flags.h; then
echo "src/stdlib_flags.h and stage0/src/stdlib_flags.h differ, will test and pack stage 2"
echo "TARGET_STAGE=stage2" >> $GITHUB_ENV
else
echo "TARGET_STAGE=stage1" >> $GITHUB_ENV
fi
- name: Build
run: |
ulimit -c unlimited # coredumps
[ -d build ] || mkdir build
cd build
# arguments passed to `cmake`
OPTIONS=(-DLEAN_EXTRA_MAKE_OPTS=-DwarningAsError=true)
if [[ -n '${{ matrix.release }}' ]]; then
# this also enables githash embedding into stage 1 library, which prohibits reusing
# `.olean`s across commits, so we don't do it in the fast non-release CI
OPTIONS+=(-DCHECK_OLEAN_VERSION=ON)
fi
# this also enables githash embedding into stage 1 library
OPTIONS=(-DCHECK_OLEAN_VERSION=ON)
OPTIONS+=(-DLEAN_EXTRA_MAKE_OPTS=-DwarningAsError=true)
if [[ -n '${{ matrix.cross_target }}' ]]; then
# used by `prepare-llvm`
export EXTRA_FLAGS=--target=${{ matrix.cross_target }}
@@ -151,9 +138,6 @@ jobs:
if [[ -n '${{ matrix.prepare-llvm }}' ]]; then
wget -q ${{ matrix.llvm-url }}
PREPARE="$(${{ matrix.prepare-llvm }})"
if [ "$TARGET_STAGE" == "stage2" ]; then
cp -r stage1 stage2
fi
eval "OPTIONS+=($PREPARE)"
fi
if [[ -n '${{ matrix.release }}' && -n '${{ inputs.nightly }}' ]]; then
@@ -168,28 +152,10 @@ jobs:
fi
# contortion to support empty OPTIONS with old macOS bash
cmake .. --preset ${{ matrix.CMAKE_PRESET || 'release' }} -B . ${{ matrix.CMAKE_OPTIONS }} ${OPTIONS[@]+"${OPTIONS[@]}"} -DLEAN_INSTALL_PREFIX=$PWD/..
time make $TARGET_STAGE -j$NPROC
# Should be done as early as possible and in particular *before* "Check rebootstrap" which
# changes the state of stage1/
- name: Save Cache
# Caching on cancellation created some mysterious issues perhaps related to improper build
# shutdown
if: steps.restore-cache.outputs.cache-hit != 'true' && !cancelled()
uses: actions/cache/save@v4
with:
# NOTE: must be in sync with `restore` above
path: |
.ccache
${{ matrix.name == 'Linux Lake' && 'build/stage1/**/*.trace
build/stage1/**/*.olean*
build/stage1/**/*.ilean
build/stage1/**/*.ir
build/stage1/**/*.c
build/stage1/**/*.c.o*' || '' }}
key: ${{ steps.restore-cache.outputs.cache-primary-key }}
time make -j$NPROC
- name: Install
run: |
make -C build/$TARGET_STAGE install
make -C build install
- name: Check Binaries
run: ${{ matrix.binary-check }} lean-*/bin/* || true
- name: Count binary symbols
@@ -213,25 +179,25 @@ jobs:
else
${{ matrix.tar || 'tar' }} cf - $dir | zstd -T0 --no-progress -o pack/$dir.tar.zst
fi
- uses: actions/upload-artifact@v5
- uses: actions/upload-artifact@v4
if: matrix.release
with:
name: build-${{ matrix.name }}
path: pack/*
- name: Lean stats
run: |
build/$TARGET_STAGE/bin/lean --stats src/Lean.lean
build/stage1/bin/lean --stats src/Lean.lean
if: ${{ !matrix.cross }}
- name: Test
id: test
run: |
ulimit -c unlimited # coredumps
time ctest --preset ${{ matrix.CMAKE_PRESET || 'release' }} --test-dir build/$TARGET_STAGE -j$NPROC --output-junit test-results.xml ${{ matrix.CTEST_OPTIONS }}
if: matrix.test
time ctest --preset ${{ matrix.CMAKE_PRESET || 'release' }} --test-dir build/stage1 -j$NPROC --output-junit test-results.xml ${{ matrix.CTEST_OPTIONS }}
if: (matrix.wasm || !matrix.cross) && (inputs.check-level >= 1 || matrix.name == 'Linux release')
- name: Test Summary
uses: test-summary/action@v2
with:
paths: build/${{ env.TARGET_STAGE }}/test-results.xml
paths: build/stage1/test-results.xml
# prefix `if` above with `always` so it's run even if tests failed
if: always() && steps.test.conclusion != 'skipped'
- name: Check Test Binary
@@ -244,7 +210,7 @@ jobs:
- name: Check Stage 3
run: |
make -C build -j$NPROC check-stage3
if: matrix.check-stage3
if: matrix.test-speedcenter
- name: Test Speedcenter Benchmarks
run: |
# Necessary for some timing metrics but does not work on Namespace runners
@@ -256,14 +222,9 @@ jobs:
if: matrix.test-speedcenter
- name: Check rebootstrap
run: |
set -e
# clean rebuild in case of Makefile changes/Lake does not detect uncommitted stage 0
# changes yet
make -C build update-stage0
make -C build/stage1 clean-stdlib
time make -C build -j$NPROC
time ctest --preset ${{ matrix.CMAKE_PRESET || 'release' }} --test-dir build/stage1 -j$NPROC
if: matrix.check-rebootstrap
# clean rebuild in case of Makefile changes
make -C build update-stage0 && rm -rf build/stage* && make -C build -j$NPROC
if: matrix.name == 'Linux' && inputs.check-level >= 1
- name: CCache stats
if: always()
run: ccache -s
@@ -274,3 +235,16 @@ jobs:
progbin="$(file $c | sed "s/.*execfn: '\([^']*\)'.*/\1/")"
echo bt | $GDB/bin/gdb -q $progbin $c || true
done
- name: Save Cache
if: always() && steps.restore-cache.outputs.cache-hit != 'true'
uses: actions/cache/save@v4
with:
# NOTE: must be in sync with `restore` above
path: |
.ccache
${{ matrix.name == 'Linux Lake' && 'build/stage1/**/*.trace
build/stage1/**/*.olean
build/stage1/**/*.ilean
build/stage1/**/*.c
build/stage1/**/*.c.o*' || '' }}
key: ${{ steps.restore-cache.outputs.cache-primary-key }}


@@ -7,7 +7,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v5
uses: actions/checkout@v4
with:
# the default is to use a virtual merge commit between the PR and master: just use the PR
ref: ${{ github.event.pull_request.head.sha }}


@@ -8,11 +8,11 @@ jobs:
check-stage0-on-queue:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
filter: blob:none
fetch-depth: 0
filter: tree:0
- name: Find base commit
if: github.event_name == 'pull_request'
@@ -31,7 +31,7 @@ jobs:
- if: github.event_name == 'pull_request'
name: Set label
uses: actions/github-script@v8
uses: actions/github-script@v7
with:
script: |
const { owner, repo, number: issue_number } = context.issue;


@@ -1,57 +0,0 @@
name: Check stdlib_flags.h modifications
on:
pull_request:
types: [opened, synchronize, reopened, labeled, unlabeled]
jobs:
check-stdlib-flags:
runs-on: ubuntu-latest
steps:
- name: Check if stdlib_flags.h was modified
uses: actions/github-script@v8
with:
script: |
// Get the list of files changed in this PR
const files = await github.paginate(
github.rest.pulls.listFiles,
{
owner: context.repo.owner,
repo: context.repo.repo,
pull_number: context.payload.pull_request.number,
}
);
// Check if stdlib_flags.h was modified
const stdlibFlagsModified = files.some(file =>
file.filename === 'src/stdlib_flags.h'
);
if (stdlibFlagsModified) {
console.log('src/stdlib_flags.h was modified in this PR');
// Check if the unlock label is present
const { data: pr } = await github.rest.pulls.get({
owner: context.repo.owner,
repo: context.repo.repo,
pull_number: context.issue.number,
});
const hasUnlockLabel = pr.labels.some(label =>
label.name === 'unlock-upstream-stdlib-flags'
);
if (!hasUnlockLabel) {
core.setFailed(
'src/stdlib_flags.h was modified. This is likely a mistake. If you would like to change ' +
'bootstrapping settings or request a stage0 update, you should modify stage0/src/stdlib_flags.h. ' +
'If you really want to change src/stdlib_flags.h (which should be extremely rare), set the ' +
'unlock-upstream-stdlib-flags label.'
);
} else {
console.log('Found unlock-upstream-stdlib-flags');
}
} else {
console.log('src/stdlib_flags.h was not modified');
}


@@ -31,6 +31,10 @@ jobs:
configure:
runs-on: ubuntu-latest
outputs:
# 0: PRs without special label
# 1: PRs with `merge-ci` label, merge queue checks, master commits
# 2: PRs with `release-ci` label, releases (incl. nightlies)
check-level: ${{ steps.set-level.outputs.check-level }}
# The build matrix, dynamically generated here
matrix: ${{ steps.set-matrix.outputs.matrix }}
# secondary build jobs that should not block the CI success/merge queue
@@ -50,9 +54,9 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v5
uses: actions/checkout@v4
# don't schedule nightlies on forks
if: github.event_name == 'schedule' && github.repository == 'leanprover/lean4' || inputs.action == 'release nightly' || (startsWith(github.ref, 'refs/tags/') && github.repository == 'leanprover/lean4')
if: github.event_name == 'schedule' && github.repository == 'leanprover/lean4' || inputs.action == 'release nightly'
- name: Set Nightly
if: github.event_name == 'schedule' && github.repository == 'leanprover/lean4' || inputs.action == 'release nightly'
id: set-nightly
@@ -99,61 +103,6 @@ jobs:
echo "Tag ${TAG_NAME} did not match SemVer regex."
fi
- name: Check for custom releases (e.g., not in the main lean repository)
if: startsWith(github.ref, 'refs/tags/') && github.repository != 'leanprover/lean4'
id: set-release-custom
run: |
TAG_NAME="${GITHUB_REF##*/}"
echo "RELEASE_TAG=$TAG_NAME" >> "$GITHUB_OUTPUT"
- name: Validate CMakeLists.txt version matches tag
if: steps.set-release.outputs.RELEASE_TAG != ''
run: |
echo "Validating CMakeLists.txt version matches tag ${{ steps.set-release.outputs.RELEASE_TAG }}"
# Extract version values from CMakeLists.txt
CMAKE_MAJOR=$(grep -E "^set\(LEAN_VERSION_MAJOR " src/CMakeLists.txt | grep -oE '[0-9]+')
CMAKE_MINOR=$(grep -E "^set\(LEAN_VERSION_MINOR " src/CMakeLists.txt | grep -oE '[0-9]+')
CMAKE_PATCH=$(grep -E "^set\(LEAN_VERSION_PATCH " src/CMakeLists.txt | grep -oE '[0-9]+')
CMAKE_IS_RELEASE=$(grep -m 1 -E "^set\(LEAN_VERSION_IS_RELEASE " src/CMakeLists.txt | grep -oE '[0-9]+')
# Expected values from tag parsing
TAG_MAJOR="${{ steps.set-release.outputs.LEAN_VERSION_MAJOR }}"
TAG_MINOR="${{ steps.set-release.outputs.LEAN_VERSION_MINOR }}"
TAG_PATCH="${{ steps.set-release.outputs.LEAN_VERSION_PATCH }}"
ERRORS=""
if [[ "$CMAKE_MAJOR" != "$TAG_MAJOR" ]]; then
ERRORS+="LEAN_VERSION_MAJOR: expected $TAG_MAJOR, found $CMAKE_MAJOR\n"
fi
if [[ "$CMAKE_MINOR" != "$TAG_MINOR" ]]; then
ERRORS+="LEAN_VERSION_MINOR: expected $TAG_MINOR, found $CMAKE_MINOR\n"
fi
if [[ "$CMAKE_PATCH" != "$TAG_PATCH" ]]; then
ERRORS+="LEAN_VERSION_PATCH: expected $TAG_PATCH, found $CMAKE_PATCH\n"
fi
if [[ "$CMAKE_IS_RELEASE" != "1" ]]; then
ERRORS+="LEAN_VERSION_IS_RELEASE: expected 1, found $CMAKE_IS_RELEASE\n"
fi
if [[ -n "$ERRORS" ]]; then
echo "::error::Version mismatch between tag and src/CMakeLists.txt"
echo ""
echo "Tag ${{ steps.set-release.outputs.RELEASE_TAG }} expects version $TAG_MAJOR.$TAG_MINOR.$TAG_PATCH"
echo "But src/CMakeLists.txt has mismatched values:"
echo -e "$ERRORS"
echo ""
echo "Fix src/CMakeLists.txt, delete the tag, and re-tag."
exit 1
fi
echo "Version validation passed: $TAG_MAJOR.$TAG_MINOR.$TAG_PATCH"
# 0: PRs without special label
# 1: PRs with `merge-ci` label, merge queue checks, master commits
# 2: nightlies
# 3: PRs with `release-ci` label, full releases
- name: Set check level
id: set-level
# We do not use github.event.pull_request.labels.*.name here because
@@ -161,51 +110,41 @@ jobs:
# rerun the workflow run after setting the `release-ci`/`merge-ci` labels.
run: |
check_level=0
fast=false
if [[ -n "${{ steps.set-release.outputs.RELEASE_TAG }}" || -n "${{ steps.set-release-custom.outputs.RELEASE_TAG }}" ]]; then
check_level=3
elif [[ -n "${{ steps.set-nightly.outputs.nightly }}" ]]; then
if [[ -n "${{ steps.set-nightly.outputs.nightly }}" || -n "${{ steps.set-release.outputs.RELEASE_TAG }}" ]]; then
check_level=2
elif [[ "${{ github.event_name }}" != "pull_request" ]]; then
check_level=1
else
labels="$(gh api repos/${{ github.repository_owner }}/${{ github.event.repository.name }}/pulls/${{ github.event.pull_request.number }} --jq '.labels')"
if echo "$labels" | grep -q "release-ci"; then
check_level=3
check_level=2
elif echo "$labels" | grep -q "merge-ci"; then
check_level=1
fi
if echo "$labels" | grep -q "fast-ci"; then
fast=true
fi
fi
echo "check-level=$check_level" >> "$GITHUB_OUTPUT"
echo "fast=$fast" >> "$GITHUB_OUTPUT"
env:
GH_TOKEN: ${{ github.token }}
- name: Configure build matrix
id: set-matrix
uses: actions/github-script@v8
uses: actions/github-script@v7
with:
script: |
const level = ${{ steps.set-level.outputs.check-level }};
const fast = ${{ steps.set-level.outputs.fast }};
console.log(`level: ${level}, fast: ${fast}`);
console.log(`level: ${level}`);
// use large runners where available (original repo)
let large = ${{ github.repository == 'leanprover/lean4' }};
const isPr = "${{ github.event_name }}" == "pull_request";
const isPushToMaster = "${{ github.event_name }}" == "push" && "${{ github.ref_name }}" == "master";
let matrix = [
/* TODO: to be updated to new LLVM
{
"name": "Linux LLVM",
"os": "ubuntu-latest",
"release": false,
"enabled": level >= 2,
"test": true,
"check-level": 2,
"shell": "nix develop .#oldGlibc -c bash -euxo pipefail {0}",
"llvm-url": "https://github.com/leanprover/lean-llvm/releases/download/19.1.2/lean-llvm-x86_64-linux-gnu.tar.zst",
"prepare-llvm": "../script/prepare-llvm-linux.sh lean-llvm*",
@@ -218,81 +157,66 @@ jobs:
{
// portable release build: use channel with older glibc (2.26)
"name": "Linux release",
// usually not a bottleneck so make exclusive to `fast-ci`
"os": large && fast ? "nscloud-ubuntu-22.04-amd64-8x16-with-cache" : "ubuntu-latest",
"os": large ? "nscloud-ubuntu-22.04-amd64-4x8" : "ubuntu-latest",
"release": true,
// Special handling for release jobs. We want:
// 1. To run it in PRs so developers get PR toolchains (so secondary without tests is sufficient)
// 2. To skip it in merge queues as it takes longer than the
// Linux lake build and adds little value in the merge queue
// 3. To run it in release (obviously)
// 4. To run it for pushes to master so that pushes to master have a Linux toolchain
// available as an artifact for Grove to use.
"enabled": isPr || level != 1 || isPushToMaster,
"test": level >= 1,
"secondary": level == 0,
"check-level": 0,
"shell": "nix develop .#oldGlibc -c bash -euxo pipefail {0}",
"llvm-url": "https://github.com/leanprover/lean-llvm/releases/download/19.1.2/lean-llvm-x86_64-linux-gnu.tar.zst",
"prepare-llvm": "../script/prepare-llvm-linux.sh lean-llvm*",
"binary-check": "ldd -v",
// foreign code may be linked against more recent glibc
"CTEST_OPTIONS": "-E 'foreign'",
"CTEST_OPTIONS": "-E 'foreign'"
},
{
"name": "Linux Lake",
"os": large ? "nscloud-ubuntu-22.04-amd64-8x16-with-cache" : "ubuntu-latest",
"enabled": true,
"check-rebootstrap": level >= 1,
"os": large ? "nscloud-ubuntu-22.04-amd64-4x8" : "ubuntu-latest",
"check-level": 0,
// just a secondary build job for now until false positives can be excluded
"secondary": true,
"CMAKE_OPTIONS": "-DUSE_LAKE=ON",
// TODO: importStructure is not compatible with .olean caching
// TODO: why does scopedMacros fail?
"CTEST_OPTIONS": "-E 'scopedMacros|importStructure'"
},
{
"name": "Linux",
"os": large ? "nscloud-ubuntu-22.04-amd64-4x8" : "ubuntu-latest",
"check-stage3": level >= 2,
"test": true,
// NOTE: `test-speedcenter` currently seems to be broken on `ubuntu-latest`
"test-speedcenter": large && level >= 2,
// We are not warning-free yet on all platforms, start here
"CMAKE_OPTIONS": "-DLEAN_EXTRA_CXX_FLAGS=-Werror",
"test-speedcenter": level >= 2,
"check-level": 1,
},
{
"name": "Linux Reldebug",
"os": "ubuntu-latest",
"enabled": level >= 2,
"test": true,
"check-level": 2,
"CMAKE_PRESET": "reldebug",
// exclude seriously slow/stackoverflowing tests
"CTEST_OPTIONS": "-E 'interactivetest|leanpkgtest|laketest|benchtest|bv_bitblast_stress|3807'"
},
{
// TODO: suddenly started failing in CI
/*{
"name": "Linux fsanitize",
// Always run on large if available, more reliable regarding timeouts
"os": large ? "nscloud-ubuntu-22.04-amd64-8x16-with-cache" : "ubuntu-latest",
"enabled": level >= 2,
// do not fail nightlies on this for now
"secondary": level <= 2,
"test": true,
"os": "ubuntu-latest",
"check-level": 2,
// turn off custom allocator & symbolic functions to make LSAN do its magic
"CMAKE_PRESET": "sanitize",
// `StackOverflow*` correctly triggers ubsan.
// `reverse-ffi` fails to link in sanitizers.
// `interactive` and `async_select_channel` fail nondeterministically, would need to
// be investigated..
// 9366 is too close to timeout.
// `bv_` sometimes times out calling into cadical even though we should be using the
// standard compile flags for it.
"CTEST_OPTIONS": "-E 'StackOverflow|reverse-ffi|interactive|async_select_channel|9366|run/bv_'"
},
// exclude seriously slow/problematic tests (laketests crash)
"CTEST_OPTIONS": "-E 'interactivetest|leanpkgtest|laketest|benchtest'"
},*/
{
"name": "macOS",
"os": "macos-15-intel",
"os": "macos-13",
"release": true,
"test": false, // Tier 2 platform
"enabled": level >= 2,
"check-level": 2,
"shell": "bash -euxo pipefail {0}",
"llvm-url": "https://github.com/leanprover/lean-llvm/releases/download/19.1.2/lean-llvm-x86_64-apple-darwin.tar.zst",
"prepare-llvm": "../script/prepare-llvm-macos.sh lean-llvm*",
"binary-check": "otool -L",
"tar": "gtar", // https://github.com/actions/runner-images/issues/2619
"CTEST_OPTIONS": "-E 'leanlaketest_hello'", // started failing from unpack
"tar": "gtar" // https://github.com/actions/runner-images/issues/2619
},
{
"name": "macOS aarch64",
// standard GH runner only comes with 7GB so use large runner if possible when running tests
"os": large && (fast || level >= 1) ? "nscloud-macos-sequoia-arm64-6x14" : "macos-15",
"os": "macos-14",
"CMAKE_OPTIONS": "-DLEAN_INSTALL_SUFFIX=-darwin_aarch64",
"release": true,
"shell": "bash -euxo pipefail {0}",
@@ -300,33 +224,36 @@ jobs:
"prepare-llvm": "../script/prepare-llvm-macos.sh lean-llvm*",
"binary-check": "otool -L",
"tar": "gtar", // https://github.com/actions/runner-images/issues/2619
// See "Linux release" for release job levels; Grove is not a concern here
"enabled": isPr || level != 1,
"test": level >= 1,
"secondary": level == 0,
// Special handling for MacOS aarch64, we want:
// 1. To run it in PRs so Mac devs get PR toolchains (so secondary is sufficient)
// 2. To skip it in merge queues as it takes longer than the Linux build and adds
// little value in the merge queue
// 3. To run it in release (obviously)
"check-level": isPr ? 0 : 2,
"secondary": isPr,
},
{
"name": "Windows",
"os": large && (fast || level >= 2) ? "namespace-profile-windows-amd64-4x16" : "windows-2022",
"os": "windows-2022",
"release": true,
"enabled": level >= 2,
"test": true,
"check-level": 2,
"shell": "msys2 {0}",
"CMAKE_OPTIONS": "-G \"Unix Makefiles\"",
// for reasons unknown, interactivetests are flaky on Windows
"CTEST_OPTIONS": "--repeat until-pass:2",
"llvm-url": "https://github.com/leanprover/lean-llvm/releases/download/19.1.2/lean-llvm-x86_64-w64-windows-gnu.tar.zst",
"prepare-llvm": "../script/prepare-llvm-mingw.sh lean-llvm*",
"binary-check": "ldd",
"binary-check": "ldd"
},
{
"name": "Linux aarch64",
"os": "nscloud-ubuntu-22.04-arm64-4x16",
"os": "nscloud-ubuntu-22.04-arm64-4x8",
"CMAKE_OPTIONS": "-DLEAN_INSTALL_SUFFIX=-linux_aarch64",
"release": true,
"enabled": level >= 2,
"test": true,
"check-level": 2,
"shell": "nix develop .#oldGlibcAArch -c bash -euxo pipefail {0}",
"llvm-url": "https://github.com/leanprover/lean-llvm/releases/download/19.1.2/lean-llvm-aarch64-linux-gnu.tar.zst",
"prepare-llvm": "../script/prepare-llvm-linux.sh lean-llvm*",
"prepare-llvm": "../script/prepare-llvm-linux.sh lean-llvm*"
},
// Started running out of memory building expensive modules, a 2GB heap is just not that much even before fragmentation
//{
@@ -336,7 +263,7 @@ jobs:
// "CMAKE_OPTIONS": "-DSTAGE0_USE_GMP=OFF -DSTAGE0_LEAN_EXTRA_CXX_FLAGS='-m32' -DSTAGE0_LEANC_OPTS='-m32' -DSTAGE0_MMAP=OFF -DUSE_GMP=OFF -DLEAN_EXTRA_CXX_FLAGS='-m32' -DLEANC_OPTS='-m32' -DMMAP=OFF -DLEAN_INSTALL_SUFFIX=-linux_x86 -DCMAKE_LIBRARY_PATH=/usr/lib/i386-linux-gnu/ -DSTAGE0_CMAKE_LIBRARY_PATH=/usr/lib/i386-linux-gnu/ -DPKG_CONFIG_EXECUTABLE=/usr/bin/i386-linux-gnu-pkg-config",
// "cmultilib": true,
// "release": true,
// "enabled": level >= 2,
// "check-level": 2,
// "cross": true,
// "shell": "bash -euxo pipefail {0}"
//}
@@ -348,21 +275,15 @@ jobs:
// "wasm": true,
// "cmultilib": true,
// "release": true,
// "enabled": level >= 2,
// "check-level": 2,
// "cross": true,
// "shell": "bash -euxo pipefail {0}",
// // Just a few selected tests because wasm is slow
// "CTEST_OPTIONS": "-R \"leantest_1007\\.lean|leantest_Format\\.lean|leanruntest\\_1037.lean|leanruntest_ac_rfl\\.lean|leanruntest_tempfile.lean\\.|leanruntest_libuv\\.lean\""
// }
];
for (const job of matrix) {
if (job["prepare-llvm"]) {
// `USE_LAKE` is not compatible with `prepare-llvm` currently
job["CMAKE_OPTIONS"] = (job["CMAKE_OPTIONS"] ? job["CMAKE_OPTIONS"] + " " : "") + "-DUSE_LAKE=OFF";
}
}
console.log(`matrix:\n${JSON.stringify(matrix, null, 2)}`);
matrix = matrix.filter((job) => job["enabled"]);
matrix = matrix.filter((job) => level >= job["check-level"]);
core.setOutput('matrix', matrix.filter((job) => !job["secondary"]));
core.setOutput('matrix-secondary', matrix.filter((job) => job["secondary"]));
@@ -372,6 +293,7 @@ jobs:
uses: ./.github/workflows/build-template.yml
with:
config: ${{needs.configure.outputs.matrix}}
check-level: ${{ needs.configure.outputs.check-level }}
nightly: ${{ needs.configure.outputs.nightly }}
LEAN_VERSION_MAJOR: ${{ needs.configure.outputs.LEAN_VERSION_MAJOR }}
LEAN_VERSION_MINOR: ${{ needs.configure.outputs.LEAN_VERSION_MINOR }}
@@ -387,6 +309,7 @@ jobs:
uses: ./.github/workflows/build-template.yml
with:
config: ${{needs.configure.outputs.matrix-secondary}}
check-level: ${{ needs.configure.outputs.check-level }}
nightly: ${{ needs.configure.outputs.nightly }}
LEAN_VERSION_MAJOR: ${{ needs.configure.outputs.LEAN_VERSION_MAJOR }}
LEAN_VERSION_MINOR: ${{ needs.configure.outputs.LEAN_VERSION_MINOR }}
@@ -417,7 +340,7 @@ jobs:
content: |
A build of `${{ github.ref_name }}`, triggered by event `${{ github.event_name }}`, [failed](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}).
- if: contains(needs.*.result, 'failure')
uses: actions/github-script@v8
uses: actions/github-script@v7
with:
script: |
core.setFailed('Some jobs failed')
@@ -430,11 +353,11 @@ jobs:
runs-on: ubuntu-latest
needs: build
steps:
- uses: actions/download-artifact@v6
- uses: actions/download-artifact@v4
with:
path: artifacts
- name: Release
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090
uses: softprops/action-gh-release@v2
with:
files: artifacts/*/*
fail_on_unmatched_files: true
@@ -455,14 +378,12 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v5
uses: actions/checkout@v4
with:
# needed for tagging
fetch-depth: 0
# Doesn't seem to be working when additionally fetching from lean4-nightly
#filter: tree:0
token: ${{ secrets.PUSH_NIGHTLY_TOKEN }}
- uses: actions/download-artifact@v6
- uses: actions/download-artifact@v4
with:
path: artifacts
- name: Prepare Nightly Release
@@ -480,7 +401,7 @@ jobs:
echo -e "\n*Full commit log*\n" >> diff.md
git log --oneline "$last_tag"..HEAD | sed 's/^/* /' >> diff.md
- name: Release Nightly
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090
uses: softprops/action-gh-release@v2
with:
body_path: diff.md
prerelease: true
@@ -497,6 +418,6 @@ jobs:
GITHUB_TOKEN: ${{ secrets.RELEASE_INDEX_TOKEN }}
- name: Update toolchain on mathlib4's nightly-testing branch
run: |
gh workflow -R leanprover-community/mathlib4-nightly-testing run nightly_bump_toolchain.yml
gh workflow -R leanprover-community/mathlib4 run nightly_bump_toolchain.yml
env:
GITHUB_TOKEN: ${{ secrets.MATHLIB4_BOT }}


@@ -6,7 +6,7 @@ jobs:
check-lean-files:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
- name: Verify .lean files start with a copyright header.
run: |


@@ -1,162 +0,0 @@
name: Grove
on:
workflow_run: # https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#workflow_run
workflows: [CI]
types: [completed]
permissions:
pull-requests: write
jobs:
grove-build:
runs-on: ubuntu-latest
if: github.event.workflow_run.conclusion == 'success' && github.repository == 'leanprover/lean4'
steps:
- name: Retrieve information about the original workflow
uses: potiuk/get-workflow-origin@v1_1 # https://github.com/marketplace/actions/get-workflow-origin
# This action is deprecated and archived, but it seems hard to find a
# better solution for getting the PR number
# see https://github.com/orgs/community/discussions/25220 for some discussion
id: workflow-info
with:
token: ${{ secrets.GITHUB_TOKEN }}
sourceRunId: ${{ github.event.workflow_run.id }}
- name: Check if should run
id: should-run
run: |
# Check if it's a push to master (no PR number and target branch is master)
if [ -z "${{ steps.workflow-info.outputs.pullRequestNumber }}" ]; then
if [ "${{ github.event.workflow_run.head_branch }}" = "master" ]; then
echo "Push to master detected. Running Grove."
echo "should-run=true" >> "$GITHUB_OUTPUT"
else
echo "Push to non-master branch, skipping"
echo "should-run=false" >> "$GITHUB_OUTPUT"
fi
else
# Check if it's a PR with grove label
PR_LABELS='${{ steps.workflow-info.outputs.pullRequestLabels }}'
if echo "$PR_LABELS" | grep -q '"grove"'; then
echo "PR with grove label detected. Running Grove."
echo "should-run=true" >> "$GITHUB_OUTPUT"
else
echo "PR without grove label, skipping"
echo "should-run=false" >> "$GITHUB_OUTPUT"
fi
fi
- name: Fetch upstream invalidated facts
if: ${{ steps.should-run.outputs.should-run == 'true' && steps.workflow-info.outputs.pullRequestNumber != '' }}
id: fetch-upstream
uses: TwoFx/grove-action/fetch-upstream@v0.5
with:
artifact-name: grove-invalidated-facts
base-ref: master
- name: Download toolchain for this commit
if: ${{ steps.should-run.outputs.should-run == 'true' }}
id: download-toolchain
uses: dawidd6/action-download-artifact@v11
with:
commit: ${{ steps.workflow-info.outputs.sourceHeadSha }}
workflow: ci.yml
path: artifacts
name: "build-Linux release"
allow_forks: true
name_is_regexp: true
- name: Unpack toolchain
if: ${{ steps.should-run.outputs.should-run == 'true' }}
id: unpack-toolchain
run: |
cd artifacts
# Find the tar.zst file
TAR_FILE=$(find . -name "lean-*.tar.zst" -type f | head -1)
if [ -z "$TAR_FILE" ]; then
echo "Error: No lean-*.tar.zst file found"
exit 1
fi
echo "Found archive: $TAR_FILE"
# Extract the archive
tar --zstd -xf "$TAR_FILE"
# Find the extracted directory name
LEAN_DIR=$(find . -maxdepth 1 -name "lean-*" -type d | head -1)
if [ -z "$LEAN_DIR" ]; then
echo "Error: No lean-* directory found after extraction"
exit 1
fi
echo "Extracted directory: $LEAN_DIR"
echo "lean-dir=$LEAN_DIR" >> "$GITHUB_OUTPUT"
- name: Build
if: ${{ steps.should-run.outputs.should-run == 'true' }}
id: build
uses: TwoFx/grove-action/build@v0.5
with:
project-path: doc/std/grove
script-name: grove-stdlib
invalidated-facts-artifact-name: grove-invalidated-facts
comment-artifact-name: grove-comment
toolchain-id: lean4
toolchain-path: artifacts/${{ steps.unpack-toolchain.outputs.lean-dir }}
project-ref: ${{ steps.workflow-info.outputs.sourceHeadSha }}
# deploy-alias computes a URL component for the PR preview. This
# is so we can have a stable name to use for feedback on draft
# material.
- id: deploy-alias
if: ${{ steps.should-run.outputs.should-run == 'true' }}
uses: actions/github-script@v8
name: Compute Alias
with:
result-encoding: string
script: |
if (process.env.PR) {
return `pr-${process.env.PR}`
} else {
return 'deploy-preview-main';
}
env:
PR: ${{ steps.workflow-info.outputs.pullRequestNumber }}
- name: Deploy to Netlify
if: ${{ steps.should-run.outputs.should-run == 'true' }}
id: deploy-draft
uses: nwtgck/actions-netlify@v3.0
with:
publish-dir: ${{ steps.build.outputs.out-path }}
production-deploy: false
github-token: ${{ secrets.GITHUB_TOKEN }}
alias: ${{ steps.deploy-alias.outputs.result }}
enable-commit-comment: false
enable-pull-request-comment: false
fails-without-credentials: true
enable-github-deployment: false
enable-commit-status: false
env:
NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
NETLIFY_SITE_ID: "1cacfa39-a11c-467c-99e7-2e01d7b4089e"
# actions-netlify cannot add deploy links to a PR because it assumes a
# pull_request context, not a workflow_run context, see
# https://github.com/nwtgck/actions-netlify/issues/545
# We work around by using a comment to post the latest link
- name: "Comment on PR with preview links"
uses: marocchino/sticky-pull-request-comment@v2
if: ${{ steps.should-run.outputs.should-run == 'true' && steps.workflow-info.outputs.pullRequestNumber != '' }}
with:
number: ${{ env.PR_NUMBER }}
header: preview-comment
recreate: true
message: |
[Grove](${{ steps.deploy-draft.outputs.deploy-url }}) for revision ${{ steps.workflow-info.outputs.sourceHeadSha }}.
${{ steps.build.outputs.comment-text }}
env:
PR_NUMBER: ${{ steps.workflow-info.outputs.pullRequestNumber }}
PR_HEADSHA: ${{ steps.workflow-info.outputs.sourceHeadSha }}


@@ -17,7 +17,7 @@ jobs:
steps:
- name: Add label based on comment
uses: actions/github-script@v8
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |


@@ -11,7 +11,7 @@ jobs:
steps:
- name: Check PR body
if: github.event_name == 'pull_request'
uses: actions/github-script@v8
uses: actions/github-script@v7
with:
script: |
const { title, body, labels, draft } = context.payload.pull_request;


@@ -34,7 +34,7 @@ jobs:
- name: Download artifact from the previous workflow.
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
id: download-artifact
uses: dawidd6/action-download-artifact@v11 # https://github.com/marketplace/actions/download-workflow-artifact
uses: dawidd6/action-download-artifact@v9 # https://github.com/marketplace/actions/download-workflow-artifact
with:
run_id: ${{ github.event.workflow_run.id }}
path: artifacts
@@ -48,30 +48,19 @@ jobs:
git -C lean4.git remote add origin https://github.com/${{ github.repository_owner }}/lean4.git
git -C lean4.git fetch -n origin master
git -C lean4.git fetch -n origin "${{ steps.workflow-info.outputs.sourceHeadSha }}"
# Create both the original tag and the SHA-suffixed tag
SHORT_SHA="${{ steps.workflow-info.outputs.sourceHeadSha }}"
SHORT_SHA="${SHORT_SHA:0:7}"
# Export the short SHA for use in subsequent steps
echo "SHORT_SHA=${SHORT_SHA}" >> "$GITHUB_ENV"
git -C lean4.git tag -f pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }} "${{ steps.workflow-info.outputs.sourceHeadSha }}"
git -C lean4.git tag -f pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-"${SHORT_SHA}" "${{ steps.workflow-info.outputs.sourceHeadSha }}"
git -C lean4.git remote add pr-releases https://foo:'${{ secrets.PR_RELEASES_TOKEN }}'@github.com/${{ github.repository_owner }}/lean4-pr-releases.git
git -C lean4.git push -f pr-releases pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}
git -C lean4.git push -f pr-releases pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-"${SHORT_SHA}"
- name: Delete existing release if present
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
run: |
# Try to delete any existing release for the current PR (just the version without the SHA suffix).
# Try to delete any existing release for the current PR.
gh release delete --repo ${{ github.repository_owner }}/lean4-pr-releases pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }} -y || true
env:
GH_TOKEN: ${{ secrets.PR_RELEASES_TOKEN }}
- name: Release (short format)
- name: Release
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090
uses: softprops/action-gh-release@v2
with:
name: Release for PR ${{ steps.workflow-info.outputs.pullRequestNumber }}
# There are coredump files here as well, but all in deeper subdirectories.
@@ -84,24 +73,9 @@ jobs:
# The token used here must have `workflow` privileges.
GITHUB_TOKEN: ${{ secrets.PR_RELEASES_TOKEN }}
- name: Release (SHA-suffixed format)
- name: Report release status
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
uses: softprops/action-gh-release@6da8fa9354ddfdc4aeace5fc48d7f679b5214090
with:
name: Release for PR ${{ steps.workflow-info.outputs.pullRequestNumber }} (${{ steps.workflow-info.outputs.sourceHeadSha }})
# There are coredump files here as well, but all in deeper subdirectories.
files: artifacts/*/*
fail_on_unmatched_files: true
draft: false
tag_name: pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-${{ env.SHORT_SHA }}
repository: ${{ github.repository_owner }}/lean4-pr-releases
env:
# The token used here must have `workflow` privileges.
GITHUB_TOKEN: ${{ secrets.PR_RELEASES_TOKEN }}
- name: Report release status (short format)
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
uses: actions/github-script@v8
uses: actions/github-script@v7
with:
script: |
await github.rest.repos.createCommitStatus({
@@ -113,23 +87,9 @@ jobs:
description: "${{ github.repository_owner }}/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}",
});
- name: Report release status (SHA-suffixed format)
- name: Add label
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
uses: actions/github-script@v8
with:
script: |
await github.rest.repos.createCommitStatus({
owner: context.repo.owner,
repo: context.repo.repo,
sha: "${{ steps.workflow-info.outputs.sourceHeadSha }}",
state: "success",
context: "PR toolchain (SHA-suffixed)",
description: "${{ github.repository_owner }}/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-${{ env.SHORT_SHA }}",
});
- name: Add toolchain-available label
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
uses: actions/github-script@v8
uses: actions/github-script@v7
with:
script: |
await github.rest.issues.addLabels({
@@ -151,10 +111,10 @@ jobs:
- name: 'Setup jq'
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
uses: dcarbone/install-jq-action@v3.2.0
uses: dcarbone/install-jq-action@v3.1.1
# Check that the most recent nightly coincides with 'git merge-base HEAD master'
- name: Check merge-base and nightly-testing-YYYY-MM-DD for Mathlib/Batteries
- name: Check merge-base and nightly-testing-YYYY-MM-DD
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
id: ready
run: |
@@ -166,15 +126,24 @@ jobs:
if [ "$NIGHTLY_SHA" = "$MERGE_BASE_SHA" ]; then
echo "The merge base of this PR coincides with the nightly release"
MATHLIB_REMOTE_TAGS="$(git ls-remote https://github.com/leanprover-community/mathlib4-nightly-testing.git nightly-testing-"$MOST_RECENT_NIGHTLY")"
BATTERIES_REMOTE_TAGS="$(git ls-remote https://github.com/leanprover-community/batteries.git nightly-testing-"$MOST_RECENT_NIGHTLY")"
MATHLIB_REMOTE_TAGS="$(git ls-remote https://github.com/leanprover-community/mathlib4.git nightly-testing-"$MOST_RECENT_NIGHTLY")"
if [[ -n "$MATHLIB_REMOTE_TAGS" ]]; then
echo "... and Mathlib has a 'nightly-testing-$MOST_RECENT_NIGHTLY' tag."
if [[ -n "$BATTERIES_REMOTE_TAGS" ]]; then
echo "... and Batteries has a 'nightly-testing-$MOST_RECENT_NIGHTLY' tag."
MESSAGE=""
if [[ -n "$MATHLIB_REMOTE_TAGS" ]]; then
echo "... and Mathlib has a 'nightly-testing-$MOST_RECENT_NIGHTLY' tag."
else
echo "... but Mathlib does not yet have a 'nightly-testing-$MOST_RECENT_NIGHTLY' tag."
MESSAGE="- ❗ Mathlib CI can not be attempted yet, as the \`nightly-testing-$MOST_RECENT_NIGHTLY\` tag does not exist there yet. We will retry when you push more commits. If you rebase your branch onto \`nightly-with-mathlib\`, Mathlib CI should run now."
fi
else
echo "... but Mathlib does not yet have a 'nightly-testing-$MOST_RECENT_NIGHTLY' tag."
MESSAGE="- ❗ Mathlib CI can not be attempted yet, as the \`nightly-testing-$MOST_RECENT_NIGHTLY\` tag does not exist there yet. We will retry when you push more commits. If you rebase your branch onto \`nightly-with-mathlib\`, Mathlib CI should run now."
echo "... but Batteries does not yet have a 'nightly-testing-$MOST_RECENT_NIGHTLY' tag."
MESSAGE="- ❗ Batteries CI can not be attempted yet, as the \`nightly-testing-$MOST_RECENT_NIGHTLY\` tag does not exist there yet. We will retry when you push more commits. If you rebase your branch onto \`nightly-with-mathlib\`, Batteries CI should run now."
fi
else
echo "The most recently nightly tag on this branch has SHA: $NIGHTLY_SHA"
echo "but 'git merge-base origin/master HEAD' reported: $MERGE_BASE_SHA"
@@ -192,7 +161,7 @@ jobs:
-H "Accept: application/vnd.github.v3+json" \
"https://api.github.com/repos/leanprover/lean4/issues/${{ steps.workflow-info.outputs.pullRequestNumber }}/labels" \
| jq -r '.[].name')"
if echo "$LABELS" | grep -q "^force-mathlib-ci$"; then
echo "force-mathlib-ci label detected, forcing CI despite issues"
MESSAGE="Forcing Mathlib CI because the \`force-mathlib-ci\` label is present, despite problem: $MESSAGE"
@@ -256,111 +225,9 @@ jobs:
echo "mathlib_ready=true" >> "$GITHUB_OUTPUT"
fi
- name: Check merge-base and nightly-testing-YYYY-MM-DD for reference manual
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' }}
id: reference-manual-ready
run: |
echo "Most recent nightly release in your branch: $MOST_RECENT_NIGHTLY"
NIGHTLY_SHA=$(git -C lean4.git rev-parse "nightly-$MOST_RECENT_NIGHTLY^{commit}")
echo "SHA of most recent nightly release: $NIGHTLY_SHA"
MERGE_BASE_SHA=$(git -C lean4.git merge-base origin/master "${{ steps.workflow-info.outputs.sourceHeadSha }}")
echo "SHA of merge-base: $MERGE_BASE_SHA"
if [ "$NIGHTLY_SHA" = "$MERGE_BASE_SHA" ]; then
echo "The merge base of this PR coincides with the nightly release"
MANUAL_REMOTE_TAGS="$(git ls-remote https://github.com/leanprover/reference-manual.git nightly-testing-"$MOST_RECENT_NIGHTLY")"
if [[ -n "$MANUAL_REMOTE_TAGS" ]]; then
echo "... and the reference manual has a 'nightly-testing-$MOST_RECENT_NIGHTLY' tag."
MESSAGE=""
else
echo "... but the reference manual does not yet have a 'nightly-testing-$MOST_RECENT_NIGHTLY' tag."
MESSAGE="- ❗ Reference manual CI can not be attempted yet, as the \`nightly-testing-$MOST_RECENT_NIGHTLY\` tag does not exist there yet. We will retry when you push more commits. If you rebase your branch onto \`nightly-with-manual\`, reference manual CI should run now."
fi
else
echo "The most recently nightly tag on this branch has SHA: $NIGHTLY_SHA"
echo "but 'git merge-base origin/master HEAD' reported: $MERGE_BASE_SHA"
git -C lean4.git log -10 origin/master
git -C lean4.git fetch origin nightly-with-manual
NIGHTLY_WITH_MANUAL_SHA="$(git -C lean4.git rev-parse "origin/nightly-with-manual")"
MESSAGE="- ❗ Reference manual CI will not be attempted unless your PR branches off the \`nightly-with-manual\` branch. Try \`git rebase $MERGE_BASE_SHA --onto $NIGHTLY_WITH_MANUAL_SHA\`."
fi
if [[ -n "$MESSAGE" ]]; then
# Check if force-manual-ci label is present
LABELS="$(curl --retry 3 --location --silent \
-H "Authorization: token ${{ secrets.MANUAL_COMMENT_BOT }}" \
-H "Accept: application/vnd.github.v3+json" \
"https://api.github.com/repos/leanprover/lean4/issues/${{ steps.workflow-info.outputs.pullRequestNumber }}/labels" \
| jq -r '.[].name')"
if echo "$LABELS" | grep -q "^force-manual-ci$"; then
echo "force-manual-ci label detected, forcing CI despite issues"
MESSAGE="Forcing reference manual CI because the \`force-manual-ci\` label is present, despite problem: $MESSAGE"
FORCE_CI=true
else
MESSAGE="$MESSAGE You can force reference manual CI using the \`force-manual-ci\` label."
fi
echo "Checking existing messages"
# The code for updating comments is duplicated in the reference manual's
# scripts/lean-pr-testing-comments.sh
# so keep in sync
# Use GitHub API to check if a comment already exists
existing_comment="$(curl --retry 3 --location --silent \
-H "Authorization: token ${{ secrets.MANUAL_COMMENT_BOT }}" \
-H "Accept: application/vnd.github.v3+json" \
"https://api.github.com/repos/leanprover/lean4/issues/${{ steps.workflow-info.outputs.pullRequestNumber }}/comments" \
| jq 'first(.[] | select(.body | test("^- . Manual") or startswith("Reference manual CI status")) | select(.user.login == "leanprover-bot"))')"
existing_comment_id="$(echo "$existing_comment" | jq -r .id)"
existing_comment_body="$(echo "$existing_comment" | jq -r .body)"
if [[ "$existing_comment_body" != *"$MESSAGE"* ]]; then
MESSAGE="$MESSAGE ($(date "+%Y-%m-%d %H:%M:%S"))"
echo "Posting message to the comments: $MESSAGE"
# Append new result to the existing comment or post a new comment
# It's essential we use the MANUAL_COMMENT_BOT token here, so that reference manual CI can subsequently edit the comment.
if [ -z "$existing_comment_id" ]; then
INTRO="Reference manual CI status:"
# Post new comment with a bullet point
echo "Posting as new comment at leanprover/lean4/issues/${{ steps.workflow-info.outputs.pullRequestNumber }}/comments"
curl -L -s \
-X POST \
-H "Authorization: token ${{ secrets.MANUAL_COMMENT_BOT }}" \
-H "Accept: application/vnd.github.v3+json" \
-d "$(jq --null-input --arg intro "$INTRO" --arg val "$MESSAGE" '{"body":($intro + "\n" + $val)}')" \
"https://api.github.com/repos/leanprover/lean4/issues/${{ steps.workflow-info.outputs.pullRequestNumber }}/comments"
else
# Append new result to the existing comment
echo "Appending to existing comment at leanprover/lean4/issues/${{ steps.workflow-info.outputs.pullRequestNumber }}/comments"
curl -L -s \
-X PATCH \
-H "Authorization: token ${{ secrets.MANUAL_COMMENT_BOT }}" \
-H "Accept: application/vnd.github.v3+json" \
-d "$(jq --null-input --arg existing "$existing_comment_body" --arg message "$MESSAGE" '{"body":($existing + "\n" + $message)}')" \
"https://api.github.com/repos/leanprover/lean4/issues/comments/$existing_comment_id"
fi
else
echo "The message already exists in the comment body."
fi
if [[ "$FORCE_CI" == "true" ]]; then
echo "manual_ready=true" >> "$GITHUB_OUTPUT"
else
echo "manual_ready=false" >> "$GITHUB_OUTPUT"
fi
else
echo "manual_ready=true" >> "$GITHUB_OUTPUT"
fi
- name: Report mathlib base
if: ${{ steps.workflow-info.outputs.pullRequestNumber != '' && steps.ready.outputs.mathlib_ready == 'true' }}
uses: actions/github-script@v8
uses: actions/github-script@v7
with:
script: |
const description =
@@ -387,13 +254,12 @@ jobs:
# Checkout the Batteries repository with all branches
- name: Checkout Batteries repository
if: steps.workflow-info.outputs.pullRequestNumber != '' && steps.ready.outputs.mathlib_ready == 'true'
uses: actions/checkout@v5
uses: actions/checkout@v4
with:
repository: leanprover-community/batteries
token: ${{ secrets.MATHLIB4_BOT }}
ref: nightly-testing
fetch-depth: 0 # This ensures we check out all tags and branches.
filter: tree:0
- name: Check if tag exists
if: steps.workflow-info.outputs.pullRequestNumber != '' && steps.ready.outputs.mathlib_ready == 'true'
@@ -416,18 +282,16 @@ jobs:
if [ "$EXISTS" = "0" ]; then
echo "Branch does not exist, creating it."
git switch -c lean-pr-testing-${{ steps.workflow-info.outputs.pullRequestNumber }} "$BASE"
echo "leanprover/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-${{ env.SHORT_SHA }}" > lean-toolchain
echo "leanprover/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}" > lean-toolchain
git add lean-toolchain
git commit --allow-empty -m "Update lean-toolchain for testing https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
git commit -m "Update lean-toolchain for testing https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
else
echo "Branch already exists, updating lean-toolchain."
echo "Branch already exists, pushing an empty commit."
git switch lean-pr-testing-${{ steps.workflow-info.outputs.pullRequestNumber }}
# The Batteries `nightly-testing` or `nightly-testing-YYYY-MM-DD` branch may have moved since this branch was created, so merge their changes.
# (This should no longer be possible once `nightly-testing-YYYY-MM-DD` is a tag, but it is still safe to merge.)
git merge "$BASE" --strategy-option ours --no-commit --allow-unrelated-histories
echo "leanprover/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-${{ env.SHORT_SHA }}" > lean-toolchain
git add lean-toolchain
git commit --allow-empty -m "Update lean-toolchain for https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
git commit --allow-empty -m "Trigger CI for https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
fi
- name: Push changes
@@ -447,13 +311,12 @@ jobs:
# Checkout the mathlib4 repository with all branches
- name: Checkout mathlib4 repository
if: steps.workflow-info.outputs.pullRequestNumber != '' && steps.ready.outputs.mathlib_ready == 'true'
uses: actions/checkout@v5
uses: actions/checkout@v4
with:
repository: leanprover-community/mathlib4-nightly-testing
repository: leanprover-community/mathlib4
token: ${{ secrets.MATHLIB4_BOT }}
ref: nightly-testing
fetch-depth: 0 # This ensures we check out all tags and branches.
filter: tree:0
- name: install elan
run: |
@@ -483,99 +346,24 @@ jobs:
if [ "$EXISTS" = "0" ]; then
echo "Branch does not exist, creating it."
git switch -c lean-pr-testing-${{ steps.workflow-info.outputs.pullRequestNumber }} "$BASE"
echo "leanprover/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-${{ env.SHORT_SHA }}" > lean-toolchain
echo "leanprover/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}" > lean-toolchain
git add lean-toolchain
sed -i 's,require "leanprover-community" / "batteries" @ git ".\+",require "leanprover-community" / "batteries" @ git "lean-pr-testing-${{ steps.workflow-info.outputs.pullRequestNumber }}",' lakefile.lean
lake update batteries
git add lakefile.lean lake-manifest.json
git commit --allow-empty -m "Update lean-toolchain for testing https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
git commit -m "Update lean-toolchain for testing https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
else
echo "Branch already exists, updating lean-toolchain and bumping Batteries."
echo "Branch already exists, merging $BASE and bumping Batteries."
git switch lean-pr-testing-${{ steps.workflow-info.outputs.pullRequestNumber }}
# The Mathlib `nightly-testing` branch or `nightly-testing-YYYY-MM-DD` tag may have moved since this branch was created, so merge their changes.
# (This should no longer be possible once `nightly-testing-YYYY-MM-DD` is a tag, but it is still safe to merge.)
git merge "$BASE" --strategy-option ours --no-commit --allow-unrelated-histories
echo "leanprover/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-${{ env.SHORT_SHA }}" > lean-toolchain
git add lean-toolchain
lake update batteries
git add lake-manifest.json
git commit --allow-empty -m "Update lean-toolchain for https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
git commit --allow-empty -m "Trigger CI for https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
fi
- name: Push changes
if: steps.workflow-info.outputs.pullRequestNumber != '' && steps.ready.outputs.mathlib_ready == 'true'
run: |
git push origin lean-pr-testing-${{ steps.workflow-info.outputs.pullRequestNumber }}
- name: Add mathlib4-nightly-available label
if: steps.workflow-info.outputs.pullRequestNumber != '' && steps.ready.outputs.mathlib_ready == 'true'
uses: actions/github-script@v8
with:
script: |
await github.rest.issues.addLabels({
issue_number: ${{ steps.workflow-info.outputs.pullRequestNumber }},
owner: context.repo.owner,
repo: context.repo.repo,
labels: ['mathlib4-nightly-available']
})
# We next automatically create a reference manual branch using this toolchain.
# Reference manual CI will be responsible for reporting back success or failure
# to the PR comments asynchronously (and thus transitively SubVerso/Verso).
- name: Cleanup workspace
if: steps.workflow-info.outputs.pullRequestNumber != '' && steps.reference-manual-ready.outputs.manual_ready == 'true'
run: |
sudo rm -rf ./*
# Checkout the reference manual repository with all branches
- name: Checkout reference manual repository
if: steps.workflow-info.outputs.pullRequestNumber != '' && steps.reference-manual-ready.outputs.manual_ready == 'true'
uses: actions/checkout@v5
with:
repository: leanprover/reference-manual
token: ${{ secrets.MANUAL_PR_BOT }}
ref: nightly-testing
fetch-depth: 0 # This ensures we check out all tags and branches.
filter: tree:0
- name: Check if tag in reference manual exists
if: steps.workflow-info.outputs.pullRequestNumber != '' && steps.reference-manual-ready.outputs.manual_ready == 'true'
id: check_manual_tag
run: |
git config user.name "leanprover-bot"
git config user.email "leanprover-bot@lean-fro.org"
if git ls-remote --heads --tags --exit-code origin "nightly-testing-${MOST_RECENT_NIGHTLY}" >/dev/null; then
BASE="nightly-testing-${MOST_RECENT_NIGHTLY}"
else
echo "Couldn't find a 'nightly-testing-${MOST_RECENT_NIGHTLY}' branch in the reference manual. Falling back to 'nightly-testing'."
BASE=nightly-testing
fi
echo "Using base tag: $BASE"
EXISTS="$(git ls-remote --heads origin lean-pr-testing-${{ steps.workflow-info.outputs.pullRequestNumber }} | wc -l)"
echo "Branch exists: $EXISTS"
if [ "$EXISTS" = "0" ]; then
echo "Branch does not exist, creating it."
git switch -c lean-pr-testing-${{ steps.workflow-info.outputs.pullRequestNumber }} "$BASE"
echo "leanprover/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-${{ env.SHORT_SHA }}" > lean-toolchain
git add lean-toolchain
git add lakefile.lean lake-manifest.json
git commit --allow-empty -m "Update lean-toolchain for testing https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
else
echo "Branch already exists, updating lean-toolchain."
git switch lean-pr-testing-${{ steps.workflow-info.outputs.pullRequestNumber }}
# The reference manual's `nightly-testing` branch or `nightly-testing-YYYY-MM-DD` tag may have moved since this branch was created, so merge their changes.
# (This should no longer be possible once `nightly-testing-YYYY-MM-DD` is a tag, but it is still safe to merge.)
git merge "$BASE" --strategy-option ours --no-commit --allow-unrelated-histories
echo "leanprover/lean4-pr-releases:pr-release-${{ steps.workflow-info.outputs.pullRequestNumber }}-${{ env.SHORT_SHA }}" > lean-toolchain
git add lean-toolchain
git add lake-manifest.json
git commit --allow-empty -m "Update lean-toolchain for https://github.com/leanprover/lean4/pull/${{ steps.workflow-info.outputs.pullRequestNumber }}"
fi
- name: Push changes
if: steps.workflow-info.outputs.pullRequestNumber != '' && steps.reference-manual-ready.outputs.manual_ready == 'true'
run: |
git push origin lean-pr-testing-${{ steps.workflow-info.outputs.pullRequestNumber }}

View File

@@ -10,11 +10,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check PR title
uses: actions/github-script@v8
uses: actions/github-script@v7
with:
script: |
const msg = context.payload.pull_request? context.payload.pull_request.title : context.payload.merge_group.head_commit.message;
console.log(`Message: ${msg}`)
if (!/^(feat|fix|doc|style|refactor|test|chore|perf): (?![A-Z][a-z]).*[^.]($|\n\n)/.test(msg)) {
if (!/^(feat|fix|doc|style|refactor|test|chore|perf): .*[^.]($|\n\n)/.test(msg)) {
core.setFailed('PR title does not follow the Commit Convention (https://leanprover.github.io/lean4/doc/dev/commit_convention.html).');
}

View File

@@ -11,7 +11,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v10
- uses: actions/stale@v9
with:
days-before-stale: -1
days-before-pr-stale: 30

View File

@@ -18,16 +18,12 @@ concurrency:
jobs:
update-stage0:
runs-on: nscloud-ubuntu-22.04-amd64-8x16
env:
CCACHE_DIR: ${{ github.workspace }}/.ccache
CCACHE_COMPRESS: true
CCACHE_MAXSIZE: 400M
runs-on: ubuntu-latest
steps:
# This action should push to an otherwise protected branch, so it
# uses a deploy key with write permissions, as suggested at
# https://stackoverflow.com/a/76135647/946226
- uses: actions/checkout@v5
- uses: actions/checkout@v4
with:
ssh-key: ${{secrets.STAGE0_SSH_KEY}}
- run: echo "should_update_stage0=yes" >> "$GITHUB_ENV"
@@ -44,45 +40,34 @@ jobs:
run: |
git config --global user.name "Lean stage0 autoupdater"
git config --global user.email "<>"
# Would be nice, but does not work yet:
# https://github.com/DeterminateSystems/magic-nix-cache/issues/39
# This action does not run that often and building runs in a few minutes, so ok for now
#- if: env.should_update_stage0 == 'yes'
# uses: DeterminateSystems/magic-nix-cache-action@v2
- if: env.should_update_stage0 == 'yes'
name: Restore Build Cache
uses: actions/cache/restore@v4
with:
path: nix-store-cache
key: Nix Linux-nix-store-cache-${{ github.sha }}
# fall back to (latest) previous cache
restore-keys: |
Nix Linux-nix-store-cache
- if: env.should_update_stage0 == 'yes'
name: Further Set Up Nix Cache
shell: bash -euxo pipefail {0}
run: |
# Nix seems to mutate the cache, so make a copy
cp -r nix-store-cache nix-store-cache-copy || true
- if: env.should_update_stage0 == 'yes'
name: Install Nix
uses: DeterminateSystems/nix-installer-action@main
- name: Open Nix shell once
if: env.should_update_stage0 == 'yes'
run: true
shell: 'nix develop -c bash -euxo pipefail {0}'
- name: Set up NPROC
if: env.should_update_stage0 == 'yes'
run: |
echo "NPROC=$(nproc 2>/dev/null || sysctl -n hw.logicalcpu 2>/dev/null || echo 4)" >> $GITHUB_ENV
shell: 'nix develop -c bash -euxo pipefail {0}'
- name: Restore Cache
if: env.should_update_stage0 == 'yes'
uses: actions/cache/restore@v4
with:
# NOTE: must be in sync with `restore-cache` in `build-template.yml`
path: |
.ccache
build/stage1/**/*.trace
build/stage1/**/*.olean*
build/stage1/**/*.ilean
build/stage1/**/*.ir
build/stage1/**/*.c
build/stage1/**/*.c.o*
key: Linux Lake-build-v4-${{ github.sha }}
# fall back to (latest) previous cache
restore-keys: |
Linux Lake-build-v4
extra-conf: |
substituters = file://${{ github.workspace }}/nix-store-cache-copy?priority=10&trusted=true https://cache.nixos.org
- if: env.should_update_stage0 == 'yes'
# sync options with `Linux Lake` to ensure cache reuse
run: |
mkdir -p build
cmake --preset release -B build -DLEAN_EXTRA_MAKE_OPTS=-DwarningAsError=true
shell: 'nix develop -c bash -euxo pipefail {0}'
- if: env.should_update_stage0 == 'yes'
run: |
make -j$NPROC -C build update-stage0-commit
shell: 'nix develop -c bash -euxo pipefail {0}'
run: nix run .#update-stage0-commit
- if: env.should_update_stage0 == 'yes'
run: git show --stat
- if: env.should_update_stage0 == 'yes' && github.event_name == 'push'

3
.gitignore vendored
View File

@@ -6,6 +6,7 @@
lake-manifest.json
/build
/src/lakefile.toml
/tests/lakefile.toml
/lakefile.toml
GPATH
GRTAGS
@@ -20,6 +21,7 @@ tasks.json
settings.json
.gdb_history
.vscode/*
!.vscode/settings.json
script/__pycache__
*.produced.out
CMakeSettings.json
@@ -30,4 +32,3 @@ fwOut.txt
wdErr.txt
wdIn.txt
wdOut.txt
downstream_releases/

View File

@@ -16,7 +16,7 @@ foreach(var ${vars})
list(APPEND STAGE1_ARGS "-D${CMAKE_MATCH_1}=${${var}}")
elseif("${currentHelpString}" MATCHES "No help, variable specified on the command line." OR "${currentHelpString}" STREQUAL "")
list(APPEND CL_ARGS "-D${var}=${${var}}")
if("${var}" MATCHES "USE_GMP|CHECK_OLEAN_VERSION|LEAN_VERSION_.*|LEAN_SPECIAL_VERSION_DESC")
if("${var}" MATCHES "USE_GMP|CHECK_OLEAN_VERSION")
# must forward options that generate incompatible .olean format
list(APPEND STAGE0_ARGS "-D${var}=${${var}}")
elseif("${var}" MATCHES "LLVM*|PKG_CONFIG|USE_LAKE|USE_MIMALLOC")
@@ -44,9 +44,7 @@ if (NOT ${CMAKE_SYSTEM_NAME} MATCHES "Emscripten")
set(CADICAL_CXX c++)
if (CADICAL_USE_CUSTOM_CXX)
set(CADICAL_CXX ${CMAKE_CXX_COMPILER})
# Use same platform flags as for Lean executables, in particular from `prepare-llvm-linux.sh`,
# but not Lean-specific `LEAN_EXTRA_CXX_FLAGS` such as fsanitize.
set(CADICAL_CXXFLAGS "${CMAKE_CXX_FLAGS}")
set(CADICAL_CXXFLAGS "${LEAN_EXTRA_CXX_FLAGS}")
set(CADICAL_LDFLAGS "-Wl,-rpath=\\$$ORIGIN/../lib")
endif()
find_program(CCACHE ccache)
@@ -149,10 +147,6 @@ add_custom_target(test
COMMAND $(MAKE) -C stage1 test
DEPENDS stage1)
add_custom_target(clean-stdlib
COMMAND $(MAKE) -C stage1 clean-stdlib
DEPENDS stage1)
install(CODE "execute_process(COMMAND make -C stage1 install)")
add_custom_target(check-stage3

View File

@@ -41,7 +41,7 @@
"SMALL_ALLOCATOR": "OFF",
"USE_MIMALLOC": "OFF",
"BSYMBOLIC": "OFF",
"LEAN_TEST_VARS": "MAIN_STACK_SIZE=16000 LSAN_OPTIONS=max_leaks=10"
"LEAN_TEST_VARS": "MAIN_STACK_SIZE=16000"
},
"generator": "Unix Makefiles",
"binaryDir": "${sourceDir}/build/sanitize"

View File

@@ -7,9 +7,9 @@
/.github/ @kim-em
/RELEASES.md @kim-em
/src/kernel/ @leodemoura
/src/library/compiler/ @hargoniX
/src/library/compiler/ @zwarich
/src/lake/ @tydeu
/src/Lean/Compiler/ @leodemoura @hargoniX
/src/Lean/Compiler/ @leodemoura @zwarich
/src/Lean/Data/Lsp/ @mhuisi
/src/Lean/Elab/Deriving/ @kim-em
/src/Lean/Elab/Tactic/ @kim-em
@@ -45,10 +45,3 @@
/src/Std/Tactic/BVDecide/ @hargoniX
/src/Lean/Elab/Tactic/BVDecide/ @hargoniX
/src/Std/Sat/ @hargoniX
/src/Std/Do @sgraf812
/src/Std/Tactic/Do @sgraf812
/src/Lean/Elab/Tactic/Do @sgraf812
/src/Init/Data/Range/Polymorphic @datokrat
/src/Init/Data/Slice @datokrat
/src/Init/Data/Iterators @datokrat
/src/Std/Data/Iterators @datokrat

View File

@@ -2,19 +2,19 @@ This is the repository for **Lean 4**.
# About
- [Quickstart](https://lean-lang.org/install/)
- [Quickstart](https://lean-lang.org/documentation/setup/)
- [Homepage](https://lean-lang.org)
- [Theorem Proving Tutorial](https://lean-lang.org/theorem_proving_in_lean4/)
- [Functional Programming in Lean](https://lean-lang.org/functional_programming_in_lean/)
- [Documentation Overview](https://lean-lang.org/learn/)
- [Documentation Overview](https://lean-lang.org/documentation/)
- [Language Reference](https://lean-lang.org/doc/reference/latest/)
- [Release notes](RELEASES.md) starting at v4.0.0-m3
- [Examples](https://lean-lang.org/examples/)
- [Examples](https://lean-lang.org/documentation/examples/)
- [External Contribution Guidelines](CONTRIBUTING.md)
# Installation
See [Install Lean](https://lean-lang.org/install/).
See [Setting Up Lean](https://lean-lang.org/documentation/setup/).
# Contributing

View File

@@ -1,6 +1,6 @@
# Lean Build Bootstrapping
Lean is a bootstrapped program: the
Since version 4, Lean is a partially bootstrapped program: most parts of the
frontend and compiler are written in Lean itself and thus need to be built before
building Lean itself - which is needed to again build those parts. This cycle is
broken by using pre-built C files checked into the repository (which ultimately
@@ -72,14 +72,6 @@ update the archived C source code of the stage 0 compiler in `stage0/src`.
The github repository will automatically update stage0 on `master` once
`src/stdlib_flags.h` and `stage0/src/stdlib_flags.h` are out of sync.
To trigger this, modify `stage0/src/stdlib_flags.h` (e.g., by adding or changing
a comment). When `update-stage0` runs, it will overwrite `stage0/src/stdlib_flags.h`
with the contents of `src/stdlib_flags.h`, bringing them back in sync.
NOTE: A full rebuild of stage 1 will only be triggered when the *committed* contents of `stage0/` are changed.
Thus if you change files in it manually instead of through `update-stage0-commit` (see below) or fetching updates from git, you either need to commit those changes first or run `make -C build/release clean-stdlib`.
The same is true for further stages except that a rebuild of them is retriggered on any committed change, not just to a specific directory.
Thus, when debugging e.g. stage 2 failures, you can resume the build from the point of failure, but you may want to explicitly call `clean-stdlib`, either to pick up changes from `.olean` files of modules that already built successfully or to check that you did not break modules that built successfully at some prior point.
If you have write access to the lean4 repository, you can also manually
trigger that process, for example to be able to use new features in the compiler itself.
@@ -90,13 +82,13 @@ gh workflow run update-stage0.yml
```
Leaving stage0 updates to the CI automation is preferable, but should you need
to do it locally, you can use `make -C build/release update-stage0-commit` to
update `stage0` from `stage1` or `make -C build/release/stageN update-stage0-commit` to
to do it locally, you can use `make update-stage0-commit` in `build/release` to
update `stage0` from `stage1` or `make -C stageN update-stage0-commit` to
update from another stage. This command will automatically stage the updated files
and introduce a commit, so make sure to commit your work before that.
and introduce a commit,so make sure to commit your work before that.
If you rebased the branch (either onto a newer version of `master`, or fixing
up some commits prior to the stage0 update), recreate the stage0 update commits.
up some commits prior to the stage0 update, recreate the stage0 update commits.
The script `script/rebase-stage0.sh` can be used for that.
The CI should prevent PRs with changes to stage0 (besides `stdlib_flags.h`)

View File

@@ -1,9 +1,183 @@
# Foreign Function Interface
The Lean FFI documentation is now part of the [Lean language reference](https://lean-lang.org/doc/reference/latest/).
NOTE: The current interface was designed for internal use in Lean and should be considered **unstable**.
It will be refined and extended in the future.
* [General FFI](https://lean-lang.org/doc/reference/latest/find/?domain=Verso.Genre.Manual.section&name=ffi)
* [Representation of inductive types](https://lean-lang.org/doc/reference/latest/find/?domain=Verso.Genre.Manual.section&name=inductive-types-ffi)
* [String](https://lean-lang.org/doc/reference/latest/find/?domain=Verso.Genre.Manual.section&name=string-ffi)
* [Array](https://lean-lang.org/doc/reference/latest/find/?domain=Verso.Genre.Manual.section&name=array-ffi)
As Lean is written partially in Lean itself and partially in C++, it offers efficient interoperability between the two languages (or rather, between Lean and any language supporting C interfaces).
This support is however currently limited to transferring Lean data types; in particular, it is not possible yet to pass or return compound data structures such as C `struct`s by value from or to Lean.
There are two primary attributes for interoperating with other languages:
* `@[extern "sym"] constant leanSym : ...` binds a Lean declaration to the external symbol `sym`.
It can also be used with `def` to provide an internal definition, but ensuring consistency of both definitions is up to the user.
* `@[export sym] def leanSym : ...` exports `leanSym` under the unmangled symbol name `sym`.
For simple examples of how to call foreign code from Lean and vice versa, see <https://github.com/leanprover/lean4/blob/master/src/lake/examples/ffi> and <https://github.com/leanprover/lean4/blob/master/src/lake/examples/reverse-ffi>, respectively.
## The Lean ABI
The Lean Application Binary Interface (ABI) describes how the signature of a Lean declaration is encoded as a native calling convention.
It is based on the standard C ABI and calling convention of the target platform.
For a Lean declaration marked with either `@[extern "sym"]` or `@[export sym]` for some symbol name `sym`, let `α₁ → ... → αₙ → β` be the normalized declaration's type.
If `n` is 0, the corresponding C declaration is
```c
extern s sym;
```
where `s` is the C translation of `β` as specified in the next section.
In the case of an `@[extern]` definition, the symbol's value is guaranteed to be initialized only after calling the Lean module's initializer or that of an importing module; see [Initialization](#initialization).
If `n` is greater than 0, the corresponding C declaration is
```c
s sym(t₁, ..., tₙ);
```
where the parameter types `tᵢ` are the C translation of the `αᵢ` as in the next section.
In the case of `@[extern]` all *irrelevant* types are removed first; see next section.
### Translating Types from Lean to C
* The integer types `UInt8`, ..., `UInt64`, `USize` are represented by the C types `uint8_t`, ..., `uint64_t`, `size_t`, respectively
* `Char` is represented by `uint32_t`
* `Float` is represented by `double`
* An *enum* inductive type of at least 2 and at most 2^32 constructors, each of which with no parameters, is represented by the first type of `uint8_t`, `uint16_t`, `uint32_t` that is sufficient to represent all constructor indices.
For example, the type `Bool` is represented as `uint8_t` with values `0` for `false` and `1` for `true`.
* `Decidable α` is represented the same way as `Bool`
* An inductive type with a *trivial structure*, that is,
* it is none of the types described above
* it is not marked `unsafe`
* it has a single constructor with a single parameter of *relevant* type
is represented by the representation of that parameter's type.
For example, `{ x : α // p }`, the `Subtype` structure of a value of type `α` and an irrelevant proof, is represented by the representation of `α`.
Similarly, the signed integer types `Int8`, ..., `Int64`, `ISize` are also represented by the unsigned C types `uint8_t`, ..., `uint64_t`, `size_t`, respectively, because they have a trivial structure.
* `Nat` and `Int` are represented by `lean_object *`.
Their runtime values are either a pointer to an opaque bignum object or, if the lowest bit of the "pointer" is 1 (`lean_is_scalar`), an encoded unboxed natural number or integer (`lean_box`/`lean_unbox`).
* A universe `Sort u`, type constructor `... → Sort u`, or proposition `p : Prop` is *irrelevant* and is either statically erased (see above) or represented as a `lean_object *` with the runtime value `lean_box(0)`
* Any other type is represented by `lean_object *`.
Its runtime value is a pointer to an object of a subtype of `lean_object` (see the "Inductive types" section below) or the unboxed value `lean_box(cidx)` for the `cidx`th constructor of an inductive type if this constructor does not have any relevant parameters.
Example: the runtime value of `u : Unit` is always `lean_box(0)`.
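For illustration, here is a sketch of how these rules combine for a hypothetical binding (the Lean declaration, the symbol name `my_lookup`, and the C prototype below are made up for this example and are not part of Lean):
```c
#include <lean/lean.h>

/* Hypothetical Lean declaration:

     @[extern "my_lookup"]
     opaque myLookup : UInt32 → Array UInt32 → Option UInt32

   Applying the translation rules above:
   - `UInt32`        becomes `uint32_t`
   - `Array UInt32`  becomes `lean_object *` (owned by default; see "Borrowing" below)
   - `Option UInt32` becomes `lean_object *` (a constructor object, or `lean_box(0)` for `none`)
*/
lean_object * my_lookup(uint32_t key, lean_object * xs);
```
The return value is passed back owned, so a C implementation would return either `lean_box(0)` (for `none`) or a freshly allocated `some` cell.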
#### Inductive types
For inductive types which are in the fallback `lean_object *` case above and not trivial constructors, the type is stored as a `lean_ctor_object`, and `lean_is_ctor` will return true. A `lean_ctor_object` stores the constructor index in the header, and the fields are stored in the `m_objs` portion of the object.
The memory order of the fields is derived from the types and order of the fields in the declaration. They are ordered as follows:
* Non-scalar fields stored as `lean_object *`
* Fields of type `USize`
* Other scalar fields, in decreasing order by size
Within each group the fields are ordered in declaration order. **Warning**: Trivial wrapper types still count toward a field being treated as non-scalar for this purpose.
* To access fields of the first kind, use `lean_ctor_get(val, i)` to get the `i`th non-scalar field.
* To access `USize` fields, use `lean_ctor_get_usize(val, n+i)` to get the `i`th usize field, where `n` is the total number of fields of the first kind.
* To access other scalar fields, use `lean_ctor_get_uintN(val, off)` or `lean_ctor_get_usize(val, off)` as appropriate. Here `off` is the byte offset of the field in the structure, starting at `n*sizeof(void*)` where `n` is the number of fields of the first two kinds.
For example, a structure such as
```lean
structure S where
ptr_1 : Array Nat
usize_1 : USize
sc64_1 : UInt64
ptr_2 : { x : UInt64 // x > 0 } -- wrappers don't count as scalars
sc64_2 : Float -- `Float` is 64 bit
sc8_1 : Bool
sc16_1 : UInt16
sc8_2 : UInt8
sc64_3 : UInt64
usize_2 : USize
ptr_3 : Char -- trivial wrapper around `UInt32`
sc32_1 : UInt32
sc16_2 : UInt16
```
would get re-sorted into the following memory order:
* `S.ptr_1` - `lean_ctor_get(val, 0)`
* `S.ptr_2` - `lean_ctor_get(val, 1)`
* `S.ptr_3` - `lean_ctor_get(val, 2)`
* `S.usize_1` - `lean_ctor_get_usize(val, 3)`
* `S.usize_2` - `lean_ctor_get_usize(val, 4)`
* `S.sc64_1` - `lean_ctor_get_uint64(val, sizeof(void*)*5)`
* `S.sc64_2` - `lean_ctor_get_float(val, sizeof(void*)*5 + 8)`
* `S.sc64_3` - `lean_ctor_get_uint64(val, sizeof(void*)*5 + 16)`
* `S.sc32_1` - `lean_ctor_get_uint32(val, sizeof(void*)*5 + 24)`
* `S.sc16_1` - `lean_ctor_get_uint16(val, sizeof(void*)*5 + 28)`
* `S.sc16_2` - `lean_ctor_get_uint16(val, sizeof(void*)*5 + 30)`
* `S.sc8_1` - `lean_ctor_get_uint8(val, sizeof(void*)*5 + 32)`
* `S.sc8_2` - `lean_ctor_get_uint8(val, sizeof(void*)*5 + 33)`
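As a sketch (these helper functions are hypothetical and only restate two of the accessors listed above), the same fields could be read from C like this:
```c
#include <lean/lean.h>

/* Hypothetical helpers (not part of Lean) for the structure `S` above:
   scalar fields start at offset sizeof(void*)*5 because `S` has
   5 pointer/USize fields. */
static lean_object * S_ptr_1(lean_object * s) {
    return lean_ctor_get(s, 0);                         /* S.ptr_1 : Array Nat */
}

static double S_sc64_2(lean_object * s) {
    return lean_ctor_get_float(s, sizeof(void*)*5 + 8); /* S.sc64_2 : Float */
}
```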
### Borrowing
By default, all `lean_object *` parameters of an `@[extern]` function are considered *owned*, i.e. the external code is passed a "virtual RC token" and is responsible for passing this token along to another consuming function (exactly once) or freeing it via `lean_dec`.
To reduce reference counting overhead, parameters can be marked as *borrowed* by prefixing their type with `@&`.
Borrowed objects must only be passed to other non-consuming functions (arbitrarily often) or converted to owned values using `lean_inc`.
In `lean.h`, the `lean_object *` aliases `lean_obj_arg` and `b_lean_obj_arg` are used to mark this difference on the C side.
Return values and `@[export]` parameters are always owned at the moment.
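As a minimal sketch (the symbol `my_first_or` and its Lean binding are hypothetical), a borrowed array parameter might be used from C as follows:
```c
#include <lean/lean.h>

/* Hypothetical Lean binding:

     @[extern "my_first_or"]
     opaque myFirstOr : @& Array UInt32 → UInt32 → UInt32
*/
uint32_t my_first_or(b_lean_obj_arg arr, uint32_t dflt) {
    /* `arr` is borrowed: it may be inspected freely, but it must not be passed
       to a consuming function or lean_dec'd (call lean_inc first if needed). */
    if (lean_array_size(arr) == 0)
        return dflt;
    /* lean_array_get_core returns a borrowed reference to the boxed element. */
    return lean_unbox_uint32(lean_array_get_core(arr, 0));
}
```
Because nothing here consumes `arr`, no reference counting calls are necessary.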
## Initialization
When including Lean code as part of a larger program, modules must be *initialized* before accessing any of their declarations.
Module initialization entails
* initialization of all "constants" (nullary functions), including closed terms lifted out of other functions
* execution of all `[init]` functions
* execution of all `[builtin_init]` functions, if the `builtin` parameter of the module initializer has been set
The module initializer is automatically run with the `builtin` flag for executables compiled from Lean code and for "plugins" loaded with `lean --plugin`.
For all other modules imported by `lean`, the initializer is run without `builtin`.
Thus `[init]` functions are run iff their module is imported, regardless of whether they have native code available or not, while `[builtin_init]` functions are only run for native executable or plugins, regardless of whether their module is imported or not.
`lean` uses built-in initializers for e.g. registering basic parsers that should be available even without importing their module (which is necessary for bootstrapping).
The initializer for module `A.B` is called `initialize_A_B` and will automatically initialize any imported modules.
Module initializers are idempotent (when run with the same `builtin` flag), but not thread-safe.
Together with initialization of the Lean runtime, you should execute code like the following exactly once before accessing any Lean declarations:
```c
void lean_initialize_runtime_module();
void lean_initialize();
lean_object * initialize_A_B(uint8_t builtin, lean_object *);
lean_object * initialize_C(uint8_t builtin, lean_object *);
...
lean_initialize_runtime_module();
//lean_initialize(); // necessary (and replaces `lean_initialize_runtime_module`) if you (indirectly) access the `Lean` package
lean_object * res;
// use same default as for Lean executables
uint8_t builtin = 1;
res = initialize_A_B(builtin, lean_io_mk_world());
if (lean_io_result_is_ok(res)) {
lean_dec_ref(res);
} else {
lean_io_result_show_error(res);
lean_dec(res);
return ...; // do not access Lean declarations if initialization failed
}
res = initialize_C(builtin, lean_io_mk_world());
if (lean_io_result_is_ok(res)) {
...
//lean_init_task_manager(); // necessary if you (indirectly) use `Task`
lean_io_mark_end_initialization();
```
In addition, any other thread not spawned by the Lean runtime itself must be initialized for Lean use by calling
```c
void lean_initialize_thread();
```
and should be finalized in order to free all thread-local resources by calling
```c
void lean_finalize_thread();
```
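As a sketch (the worker function is hypothetical), a thread created outside the Lean runtime, e.g. via pthreads, would typically bracket its use of Lean code like this:
```c
#include <lean/lean.h>

/* Hypothetical thread entry point: set up Lean's thread-local state before
   calling into Lean code, and tear it down again before the thread exits. */
static void * worker(void * arg) {
    (void)arg;
    lean_initialize_thread();
    /* ... call Lean functions from already-initialized modules here ... */
    lean_finalize_thread();
    return NULL;
}
```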
## `@[extern]` in the Interpreter
The interpreter can run Lean declarations for which symbols are available in loaded shared libraries, which includes `@[extern]` declarations.
Thus to e.g. run `#eval` on such a declaration, you need to
1. compile (at least) the module containing the declaration and its dependencies into a shared library, and then
1. pass this library to `lean --load-dynlib=` to run code `import`ing this module.
Note that it is not sufficient to load the foreign library containing the external symbol because the interpreter depends on code that is emitted for each `@[extern]` declaration.
Thus it is not possible to interpret an `@[extern]` declaration in the same file.
See [`tests/compiler/foreign`](https://github.com/leanprover/lean4/tree/master/tests/compiler/foreign/) for an example.

View File

@@ -8,8 +8,8 @@ You should not edit the `stage0` directory except using the commands described i
## Development Setup
You can use any of the [supported editors](https://lean-lang.org/install/manual/) for editing the Lean source code.
Please see below for specific instructions for VS Code.
You can use any of the [supported editors](../setup.md) for editing the Lean source code.
If you set up `elan` as below, opening `src/` as a *workspace folder* should ensure that stage 0 (i.e. the stage that first compiles `src/`) will be used for files in that directory.
### Dev setup using elan
@@ -68,10 +68,6 @@ code lean.code-workspace
```
on the command line.
You can use the `Refresh File Dependencies` command as in other projects to rebuild modules from inside VS Code but be aware that this does not trigger any non-Lake build targets.
In particular, after updating `stage0/` (or fetching an update to it), you will want to invoke `make` directly to rebuild `stage0/bin/lean` as described in [building Lean](../make/index.md).
You should then run the `Restart Server` command to update all open files and the server watchdog process as well.
### `ccache`
Lean's build process uses [`ccache`](https://ccache.dev/) if it is
@@ -89,29 +85,5 @@ such that changing files in `Init` doesn't force a full rebuild of `Lean`.
You can test a Lean PR against Mathlib and Batteries by rebasing your PR
on to `nightly-with-mathlib` branch. (It is fine to force push after rebasing.)
CI will generate a branch of Mathlib and Batteries called `lean-pr-testing-NNNN`
on the `leanprover-community/mathlib4-nightly-testing` fork of Mathlib.
This branch uses the toolchain for your PR, and will report back to the Lean PR with results from Mathlib CI.
that uses the toolchain for your PR, and will report back to the Lean PR with results from Mathlib CI.
See https://leanprover-community.github.io/contribute/tags_and_branches.html for more details.
### Testing against the Lean Language Reference
You can test a Lean PR against the reference manual by rebasing your PR
on to `nightly-with-manual` branch. (It is fine to force push after rebasing.)
CI will generate a branch of the reference manual called `lean-pr-testing-NNNN`
in `leanprover/reference-manual`. This branch uses the toolchain for your PR,
and will report back to the Lean PR with results from Mathlib CI.
### Avoiding rebuilds for downstream projects
If you want to test changes to Lean on downstream projects and would like to avoid rebuilding modules you have already built/fetched using the project's configured Lean toolchain, you can often do so as long as your build of Lean is close enough to that Lean toolchain (compatible .olean format including structure of all relevant environment extensions).
To override the toolchain without rebuilding for a single command, for example `lake build` or `lake lean`, you can use the prefix
```
LEAN_GITHASH=$(lean --githash) lake +lean4 ...
```
Alternatively, use
```
export LEAN_GITHASH=$(lean --githash)
export ELAN_TOOLCHAIN=lean4
```
to persist these changes for the lifetime of the current shell, which will affect any processes spawned from it such as VS Code started via `code .`.
If you use a setup where you cannot directly start your editor from the command line, such as VS Code Remote, you might want to consider using [direnv](https://direnv.net/) together with an editor extension for it instead so that you can put the lines above into `.envrc`.

View File

@@ -50,7 +50,7 @@ We'll use `v4.6.0` as the intended release version as a running example.
- Re-running `script/release_checklist.py` will then create the tag `v4.6.0` from `master`/`main` and push it (unless `toolchain-tag: false` in the `release_repos.yml` file)
- `script/release_checklist.py` will then merge the tag `v4.6.0` into the `stable` branch and push it (unless `stable-branch: false` in the `release_repos.yml` file).
- Special notes on repositories with exceptional requirements:
- `doc-gen4` has additional dependencies which we do not update at each toolchain release, although occasionally these break and need to be updated manually.
- `doc-gen4` has addition dependencies which we do not update at each toolchain release, although occasionally these break and need to be updated manually.
- `verso`:
- The `subverso` dependency is unusual in that it needs to be compatible with _every_ Lean release simultaneously.
Usually you don't need to do anything.
@@ -69,10 +69,6 @@ We'll use `v4.6.0` as the intended release version as a running example.
- `repl`:
There are two copies of `lean-toolchain`/`lakefile.lean`:
in the root, and in `test/Mathlib/`. Edit both, and run `lake update` in both directories.
- `lean-fro.org`:
After updating the toolchains and running `lake update`, you must run `scripts/update.sh` to regenerate
the site content. This script updates generated files that depend on the Lean version.
The `release_steps.py` script handles this automatically.
- An awkward situation that sometimes occurs (e.g. with Verso) is that the `master`/`main` branch has already been moved
to a nightly toolchain that comes *after* the stable toolchain we are
targeting. In this case it is necessary to create a branch `releases/v4.6.0` from the last commit which was on
@@ -98,8 +94,6 @@ We'll use `v4.6.0` as the intended release version as a running example.
This checklist walks you through creating the first release candidate for a version of Lean.
For subsequent release candidates, the process is essentially the same, but we start out with the `releases/v4.7.0` branch already created.
We'll use `v4.7.0-rc1` as the intended release version in this example.
- Decide which nightly release you want to turn into a release candidate.
@@ -118,7 +112,7 @@ We'll use `v4.7.0-rc1` as the intended release version in this example.
git fetch nightly tag nightly-2024-02-29
git checkout nightly-2024-02-29
git checkout -b releases/v4.7.0
git push --set-upstream origin releases/v4.7.0
git push --set-upstream origin releases/v4.18.0
```
- In `src/CMakeLists.txt`,
- verify that you see `set(LEAN_VERSION_MINOR 7)` (for whichever `7` is appropriate); this should already have been updated when the development cycle began.

View File

@@ -51,10 +51,6 @@ All these tests are included by [src/shell/CMakeLists.txt](https://github.com/le
codes and do not check the expected output even though output is
produced, it is ignored.
**Note:** Tests in this directory run with `-Dlinter.all=false` to reduce noise.
If your test needs to verify linter behavior (e.g., deprecation warnings),
explicitly enable the relevant linter with `set_option linter.<name> true`.
- [`tests/lean/interactive`](https://github.com/leanprover/lean4/tree/master/tests/lean/interactive/): are designed to test server requests at a
given position in the input file. Each .lean file contains comments
that indicate how to simulate a client request at that position.
@@ -63,7 +59,7 @@ All these tests are included by [src/shell/CMakeLists.txt](https://github.com/le
open Foo in
theorem tst2 (h : a ≤ b) : a + 2 ≤ b + 2 :=
Bla.
--^ completion
--^ textDocument/completion
```
In this example, the test driver [`test_single.sh`](https://github.com/leanprover/lean4/tree/master/tests/lean/interactive/test_single.sh) will simulate an
auto-completion request at `Bla.`. The expected output is stored in

View File

@@ -282,7 +282,7 @@ theorem BinTree.find_insert_of_ne (b : BinTree β) (ne : k ≠ k') (v : β)
let t, h := b; simp
induction t with simp
| leaf =>
intro le
intros le
exact Nat.lt_of_le_of_ne le ne
| node left key value right ihl ihr =>
let .node hl hr bl br := h

View File

@@ -94,8 +94,10 @@ theorem List.palindrome_of_eq_reverse (h : as.reverse = as) : Palindrome as := b
next => exact Palindrome.nil
next a => exact Palindrome.single a
next a b as ih =>
obtain rfl, h, - := by simpa using h
exact Palindrome.sandwich b (ih h)
have : a = b := by simp_all
subst this
have : as.reverse = as := by simp_all
exact Palindrome.sandwich a (ih this)
/-!
We now define a function that returns `true` iff `as` is a palindrome.

View File

@@ -1,6 +1,6 @@
These are instructions to set up a working development environment for those who wish to make changes to Lean itself. It is part of the [Development Guide](../dev/index.md).
We strongly suggest that new users instead follow the [Installation Instructions](https://lean-lang.org/install/) to get started using Lean, since this sets up an environment that can automatically manage multiple Lean toolchain versions, which is necessary when working within the Lean ecosystem.
We strongly suggest that new users instead follow the [Quickstart](../quickstart.md) to get started using Lean, since this sets up an environment that can automatically manage multiple Lean toolchain versions, which is necessary when working within the Lean ecosystem.
Requirements
------------
@@ -44,12 +44,12 @@ Useful CMake Configuration Settings
Pass these along with the `cmake --preset release` command.
There are also two alternative presets that combine some of these options you can use instead of `release`: `debug` and `sandebug` (sanitize + debug).
* `-DCMAKE_BUILD_TYPE=`\
* `-D CMAKE_BUILD_TYPE=`\
Select the build type. Valid values are `RELEASE` (default), `DEBUG`,
`RELWITHDEBINFO`, and `MINSIZEREL`.
* `-DCMAKE_C_COMPILER=`\
`-DCMAKE_CXX_COMPILER=`\
* `-D CMAKE_C_COMPILER=`\
`-D CMAKE_CXX_COMPILER=`\
Select the C/C++ compilers to use. Official Lean releases currently use Clang;
see also `.github/workflows/ci.yml` for the CI config.

View File

@@ -1,4 +1,4 @@
# Install Packages on OS X
# Install Packages on OS X 14.5
We assume that you are using [homebrew][homebrew] as a package manager.
@@ -6,23 +6,23 @@ We assume that you are using [homebrew][homebrew] as a package manager.
## Compilers
You need a C++14-compatible compiler to build Lean. As of July
2025, you have three options:
You need a C++11-compatible compiler to build Lean. As of November
2014, you have three options:
- clang++ shipped with OSX (at time of writing v17.0.0)
- clang++ via homebrew (at time of writing, v20.1.8)
- gcc via homebrew (at time of writing, v15.1.0)
- clang++-3.5 (shipped with OSX, Apple LLVM version 6.0)
- gcc-4.9.1 (homebrew)
- clang++-3.5 (homebrew)
We recommend to use Apple's clang++ because it is pre-shipped with OS
X and requires no further installation.
To install gcc via homebrew, please execute:
To install gcc-4.9.1 via homebrew, please execute:
```bash
brew install gcc
```
To install clang via homebrew, please execute:
To install clang++-3.5 via homebrew, please execute:
```bash
brew install llvm lld
brew install llvm
```
To use compilers other than the default one (Apple's clang++), you
need to use `-DCMAKE_CXX_COMPILER` option to specify the compiler

View File

@@ -1,9 +0,0 @@
# The Lean standard library
This directory contains development information about the Lean standard library. The user-facing documentation of the standard library
is part of the [Lean Language Reference](https://lean-lang.org/doc/reference/latest/).
Here you will find
* the [standard library vision document](./vision.md), including the call for contributions,
* the [standard library style guide](./style.md), and
* the [standard library naming conventions](./naming.md).

View File

@@ -1,4 +0,0 @@
/.lake
!lake-manifest.json
metadata.json
invalidated.json

View File

@@ -1,24 +0,0 @@
import Grove.Framework
import GroveStdlib.Generated.«associative-query-operations»
import GroveStdlib.Generated.«associative-creation-operations»
import GroveStdlib.Generated.«associative-modification-operations»
import GroveStdlib.Generated.«associative-create-then-query»
import GroveStdlib.Generated.«associative-all-operations-covered»
import GroveStdlib.Generated.«slice-producing»
/-
This file is autogenerated by grove. You can manually edit it, for example to resolve merge
conflicts, but be careful.
-/
open Grove.Framework Widget
namespace GroveStdlib.Generated
def restoreState : RestoreStateM Unit := do
«associative-query-operations».restoreState
«associative-creation-operations».restoreState
«associative-modification-operations».restoreState
«associative-create-then-query».restoreState
«associative-all-operations-covered».restoreState
«slice-producing».restoreState

View File

@@ -1,34 +0,0 @@
import Grove.Framework
/-
This file is autogenerated by grove. You can manually edit it, for example to resolve merge
conflicts, but be careful.
-/
open Grove.Framework Widget
namespace GroveStdlib.Generated.«associative-all-operations-covered»
def «all-covered» : Assertion.Fact where
widgetId := "associative-all-operations-covered"
factId := "all-covered"
assertionId := "all-covered"
state := {
assertionId := "all-covered"
description := "All operations should be covered"
passed := false
message := "There were 19697 operations that were not covered."
}
metadata := {
status := .bad
comment := "Still missing some!"
}
def table : Assertion.Data where
widgetId := "associative-all-operations-covered"
facts := #[
«all-covered»,
]
def restoreState : RestoreStateM Unit := do
addAssertion table

View File

@@ -1,357 +0,0 @@
import Grove.Framework
/-
This file is autogenerated by grove. You can manually edit it, for example to resolve merge
conflicts, but be careful.
-/
open Grove.Framework Widget
namespace GroveStdlib.Generated.«associative-create-then-query»
def «2cb3c441-9663-4ce7-9527-0f40fc29925a:::01f88623-fa5f-4380-9772-b30f2fec5c94:::Std.DHashMap::Std.DHashMap.Raw::Std.ExtDHashMap::Std.DTreeMap::Std.DTreeMap.Raw::Std.ExtDTreeMap» : Table.Fact .subexpression .subexpression .declaration where
widgetId := "associative-create-then-query"
factId := "2cb3c441-9663-4ce7-9527-0f40fc29925a:::01f88623-fa5f-4380-9772-b30f2fec5c94:::Std.DHashMap::Std.DHashMap.Raw::Std.ExtDHashMap::Std.DTreeMap::Std.DTreeMap.Raw::Std.ExtDTreeMap"
rowAssociationId := "2cb3c441-9663-4ce7-9527-0f40fc29925a"
columnAssociationId := "01f88623-fa5f-4380-9772-b30f2fec5c94"
selectedLayers := #["Std.DHashMap", "Std.DHashMap.Raw", "Std.ExtDHashMap", "Std.DTreeMap", "Std.DTreeMap.Raw", "Std.ExtDTreeMap", ]
layerStates := #[
{
layerIdentifier := "Std.DHashMap"
rowState :=
some "Std.DHashMap.emptyWithCapacity", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.emptyWithCapacity,
renderedStatement := "Std.DHashMap.emptyWithCapacity.{u, v} {α : Type u} {β : α → Type v} [BEq α] [Hashable α]\n (capacity : Nat := 8) : Std.DHashMap α β",
isDeprecated := false })
columnState :=
some "Std.DHashMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.isEmpty,
renderedStatement := "Std.DHashMap.isEmpty.{u, v} {α : Type u} {β : α → Type v} {x✝ : BEq α} {x✝¹ : Hashable α}\n (m : Std.DHashMap α β) : Bool",
isDeprecated := false })
selectedCellStates := #[
]
},
{
layerIdentifier := "Std.DHashMap.Raw"
rowState :=
some "Std.DHashMap.Raw.emptyWithCapacity", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.Raw.emptyWithCapacity,
renderedStatement := "Std.DHashMap.Raw.emptyWithCapacity.{u, v} {α : Type u} {β : α → Type v} (capacity : Nat := 8) :\n Std.DHashMap.Raw α β",
isDeprecated := false })
columnState :=
some "Std.DHashMap.Raw.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.Raw.isEmpty,
renderedStatement := "Std.DHashMap.Raw.isEmpty.{u, v} {α : Type u} {β : α → Type v} (m : Std.DHashMap.Raw α β) : Bool",
isDeprecated := false })
selectedCellStates := #[
]
},
{
layerIdentifier := "Std.ExtDHashMap"
rowState :=
some "Std.ExtDHashMap.emptyWithCapacity", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDHashMap.emptyWithCapacity,
renderedStatement := "Std.ExtDHashMap.emptyWithCapacity.{u, v} {α : Type u} {β : α → Type v} [BEq α] [Hashable α]\n (capacity : Nat := 8) : Std.ExtDHashMap α β",
isDeprecated := false })
columnState :=
some "Std.ExtDHashMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDHashMap.isEmpty,
renderedStatement := "Std.ExtDHashMap.isEmpty.{u, v} {α : Type u} {β : α → Type v} {x✝ : BEq α} {x✝¹ : Hashable α}\n [EquivBEq α] [LawfulHashable α] (m : Std.ExtDHashMap α β) : Bool",
isDeprecated := false })
selectedCellStates := #[
]
},
{
layerIdentifier := "Std.DTreeMap"
rowState :=
some "Std.DTreeMap.empty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.empty,
renderedStatement := "Std.DTreeMap.empty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering} :\n Std.DTreeMap α β cmp",
isDeprecated := false })
columnState :=
some "Std.DTreeMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.isEmpty,
renderedStatement := "Std.DTreeMap.isEmpty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.DTreeMap α β cmp) : Bool",
isDeprecated := false })
selectedCellStates := #[
]
},
{
layerIdentifier := "Std.DTreeMap.Raw"
rowState :=
some "Std.DTreeMap.Raw.empty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.Raw.empty,
renderedStatement := "Std.DTreeMap.Raw.empty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering} :\n Std.DTreeMap.Raw α β cmp",
isDeprecated := false })
columnState :=
some "Std.DTreeMap.Raw.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.Raw.isEmpty,
renderedStatement := "Std.DTreeMap.Raw.isEmpty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.DTreeMap.Raw α β cmp) : Bool",
isDeprecated := false })
selectedCellStates := #[
]
},
{
layerIdentifier := "Std.ExtDTreeMap"
rowState :=
some "Std.ExtDTreeMap.empty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDTreeMap.empty,
renderedStatement := "Std.ExtDTreeMap.empty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering} :\n Std.ExtDTreeMap α β cmp",
isDeprecated := false })
columnState :=
some "Std.ExtDTreeMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDTreeMap.isEmpty,
renderedStatement := "Std.ExtDTreeMap.isEmpty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.ExtDTreeMap α β cmp) : Bool",
isDeprecated := false })
selectedCellStates := #[
]
},
]
metadata := {
status := .done
comment := "Not necessary for `ExtDHashMap` because of simp lemma turning into varno"
}
def «5ceaa26a-d2cb-4df3-9ac8-b5c11db2ae9d:::01f88623-fa5f-4380-9772-b30f2fec5c94:::Std.DHashMap::Std.DHashMap.Raw::Std.ExtDHashMap::Std.DTreeMap::Std.DTreeMap.Raw::Std.ExtDTreeMap» : Table.Fact .subexpression .subexpression .declaration where
widgetId := "associative-create-then-query"
factId := "5ceaa26a-d2cb-4df3-9ac8-b5c11db2ae9d:::01f88623-fa5f-4380-9772-b30f2fec5c94:::Std.DHashMap::Std.DHashMap.Raw::Std.ExtDHashMap::Std.DTreeMap::Std.DTreeMap.Raw::Std.ExtDTreeMap"
rowAssociationId := "5ceaa26a-d2cb-4df3-9ac8-b5c11db2ae9d"
columnAssociationId := "01f88623-fa5f-4380-9772-b30f2fec5c94"
selectedLayers := #["Std.DHashMap", "Std.DHashMap.Raw", "Std.ExtDHashMap", "Std.DTreeMap", "Std.DTreeMap.Raw", "Std.ExtDTreeMap", ]
layerStates := #[
{
layerIdentifier := "Std.DHashMap"
rowState :=
some "app (EmptyCollection.emptyCollection) (Std.DHashMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.DHashMap*)", displayShort := "" }
columnState :=
some "Std.DHashMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.isEmpty,
renderedStatement := "Std.DHashMap.isEmpty.{u, v} {α : Type u} {β : α → Type v} {x✝ : BEq α} {x✝¹ : Hashable α}\n (m : Std.DHashMap α β) : Bool",
isDeprecated := false })
selectedCellStates := #[
"Std.DHashMap.isEmpty_empty", Grove.Framework.Declaration.thm
{ name := `Std.DHashMap.isEmpty_empty,
renderedStatement := "Std.DHashMap.isEmpty_empty.{u, v} {α : Type u} {β : α → Type v} {x✝ : BEq α} {x✝¹ : Hashable α} :\n ∅.isEmpty = true",
isSimp := true,
isDeprecated := false }
,
]
},
{
layerIdentifier := "Std.DHashMap.Raw"
rowState :=
some "app (EmptyCollection.emptyCollection) (Std.DHashMap.Raw*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.DHashMap.Raw*)", displayShort := "" }
columnState :=
some "Std.DHashMap.Raw.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.Raw.isEmpty,
renderedStatement := "Std.DHashMap.Raw.isEmpty.{u, v} {α : Type u} {β : α → Type v} (m : Std.DHashMap.Raw α β) : Bool",
isDeprecated := false })
selectedCellStates := #[
"Std.DHashMap.Raw.isEmpty_emptyc", Grove.Framework.Declaration.thm
{ name := `Std.DHashMap.Raw.isEmpty_emptyc,
renderedStatement := "Std.DHashMap.Raw.isEmpty_emptyc.{u_1, u_2} {α : Type u_1} {β : α → Type u_2} [BEq α] [Hashable α] :\n ∅.isEmpty = true",
isSimp := false,
isDeprecated := true }
,
]
},
{
layerIdentifier := "Std.ExtDHashMap"
rowState :=
some "app (EmptyCollection.emptyCollection) (Std.ExtDHashMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.ExtDHashMap*)", displayShort := "" }
columnState :=
some "Std.ExtDHashMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDHashMap.isEmpty,
renderedStatement := "Std.ExtDHashMap.isEmpty.{u, v} {α : Type u} {β : α → Type v} {x✝ : BEq α} {x✝¹ : Hashable α}\n [EquivBEq α] [LawfulHashable α] (m : Std.ExtDHashMap α β) : Bool",
isDeprecated := false })
selectedCellStates := #[
]
},
{
layerIdentifier := "Std.DTreeMap"
rowState :=
some "app (EmptyCollection.emptyCollection) (Std.DTreeMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.DTreeMap*)", displayShort := "" }
columnState :=
some "Std.DTreeMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.isEmpty,
renderedStatement := "Std.DTreeMap.isEmpty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.DTreeMap α β cmp) : Bool",
isDeprecated := false })
selectedCellStates := #[
"Std.DTreeMap.isEmpty_emptyc", Grove.Framework.Declaration.thm
{ name := `Std.DTreeMap.isEmpty_emptyc,
renderedStatement := "Std.DTreeMap.isEmpty_emptyc.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering} :\n ∅.isEmpty = true",
isSimp := true,
isDeprecated := false }
,
]
},
{
layerIdentifier := "Std.DTreeMap.Raw"
rowState :=
some "app (EmptyCollection.emptyCollection) (Std.DTreeMap.Raw*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.DTreeMap.Raw*)", displayShort := "" }
columnState :=
some "Std.DTreeMap.Raw.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.Raw.isEmpty,
renderedStatement := "Std.DTreeMap.Raw.isEmpty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.DTreeMap.Raw α β cmp) : Bool",
isDeprecated := false })
selectedCellStates := #[
"Std.DTreeMap.Raw.isEmpty_emptyc", Grove.Framework.Declaration.thm
{ name := `Std.DTreeMap.Raw.isEmpty_emptyc,
renderedStatement := "Std.DTreeMap.Raw.isEmpty_emptyc.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering} :\n ∅.isEmpty = true",
isSimp := true,
isDeprecated := false }
,
]
},
{
layerIdentifier := "Std.ExtDTreeMap"
rowState :=
some "app (EmptyCollection.emptyCollection) (Std.ExtDTreeMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.ExtDTreeMap*)", displayShort := "" }
columnState :=
some "Std.ExtDTreeMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDTreeMap.isEmpty,
renderedStatement := "Std.ExtDTreeMap.isEmpty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.ExtDTreeMap α β cmp) : Bool",
isDeprecated := false })
selectedCellStates := #[
"Std.ExtDTreeMap.isEmpty_empty", Grove.Framework.Declaration.thm
{ name := `Std.ExtDTreeMap.isEmpty_empty,
renderedStatement := "Std.ExtDTreeMap.isEmpty_empty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering} :\n ∅.isEmpty = true",
isSimp := true,
isDeprecated := false }
,
]
},
]
metadata := {
status := .bad
comment := "Missing for `ExtDHashMap`"
}
def table : Table.Data .subexpression .subexpression .declaration where
widgetId := "associative-create-then-query"
selectedRowAssociations := #["2cb3c441-9663-4ce7-9527-0f40fc29925a", "7743a485-024d-43b6-bd5f-ebd3182eb94d", "5ceaa26a-d2cb-4df3-9ac8-b5c11db2ae9d", ]
selectedColumnAssociations := #["01f88623-fa5f-4380-9772-b30f2fec5c94", "f084f852-af71-45b6-8ab3-d251a8144f72", ]
selectedLayers := #["Std.DHashMap", "Std.DHashMap.Raw", "Std.ExtDHashMap", "Std.DTreeMap", "Std.DTreeMap.Raw", "Std.ExtDTreeMap", ]
selectedCellOptions := #[
{
layerIdentifier := "Std.DHashMap"
rowValue := "2cb3c441-9663-4ce7-9527-0f40fc29925a"
columnValue := "01f88623-fa5f-4380-9772-b30f2fec5c94"
selectedCellOptions := #["Std.DHashMap.isEmpty_emptyWithCapacity", ]
},
{
layerIdentifier := "Std.DHashMap.Raw"
rowValue := "2cb3c441-9663-4ce7-9527-0f40fc29925a"
columnValue := "01f88623-fa5f-4380-9772-b30f2fec5c94"
selectedCellOptions := #["Std.DHashMap.Raw.isEmpty_emptyWithCapacity", ]
},
{
layerIdentifier := "Std.DHashMap"
rowValue := "5ceaa26a-d2cb-4df3-9ac8-b5c11db2ae9d"
columnValue := "01f88623-fa5f-4380-9772-b30f2fec5c94"
selectedCellOptions := #["Std.DHashMap.isEmpty_empty", ]
},
{
layerIdentifier := "Std.DHashMap.Raw"
rowValue := "5ceaa26a-d2cb-4df3-9ac8-b5c11db2ae9d"
columnValue := "01f88623-fa5f-4380-9772-b30f2fec5c94"
selectedCellOptions := #["Std.DHashMap.Raw.isEmpty_emptyc", ]
},
{
layerIdentifier := "Std.DTreeMap"
rowValue := "5ceaa26a-d2cb-4df3-9ac8-b5c11db2ae9d"
columnValue := "01f88623-fa5f-4380-9772-b30f2fec5c94"
selectedCellOptions := #["Std.DTreeMap.isEmpty_emptyc", ]
},
{
layerIdentifier := "Std.DTreeMap.Raw"
rowValue := "5ceaa26a-d2cb-4df3-9ac8-b5c11db2ae9d"
columnValue := "01f88623-fa5f-4380-9772-b30f2fec5c94"
selectedCellOptions := #["Std.DTreeMap.Raw.isEmpty_emptyc", ]
},
{
layerIdentifier := "Std.ExtDTreeMap"
rowValue := "5ceaa26a-d2cb-4df3-9ac8-b5c11db2ae9d"
columnValue := "01f88623-fa5f-4380-9772-b30f2fec5c94"
selectedCellOptions := #["Std.ExtDTreeMap.isEmpty_empty", ]
},
]
facts := #[
«2cb3c441-9663-4ce7-9527-0f40fc29925a:::01f88623-fa5f-4380-9772-b30f2fec5c94:::Std.DHashMap::Std.DHashMap.Raw::Std.ExtDHashMap::Std.DTreeMap::Std.DTreeMap.Raw::Std.ExtDTreeMap»,
«5ceaa26a-d2cb-4df3-9ac8-b5c11db2ae9d:::01f88623-fa5f-4380-9772-b30f2fec5c94:::Std.DHashMap::Std.DHashMap.Raw::Std.ExtDHashMap::Std.DTreeMap::Std.DTreeMap.Raw::Std.ExtDTreeMap»,
]
def restoreState : RestoreStateM Unit := do
addTable table

View File

@@ -1,216 +0,0 @@
import Grove.Framework
/-
This file is autogenerated by grove. You can manually edit it, for example to resolve merge
conflicts, but be careful.
-/
open Grove.Framework Widget
namespace GroveStdlib.Generated.«associative-creation-operations»
def «2cb3c441-9663-4ce7-9527-0f40fc29925a» : AssociationTable.Fact .subexpression where
widgetId := "associative-creation-operations"
factId := "2cb3c441-9663-4ce7-9527-0f40fc29925a"
rowId := "2cb3c441-9663-4ce7-9527-0f40fc29925a"
rowState := #["Std.DHashMap", "Std.DHashMap.emptyWithCapacity", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.emptyWithCapacity,
renderedStatement := "Std.DHashMap.emptyWithCapacity.{u, v} {α : Type u} {β : α → Type v} [BEq α] [Hashable α]\n (capacity : Nat := 8) : Std.DHashMap α β",
isDeprecated := false }),"Std.DHashMap.Raw", "Std.DHashMap.Raw.emptyWithCapacity", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.Raw.emptyWithCapacity,
renderedStatement := "Std.DHashMap.Raw.emptyWithCapacity.{u, v} {α : Type u} {β : α → Type v} (capacity : Nat := 8) :\n Std.DHashMap.Raw α β",
isDeprecated := false }),"Std.ExtDHashMap", "Std.ExtDHashMap.emptyWithCapacity", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDHashMap.emptyWithCapacity,
renderedStatement := "Std.ExtDHashMap.emptyWithCapacity.{u, v} {α : Type u} {β : α → Type v} [BEq α] [Hashable α]\n (capacity : Nat := 8) : Std.ExtDHashMap α β",
isDeprecated := false }),"Std.DTreeMap", "Std.DTreeMap.empty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.empty,
renderedStatement := "Std.DTreeMap.empty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering} :\n Std.DTreeMap α β cmp",
isDeprecated := false }),"Std.DTreeMap.Raw", "Std.DTreeMap.Raw.empty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.Raw.empty,
renderedStatement := "Std.DTreeMap.Raw.empty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering} :\n Std.DTreeMap.Raw α β cmp",
isDeprecated := false }),"Std.ExtDTreeMap", "Std.ExtDTreeMap.empty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDTreeMap.empty,
renderedStatement := "Std.ExtDTreeMap.empty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering} :\n Std.ExtDTreeMap α β cmp",
isDeprecated := false }),"Std.HashMap", "Std.HashMap.emptyWithCapacity", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashMap.emptyWithCapacity,
renderedStatement := "Std.HashMap.emptyWithCapacity.{u, v} {α : Type u} {β : Type v} [BEq α] [Hashable α]\n (capacity : Nat := 8) : Std.HashMap α β",
isDeprecated := false }),"Std.HashMap.Raw", "Std.HashMap.Raw.emptyWithCapacity", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashMap.Raw.emptyWithCapacity,
renderedStatement := "Std.HashMap.Raw.emptyWithCapacity.{u, v} {α : Type u} {β : Type v} (capacity : Nat := 8) :\n Std.HashMap.Raw α β",
isDeprecated := false }),"Std.ExtHashMap", "Std.ExtHashMap.emptyWithCapacity", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtHashMap.emptyWithCapacity,
renderedStatement := "Std.ExtHashMap.emptyWithCapacity.{u, v} {α : Type u} {β : Type v} [BEq α] [Hashable α]\n (capacity : Nat := 8) : Std.ExtHashMap α β",
isDeprecated := false }),"Std.TreeMap", "Std.TreeMap.empty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeMap.empty,
renderedStatement := "Std.TreeMap.empty.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering} : Std.TreeMap α β cmp",
isDeprecated := false }),"Std.TreeMap.Raw", "Std.TreeMap.Raw.empty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeMap.Raw.empty,
renderedStatement := "Std.TreeMap.Raw.empty.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering} :\n Std.TreeMap.Raw α β cmp",
isDeprecated := false }),"Std.ExtTreeMap", "Std.ExtTreeMap.empty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtTreeMap.empty,
renderedStatement := "Std.ExtTreeMap.empty.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering} :\n Std.ExtTreeMap α β cmp",
isDeprecated := false }),"Std.HashSet", "Std.HashSet.emptyWithCapacity", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.emptyWithCapacity,
renderedStatement := "Std.HashSet.emptyWithCapacity.{u} {α : Type u} [BEq α] [Hashable α] (capacity : Nat := 8) :\n Std.HashSet α",
isDeprecated := false }),"Std.HashSet.Raw", "Std.HashSet.Raw.emptyWithCapacity", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.Raw.emptyWithCapacity,
renderedStatement := "Std.HashSet.Raw.emptyWithCapacity.{u} {α : Type u} (capacity : Nat := 8) : Std.HashSet.Raw α",
isDeprecated := false }),"Std.ExtHashSet", "Std.ExtHashSet.emptyWithCapacity", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtHashSet.emptyWithCapacity,
renderedStatement := "Std.ExtHashSet.emptyWithCapacity.{u} {α : Type u} [BEq α] [Hashable α] (capacity : Nat := 8) :\n Std.ExtHashSet α",
isDeprecated := false }),"Std.TreeSet", "Std.TreeSet.empty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.empty,
renderedStatement := "Std.TreeSet.empty.{u} {α : Type u} {cmp : αα → Ordering} : Std.TreeSet α cmp",
isDeprecated := false }),"Std.TreeSet.Raw", "Std.TreeSet.Raw.empty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.Raw.empty,
renderedStatement := "Std.TreeSet.Raw.empty.{u} {α : Type u} {cmp : αα → Ordering} : Std.TreeSet.Raw α cmp",
isDeprecated := false }),"Std.ExtTreeSet", "Std.ExtTreeSet.empty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtTreeSet.empty,
renderedStatement := "Std.ExtTreeSet.empty.{u} {α : Type u} {cmp : αα → Ordering} : Std.ExtTreeSet α cmp",
isDeprecated := false }),]
metadata := {
status := .done
comment := ""
}
def «7743a485-024d-43b6-bd5f-ebd3182eb94d» : AssociationTable.Fact .subexpression where
widgetId := "associative-creation-operations"
factId := "7743a485-024d-43b6-bd5f-ebd3182eb94d"
rowId := "7743a485-024d-43b6-bd5f-ebd3182eb94d"
rowState := #["Std.DHashMap", "Std.DHashMap.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.ofList,
renderedStatement := "Std.DHashMap.ofList.{u, v} {α : Type u} {β : α → Type v} [BEq α] [Hashable α]\n (l : List ((a : α) × β a)) : Std.DHashMap α β",
isDeprecated := false }),"Std.DHashMap.Raw", "Std.DHashMap.Raw.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.Raw.ofList,
renderedStatement := "Std.DHashMap.Raw.ofList.{u, v} {α : Type u} {β : α → Type v} [BEq α] [Hashable α]\n (l : List ((a : α) × β a)) : Std.DHashMap.Raw α β",
isDeprecated := false }),"Std.ExtDHashMap", "Std.ExtDHashMap.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDHashMap.ofList,
renderedStatement := "Std.ExtDHashMap.ofList.{u, v} {α : Type u} {β : α → Type v} [BEq α] [Hashable α]\n (l : List ((a : α) × β a)) : Std.ExtDHashMap α β",
isDeprecated := false }),"Std.DTreeMap", "Std.DTreeMap.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.ofList,
renderedStatement := "Std.DTreeMap.ofList.{u, v} {α : Type u} {β : α → Type v} (l : List ((a : α) × β a))\n (cmp : αα → Ordering := by exact compare) : Std.DTreeMap α β cmp",
isDeprecated := false }),"Std.DTreeMap.Raw", "Std.DTreeMap.Raw.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.Raw.ofList,
renderedStatement := "Std.DTreeMap.Raw.ofList.{u, v} {α : Type u} {β : α → Type v} (l : List ((a : α) × β a))\n (cmp : αα → Ordering := by exact compare) : Std.DTreeMap.Raw α β cmp",
isDeprecated := false }),"Std.ExtDTreeMap", "Std.ExtDTreeMap.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDTreeMap.ofList,
renderedStatement := "Std.ExtDTreeMap.ofList.{u, v} {α : Type u} {β : α → Type v} (l : List ((a : α) × β a))\n (cmp : αα → Ordering := by exact compare) : Std.ExtDTreeMap α β cmp",
isDeprecated := false }),"Std.HashMap", "Std.HashMap.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashMap.ofList,
renderedStatement := "Std.HashMap.ofList.{u, v} {α : Type u} {β : Type v} [BEq α] [Hashable α] (l : List (α × β)) :\n Std.HashMap α β",
isDeprecated := false }),"Std.HashMap.Raw", "Std.HashMap.Raw.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashMap.Raw.ofList,
renderedStatement := "Std.HashMap.Raw.ofList.{u, v} {α : Type u} {β : Type v} [BEq α] [Hashable α] (l : List (α × β)) :\n Std.HashMap.Raw α β",
isDeprecated := false }),"Std.ExtHashMap", "Std.ExtHashMap.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtHashMap.ofList,
renderedStatement := "Std.ExtHashMap.ofList.{u, v} {α : Type u} {β : Type v} [BEq α] [Hashable α] (l : List (α × β)) :\n Std.ExtHashMap α β",
isDeprecated := false }),"Std.TreeMap", "Std.TreeMap.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeMap.ofList,
renderedStatement := "Std.TreeMap.ofList.{u, v} {α : Type u} {β : Type v} (l : List (α × β))\n (cmp : αα → Ordering := by exact compare) : Std.TreeMap α β cmp",
isDeprecated := false }),"Std.TreeMap.Raw", "Std.TreeMap.Raw.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeMap.Raw.ofList,
renderedStatement := "Std.TreeMap.Raw.ofList.{u, v} {α : Type u} {β : Type v} (l : List (α × β))\n (cmp : αα → Ordering := by exact compare) : Std.TreeMap.Raw α β cmp",
isDeprecated := false }),"Std.ExtTreeMap", "Std.ExtTreeMap.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtTreeMap.ofList,
renderedStatement := "Std.ExtTreeMap.ofList.{u, v} {α : Type u} {β : Type v} (l : List (α × β))\n (cmp : αα → Ordering := by exact compare) : Std.ExtTreeMap α β cmp",
isDeprecated := false }),"Std.HashSet", "Std.HashSet.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.ofList,
renderedStatement := "Std.HashSet.ofList.{u} {α : Type u} [BEq α] [Hashable α] (l : List α) : Std.HashSet α",
isDeprecated := false }),"Std.HashSet.Raw", "Std.HashSet.Raw.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.Raw.ofList,
renderedStatement := "Std.HashSet.Raw.ofList.{u} {α : Type u} [BEq α] [Hashable α] (l : List α) : Std.HashSet.Raw α",
isDeprecated := false }),"Std.ExtHashSet", "Std.ExtHashSet.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtHashSet.ofList,
renderedStatement := "Std.ExtHashSet.ofList.{u} {α : Type u} [BEq α] [Hashable α] (l : List α) : Std.ExtHashSet α",
isDeprecated := false }),"Std.TreeSet", "Std.TreeSet.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.ofList,
renderedStatement := "Std.TreeSet.ofList.{u} {α : Type u} (l : List α) (cmp : αα → Ordering := by exact compare) :\n Std.TreeSet α cmp",
isDeprecated := false }),"Std.TreeSet.Raw", "Std.TreeSet.Raw.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.Raw.ofList,
renderedStatement := "Std.TreeSet.Raw.ofList.{u} {α : Type u} (l : List α) (cmp : αα → Ordering := by exact compare) :\n Std.TreeSet.Raw α cmp",
isDeprecated := false }),"Std.ExtTreeSet", "Std.ExtTreeSet.ofList", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtTreeSet.ofList,
renderedStatement := "Std.ExtTreeSet.ofList.{u} {α : Type u} (l : List α) (cmp : αα → Ordering := by exact compare) :\n Std.ExtTreeSet α cmp",
isDeprecated := false }),]
metadata := {
status := .done
comment := ""
}
def «5ceaa26a-d2cb-4df3-9ac8-b5c11db2ae9d» : AssociationTable.Fact .subexpression where
widgetId := "associative-creation-operations"
factId := "5ceaa26a-d2cb-4df3-9ac8-b5c11db2ae9d"
rowId := "5ceaa26a-d2cb-4df3-9ac8-b5c11db2ae9d"
rowState := #["Std.DHashMap", "app (EmptyCollection.emptyCollection) (Std.DHashMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.DHashMap*)", displayShort := "" },"Std.DHashMap.Raw", "app (EmptyCollection.emptyCollection) (Std.DHashMap.Raw*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.DHashMap.Raw*)", displayShort := "" },"Std.ExtDHashMap", "app (EmptyCollection.emptyCollection) (Std.ExtDHashMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.ExtDHashMap*)", displayShort := "" },"Std.DTreeMap", "app (EmptyCollection.emptyCollection) (Std.DTreeMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.DTreeMap*)", displayShort := "" },"Std.DTreeMap.Raw", "app (EmptyCollection.emptyCollection) (Std.DTreeMap.Raw*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.DTreeMap.Raw*)", displayShort := "" },"Std.ExtDTreeMap", "app (EmptyCollection.emptyCollection) (Std.ExtDTreeMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.ExtDTreeMap*)", displayShort := "" },"Std.HashMap", "app (EmptyCollection.emptyCollection) (Std.HashMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.HashMap*)", displayShort := "" },"Std.HashMap.Raw", "app (EmptyCollection.emptyCollection) (Std.HashMap.Raw*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.HashMap.Raw*)", displayShort := "" },"Std.ExtHashMap", "app (EmptyCollection.emptyCollection) (Std.ExtHashMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.ExtHashMap*)", displayShort := "" },"Std.TreeMap", "app (EmptyCollection.emptyCollection) (Std.TreeMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.TreeMap*)", displayShort := "" },"Std.TreeMap.Raw", "app (EmptyCollection.emptyCollection) (Std.TreeMap.Raw*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.TreeMap.Raw*)", displayShort := "" },"Std.ExtTreeMap", "app (EmptyCollection.emptyCollection) (Std.ExtTreeMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.ExtTreeMap*)", displayShort := "" },"Std.HashSet", "app (EmptyCollection.emptyCollection) (Std.HashSet*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.HashSet*)", displayShort := "" },"Std.HashSet.Raw", "app (EmptyCollection.emptyCollection) (Std.HashSet.Raw*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.HashSet.Raw*)", displayShort := "" },"Std.ExtHashSet", "app (EmptyCollection.emptyCollection) (Std.ExtHashSet*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.ExtHashSet*)", displayShort := "" },"Std.TreeSet", "app (EmptyCollection.emptyCollection) (Std.TreeSet*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.TreeSet*)", displayShort := "" },"Std.TreeSet.Raw", "app (EmptyCollection.emptyCollection) (Std.TreeSet.Raw*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.TreeSet.Raw*)", displayShort := "" },"Std.ExtTreeSet", "app (EmptyCollection.emptyCollection) (Std.ExtTreeSet*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (EmptyCollection.emptyCollection) (Std.ExtTreeSet*)", displayShort := "" },]
metadata := {
status := .done
comment := ""
}
def table : AssociationTable.Data .subexpression where
widgetId := "associative-creation-operations"
rows := #[
"2cb3c441-9663-4ce7-9527-0f40fc29925a", "empty", #["Std.DHashMap", "Std.DHashMap.emptyWithCapacity","Std.DHashMap.Raw", "Std.DHashMap.Raw.emptyWithCapacity","Std.ExtDHashMap", "Std.ExtDHashMap.emptyWithCapacity","Std.DTreeMap", "Std.DTreeMap.empty","Std.DTreeMap.Raw", "Std.DTreeMap.Raw.empty","Std.ExtDTreeMap", "Std.ExtDTreeMap.empty","Std.HashMap", "Std.HashMap.emptyWithCapacity","Std.HashMap.Raw", "Std.HashMap.Raw.emptyWithCapacity","Std.ExtHashMap", "Std.ExtHashMap.emptyWithCapacity","Std.TreeMap", "Std.TreeMap.empty","Std.TreeMap.Raw", "Std.TreeMap.Raw.empty","Std.ExtTreeMap", "Std.ExtTreeMap.empty","Std.HashSet", "Std.HashSet.emptyWithCapacity","Std.HashSet.Raw", "Std.HashSet.Raw.emptyWithCapacity","Std.ExtHashSet", "Std.ExtHashSet.emptyWithCapacity","Std.TreeSet", "Std.TreeSet.empty","Std.TreeSet.Raw", "Std.TreeSet.Raw.empty","Std.ExtTreeSet", "Std.ExtTreeSet.empty",],
"7743a485-024d-43b6-bd5f-ebd3182eb94d", "ofList", #["Std.DHashMap", "Std.DHashMap.ofList","Std.DHashMap.Raw", "Std.DHashMap.Raw.ofList","Std.ExtDHashMap", "Std.ExtDHashMap.ofList","Std.DTreeMap", "Std.DTreeMap.ofList","Std.DTreeMap.Raw", "Std.DTreeMap.Raw.ofList","Std.ExtDTreeMap", "Std.ExtDTreeMap.ofList","Std.HashMap", "Std.HashMap.ofList","Std.HashMap.Raw", "Std.HashMap.Raw.ofList","Std.ExtHashMap", "Std.ExtHashMap.ofList","Std.TreeMap", "Std.TreeMap.ofList","Std.TreeMap.Raw", "Std.TreeMap.Raw.ofList","Std.ExtTreeMap", "Std.ExtTreeMap.ofList","Std.HashSet", "Std.HashSet.ofList","Std.HashSet.Raw", "Std.HashSet.Raw.ofList","Std.ExtHashSet", "Std.ExtHashSet.ofList","Std.TreeSet", "Std.TreeSet.ofList","Std.TreeSet.Raw", "Std.TreeSet.Raw.ofList","Std.ExtTreeSet", "Std.ExtTreeSet.ofList",],
"5ceaa26a-d2cb-4df3-9ac8-b5c11db2ae9d", "emptyCollection", #["Std.DHashMap", "app (EmptyCollection.emptyCollection) (Std.DHashMap*)","Std.DHashMap.Raw", "app (EmptyCollection.emptyCollection) (Std.DHashMap.Raw*)","Std.ExtDHashMap", "app (EmptyCollection.emptyCollection) (Std.ExtDHashMap*)","Std.DTreeMap", "app (EmptyCollection.emptyCollection) (Std.DTreeMap*)","Std.DTreeMap.Raw", "app (EmptyCollection.emptyCollection) (Std.DTreeMap.Raw*)","Std.ExtDTreeMap", "app (EmptyCollection.emptyCollection) (Std.ExtDTreeMap*)","Std.HashMap", "app (EmptyCollection.emptyCollection) (Std.HashMap*)","Std.HashMap.Raw", "app (EmptyCollection.emptyCollection) (Std.HashMap.Raw*)","Std.ExtHashMap", "app (EmptyCollection.emptyCollection) (Std.ExtHashMap*)","Std.TreeMap", "app (EmptyCollection.emptyCollection) (Std.TreeMap*)","Std.TreeMap.Raw", "app (EmptyCollection.emptyCollection) (Std.TreeMap.Raw*)","Std.ExtTreeMap", "app (EmptyCollection.emptyCollection) (Std.ExtTreeMap*)","Std.HashSet", "app (EmptyCollection.emptyCollection) (Std.HashSet*)","Std.HashSet.Raw", "app (EmptyCollection.emptyCollection) (Std.HashSet.Raw*)","Std.ExtHashSet", "app (EmptyCollection.emptyCollection) (Std.ExtHashSet*)","Std.TreeSet", "app (EmptyCollection.emptyCollection) (Std.TreeSet*)","Std.TreeSet.Raw", "app (EmptyCollection.emptyCollection) (Std.TreeSet.Raw*)","Std.ExtTreeSet", "app (EmptyCollection.emptyCollection) (Std.ExtTreeSet*)",],
]
facts := #[
«2cb3c441-9663-4ce7-9527-0f40fc29925a»,
«7743a485-024d-43b6-bd5f-ebd3182eb94d»,
«5ceaa26a-d2cb-4df3-9ac8-b5c11db2ae9d»,
]
def restoreState : RestoreStateM Unit := do
addAssociationTable table
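
For orientation, a minimal usage sketch of the creation rows catalogued above (`empty`/`emptyWithCapacity`, `ofList`, and the `∅` notation); the key/value types and the capacity are illustrative:

import Std.Data.HashMap
import Std.Data.TreeMap

open Std

-- hash-based containers are created via `emptyWithCapacity`, tree-based ones via `empty`/`ofList`
def h : HashMap Nat String := HashMap.emptyWithCapacity 16
def t : TreeMap Nat String := TreeMap.ofList [(1, "one"), (2, "two")]

-- the `∅` (EmptyCollection) notation from the third row is available for all of these containers
example : HashMap Nat String := ∅
example : TreeMap Nat String := ∅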

View File

@@ -1,21 +0,0 @@
import Grove.Framework
/-
This file is autogenerated by grove. You can manually edit it, for example to resolve merge
conflicts, but be careful.
-/
open Grove.Framework Widget
namespace GroveStdlib.Generated.«associative-modification-operations»
def table : AssociationTable.Data .subexpression where
widgetId := "associative-modification-operations"
rows := #[
]
facts := #[
]
def restoreState : RestoreStateM Unit := do
addAssociationTable table

View File

@@ -1,445 +0,0 @@
import Grove.Framework
/-
This file is autogenerated by grove. You can manually edit it, for example to resolve merge
conflicts, but be careful.
-/
open Grove.Framework Widget
namespace GroveStdlib.Generated.«associative-query-operations»
def «01f88623-fa5f-4380-9772-b30f2fec5c94» : AssociationTable.Fact .subexpression where
widgetId := "associative-query-operations"
factId := "01f88623-fa5f-4380-9772-b30f2fec5c94"
rowId := "01f88623-fa5f-4380-9772-b30f2fec5c94"
rowState := #["Std.DHashMap", "Std.DHashMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.isEmpty,
renderedStatement := "Std.DHashMap.isEmpty.{u, v} {α : Type u} {β : α → Type v} {x✝ : BEq α} {x✝¹ : Hashable α}\n (m : Std.DHashMap α β) : Bool",
isDeprecated := false }),"Std.DHashMap.Raw", "Std.DHashMap.Raw.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.Raw.isEmpty,
renderedStatement := "Std.DHashMap.Raw.isEmpty.{u, v} {α : Type u} {β : α → Type v} (m : Std.DHashMap.Raw α β) : Bool",
isDeprecated := false }),"Std.ExtDHashMap", "Std.ExtDHashMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDHashMap.isEmpty,
renderedStatement := "Std.ExtDHashMap.isEmpty.{u, v} {α : Type u} {β : α → Type v} {x✝ : BEq α} {x✝¹ : Hashable α}\n [EquivBEq α] [LawfulHashable α] (m : Std.ExtDHashMap α β) : Bool",
isDeprecated := false }),"Std.DTreeMap", "Std.DTreeMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.isEmpty,
renderedStatement := "Std.DTreeMap.isEmpty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.DTreeMap α β cmp) : Bool",
isDeprecated := false }),"Std.DTreeMap.Raw", "Std.DTreeMap.Raw.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.Raw.isEmpty,
renderedStatement := "Std.DTreeMap.Raw.isEmpty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.DTreeMap.Raw α β cmp) : Bool",
isDeprecated := false }),"Std.ExtDTreeMap", "Std.ExtDTreeMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDTreeMap.isEmpty,
renderedStatement := "Std.ExtDTreeMap.isEmpty.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.ExtDTreeMap α β cmp) : Bool",
isDeprecated := false }),"Std.HashMap", "Std.HashMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashMap.isEmpty,
renderedStatement := "Std.HashMap.isEmpty.{u, v} {α : Type u} {β : Type v} {x✝ : BEq α} {x✝¹ : Hashable α}\n (m : Std.HashMap α β) : Bool",
isDeprecated := false }),"Std.HashMap.Raw", "Std.HashMap.Raw.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashMap.Raw.isEmpty,
renderedStatement := "Std.HashMap.Raw.isEmpty.{u, v} {α : Type u} {β : Type v} (m : Std.HashMap.Raw α β) : Bool",
isDeprecated := false }),"Std.ExtHashMap", "Std.ExtHashMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtHashMap.isEmpty,
renderedStatement := "Std.ExtHashMap.isEmpty.{u, v} {α : Type u} {β : Type v} {x✝ : BEq α} {x✝¹ : Hashable α} [EquivBEq α]\n [LawfulHashable α] (m : Std.ExtHashMap α β) : Bool",
isDeprecated := false }),"Std.TreeMap", "Std.TreeMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeMap.isEmpty,
renderedStatement := "Std.TreeMap.isEmpty.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering}\n (t : Std.TreeMap α β cmp) : Bool",
isDeprecated := false }),"Std.TreeMap.Raw", "Std.TreeMap.Raw.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeMap.Raw.isEmpty,
renderedStatement := "Std.TreeMap.Raw.isEmpty.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering}\n (t : Std.TreeMap.Raw α β cmp) : Bool",
isDeprecated := false }),"Std.ExtTreeMap", "Std.ExtTreeMap.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtTreeMap.isEmpty,
renderedStatement := "Std.ExtTreeMap.isEmpty.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering}\n (t : Std.ExtTreeMap α β cmp) : Bool",
isDeprecated := false }),"Std.HashSet", "Std.HashSet.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.isEmpty,
renderedStatement := "Std.HashSet.isEmpty.{u} {α : Type u} {x✝ : BEq α} {x✝¹ : Hashable α} (m : Std.HashSet α) : Bool",
isDeprecated := false }),"Std.HashSet.Raw", "Std.HashSet.Raw.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.Raw.isEmpty,
renderedStatement := "Std.HashSet.Raw.isEmpty.{u} {α : Type u} (m : Std.HashSet.Raw α) : Bool",
isDeprecated := false }),"Std.ExtHashSet", "Std.ExtHashSet.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtHashSet.isEmpty,
renderedStatement := "Std.ExtHashSet.isEmpty.{u} {α : Type u} {x✝ : BEq α} {x✝¹ : Hashable α} [EquivBEq α]\n [LawfulHashable α] (m : Std.ExtHashSet α) : Bool",
isDeprecated := false }),"Std.TreeSet", "Std.TreeSet.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.isEmpty,
renderedStatement := "Std.TreeSet.isEmpty.{u} {α : Type u} {cmp : αα → Ordering} (t : Std.TreeSet α cmp) : Bool",
isDeprecated := false }),"Std.TreeSet.Raw", "Std.TreeSet.Raw.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.Raw.isEmpty,
renderedStatement := "Std.TreeSet.Raw.isEmpty.{u} {α : Type u} {cmp : αα → Ordering} (t : Std.TreeSet.Raw α cmp) : Bool",
isDeprecated := false }),"Std.ExtTreeSet", "Std.ExtTreeSet.isEmpty", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtTreeSet.isEmpty,
renderedStatement := "Std.ExtTreeSet.isEmpty.{u} {α : Type u} {cmp : αα → Ordering} (t : Std.ExtTreeSet α cmp) : Bool",
isDeprecated := false }),]
metadata := {
status := .done
comment := ""
}
def «f084f852-af71-45b6-8ab3-d251a8144f72» : AssociationTable.Fact .subexpression where
widgetId := "associative-query-operations"
factId := "f084f852-af71-45b6-8ab3-d251a8144f72"
rowId := "f084f852-af71-45b6-8ab3-d251a8144f72"
rowState := #["Std.DHashMap", "Std.DHashMap.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.size,
renderedStatement := "Std.DHashMap.size.{u, v} {α : Type u} {β : α → Type v} {x✝ : BEq α} {x✝¹ : Hashable α}\n (m : Std.DHashMap α β) : Nat",
isDeprecated := false }),"Std.DHashMap.Raw", "Std.DHashMap.Raw.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.Raw.size,
renderedStatement := "Std.DHashMap.Raw.size.{u, v} {α : Type u} {β : α → Type v} (self : Std.DHashMap.Raw α β) : Nat",
isDeprecated := false }),"Std.ExtDHashMap", "Std.ExtDHashMap.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDHashMap.size,
renderedStatement := "Std.ExtDHashMap.size.{u, v} {α : Type u} {β : α → Type v} {x✝ : BEq α} {x✝¹ : Hashable α}\n [EquivBEq α] [LawfulHashable α] (m : Std.ExtDHashMap α β) : Nat",
isDeprecated := false }),"Std.DTreeMap", "Std.DTreeMap.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.size,
renderedStatement := "Std.DTreeMap.size.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.DTreeMap α β cmp) : Nat",
isDeprecated := false }),"Std.DTreeMap.Raw", "Std.DTreeMap.Raw.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.Raw.size,
renderedStatement := "Std.DTreeMap.Raw.size.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.DTreeMap.Raw α β cmp) : Nat",
isDeprecated := false }),"Std.ExtDTreeMap", "Std.ExtDTreeMap.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDTreeMap.size,
renderedStatement := "Std.ExtDTreeMap.size.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.ExtDTreeMap α β cmp) : Nat",
isDeprecated := false }),"Std.HashMap", "Std.HashMap.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashMap.size,
renderedStatement := "Std.HashMap.size.{u, v} {α : Type u} {β : Type v} {x✝ : BEq α} {x✝¹ : Hashable α}\n (m : Std.HashMap α β) : Nat",
isDeprecated := false }),"Std.HashMap.Raw", "Std.HashMap.Raw.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashMap.Raw.size,
renderedStatement := "Std.HashMap.Raw.size.{u, v} {α : Type u} {β : Type v} (m : Std.HashMap.Raw α β) : Nat",
isDeprecated := false }),"Std.ExtHashMap", "Std.ExtHashMap.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtHashMap.size,
renderedStatement := "Std.ExtHashMap.size.{u, v} {α : Type u} {β : Type v} {x✝ : BEq α} {x✝¹ : Hashable α} [EquivBEq α]\n [LawfulHashable α] (m : Std.ExtHashMap α β) : Nat",
isDeprecated := false }),"Std.TreeMap", "Std.TreeMap.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeMap.size,
renderedStatement := "Std.TreeMap.size.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering}\n (t : Std.TreeMap α β cmp) : Nat",
isDeprecated := false }),"Std.TreeMap.Raw", "Std.TreeMap.Raw.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeMap.Raw.size,
renderedStatement := "Std.TreeMap.Raw.size.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering}\n (t : Std.TreeMap.Raw α β cmp) : Nat",
isDeprecated := false }),"Std.ExtTreeMap", "Std.ExtTreeMap.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtTreeMap.size,
renderedStatement := "Std.ExtTreeMap.size.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering}\n (t : Std.ExtTreeMap α β cmp) : Nat",
isDeprecated := false }),"Std.HashSet", "Std.HashSet.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.size,
renderedStatement := "Std.HashSet.size.{u} {α : Type u} {x✝ : BEq α} {x✝¹ : Hashable α} (m : Std.HashSet α) : Nat",
isDeprecated := false }),"Std.HashSet.Raw", "Std.HashSet.Raw.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.Raw.size,
renderedStatement := "Std.HashSet.Raw.size.{u} {α : Type u} (m : Std.HashSet.Raw α) : Nat",
isDeprecated := false }),"Std.ExtHashSet", "Std.ExtHashSet.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtHashSet.size,
renderedStatement := "Std.ExtHashSet.size.{u} {α : Type u} {x✝ : BEq α} {x✝¹ : Hashable α} [EquivBEq α] [LawfulHashable α]\n (m : Std.ExtHashSet α) : Nat",
isDeprecated := false }),"Std.TreeSet", "Std.TreeSet.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.size,
renderedStatement := "Std.TreeSet.size.{u} {α : Type u} {cmp : αα → Ordering} (t : Std.TreeSet α cmp) : Nat",
isDeprecated := false }),"Std.TreeSet.Raw", "Std.TreeSet.Raw.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.Raw.size,
renderedStatement := "Std.TreeSet.Raw.size.{u} {α : Type u} {cmp : αα → Ordering} (t : Std.TreeSet.Raw α cmp) : Nat",
isDeprecated := false }),"Std.ExtTreeSet", "Std.ExtTreeSet.size", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtTreeSet.size,
renderedStatement := "Std.ExtTreeSet.size.{u} {α : Type u} {cmp : αα → Ordering} (t : Std.ExtTreeSet α cmp) : Nat",
isDeprecated := false }),]
metadata := {
status := .done
comment := ""
}
def «f4e6fa70-5aed-439d-aaad-5f4ced65bf7b» : AssociationTable.Fact .subexpression where
widgetId := "associative-query-operations"
factId := "f4e6fa70-5aed-439d-aaad-5f4ced65bf7b"
rowId := "f4e6fa70-5aed-439d-aaad-5f4ced65bf7b"
rowState := #["Std.DTreeMap", "Std.DTreeMap.any", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.any,
renderedStatement := "Std.DTreeMap.any.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.DTreeMap α β cmp) (p : (a : α) → β a → Bool) : Bool",
isDeprecated := false }),"Std.DTreeMap.Raw", "Std.DTreeMap.Raw.any", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.Raw.any,
renderedStatement := "Std.DTreeMap.Raw.any.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.DTreeMap.Raw α β cmp) (p : (a : α) → β a → Bool) : Bool",
isDeprecated := false }),"Std.ExtDTreeMap", "Std.ExtDTreeMap.any", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDTreeMap.any,
renderedStatement := "Std.ExtDTreeMap.any.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering} [Std.TransCmp cmp]\n (t : Std.ExtDTreeMap α β cmp) (p : (a : α) → β a → Bool) : Bool",
isDeprecated := false }),"Std.TreeMap", "Std.TreeMap.any", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeMap.any,
renderedStatement := "Std.TreeMap.any.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering} (t : Std.TreeMap α β cmp)\n (p : α → β → Bool) : Bool",
isDeprecated := false }),"Std.TreeMap.Raw", "Std.TreeMap.Raw.any", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeMap.Raw.any,
renderedStatement := "Std.TreeMap.Raw.any.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering}\n (t : Std.TreeMap.Raw α β cmp) (p : α → β → Bool) : Bool",
isDeprecated := false }),"Std.ExtTreeMap", "Std.ExtTreeMap.any", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtTreeMap.any,
renderedStatement := "Std.ExtTreeMap.any.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering} [Std.TransCmp cmp]\n (t : Std.ExtTreeMap α β cmp) (p : α → β → Bool) : Bool",
isDeprecated := false }),"Std.HashSet", "Std.HashSet.any", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.any,
renderedStatement := "Std.HashSet.any.{u} {α : Type u} {x✝ : BEq α} {x✝¹ : Hashable α} (m : Std.HashSet α)\n (p : α → Bool) : Bool",
isDeprecated := false }),"Std.HashSet.Raw", "Std.HashSet.Raw.any", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.Raw.any,
renderedStatement := "Std.HashSet.Raw.any.{u} {α : Type u} (m : Std.HashSet.Raw α) (p : α → Bool) : Bool",
isDeprecated := false }),"Std.TreeSet", "Std.TreeSet.any", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.any,
renderedStatement := "Std.TreeSet.any.{u} {α : Type u} {cmp : αα → Ordering} (t : Std.TreeSet α cmp) (p : α → Bool) :\n Bool",
isDeprecated := false }),"Std.TreeSet.Raw", "Std.TreeSet.Raw.any", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.Raw.any,
renderedStatement := "Std.TreeSet.Raw.any.{u} {α : Type u} {cmp : αα → Ordering} (t : Std.TreeSet.Raw α cmp)\n (p : α → Bool) : Bool",
isDeprecated := false }),"Std.ExtTreeSet", "Std.ExtTreeSet.any", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtTreeSet.any,
renderedStatement := "Std.ExtTreeSet.any.{u} {α : Type u} {cmp : αα → Ordering} [Std.TransCmp cmp]\n (t : Std.ExtTreeSet α cmp) (p : α → Bool) : Bool",
isDeprecated := false }),]
metadata := {
status := .bad
comment := "Missing for some containers"
}
def «c1d181f6-3204-4956-946f-e81619f9feb4» : AssociationTable.Fact .subexpression where
widgetId := "associative-query-operations"
factId := "c1d181f6-3204-4956-946f-e81619f9feb4"
rowId := "c1d181f6-3204-4956-946f-e81619f9feb4"
rowState := #["Std.DTreeMap", "Std.DTreeMap.all", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.all,
renderedStatement := "Std.DTreeMap.all.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.DTreeMap α β cmp) (p : (a : α) → β a → Bool) : Bool",
isDeprecated := false }),"Std.DTreeMap.Raw", "Std.DTreeMap.Raw.all", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.Raw.all,
renderedStatement := "Std.DTreeMap.Raw.all.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n (t : Std.DTreeMap.Raw α β cmp) (p : (a : α) → β a → Bool) : Bool",
isDeprecated := false }),"Std.ExtDTreeMap", "Std.ExtDTreeMap.all", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDTreeMap.all,
renderedStatement := "Std.ExtDTreeMap.all.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering} [Std.TransCmp cmp]\n (t : Std.ExtDTreeMap α β cmp) (p : (a : α) → β a → Bool) : Bool",
isDeprecated := false }),"Std.TreeMap", "Std.TreeMap.all", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeMap.all,
renderedStatement := "Std.TreeMap.all.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering} (t : Std.TreeMap α β cmp)\n (p : α → β → Bool) : Bool",
isDeprecated := false }),"Std.TreeMap.Raw", "Std.TreeMap.Raw.all", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeMap.Raw.all,
renderedStatement := "Std.TreeMap.Raw.all.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering}\n (t : Std.TreeMap.Raw α β cmp) (p : α → β → Bool) : Bool",
isDeprecated := false }),"Std.ExtTreeMap", "Std.ExtTreeMap.all", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtTreeMap.all,
renderedStatement := "Std.ExtTreeMap.all.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering} [Std.TransCmp cmp]\n (t : Std.ExtTreeMap α β cmp) (p : α → β → Bool) : Bool",
isDeprecated := false }),"Std.HashSet", "Std.HashSet.all", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.all,
renderedStatement := "Std.HashSet.all.{u} {α : Type u} {x✝ : BEq α} {x✝¹ : Hashable α} (m : Std.HashSet α)\n (p : α → Bool) : Bool",
isDeprecated := false }),"Std.HashSet.Raw", "Std.HashSet.Raw.all", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.Raw.all,
renderedStatement := "Std.HashSet.Raw.all.{u} {α : Type u} (m : Std.HashSet.Raw α) (p : α → Bool) : Bool",
isDeprecated := false }),"Std.TreeSet", "Std.TreeSet.all", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.all,
renderedStatement := "Std.TreeSet.all.{u} {α : Type u} {cmp : αα → Ordering} (t : Std.TreeSet α cmp) (p : α → Bool) :\n Bool",
isDeprecated := false }),"Std.TreeSet.Raw", "Std.TreeSet.Raw.all", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.Raw.all,
renderedStatement := "Std.TreeSet.Raw.all.{u} {α : Type u} {cmp : αα → Ordering} (t : Std.TreeSet.Raw α cmp)\n (p : α → Bool) : Bool",
isDeprecated := false }),"Std.ExtTreeSet", "Std.ExtTreeSet.all", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtTreeSet.all,
renderedStatement := "Std.ExtTreeSet.all.{u} {α : Type u} {cmp : αα → Ordering} [Std.TransCmp cmp]\n (t : Std.ExtTreeSet α cmp) (p : α → Bool) : Bool",
isDeprecated := false }),]
metadata := {
status := .bad
comment := "Missing for some containers"
}
def «efe57f41-7db7-4303-b3a6-5216a70c43ce» : AssociationTable.Fact .subexpression where
widgetId := "associative-query-operations"
factId := "efe57f41-7db7-4303-b3a6-5216a70c43ce"
rowId := "efe57f41-7db7-4303-b3a6-5216a70c43ce"
rowState := #["Std.DHashMap", "Std.DHashMap.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.getD,
renderedStatement := "Std.DHashMap.getD.{u, v} {α : Type u} {β : α → Type v} {x✝ : BEq α} {x✝¹ : Hashable α} [LawfulBEq α]\n (m : Std.DHashMap α β) (a : α) (fallback : β a) : β a",
isDeprecated := false }),"Std.DHashMap.Raw", "Std.DHashMap.Raw.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.Raw.getD,
renderedStatement := "Std.DHashMap.Raw.getD.{u, v} {α : Type u} {β : α → Type v} [BEq α] [Hashable α] [LawfulBEq α]\n (m : Std.DHashMap.Raw α β) (a : α) (fallback : β a) : β a",
isDeprecated := false }),"Std.ExtDHashMap", "Std.ExtDHashMap.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDHashMap.getD,
renderedStatement := "Std.ExtDHashMap.getD.{u, v} {α : Type u} {β : α → Type v} {x✝ : BEq α} {x✝¹ : Hashable α}\n [LawfulBEq α] (m : Std.ExtDHashMap α β) (a : α) (fallback : β a) : β a",
isDeprecated := false }),"Std.DTreeMap", "Std.DTreeMap.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.getD,
renderedStatement := "Std.DTreeMap.getD.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n [Std.LawfulEqCmp cmp] (t : Std.DTreeMap α β cmp) (a : α) (fallback : β a) : β a",
isDeprecated := false }),"Std.DTreeMap.Raw", "Std.DTreeMap.Raw.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.Raw.getD,
renderedStatement := "Std.DTreeMap.Raw.getD.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n [Std.LawfulEqCmp cmp] (t : Std.DTreeMap.Raw α β cmp) (a : α) (fallback : β a) : β a",
isDeprecated := false }),"Std.ExtDTreeMap", "Std.ExtDTreeMap.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDTreeMap.getD,
renderedStatement := "Std.ExtDTreeMap.getD.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n [Std.TransCmp cmp] [Std.LawfulEqCmp cmp] (t : Std.ExtDTreeMap α β cmp) (a : α) (fallback : β a) :\n β a",
isDeprecated := false }),"Std.HashMap", "Std.HashMap.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashMap.getD,
renderedStatement := "Std.HashMap.getD.{u, v} {α : Type u} {β : Type v} {x✝ : BEq α} {x✝¹ : Hashable α}\n (m : Std.HashMap α β) (a : α) (fallback : β) : β",
isDeprecated := false }),"Std.HashMap.Raw", "Std.HashMap.Raw.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashMap.Raw.getD,
renderedStatement := "Std.HashMap.Raw.getD.{u, v} {α : Type u} {β : Type v} [BEq α] [Hashable α] (m : Std.HashMap.Raw α β)\n (a : α) (fallback : β) : β",
isDeprecated := false }),"Std.ExtHashMap", "Std.ExtHashMap.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtHashMap.getD,
renderedStatement := "Std.ExtHashMap.getD.{u, v} {α : Type u} {β : Type v} {x✝ : BEq α} {x✝¹ : Hashable α} [EquivBEq α]\n [LawfulHashable α] (m : Std.ExtHashMap α β) (a : α) (fallback : β) : β",
isDeprecated := false }),"Std.TreeMap", "Std.TreeMap.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeMap.getD,
renderedStatement := "Std.TreeMap.getD.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering} (t : Std.TreeMap α β cmp)\n (a : α) (fallback : β) : β",
isDeprecated := false }),"Std.TreeMap.Raw", "Std.TreeMap.Raw.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeMap.Raw.getD,
renderedStatement := "Std.TreeMap.Raw.getD.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering}\n (t : Std.TreeMap.Raw α β cmp) (a : α) (fallback : β) : β",
isDeprecated := false }),"Std.ExtTreeMap", "Std.ExtTreeMap.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtTreeMap.getD,
renderedStatement := "Std.ExtTreeMap.getD.{u, v} {α : Type u} {β : Type v} {cmp : αα → Ordering} [Std.TransCmp cmp]\n (t : Std.ExtTreeMap α β cmp) (a : α) (fallback : β) : β",
isDeprecated := false }),"Std.HashSet", "Std.HashSet.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.getD,
renderedStatement := "Std.HashSet.getD.{u} {α : Type u} [BEq α] [Hashable α] (m : Std.HashSet α) (a fallback : α) : α",
isDeprecated := false }),"Std.HashSet.Raw", "Std.HashSet.Raw.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.Raw.getD,
renderedStatement := "Std.HashSet.Raw.getD.{u} {α : Type u} [BEq α] [Hashable α] (m : Std.HashSet.Raw α)\n (a fallback : α) : α",
isDeprecated := false }),"Std.ExtHashSet", "Std.ExtHashSet.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtHashSet.getD,
renderedStatement := "Std.ExtHashSet.getD.{u} {α : Type u} {x✝ : BEq α} {x✝¹ : Hashable α} [EquivBEq α] [LawfulHashable α]\n (m : Std.ExtHashSet α) (a fallback : α) : α",
isDeprecated := false }),"Std.TreeSet", "Std.TreeSet.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.getD,
renderedStatement := "Std.TreeSet.getD.{u} {α : Type u} {cmp : αα → Ordering} (t : Std.TreeSet α cmp)\n (a fallback : α) : α",
isDeprecated := false }),"Std.TreeSet.Raw", "Std.TreeSet.Raw.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.Raw.getD,
renderedStatement := "Std.TreeSet.Raw.getD.{u} {α : Type u} {cmp : αα → Ordering} (t : Std.TreeSet.Raw α cmp)\n (a fallback : α) : α",
isDeprecated := false }),"Std.ExtTreeSet", "Std.ExtTreeSet.getD", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtTreeSet.getD,
renderedStatement := "Std.ExtTreeSet.getD.{u} {α : Type u} {cmp : αα → Ordering} [Std.TransCmp cmp]\n (t : Std.ExtTreeSet α cmp) (a fallback : α) : α",
isDeprecated := false }),]
metadata := {
status := .done
comment := ""
}
def «e23b1119-3b57-433e-a68d-68fd70b9943d» : AssociationTable.Fact .subexpression where
widgetId := "associative-query-operations"
factId := "e23b1119-3b57-433e-a68d-68fd70b9943d"
rowId := "e23b1119-3b57-433e-a68d-68fd70b9943d"
rowState := #["Std.DHashMap", "Std.DHashMap.get", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.get,
renderedStatement := "Std.DHashMap.get.{u, v} {α : Type u} {β : α → Type v} {x✝ : BEq α} {x✝¹ : Hashable α} [LawfulBEq α]\n (m : Std.DHashMap α β) (a : α) (h : a ∈ m) : β a",
isDeprecated := false }),"Std.DHashMap.Raw", "Std.DHashMap.Raw.Const.get", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DHashMap.Raw.Const.get,
renderedStatement := "Std.DHashMap.Raw.Const.get.{u, v} {α : Type u} {β : Type v} [BEq α] [Hashable α]\n (m : Std.DHashMap.Raw α fun x => β) (a : α) (h : a ∈ m) : β",
isDeprecated := false }),"Std.ExtDHashMap", "Std.ExtDHashMap.get", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDHashMap.get,
renderedStatement := "Std.ExtDHashMap.get.{u, v} {α : Type u} {β : α → Type v} {x✝ : BEq α} {x✝¹ : Hashable α}\n [LawfulBEq α] (m : Std.ExtDHashMap α β) (a : α) (h : a ∈ m) : β a",
isDeprecated := false }),"Std.DTreeMap", "Std.DTreeMap.get", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.get,
renderedStatement := "Std.DTreeMap.get.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering} [Std.LawfulEqCmp cmp]\n (t : Std.DTreeMap α β cmp) (a : α) (h : a ∈ t) : β a",
isDeprecated := false }),"Std.DTreeMap.Raw", "Std.DTreeMap.Raw.get", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.DTreeMap.Raw.get,
renderedStatement := "Std.DTreeMap.Raw.get.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering}\n [Std.LawfulEqCmp cmp] (t : Std.DTreeMap.Raw α β cmp) (a : α) (h : a ∈ t) : β a",
isDeprecated := false }),"Std.ExtDTreeMap", "Std.ExtDTreeMap.get", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtDTreeMap.get,
renderedStatement := "Std.ExtDTreeMap.get.{u, v} {α : Type u} {β : α → Type v} {cmp : αα → Ordering} [Std.TransCmp cmp]\n [Std.LawfulEqCmp cmp] (t : Std.ExtDTreeMap α β cmp) (a : α) (h : a ∈ t) : β a",
isDeprecated := false }),"Std.HashMap", "app (GetElem.getElem) (Std.HashMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (GetElem.getElem) (Std.HashMap*)", displayShort := "Std.HashMap[·]" },"Std.HashMap.Raw", "app (GetElem.getElem) (Std.HashMap.Raw*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (GetElem.getElem) (Std.HashMap.Raw*)", displayShort := "Std.HashMap.Raw[·]" },"Std.ExtHashMap", "app (GetElem.getElem) (Std.ExtHashMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (GetElem.getElem) (Std.ExtHashMap*)", displayShort := "Std.ExtHashMap[·]" },"Std.TreeMap", "app (GetElem.getElem) (Std.TreeMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (GetElem.getElem) (Std.TreeMap*)", displayShort := "Std.TreeMap[·]" },"Std.TreeMap.Raw", "app (GetElem.getElem) (Std.TreeMap.Raw*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (GetElem.getElem) (Std.TreeMap.Raw*)", displayShort := "Std.TreeMap.Raw[·]" },"Std.ExtTreeMap", "app (GetElem.getElem) (Std.ExtTreeMap*)", Grove.Framework.Subexpression.State.predicate
{ key := "app (GetElem.getElem) (Std.ExtTreeMap*)", displayShort := "Std.ExtTreeMap[·]" },"Std.HashSet", "Std.HashSet.get", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.get,
renderedStatement := "Std.HashSet.get.{u} {α : Type u} [BEq α] [Hashable α] (m : Std.HashSet α) (a : α) (h : a ∈ m) : α",
isDeprecated := false }),"Std.HashSet.Raw", "Std.HashSet.Raw.get", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.HashSet.Raw.get,
renderedStatement := "Std.HashSet.Raw.get.{u} {α : Type u} [BEq α] [Hashable α] (m : Std.HashSet.Raw α) (a : α)\n (h : a ∈ m) : α",
isDeprecated := false }),"Std.ExtHashSet", "Std.ExtHashSet.get", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtHashSet.get,
renderedStatement := "Std.ExtHashSet.get.{u} {α : Type u} {x✝ : BEq α} {x✝¹ : Hashable α} [EquivBEq α] [LawfulHashable α]\n (m : Std.ExtHashSet α) (a : α) (h : a ∈ m) : α",
isDeprecated := false }),"Std.TreeSet", "Std.TreeSet.get", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.get,
renderedStatement := "Std.TreeSet.get.{u} {α : Type u} {cmp : αα → Ordering} (t : Std.TreeSet α cmp) (a : α)\n (h : a ∈ t) : α",
isDeprecated := false }),"Std.TreeSet.Raw", "Std.TreeSet.Raw.get", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.TreeSet.Raw.get,
renderedStatement := "Std.TreeSet.Raw.get.{u} {α : Type u} {cmp : αα → Ordering} (t : Std.TreeSet.Raw α cmp) (a : α)\n (h : a ∈ t) : α",
isDeprecated := false }),"Std.ExtTreeSet", "Std.ExtTreeSet.get", Grove.Framework.Subexpression.State.declaration
(Grove.Framework.Declaration.def
{ name := `Std.ExtTreeSet.get,
renderedStatement := "Std.ExtTreeSet.get.{u} {α : Type u} {cmp : αα → Ordering} [Std.TransCmp cmp]\n (t : Std.ExtTreeSet α cmp) (a : α) (h : a ∈ t) : α",
isDeprecated := false }),]
metadata := {
status := .bad
comment := "Should *Set have GetElem?"
}
def table : AssociationTable.Data .subexpression where
widgetId := "associative-query-operations"
rows := #[
"01f88623-fa5f-4380-9772-b30f2fec5c94", "isEmpty", #["Std.DHashMap", "Std.DHashMap.isEmpty","Std.DHashMap.Raw", "Std.DHashMap.Raw.isEmpty","Std.ExtDHashMap", "Std.ExtDHashMap.isEmpty","Std.DTreeMap", "Std.DTreeMap.isEmpty","Std.DTreeMap.Raw", "Std.DTreeMap.Raw.isEmpty","Std.ExtDTreeMap", "Std.ExtDTreeMap.isEmpty","Std.HashMap", "Std.HashMap.isEmpty","Std.HashMap.Raw", "Std.HashMap.Raw.isEmpty","Std.ExtHashMap", "Std.ExtHashMap.isEmpty","Std.TreeMap", "Std.TreeMap.isEmpty","Std.TreeMap.Raw", "Std.TreeMap.Raw.isEmpty","Std.ExtTreeMap", "Std.ExtTreeMap.isEmpty","Std.HashSet", "Std.HashSet.isEmpty","Std.HashSet.Raw", "Std.HashSet.Raw.isEmpty","Std.ExtHashSet", "Std.ExtHashSet.isEmpty","Std.TreeSet", "Std.TreeSet.isEmpty","Std.TreeSet.Raw", "Std.TreeSet.Raw.isEmpty","Std.ExtTreeSet", "Std.ExtTreeSet.isEmpty",],
"f084f852-af71-45b6-8ab3-d251a8144f72", "size", #["Std.DHashMap", "Std.DHashMap.size","Std.DHashMap.Raw", "Std.DHashMap.Raw.size","Std.ExtDHashMap", "Std.ExtDHashMap.size","Std.DTreeMap", "Std.DTreeMap.size","Std.DTreeMap.Raw", "Std.DTreeMap.Raw.size","Std.ExtDTreeMap", "Std.ExtDTreeMap.size","Std.HashMap", "Std.HashMap.size","Std.HashMap.Raw", "Std.HashMap.Raw.size","Std.ExtHashMap", "Std.ExtHashMap.size","Std.TreeMap", "Std.TreeMap.size","Std.TreeMap.Raw", "Std.TreeMap.Raw.size","Std.ExtTreeMap", "Std.ExtTreeMap.size","Std.HashSet", "Std.HashSet.size","Std.HashSet.Raw", "Std.HashSet.Raw.size","Std.ExtHashSet", "Std.ExtHashSet.size","Std.TreeSet", "Std.TreeSet.size","Std.TreeSet.Raw", "Std.TreeSet.Raw.size","Std.ExtTreeSet", "Std.ExtTreeSet.size",],
"f4e6fa70-5aed-439d-aaad-5f4ced65bf7b", "any", #["Std.DTreeMap", "Std.DTreeMap.any","Std.DTreeMap.Raw", "Std.DTreeMap.Raw.any","Std.ExtDTreeMap", "Std.ExtDTreeMap.any","Std.TreeMap", "Std.TreeMap.any","Std.TreeMap.Raw", "Std.TreeMap.Raw.any","Std.ExtTreeMap", "Std.ExtTreeMap.any","Std.HashSet", "Std.HashSet.any","Std.HashSet.Raw", "Std.HashSet.Raw.any","Std.TreeSet", "Std.TreeSet.any","Std.TreeSet.Raw", "Std.TreeSet.Raw.any","Std.ExtTreeSet", "Std.ExtTreeSet.any",],
"c1d181f6-3204-4956-946f-e81619f9feb4", "all", #["Std.DTreeMap", "Std.DTreeMap.all","Std.DTreeMap.Raw", "Std.DTreeMap.Raw.all","Std.ExtDTreeMap", "Std.ExtDTreeMap.all","Std.TreeMap", "Std.TreeMap.all","Std.TreeMap.Raw", "Std.TreeMap.Raw.all","Std.ExtTreeMap", "Std.ExtTreeMap.all","Std.HashSet", "Std.HashSet.all","Std.HashSet.Raw", "Std.HashSet.Raw.all","Std.TreeSet", "Std.TreeSet.all","Std.TreeSet.Raw", "Std.TreeSet.Raw.all","Std.ExtTreeSet", "Std.ExtTreeSet.all",],
"efe57f41-7db7-4303-b3a6-5216a70c43ce", "getD", #["Std.DHashMap", "Std.DHashMap.getD","Std.DHashMap.Raw", "Std.DHashMap.Raw.getD","Std.ExtDHashMap", "Std.ExtDHashMap.getD","Std.DTreeMap", "Std.DTreeMap.getD","Std.DTreeMap.Raw", "Std.DTreeMap.Raw.getD","Std.ExtDTreeMap", "Std.ExtDTreeMap.getD","Std.HashMap", "Std.HashMap.getD","Std.HashMap.Raw", "Std.HashMap.Raw.getD","Std.ExtHashMap", "Std.ExtHashMap.getD","Std.TreeMap", "Std.TreeMap.getD","Std.TreeMap.Raw", "Std.TreeMap.Raw.getD","Std.ExtTreeMap", "Std.ExtTreeMap.getD","Std.HashSet", "Std.HashSet.getD","Std.HashSet.Raw", "Std.HashSet.Raw.getD","Std.ExtHashSet", "Std.ExtHashSet.getD","Std.TreeSet", "Std.TreeSet.getD","Std.TreeSet.Raw", "Std.TreeSet.Raw.getD","Std.ExtTreeSet", "Std.ExtTreeSet.getD",],
"e23b1119-3b57-433e-a68d-68fd70b9943d", "getElem", #["Std.DHashMap", "Std.DHashMap.get","Std.DHashMap.Raw", "Std.DHashMap.Raw.Const.get","Std.ExtDHashMap", "Std.ExtDHashMap.get","Std.DTreeMap", "Std.DTreeMap.get","Std.DTreeMap.Raw", "Std.DTreeMap.Raw.get","Std.ExtDTreeMap", "Std.ExtDTreeMap.get","Std.HashMap", "app (GetElem.getElem) (Std.HashMap*)","Std.HashMap.Raw", "app (GetElem.getElem) (Std.HashMap.Raw*)","Std.ExtHashMap", "app (GetElem.getElem) (Std.ExtHashMap*)","Std.TreeMap", "app (GetElem.getElem) (Std.TreeMap*)","Std.TreeMap.Raw", "app (GetElem.getElem) (Std.TreeMap.Raw*)","Std.ExtTreeMap", "app (GetElem.getElem) (Std.ExtTreeMap*)","Std.HashSet", "Std.HashSet.get","Std.HashSet.Raw", "Std.HashSet.Raw.get","Std.ExtHashSet", "Std.ExtHashSet.get","Std.TreeSet", "Std.TreeSet.get","Std.TreeSet.Raw", "Std.TreeSet.Raw.get","Std.ExtTreeSet", "Std.ExtTreeSet.get",],
]
facts := #[
«01f88623-fa5f-4380-9772-b30f2fec5c94»,
«f084f852-af71-45b6-8ab3-d251a8144f72»,
«f4e6fa70-5aed-439d-aaad-5f4ced65bf7b»,
«c1d181f6-3204-4956-946f-e81619f9feb4»,
«efe57f41-7db7-4303-b3a6-5216a70c43ce»,
«e23b1119-3b57-433e-a68d-68fd70b9943d»,
]
def restoreState : RestoreStateM Unit := do
addAssociationTable table
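
A short sketch exercising the query rows catalogued above (`isEmpty`, `size`, `getD`, and element access); the sample data and the expected `#eval` results are illustrative:

import Std.Data.TreeMap

open Std

def scores : TreeMap String Nat := TreeMap.ofList [("alice", 3), ("bob", 5)]

#eval scores.isEmpty        -- false
#eval scores.size           -- 2
#eval scores.getD "carol" 0 -- 0, the fallback, since the key is absent
-- element access via `GetElem`, as in the last row (assuming the `GetElem?` instance for `[·]?`)
#eval scores["bob"]?        -- some 5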

View File

@@ -1,459 +0,0 @@
import Grove.Framework
/-
This file is autogenerated by grove. You can manually edit it, for example to resolve merge
conflicts, but be careful.
-/
open Grove.Framework Widget
namespace GroveStdlib.Generated.«slice-producing»
def «c8a13d6d-7ed6-4cd1-a386-23e2d55ce6f7» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "c8a13d6d-7ed6-4cd1-a386-23e2d55ce6f7"
rowId := "c8a13d6d-7ed6-4cd1-a386-23e2d55ce6f7"
rowState := #["String", "String.slice", Declaration.def {
name := `String.slice
renderedStatement := "String.slice (s : String) (startInclusive endExclusive : s.Pos)\n (h : startInclusive ≤ endExclusive) : String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.slice", Declaration.def {
name := `String.Slice.slice
renderedStatement := "String.Slice.slice (s : String.Slice) (newStart newEnd : s.Pos) (h : newStart ≤ newEnd) :\n String.Slice"
isDeprecated := false
}
,"string-pos-forwards", "String.Pos.slice", Declaration.def {
name := `String.Pos.slice
renderedStatement := "String.Pos.slice {s : String} (pos p₀ p₁ : s.Pos) (h₁ : p₀ ≤ pos) (h₂ : pos ≤ p₁) :\n (s.slice p₀ p₁ ⋯).Pos"
isDeprecated := false
}
,"string-pos-backwards", "String.Pos.ofSlice", Declaration.def {
name := `String.Pos.ofSlice
renderedStatement := "String.Pos.ofSlice {s : String} {p₀ p₁ : s.Pos} {h : p₀ ≤ p₁} (pos : (s.slice p₀ p₁ h).Pos) : s.Pos"
isDeprecated := false
}
,"string-slice-pos-forwards", "String.Slice.Pos.slice", Declaration.def {
name := `String.Slice.Pos.slice
renderedStatement := "String.Slice.Pos.slice {s : String.Slice} (pos p₀ p₁ : s.Pos) (h₁ : p₀ ≤ pos) (h₂ : pos ≤ p₁) :\n (s.slice p₀ p₁ ⋯).Pos"
isDeprecated := false
}
,"string-slice-pos-backwards", "String.Slice.Pos.ofSlice", Declaration.def {
name := `String.Slice.Pos.ofSlice
renderedStatement := "String.Slice.Pos.ofSlice {s : String.Slice} {p₀ p₁ : s.Pos} {h : p₀ ≤ p₁}\n (pos : (s.slice p₀ p₁ h).Pos) : s.Pos"
isDeprecated := false
}
,"string-pos-noproof", "String.Pos.sliceOrPanic", Declaration.def {
name := `String.Pos.sliceOrPanic
renderedStatement := "String.Pos.sliceOrPanic {s : String} (pos p₀ p₁ : s.Pos) {h : p₀ ≤ p₁} : (s.slice p₀ p₁ h).Pos"
isDeprecated := false
}
,"string-slice-pos-noproof", "String.Slice.Pos.sliceOrPanic", Declaration.def {
name := `String.Slice.Pos.sliceOrPanic
renderedStatement := "String.Slice.Pos.sliceOrPanic {s : String.Slice} (pos p₀ p₁ : s.Pos) {h : p₀ ≤ p₁} :\n (s.slice p₀ p₁ h).Pos"
isDeprecated := false
}
,]
metadata := {
status := .done
comment := ""
}
def «21b4fdfd-f8b3-44f5-a59e-57f1dc1d6819» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "21b4fdfd-f8b3-44f5-a59e-57f1dc1d6819"
rowId := "21b4fdfd-f8b3-44f5-a59e-57f1dc1d6819"
rowState := #["String", "String.slice?", Declaration.def {
name := `String.slice?
renderedStatement := "String.slice? (s : String) (startInclusive endExclusive : s.Pos) : Option String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.slice?", Declaration.def {
name := `String.Slice.slice?
renderedStatement := "String.Slice.slice? (s : String.Slice) (newStart newEnd : s.Pos) : Option String.Slice"
isDeprecated := false
}
,]
metadata := {
status := .postponed
comment := "Would be good to have better support"
}
def «6f2b6ecb-2f0c-4e45-9da3-eb7f2e15eff0» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "6f2b6ecb-2f0c-4e45-9da3-eb7f2e15eff0"
rowId := "6f2b6ecb-2f0c-4e45-9da3-eb7f2e15eff0"
rowState := #["String", "String.slice!", Declaration.def {
name := `String.slice!
renderedStatement := "String.slice! (s : String) (p₁ p₂ : s.Pos) : String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.slice!", Declaration.def {
name := `String.Slice.slice!
renderedStatement := "String.Slice.slice! (s : String.Slice) (newStart newEnd : s.Pos) : String.Slice"
isDeprecated := false
}
,"string-pos-forwards", "String.Pos.slice!", Declaration.def {
name := `String.Pos.slice!
renderedStatement := "String.Pos.slice! {s : String} (pos p₀ p₁ : s.Pos) : (s.slice! p₀ p₁).Pos"
isDeprecated := false
}
,"string-pos-backwards", "String.Pos.ofSlice!", Declaration.def {
name := `String.Pos.ofSlice!
renderedStatement := "String.Pos.ofSlice! {s : String} {p₀ p₁ : s.Pos} (pos : (s.slice! p₀ p₁).Pos) : s.Pos"
isDeprecated := false
}
,"string-slice-pos-forwards", "String.Slice.Pos.slice!", Declaration.def {
name := `String.Slice.Pos.slice!
renderedStatement := "String.Slice.Pos.slice! {s : String.Slice} (pos p₀ p₁ : s.Pos) : (s.slice! p₀ p₁).Pos"
isDeprecated := false
}
,"string-slice-pos-backwards", "String.Slice.Pos.ofSlice!", Declaration.def {
name := `String.Slice.Pos.ofSlice!
renderedStatement := "String.Slice.Pos.ofSlice! {s : String.Slice} {p₀ p₁ : s.Pos} (pos : (s.slice! p₀ p₁).Pos) : s.Pos"
isDeprecated := false
}
,]
metadata := {
status := .done
comment := ""
}
def «a3bdf66d-bc11-4019-aee9-2f1c1701de52» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "a3bdf66d-bc11-4019-aee9-2f1c1701de52"
rowId := "a3bdf66d-bc11-4019-aee9-2f1c1701de52"
rowState := #["String", "String.trimAsciiStart", Declaration.def {
name := `String.trimAsciiStart
renderedStatement := "String.trimAsciiStart (s : String) : String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.trimAsciiStart", Declaration.def {
name := `String.Slice.trimAsciiStart
renderedStatement := "String.Slice.trimAsciiStart (s : String.Slice) : String.Slice"
isDeprecated := false
}
,]
metadata := {
status := .bad
comment := "Missing `of` version at least"
}
def «f12b2730-7a4d-465c-8a6d-9d051c300fd5» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "f12b2730-7a4d-465c-8a6d-9d051c300fd5"
rowId := "f12b2730-7a4d-465c-8a6d-9d051c300fd5"
rowState := #["String", "String.trimAsciiEnd", Declaration.def {
name := `String.trimAsciiEnd
renderedStatement := "String.trimAsciiEnd (s : String) : String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.trimAsciiEnd", Declaration.def {
name := `String.Slice.trimAsciiEnd
renderedStatement := "String.Slice.trimAsciiEnd (s : String.Slice) : String.Slice"
isDeprecated := false
}
,]
metadata := {
status := .bad
comment := "Missing `of` version at least"
}
def «32307b55-d6d1-4756-a947-dbe4dfde573c» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "32307b55-d6d1-4756-a947-dbe4dfde573c"
rowId := "32307b55-d6d1-4756-a947-dbe4dfde573c"
rowState := #["String", "String.trimAscii", Declaration.def {
name := `String.trimAscii
renderedStatement := "String.trimAscii (s : String) : String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.trimAscii", Declaration.def {
name := `String.Slice.trimAscii
renderedStatement := "String.Slice.trimAscii (s : String.Slice) : String.Slice"
isDeprecated := false
}
,]
metadata := {
status := .bad
comment := "Missing `of` version at least\n"
}
def «dce95a38-f55a-4d6a-ae79-078ffe4b5c15» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "dce95a38-f55a-4d6a-ae79-078ffe4b5c15"
rowId := "dce95a38-f55a-4d6a-ae79-078ffe4b5c15"
rowState := #["String", "String.toSlice", Declaration.def {
name := `String.toSlice
renderedStatement := "String.toSlice (s : String) : String.Slice"
isDeprecated := false
}
,"string-pos-forwards", "String.Pos.toSlice", Declaration.def {
name := `String.Pos.toSlice
renderedStatement := "String.Pos.toSlice {s : String} (pos : s.Pos) : s.toSlice.Pos"
isDeprecated := false
}
,"string-pos-backwards", "String.Pos.ofToSlice", Declaration.def {
name := `String.Pos.ofToSlice
renderedStatement := "String.Pos.ofToSlice {s : String} (pos : s.toSlice.Pos) : s.Pos"
isDeprecated := false
}
,]
metadata := {
status := .done
comment := ""
}
def «005a3f30-5dab-493f-b168-32c36a2bdf7c» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "005a3f30-5dab-493f-b168-32c36a2bdf7c"
rowId := "005a3f30-5dab-493f-b168-32c36a2bdf7c"
rowState := #["String.Slice", "String.Slice.str", Declaration.def {
name := `String.Slice.str
renderedStatement := "String.Slice.str (self : String.Slice) : String"
isDeprecated := false
}
,"string-slice-pos-forwards", "String.Slice.Pos.str", Declaration.def {
name := `String.Slice.Pos.str
renderedStatement := "String.Slice.Pos.str {s : String.Slice} (pos : s.Pos) : s.str.Pos"
isDeprecated := false
}
,"string-slice-pos-backwards", "String.Slice.Pos.ofStr", Declaration.def {
name := `String.Slice.Pos.ofStr
renderedStatement := "String.Slice.Pos.ofStr {s : String.Slice} (pos : s.str.Pos) (h₁ : s.startInclusive ≤ pos)\n (h₂ : pos ≤ s.endExclusive) : s.Pos"
isDeprecated := false
}
,]
metadata := {
status := .bad
comment := "Missing `no proof` version\n"
}
def «5f1a154c-ae2f-43a1-9409-2ce95b163ef3» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "5f1a154c-ae2f-43a1-9409-2ce95b163ef3"
rowId := "5f1a154c-ae2f-43a1-9409-2ce95b163ef3"
rowState := #["String", "String.drop", Declaration.def {
name := `String.drop
renderedStatement := "String.drop (s : String) (n : Nat) : String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.drop", Declaration.def {
name := `String.Slice.drop
renderedStatement := "String.Slice.drop (s : String.Slice) (n : Nat) : String.Slice"
isDeprecated := false
}
,]
metadata := {
status := .bad
comment := "Missing position transformations"
}
def «179518d1-ad07-4b2b-8ffe-3b7616e4c4ab» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "179518d1-ad07-4b2b-8ffe-3b7616e4c4ab"
rowId := "179518d1-ad07-4b2b-8ffe-3b7616e4c4ab"
rowState := #["String", "String.take", Declaration.def {
name := `String.take
renderedStatement := "String.take (s : String) (n : Nat) : String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.take", Declaration.def {
name := `String.Slice.take
renderedStatement := "String.Slice.take (s : String.Slice) (n : Nat) : String.Slice"
isDeprecated := false
}
,]
metadata := {
status := .bad
comment := "Missing position transformations"
}
def «55c587fd-a7a8-4633-a4ae-e2c4e768ad28» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "55c587fd-a7a8-4633-a4ae-e2c4e768ad28"
rowId := "55c587fd-a7a8-4633-a4ae-e2c4e768ad28"
rowState := #["String", "String.dropWhile", Declaration.def {
name := `String.dropWhile
renderedStatement := "String.dropWhile {ρ : Type} (s : String) (pat : ρ) [String.Slice.Pattern.ForwardPattern pat] :\n String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.dropWhile", Declaration.def {
name := `String.Slice.dropWhile
renderedStatement := "String.Slice.dropWhile {ρ : Type} (s : String.Slice) (pat : ρ)\n [String.Slice.Pattern.ForwardPattern pat] : String.Slice"
isDeprecated := false
}
,]
metadata := {
status := .bad
comment := "Missing position transformations"
}
def «d4444684-4279-4400-9be2-561a7cdb32c1» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "d4444684-4279-4400-9be2-561a7cdb32c1"
rowId := "d4444684-4279-4400-9be2-561a7cdb32c1"
rowState := #["String", "String.takeWhile", Declaration.def {
name := `String.takeWhile
renderedStatement := "String.takeWhile {ρ : Type} (s : String) (pat : ρ) [String.Slice.Pattern.ForwardPattern pat] :\n String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.takeWhile", Declaration.def {
name := `String.Slice.takeWhile
renderedStatement := "String.Slice.takeWhile {ρ : Type} (s : String.Slice) (pat : ρ)\n [String.Slice.Pattern.ForwardPattern pat] : String.Slice"
isDeprecated := false
}
,]
metadata := {
status := .bad
comment := "Missing position transformations"
}
def «1c9e6689-65a0-4d4b-b001-256e83917d98» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "1c9e6689-65a0-4d4b-b001-256e83917d98"
rowId := "1c9e6689-65a0-4d4b-b001-256e83917d98"
rowState := #["String", "String.dropEndWhile", Declaration.def {
name := `String.dropEndWhile
renderedStatement := "String.dropEndWhile {ρ : Type} (s : String) (pat : ρ) [String.Slice.Pattern.BackwardPattern pat] :\n String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.dropEndWhile", Declaration.def {
name := `String.Slice.dropEndWhile
renderedStatement := "String.Slice.dropEndWhile {ρ : Type} (s : String.Slice) (pat : ρ)\n [String.Slice.Pattern.BackwardPattern pat] : String.Slice"
isDeprecated := false
}
,]
metadata := {
status := .bad
comment := "Missing position transformations"
}
def «b836052b-3470-4a8e-8989-6951c898de37» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "b836052b-3470-4a8e-8989-6951c898de37"
rowId := "b836052b-3470-4a8e-8989-6951c898de37"
rowState := #["String", "String.takeEndWhile", Declaration.def {
name := `String.takeEndWhile
renderedStatement := "String.takeEndWhile {ρ : Type} (s : String) (pat : ρ) [String.Slice.Pattern.BackwardPattern pat] :\n String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.takeEndWhile", Declaration.def {
name := `String.Slice.takeEndWhile
renderedStatement := "String.Slice.takeEndWhile {ρ : Type} (s : String.Slice) (pat : ρ)\n [String.Slice.Pattern.BackwardPattern pat] : String.Slice"
isDeprecated := false
}
,]
metadata := {
status := .bad
comment := "Missing position transformations"
}
def «5aa777d8-9642-43d8-9e20-30400fb8bb9d» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "5aa777d8-9642-43d8-9e20-30400fb8bb9d"
rowId := "5aa777d8-9642-43d8-9e20-30400fb8bb9d"
rowState := #["String", "String.dropPrefix", Declaration.def {
name := `String.dropPrefix
renderedStatement := "String.dropPrefix {ρ : Type} (s : String) (pat : ρ) [String.Slice.Pattern.ForwardPattern pat] :\n String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.dropPrefix", Declaration.def {
name := `String.Slice.dropPrefix
renderedStatement := "String.Slice.dropPrefix {ρ : Type} (s : String.Slice) (pat : ρ)\n [String.Slice.Pattern.ForwardPattern pat] : String.Slice"
isDeprecated := false
}
,]
metadata := {
status := .bad
comment := "Missing position transformations"
}
def «80e3869d-fcfe-459d-8433-fe221f7b3c7a» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "80e3869d-fcfe-459d-8433-fe221f7b3c7a"
rowId := "80e3869d-fcfe-459d-8433-fe221f7b3c7a"
rowState := #["String", "String.dropSuffix", Declaration.def {
name := `String.dropSuffix
renderedStatement := "String.dropSuffix {ρ : Type} (s : String) (pat : ρ) [String.Slice.Pattern.BackwardPattern pat] :\n String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.dropSuffix", Declaration.def {
name := `String.Slice.dropSuffix
renderedStatement := "String.Slice.dropSuffix {ρ : Type} (s : String.Slice) (pat : ρ)\n [String.Slice.Pattern.BackwardPattern pat] : String.Slice"
isDeprecated := false
}
,]
metadata := {
status := .bad
comment := "Missing position transformations"
}
def «4feda3e0-903b-4d52-b34e-0af70f7866e0» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "4feda3e0-903b-4d52-b34e-0af70f7866e0"
rowId := "4feda3e0-903b-4d52-b34e-0af70f7866e0"
rowState := #["String", "String.dropPrefix?", Declaration.def {
name := `String.dropPrefix?
renderedStatement := "String.dropPrefix? {ρ : Type} (s : String) (pat : ρ) [String.Slice.Pattern.ForwardPattern pat] :\n Option String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.dropPrefix?", Declaration.def {
name := `String.Slice.dropPrefix?
renderedStatement := "String.Slice.dropPrefix? {ρ : Type} (s : String.Slice) (pat : ρ)\n [String.Slice.Pattern.ForwardPattern pat] : Option String.Slice"
isDeprecated := false
}
,]
metadata := {
status := .postponed
comment := "Missing position transformations"
}
def «45ca44c8-fbd5-4400-8297-a60778f302b0» : AssociationTable.Fact .declaration where
widgetId := "slice-producing"
factId := "45ca44c8-fbd5-4400-8297-a60778f302b0"
rowId := "45ca44c8-fbd5-4400-8297-a60778f302b0"
rowState := #["String", "String.dropSuffix?", Declaration.def {
name := `String.dropSuffix?
renderedStatement := "String.dropSuffix? {ρ : Type} (s : String) (pat : ρ) [String.Slice.Pattern.BackwardPattern pat] :\n Option String.Slice"
isDeprecated := false
}
,"String.Slice", "String.Slice.dropSuffix?", Declaration.def {
name := `String.Slice.dropSuffix?
renderedStatement := "String.Slice.dropSuffix? {ρ : Type} (s : String.Slice) (pat : ρ)\n [String.Slice.Pattern.BackwardPattern pat] : Option String.Slice"
isDeprecated := false
}
,]
metadata := {
status := .postponed
comment := "Missing position transformations"
}
def table : AssociationTable.Data .declaration where
widgetId := "slice-producing"
rows := #[
"c8a13d6d-7ed6-4cd1-a386-23e2d55ce6f7", "slice", #["String", "String.slice","String.Slice", "String.Slice.slice","string-pos-forwards", "String.Pos.slice","string-pos-backwards", "String.Pos.ofSlice","string-slice-pos-forwards", "String.Slice.Pos.slice","string-slice-pos-backwards", "String.Slice.Pos.ofSlice","string-pos-noproof", "String.Pos.sliceOrPanic","string-slice-pos-noproof", "String.Slice.Pos.sliceOrPanic",],
"21b4fdfd-f8b3-44f5-a59e-57f1dc1d6819", "slice?", #["String", "String.slice?","String.Slice", "String.Slice.slice?",],
"6f2b6ecb-2f0c-4e45-9da3-eb7f2e15eff0", "slice!", #["String", "String.slice!","String.Slice", "String.Slice.slice!","string-pos-forwards", "String.Pos.slice!","string-pos-backwards", "String.Pos.ofSlice!","string-slice-pos-forwards", "String.Slice.Pos.slice!","string-slice-pos-backwards", "String.Slice.Pos.ofSlice!",],
"a3bdf66d-bc11-4019-aee9-2f1c1701de52", "trimAsciiStart", #["String", "String.trimAsciiStart","String.Slice", "String.Slice.trimAsciiStart",],
"f12b2730-7a4d-465c-8a6d-9d051c300fd5", "trimAsciiEnd", #["String", "String.trimAsciiEnd","String.Slice", "String.Slice.trimAsciiEnd",],
"32307b55-d6d1-4756-a947-dbe4dfde573c", "trimAscii", #["String", "String.trimAscii","String.Slice", "String.Slice.trimAscii",],
"dce95a38-f55a-4d6a-ae79-078ffe4b5c15", "toSlice", #["String", "String.toSlice","string-pos-forwards", "String.Pos.toSlice","string-pos-backwards", "String.Pos.ofToSlice",],
"005a3f30-5dab-493f-b168-32c36a2bdf7c", "str", #["String.Slice", "String.Slice.str","string-slice-pos-forwards", "String.Slice.Pos.str","string-slice-pos-backwards", "String.Slice.Pos.ofStr",],
"5f1a154c-ae2f-43a1-9409-2ce95b163ef3", "drop", #["String", "String.drop","String.Slice", "String.Slice.drop",],
"179518d1-ad07-4b2b-8ffe-3b7616e4c4ab", "take", #["String", "String.take","String.Slice", "String.Slice.take",],
"55c587fd-a7a8-4633-a4ae-e2c4e768ad28", "dropWhile", #["String", "String.dropWhile","String.Slice", "String.Slice.dropWhile",],
"d4444684-4279-4400-9be2-561a7cdb32c1", "takeWhile", #["String", "String.takeWhile","String.Slice", "String.Slice.takeWhile",],
"1c9e6689-65a0-4d4b-b001-256e83917d98", "dropEndWhile", #["String", "String.dropEndWhile","String.Slice", "String.Slice.dropEndWhile",],
"b836052b-3470-4a8e-8989-6951c898de37", "takeEndWhile", #["String", "String.takeEndWhile","String.Slice", "String.Slice.takeEndWhile",],
"5aa777d8-9642-43d8-9e20-30400fb8bb9d", "dropPrefix", #["String", "String.dropPrefix","String.Slice", "String.Slice.dropPrefix",],
"80e3869d-fcfe-459d-8433-fe221f7b3c7a", "dropSuffix", #["String", "String.dropSuffix","String.Slice", "String.Slice.dropSuffix",],
"4feda3e0-903b-4d52-b34e-0af70f7866e0", "dropPrefix?", #["String", "String.dropPrefix?","String.Slice", "String.Slice.dropPrefix?",],
"45ca44c8-fbd5-4400-8297-a60778f302b0", "dropSuffix?", #["String", "String.dropSuffix?","String.Slice", "String.Slice.dropSuffix?",],
]
facts := #[
«c8a13d6d-7ed6-4cd1-a386-23e2d55ce6f7»,
«21b4fdfd-f8b3-44f5-a59e-57f1dc1d6819»,
«6f2b6ecb-2f0c-4e45-9da3-eb7f2e15eff0»,
«a3bdf66d-bc11-4019-aee9-2f1c1701de52»,
«f12b2730-7a4d-465c-8a6d-9d051c300fd5»,
«32307b55-d6d1-4756-a947-dbe4dfde573c»,
«dce95a38-f55a-4d6a-ae79-078ffe4b5c15»,
«005a3f30-5dab-493f-b168-32c36a2bdf7c»,
«5f1a154c-ae2f-43a1-9409-2ce95b163ef3»,
«179518d1-ad07-4b2b-8ffe-3b7616e4c4ab»,
«55c587fd-a7a8-4633-a4ae-e2c4e768ad28»,
«d4444684-4279-4400-9be2-561a7cdb32c1»,
«1c9e6689-65a0-4d4b-b001-256e83917d98»,
«b836052b-3470-4a8e-8989-6951c898de37»,
«5aa777d8-9642-43d8-9e20-30400fb8bb9d»,
«80e3869d-fcfe-459d-8433-fe221f7b3c7a»,
«4feda3e0-903b-4d52-b34e-0af70f7866e0»,
«45ca44c8-fbd5-4400-8297-a60778f302b0»,
]
def restoreState : RestoreStateM Unit := do
addAssociationTable table

View File

@@ -1,31 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import GroveStdlib.Std.CoreTypesAndOperations
import GroveStdlib.Std.LanguageConstructs
import GroveStdlib.Std.Libraries
import GroveStdlib.Std.OperatingSystemAbstractions
open Grove.Framework Widget
namespace GroveStdlib
namespace Std
def introduction : Node :=
.text "introduction", "Welcome to the interactive Lean standard library outline!"
end Std
def std : Node :=
.section "stdlib" "The Lean standard library" #[
Std.introduction,
Std.coreTypesAndOperations,
Std.languageConstructs,
Std.libraries,
Std.operatingSystemAbstractions
]
end GroveStdlib

View File

@@ -1,28 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
import GroveStdlib.Std.CoreTypesAndOperations.BasicTypes
import GroveStdlib.Std.CoreTypesAndOperations.Containers
import GroveStdlib.Std.CoreTypesAndOperations.Numbers
import GroveStdlib.Std.CoreTypesAndOperations.StringsAndFormatting
open Grove.Framework Widget
namespace GroveStdlib.Std
namespace CoreTypesAndOperations
end CoreTypesAndOperations
def coreTypesAndOperations : Node :=
.section "core-types-and-operations" "Core types and operations" #[
CoreTypesAndOperations.basicTypes,
CoreTypesAndOperations.containers,
CoreTypesAndOperations.numbers,
CoreTypesAndOperations.stringsAndFormatting
]
end GroveStdlib.Std

View File

@@ -1,19 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
open Grove.Framework Widget
namespace GroveStdlib.Std.CoreTypesAndOperations
namespace BasicTypes
end BasicTypes
def basicTypes : Node :=
.section "basic-types" "Basic types" #[]
end GroveStdlib.Std.CoreTypesAndOperations

View File

@@ -1,110 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
open Grove.Framework Widget
namespace GroveStdlib.Std.CoreTypesAndOperations
namespace Containers
namespace SequentialContainers
end SequentialContainers
def sequentialContainers : Node :=
.section "sequential-containers" "Sequential containers" #[]
namespace AssociativeContainers
def associativeContainers : List Lean.Name :=
[`Std.DHashMap, `Std.DHashMap.Raw, `Std.ExtDHashMap, `Std.DTreeMap, `Std.DTreeMap.Raw, `Std.ExtDTreeMap, `Std.HashMap,
`Std.HashMap.Raw, `Std.ExtHashMap, `Std.TreeMap, `Std.TreeMap.Raw, `Std.ExtTreeMap, `Std.HashSet, `Std.HashSet.Raw, `Std.ExtHashSet,
`Std.TreeSet, `Std.TreeSet.Raw, `Std.ExtTreeSet]
def associativeQueryOperations : AssociationTable .subexpression associativeContainers where
id := "associative-query-operations"
title := "Associative query operations"
description := "Operations that take as input an associative container and return a 'single' piece of information (e.g., `GetElem` or `isEmpty`, but not `toList`)."
dataSources n :=
(DataSource.definitionsInNamespace n)
|>.map Subexpression.declaration
|>.or (DataSource.getElem n)
def associativeCreationOperations : AssociationTable .subexpression associativeContainers where
id := "associative-creation-operations"
title := "Associative creation operations"
description := "Operations that create a new associative container"
dataSources n :=
(DataSource.definitionsInNamespace n)
|>.map Subexpression.declaration
|>.or (DataSource.emptyCollection n)
def associativeModificationOperations : AssociationTable .subexpression associativeContainers where
id := "associative-modification-operations"
title := "Associative modification operations"
description := "Operations that both accept and return an associative container"
dataSources n :=
(DataSource.definitionsInNamespace n)
|>.map Subexpression.declaration
def associativeCreateThenQuery : Table .subexpression .subexpression .declaration associativeContainers where
id := "associative-create-then-query"
title := "Associative create then query"
description := "Lemmas that say what happens when creating a new associative container and then immediately querying from it"
rowsFrom := .table associativeCreationOperations
columnsFrom := .table associativeQueryOperations
cellData := .classic _ { relevantNamespaces := associativeContainers }
def allOperationsCovered : Assertion where
widgetId := "associative-all-operations-covered"
title := "All operations on associative containers covered"
description := "All operations on an associative container should appear in at least one of the tables"
check := do
let allValuesArray : Array String ← #[associativeQueryOperations, associativeCreationOperations, associativeModificationOperations].flatMapM valuesInAssociationTable
let allValues : Std.HashSet String := Std.HashSet.ofArray allValuesArray
let env ← Lean.getEnv
let mut numBad := 0
for (n, _) in env.constants do
if associativeContainers.any (fun namesp => namesp.isPrefixOf n) then
if !(n.toString ∈ allValues) then
numBad := numBad + 1
return #[{
assertionId := "all-covered"
description := "All operations should be covered"
passed := numBad == 0
message := if numBad = 0 then "All operations were covered" else s!"There were {numBad} operations that were not covered."
}]
end AssociativeContainers
open AssociativeContainers in
def associativeContainers : Node :=
.section "associative-containers" "Associative containers" #[
.associationTable associativeQueryOperations,
.associationTable associativeCreationOperations,
.associationTable associativeModificationOperations,
.table associativeCreateThenQuery,
.assertion allOperationsCovered
]
namespace PersistentDataStructures
end PersistentDataStructures
def persistentDataStructures : Node :=
.section "persistent-data-structures" "Persistent data structures" #[]
end Containers
def containers : Node :=
.section "containers" "Containers" #[
Containers.sequentialContainers,
Containers.associativeContainers,
Containers.persistentDataStructures
]
end GroveStdlib.Std.CoreTypesAndOperations

View File

@@ -1,19 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
open Grove.Framework Widget
namespace GroveStdlib.Std.CoreTypesAndOperations
namespace Numbers
end Numbers
def numbers : Node :=
.section "numbers" "Numbers" #[]
end GroveStdlib.Std.CoreTypesAndOperations

View File

@@ -1,97 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
open Grove.Framework Widget
namespace GroveStdlib.Std.CoreTypesAndOperations
namespace StringsAndFormatting
open Lean Meta
def introduction : Text where
id := "string-introduction"
content := Grove.Markdown.render [
.h1 "The Lean string library",
.text "The Lean standard library contains a fully-featured string library, centered around the types `String` and `String.Slice`.",
.text "`String` is defined as the subtype of `ByteArray` of valid UTF-8 strings. A `String.Slice` is a `String` together with a start and end position.",
.text "`String` is equivalent to `List Char`, but it has a more efficient runtime representation. While the logical model based on `ByteArray` is overwritten in the runtime, the runtime implementation is very similar to the logical model, with the main difference being that the length of a string in Unicode code points is cached in the runtime implementation.",
.text "We are considering removing this feature in the future (i.e., deprecating `String.length`), as the number of UTF-8 codepoints in a string is not particularly useful, and if needed it can be computed in linear time using `s.positions.count`."
]
def highLevelStringTypes : List Lean.Name :=
[`String, `String.Slice, `String.Pos, `String.Slice.Pos]
def creatingStringsAndSlices : Text where
id := "transforming-strings-and-slices"
content := Grove.Markdown.render [
.h2 "Transforming strings and slices",
.text "The Lean standard library contains a number of functions that take one or more strings and slices and return a string or a slice.",
.text "If possible, these functions should avoid allocating a new string, and return a slice of their input(s) instead.",
.text "Usually, for every operation `f`, there will be functions `String.f` and `String.Slice.f`, where `String.f s` is defined as `String.Slice.f s.toSlice`.",
.text "In particular, functions that transform strings and slices should live in the `String` and `String.Slice` namespaces even if they involve a `String.Pos`/`String.Slice.Pos` (like `String.sliceTo`), for reasons that will become clear shortly.",
.h3 "Transforming positions",
.text "Since positions on strings and slices are dependent on the string or slice, whenever users transform a string/slice, they will be interested in interpreting positions on the original string/slice as positions on the result, or vice versa.",
.text "Consequently, every operation that transforms a string or slice should come with a corresponding set of transformations between positions, usually in both directions, possibly with one of the directions being conditional.",
.text "For example, given a string `s` and a position `p` on `s`, we have the slice `s.sliceFrom p`, which is the slice from `p` to the end of `s`. A position on `s.sliceFrom p` can always be interpreted as a position on `s`. This is the \"backwards\" transformation. Conversely, a position `q` on `s` can be interpreted as a position on `s.sliceFrom p` as long as `p ≤ q`. This is the conditional forwards direction.",
.text "The convention for naming these transformations is that the forwards transformation should have the same name as the transformation on strings/slices, but it should be located in the `String.Pos` or `String.Slice.Pos` namespace, depending on the type of the starting position (so that dot notation is possible for the forward direction). The backwards transformation should have the same name as the operation on strings/slices, but with an `of` prefix, and live in the same namespace as the forwards transformation (so in general dot notation will not be available).",
.text "So, in the `sliceFrom` example, the forward direction would be called `String.Pos.sliceFrom`, while the backwards direction should be called `String.Pos.ofSliceFrom` (not `String.Slice.Pos.ofSliceFrom`).",
.text "If one of the directions is conditional, it should have a corresponding panicking operation that does not require a proof; in our example this would be `String.Pos.sliceFrom!`.",
.text "Sometimes there is a name clash for the panicking operations if the operation on strings is already panicking. For example, there are both `String.slice` and `String.slice!`. If the original operation is already panicking, we only provide panicking transformation operations. But now `String.Pos.slice!` could refer both to the panicking forwards transformation associated with `String.slice`, and also to the (only) forwards transformation associated with `String.slice!`. In this situation, we use an `orPanic` suffix to disambiguate. So the panicking forwards operation associated with `String.slice` is called `String.Pos.sliceOrPanic`, and the forwards operation associated with `String.slice!` is called `String.Pos.slice!`."
]
-- TODO: also include the `HAppend` instance(s)
def sliceProducing : AssociationTable (β := Alias Lean.Name) .declaration
[`String, `String.Slice,
Alias.mk `String.Pos "string-pos-forwards" "String.Pos (forwards)",
Alias.mk `String.Pos "string-pos-backwards" "String.Pos (backwards)",
Alias.mk `String.Pos "string-pos-noproof" "String.Pos (no proof)",
Alias.mk `String.Slice.Pos "string-slice-pos-forwards" "String.Slice.Pos (forwards)",
Alias.mk `String.Slice.Pos "string-slice-pos-backwards" "String.Slice.Pos (backwards)",
Alias.mk `String.Slice.Pos "string-slice-pos-noproof" "String.Slice.Pos (no proof)"] where
id := "slice-producing"
title := "String functions returning strings or slices"
description := "Operations on strings and string slices that themselves return a new string slice."
dataSources n := DataSource.definitionsInNamespace n.inner
def sliceProducingComplete : Assertion where
widgetId := "slice-producing-complete"
title := "Slice-producing table is complete"
description := "All functions in the `String.**` namespace that return a string or a slice are covered in the table"
check := do
let mut ans := #[]
let covered := Std.HashSet.ofArray (← valuesInAssociationTable sliceProducing)
let pred : DataSource.DeclarationPredicate :=
DataSource.DeclarationPredicate.all [.isDefinition, .not .isDeprecated,
.notInNamespace `String.Pos.Raw, .notInNamespace `String.Legacy,
.not .isInstance]
let env ← getEnv
for name in declarationsMatching `String pred do
let some c := env.find? name | continue
if c.type.getForallBody.getUsedConstants.any (fun n => n == ``String || n == ``String.Slice) then
let success : Bool := name.toString ∈ covered
ans := ans.push {
assertionId := name.toString
description := s!"`{name}` should appear in the table."
passed := success
message := s!"`{name}` was{if success then "" else " not"} found in the table."
}
return ans
end StringsAndFormatting
open StringsAndFormatting
def stringsAndFormatting : Node :=
.section "strings-and-formatting" "Strings and formatting"
#[.text introduction,
.text creatingStringsAndSlices,
.associationTable sliceProducing,
.assertion sliceProducingComplete]
end GroveStdlib.Std.CoreTypesAndOperations

View File

@@ -1,26 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
import GroveStdlib.Std.LanguageConstructs.ComparisonOrderingHashing
import GroveStdlib.Std.LanguageConstructs.Monads
import GroveStdlib.Std.LanguageConstructs.RangesAndIterators
open Grove.Framework Widget
namespace GroveStdlib.Std
namespace LanguageConstructs
end LanguageConstructs
def languageConstructs : Node :=
.section "language-constructs" "Language constructs" #[
LanguageConstructs.comparisonOrderingHashing,
LanguageConstructs.monads,
LanguageConstructs.rangesAndIterators
]
end GroveStdlib.Std

View File

@@ -1,19 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
open Grove.Framework Widget
namespace GroveStdlib.Std.LanguageConstructs
namespace ComparisonOrderingHashing
end ComparisonOrderingHashing
def comparisonOrderingHashing : Node :=
.section "comparison-ordering-hashing" "Comparison, ordering, hashing" #[]
end GroveStdlib.Std.LanguageConstructs

View File

@@ -1,19 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
open Grove.Framework Widget
namespace GroveStdlib.Std.LanguageConstructs
namespace Monads
end Monads
def monads : Node :=
.section "monads" "Monads" #[]
end GroveStdlib.Std.LanguageConstructs

View File

@@ -1,19 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
open Grove.Framework Widget
namespace GroveStdlib.Std.LanguageConstructs
namespace RangesAndIterators
end RangesAndIterators
def rangesAndIterators : Node :=
.section "ranges-and-iterators" "Ranges and iterators" #[]
end GroveStdlib.Std.LanguageConstructs

View File

@@ -1,24 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
import GroveStdlib.Std.Libraries.DateAndTime
import GroveStdlib.Std.Libraries.RandomNumbers
open Grove.Framework Widget
namespace GroveStdlib.Std
namespace Libraries
end Libraries
def libraries : Node :=
.section "libraries" "Libraries" #[
Libraries.dateAndTime,
Libraries.randomNumbers
]
end GroveStdlib.Std

View File

@@ -1,19 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
open Grove.Framework Widget
namespace GroveStdlib.Std.Libraries
namespace DateAndTime
end DateAndTime
def dateAndTime : Node :=
.section "date-and-time" "Date and time" #[]
end GroveStdlib.Std.Libraries

View File

@@ -1,19 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
open Grove.Framework Widget
namespace GroveStdlib.Std.Libraries
namespace RandomNumbers
end RandomNumbers
def randomNumbers : Node :=
.section "random-numbers" "Random numbers" #[]
end GroveStdlib.Std.Libraries

View File

@@ -1,30 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
import GroveStdlib.Std.OperatingSystemAbstractions.AsynchronousIO
import GroveStdlib.Std.OperatingSystemAbstractions.BasicIO
import GroveStdlib.Std.OperatingSystemAbstractions.ConcurrencyAndParallelism
import GroveStdlib.Std.OperatingSystemAbstractions.EnvironmentFileSystemProcesses
import GroveStdlib.Std.OperatingSystemAbstractions.Locales
open Grove.Framework Widget
namespace GroveStdlib.Std
namespace OperatingSystemAbstractions
end OperatingSystemAbstractions
def operatingSystemAbstractions : Node :=
.section "operating-system-abstractions" "Operating system abstractions" #[
OperatingSystemAbstractions.asynchronousIO,
OperatingSystemAbstractions.basicIO,
OperatingSystemAbstractions.concurrencyAndParallelism,
OperatingSystemAbstractions.environmentFileSystemProcesses,
OperatingSystemAbstractions.locales
]
end GroveStdlib.Std

View File

@@ -1,19 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
open Grove.Framework Widget
namespace GroveStdlib.Std.OperatingSystemAbstractions
namespace AsynchronousIO
end AsynchronousIO
def asynchronousIO : Node :=
.section "asynchronous-io" "Asynchronous I/O" #[]
end GroveStdlib.Std.OperatingSystemAbstractions

View File

@@ -1,19 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
open Grove.Framework Widget
namespace GroveStdlib.Std.OperatingSystemAbstractions
namespace BasicIO
end BasicIO
def basicIO : Node :=
.section "basic-io" "Basic I/O" #[]
end GroveStdlib.Std.OperatingSystemAbstractions

View File

@@ -1,19 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
open Grove.Framework Widget
namespace GroveStdlib.Std.OperatingSystemAbstractions
namespace ConcurrencyAndParallelism
end ConcurrencyAndParallelism
def concurrencyAndParallelism : Node :=
.section "concurrency-and-parallelism" "Concurrency and parallelism" #[]
end GroveStdlib.Std.OperatingSystemAbstractions

View File

@@ -1,19 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
open Grove.Framework Widget
namespace GroveStdlib.Std.OperatingSystemAbstractions
namespace EnvironmentFileSystemProcesses
end EnvironmentFileSystemProcesses
def environmentFileSystemProcesses : Node :=
.section "environment-filesystem-processes" "Environment, file system, processes" #[]
end GroveStdlib.Std.OperatingSystemAbstractions

View File

@@ -1,19 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import Grove.Framework
open Grove.Framework Widget
namespace GroveStdlib.Std.OperatingSystemAbstractions
namespace Locales
end Locales
def locales : Node :=
.section "locales" "Locales" #[]
end GroveStdlib.Std.OperatingSystemAbstractions

View File

@@ -1,18 +0,0 @@
/-
Copyright (c) 2025 Lean FRO, LLC. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Markus Himmel
-/
import GroveStdlib.Std
import GroveStdlib.Generated
def config : Grove.Framework.Project.Configuration where
projectNamespace := `GroveStdlib
def project : Grove.Framework.Project where
config := config
rootNode := GroveStdlib.std
restoreState := GroveStdlib.Generated.restoreState
def main (args : List String) : IO UInt32 :=
Grove.Framework.main project #[`Init, `Std, `Lean] args

View File

@@ -1,3 +0,0 @@
# Standard library QA
This directory contains the [Grove](github.com/TwoFX/grove) data files for the standard library.

View File

@@ -1,12 +0,0 @@
#!/bin/sh
lake exe grove-stdlib --full metadata.json
cd .lake/packages/grove/frontend
npm install
cp ../../../../metadata.json public/metadata.json
if [ -f "../../../../invalidated.json" ]; then
cp ../../../../invalidated.json public/invalidated.json
GROVE_DATA_LOCATION=public/metadata.json GROVE_UPSTREAM_INVALIDATED_FACTS_LOCATION=public/invalidated.json npm run dev
else
GROVE_DATA_LOCATION=public/metadata.json npm run dev
fi

View File

@@ -1,25 +0,0 @@
{"version": "1.1.0",
"packagesDir": ".lake/packages",
"packages":
[{"url": "https://github.com/TwoFx/grove.git",
"type": "git",
"subDir": "backend",
"scope": "",
"rev": "c580a425c9b7fa2aebaec2a1d8de16b2e2283c40",
"name": "grove",
"manifestFile": "lake-manifest.json",
"inputRev": "master",
"inherited": false,
"configFile": "lakefile.toml"},
{"url": "https://github.com/leanprover/lean4-cli",
"type": "git",
"subDir": null,
"scope": "leanprover",
"rev": "d9fc8ae23024be37424a189982c92356e37935c8",
"name": "Cli",
"manifestFile": "lake-manifest.json",
"inputRev": "nightly-testing",
"inherited": true,
"configFile": "lakefile.toml"}],
"name": "grovestdlib",
"lakeDir": ".lake"}

View File

@@ -1,18 +0,0 @@
name = "grovestdlib"
version = "0.1.0"
defaultTargets = ["grove-stdlib"]
[[require]]
name = "grove"
git = "https://github.com/TwoFx/grove.git"
rev = "master"
subDir = "backend"
[[lean_lib]]
name = "GroveStdlib"
root = "GroveStdlib"
[[lean_exe]]
name = "grove-stdlib"
supportInterpreter = true
root = "Main"

View File

@@ -1 +0,0 @@
lean4

View File

@@ -1,3 +0,0 @@
#!/bin/sh
lake exe grove-stdlib --invalidated invalidated.json

File diff suppressed because one or more lines are too long

(Deleted image file, 68 KiB.)

View File

@@ -1,260 +0,0 @@
# Standard library naming conventions
The easiest way to access a result in the standard library is to correctly guess the name of the declaration (possibly with the help of identifier autocompletion). This is faster and has lower friction than more sophisticated search tools, so easily guessable names (which are still reasonably short) make Lean users more productive.
The guide that follows contains very few hard rules, many heuristics and a selection of examples. It cannot and does not present a deterministic algorithm for choosing good names in all situations. It is intended as a living document that gets clarified and expanded as situations arise during code reviews for the standard library. If applying one of the suggestions in this guide leads to nonsensical results in a certain situation, it is
probably safe to ignore the suggestion (or even better, suggest a way to improve the suggestion).
## Prelude
Identifiers use a mix of `UpperCamelCase`, `lowerCamelCase` and `snake_case`, used for types, data, and theorems, respectively.
Structure fields should be named such that the projections have the correct names.
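For example, a hypothetical structure might be laid out as follows so that the autogenerated projections follow the conventions for data and theorems (the names are invented for this guide):
```lean
structure PositiveNat where
  /-- Data-valued field: `lowerCamelCase`, giving the projection `PositiveNat.val`. -/
  val : Nat
  /-- Proof-valued field: named like a theorem, giving the projection `PositiveNat.val_pos`. -/
  val_pos : 0 < val
```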
## Naming convention for types
When defining a type, i.e., a (possibly 0-ary) function whose codomain is Sort u for some u, it should be named in UpperCamelCase. Examples include `List`, and `List.IsPrefix`.
When defining a predicate, prefix the name by `Is`, like in `List.IsPrefix`. The `Is` prefix may be omitted if
* the resulting name would be ungrammatical, or
* the predicate depends on additional data in a way where the `Is` prefix would be confusing (like `List.Pairwise`), or
* the name is an adjective (like `Std.Time.Month.Ordinal.Valid`)
## Namespaces and generalized projection notation
Almost always, definitions and theorems relating to a type should be placed in a namespace with the same name as the type. For example, operations and theorems about lists should be placed in the `List` namespace, and operations and theorems about `Std.Time.PlainDate` should be placed in the `Std.Time.PlainDate` namespace.
Declarations in the root namespace will be relatively rare. The most common declarations in the root namespace are those about data and properties exported by notation type classes, as long as they are not about a specific type implementing that type class. For example, we have
```lean
theorem beq_iff_eq [BEq α] [LawfulBEq α] {a b : α} : a == b ↔ a = b := sorry
```
in the root namespace, but
```lean
theorem List.cons_beq_cons [BEq α] {a b : α} {l₁ l₂ : List α} :
(a :: l₁ == b :: l₂) = (a == b && l₁ == l₂) := rfl
```
belongs in the `List` namespace.
Subtleties arise when multiple namespaces are in play. Generally, place your theorem in the most specific namespace that appears in one of the hypotheses of the theorem. The following names are both correct according to this convention:
```lean
theorem List.Sublist.reverse : l₁ <+ l₂ → l₁.reverse <+ l₂.reverse := sorry
theorem List.reverse_sublist : l₁.reverse <+ l₂.reverse ↔ l₁ <+ l₂ := sorry
```
Notice that the second theorem does not have a hypothesis of type `List.Sublist l` for some `l`, so the name `List.Sublist.reverse_iff` would be incorrect.
The advantage of placing results in a namespace like `List.Sublist` is that it enables generalized projection notation, i.e., given `h : l₁ <+ l₂`,
one can write `h.reverse` to obtain a proof of `l₁.reverse <+ l₂.reverse`. Thinking about which dot notations are convenient can act as a guideline
for deciding where to place a theorem, and is, on occasion, a good reason to duplicate a theorem into multiple namespaces.
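As a self-contained sketch of the dot notation (the lemma name `reverse_example` is invented here so that the snippet does not rely on any particular library lemma):
```lean
theorem List.Sublist.reverse_example {α : Type} {l₁ l₂ : List α} (h : l₁ <+ l₂) :
    l₁.reverse <+ l₂.reverse := sorry

-- Because the lemma lives in the `List.Sublist` namespace, generalized projection
-- notation makes it available as `h.reverse_example`.
example {α : Type} {l₁ l₂ : List α} (h : l₁ <+ l₂) : l₁.reverse <+ l₂.reverse :=
  h.reverse_example
```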
### The `Std` namespace
New types that are added will usually be placed in the `Std` namespace and in the `Std/` source directory, unless there are good reasons to place
them elsewhere.
Inside the `Std` namespace, all internal declarations should be `private` or else have a name component that clearly marks them as internal, preferably
`Internal`.
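A minimal sketch of what this can look like (the container and its operations are invented for illustration):
```lean
namespace Std.MyContainer

/-- Part of the public API. -/
def isEmpty (size : Nat) : Bool := size == 0

/-- An implementation detail: either mark it `private` ... -/
private def defaultCapacity : Nat := 8

/-- ... or give it a name component that clearly marks it as internal. -/
def Internal.growthFactor : Nat := 2

end Std.MyContainer
```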
## Naming convention for data
When defining data, i.e., a (possibly 0-ary) function whose codomain is not Sort u, but has type Type u for some u, it should be named in lowerCamelCase. Examples include `List.append` and `List.isPrefixOf`.
If your data is morally fully specified by its type, then use the naming procedure for theorems described below and convert the result to lower camel case.
If your function returns an `Option`, consider adding `?` as a suffix. If your function may panic, consider adding `!` as a suffix. In many cases, there will be multiple variants of a function; one returning an option, one that may panic and possibly one that takes a proof argument.
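A hypothetical lookup operation might therefore come in three variants (all names below are made up for this guide):
```lean
/-- Variant returning an `Option`, marked with the `?` suffix. -/
def firstEven? (l : List Nat) : Option Nat :=
  l.find? (· % 2 == 0)

/-- Variant that may panic, marked with the `!` suffix. -/
def firstEven! (l : List Nat) : Nat :=
  (firstEven? l).get!

/-- Variant taking a proof argument, using the bare name. -/
def firstEven (l : List Nat) (h : (firstEven? l).isSome) : Nat :=
  (firstEven? l).get h
```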
## Naming algorithm for theorems and some definitions
There is, in principle, a general algorithm for naming a theorem. The problem with this algorithm is that it produces very long and unwieldy names which need to be shortened. So choosing a name for a declaration can be thought of as consisting of a mechanical part and a creative part.
Usually the first part is to decide which namespace the result should live in, according to the guidelines described above.
Next, consider the type of your declaration as a tree. Inner nodes of this tree are function types or function applications. Leaves of the tree are 0-ary functions or bound variables.
As an example, consider the following result from the standard library:
```lean
example {α : Type u} {β : Type v} [BEq α] [Hashable α] [EquivBEq α] [LawfulHashable α]
    [Inhabited β] {m : Std.HashMap α β} {a : α} {h' : a ∈ m} : m[a]? = some (m[a]'h') :=
  sorry
```
The correct namespace is clearly `Std.HashMap`. The corresponding tree looks like this:
![](naming-tree.svg)
The preferred spelling of a notation can be looked up by hovering over the notation.
Now traverse the tree and build a name according to the following rules:
* When encountering a function type, first turn the result type into a name, then all of the argument types from left to right, and join the names using `_of_`.
* When encountering a function that is neither an infix notation nor a structure projection, first put the function name and then the arguments, joined by an underscore.
* When encountering an infix notation, join the arguments using the name of the notation, separated by underscores.
* When encountering a structure projection, proceed as for normal functions, but put the name of the projection last.
* When encountering a name, put it in lower camel case.
* Skip bound variables and proofs.
* Type class arguments are also generally skipped.
When encountering a name with namespace components, concatenate the components in lower camel case.
Applying this algorithm to our example yields the name `Std.HashMap.getElem?_eq_optionSome_getElem_of_mem`.
From there, the name should be shortened, using the following heuristics:
* The namespace of functions can be omitted if it is clear from context or if the namespace is the current one. This is almost always the case.
* For infix operators, it is possible to leave out the RHS or the name of the notation and the RHS if they are clear from context.
* Hypotheses can be left out if it is clear that they are required or if they appear in the conclusion.
Based on this, here are some possible names for our example:
1. `Std.HashMap.getElem?_eq`
2. `Std.HashMap.getElem?_eq_of_mem`
3. `Std.HashMap.getElem?_eq_some`
4. `Std.HashMap.getElem?_eq_some_of_mem`
5. `Std.HashMap.getElem?_eq_some_getElem`
6. `Std.HashMap.getElem?_eq_some_getElem_of_mem`
Choosing a good name among these then requires considering the context of the lemma. In this case it turns out that the first four options are underspecified as there is also a lemma relating `m[a]?` and `m[a]!` which could have the same name. This leaves the last two options, the first of which is shorter, and this is how the lemma is called in the Lean standard library.
Here are some additional examples:
```lean
example {x y : List α} (h : x <+: y) (hx : x ≠ []) :
    x.head hx = y.head (h.ne_nil hx) := sorry
```
Since we have an `IsPrefix` parameter, this should live in the `List.IsPrefix` namespace, and the algorithm suggests `List.IsPrefix.head_eq_head_of_ne_nil`, which is shortened to `List.IsPrefix.head`. Note here the difference between the namespace name (`IsPrefix`) and the recommended spelling of the corresponding notation (`prefix`).
```lean
example : l₁ <+: l₂ → reverse l₁ <:+ reverse l₂ := sorry
```
Again, this result should be in the `List.IsPrefix` namespace; the algorithm suggests `List.IsPrefix.reverse_prefix_reverse`, which becomes `List.IsPrefix.reverse`.
The following examples show how the traversal order often matters.
```lean
theorem Nat.mul_zero (n : Nat) : n * 0 = 0 := sorry
theorem Nat.zero_mul (n : Nat) : 0 * n = 0 := sorry
```
Here we see that one name may be a prefix of another name:
```lean
theorem Int.mul_ne_zero {a b : Int} (a0 : a ≠ 0) (b0 : b ≠ 0) : a * b ≠ 0 := sorry
theorem Int.mul_ne_zero_iff {a b : Int} : a * b ≠ 0 ↔ a ≠ 0 ∧ b ≠ 0 := sorry
```
It is usually a good idea to include the `iff` in a theorem name even if the name would still be unique without it. For example,
```lean
theorem List.head?_eq_none_iff : l.head? = none ↔ l = [] := sorry
```
is a good name: if the lemma was simply called `List.head?_eq_none`, users might try to `apply` it when the goal is `l.head? = none`, leading
to confusion.
The more common you expect (or want) a theorem to be, the shorter you should try to make the name. For example, we have both
```lean
theorem Std.HashMap.getElem?_eq_none_of_contains_eq_false {a : α} : m.contains a = false → m[a]? = none := sorry
theorem Std.HashMap.getElem?_eq_none {a : α} : ¬a ∈ m → m[a]? = none := sorry
```
As users of the hash map are encouraged to use `∈` rather than `contains`, the second lemma gets the shorter name.
## Special cases
There are certain special “keywords” that may appear in identifiers.
| Keyword | Meaning | Example |
| :---- | :---- | :---- |
| `def` | Unfold a definition. Avoid this for public APIs. | `Nat.max_def` |
| `refl` | Theorems of the form `a R a`, where R is a reflexive relation and `a` is an explicit parameter | `Nat.le_refl` |
| `rfl` | Like `refl`, but with `a` implicit | `Nat.le_rfl` |
| `irrefl` | Theorems of the form `¬a R a`, where R is an irreflexive relation | `Nat.lt_irrefl` |
| `symm` | Theorems of the form `a R b → b R a`, where R is a symmetric relation (compare `comm` below) | `Eq.symm` |
| `trans` | Theorems of the form `a R b → b R c → a R c`, where R is a transitive relation (R may carry data) | `Eq.trans` |
| `antisymm` | Theorems of the form `a R b → b R a → a = b`, where R is an antisymmetric relation | `Nat.le_antisymm` |
| `congr` | Theorems of the form `a R b → f a S f b`, where R and S are usually equivalence relations | `Std.HashMap.mem_congr` |
| `comm` | Theorems of the form `f a b = f b a` (compare `symm` above) | `Eq.comm`, `Nat.add_comm` |
| `assoc` | Theorems of the form `g (f a b) c = f a (g b c)` (note the order! In most cases, we have f = g) | `Nat.add_sub_assoc` |
| `distrib` | Theorems of the form `f (g a b) = g (f a) (f b)` | `Nat.add_left_distrib` |
| `self` | May be used if a variable appears multiple times in the conclusion | `List.mem_cons_self` |
| `inj` | Theorems of the form `f a = f b ↔ a = b`. | `Int.neg_inj`, `Nat.add_left_inj` |
| `cancel` | Theorems which have one of the forms `f a = f b → a = b` or `g (f a) = a`, where `f` and `g` usually involve a binary operator | `Nat.add_sub_cancel` |
| `cancel_iff` | Same as `inj`, but with different conventions for left and right (see below) | `Nat.add_right_cancel_iff` |
| `ext` | Theorems of the form `f a = f b → a = b`, where `f` usually involves some kind of projection | `List.ext_getElem`
| `mono` | Theorems of the form `a R b → f a R f b`, where `R` is a transitive relation | `List.countP_mono_left`
### Left and right
The keywords left and right are useful to disambiguate symmetric variants of theorems.
```lean
theorem imp_congr_left (h : a ↔ b) : (a → c) ↔ (b → c) := sorry
theorem imp_congr_right (h : a → (b ↔ c)) : (a → b) ↔ (a → c) := sorry
```
It is not always obvious which version of a theorem should be “left” and which should be “right”.
Heuristically, the name should refer to the side which is “more variable”, but there are exceptions. For some of the special keywords discussed in this section, there are conventions which should be followed, as laid out in the following examples:
```lean
theorem Nat.left_distrib (n m k : Nat) : n * (m + k) = n * m + n * k := sorry
theorem Nat.right_distrib (n m k : Nat) : (n + m) * k = n * k + m * k := sorry
theorem Nat.add_left_cancel {n m k : Nat} : n + m = n + k → m = k := sorry
theorem Nat.add_right_cancel {n m k : Nat} : n + m = k + m → n = k := sorry
theorem Nat.add_left_cancel_iff {m k n : Nat} : n + m = n + k ↔ m = k := sorry
theorem Nat.add_right_cancel_iff {m k n : Nat} : m + n = k + n ↔ m = k := sorry
theorem Nat.add_left_inj {m k n : Nat} : m + n = k + n ↔ m = k := sorry
theorem Nat.add_right_inj {m k n : Nat} : n + m = n + k ↔ m = k := sorry
```
Note in particular that the convention is opposite for `cancel_iff` and `inj`.
```lean
theorem Nat.add_sub_self_left (a b : Nat) : (a + b) - a = b := sorry
theorem Nat.add_sub_self_right (a b : Nat) : (a + b) - b = a := sorry
theorem Nat.add_sub_cancel (n m : Nat) : (n + m) - m = n := sorry
```
## Primed names
Avoid disambiguating variants of a concept by appending the `'` character (e.g., introducing both `BitVec.sshiftRight` and `BitVec.sshiftRight'`): it is impossible to tell the difference without looking at the type signature, the documentation or even the code, and even if you know what the two variants are, there is no way to tell which is which. Prefer descriptive pairs such as `BitVec.sshiftRightNat`/`BitVec.sshiftRight`.
## Acronyms
For acronyms which are three letters or shorter, all letters should use the same case as dictated by the convention. For example, `IO` is a correct name for a type and the name `IO.Ref` may become `IORef` when used as part of a definition name and `ioRef` when used as part of a theorem name.
For acronyms which are at least four letters long, switch to lower case starting from the second letter. For example, `Json` is a correct name for a type, as is `JsonRPC`.
If an acronym is typically spelled using mixed case, this mixed spelling may be used in identifiers (for example `Std.Net.IPv4Addr`).
## Simp sets
Simp sets centered around a conversion function should be called `source_to_target`. For example, a simp set for the `BitVec.toNat` function, which goes from `BitVec` to
`Nat`, should be called `bitvec_to_nat`.
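For illustration, such a simp set might be declared and populated as follows (a sketch; the tagged lemma merely restates the existing `BitVec.toNat_add`):
```lean
-- Declare the simp set; lemmas tagged with it rewrite `BitVec` statements into `Nat` statements.
register_simp_attr bitvec_to_nat

@[bitvec_to_nat] theorem toNat_add_mod (x y : BitVec w) :
    (x + y).toNat = (x.toNat + y.toNat) % 2 ^ w := by
  simp [BitVec.toNat_add]
```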
## Variable names
We make the following recommendations for variable names, but without insisting on them:
* Simple hypotheses should be named `h`, `h'`, or using a numerical sequence `h₁`, `h₂`, etc.
* Another common name for a simple hypothesis is `w` (for "witness").
* `List`s should be named `l`, `l'`, `l₁`, etc, or `as`, `bs`, etc.
(Use of `as`, `bs` is encouraged when the lists are of different types, e.g. `as : List α` and `bs : List β`.)
`xs`, `ys`, `zs` are allowed, but it is better if these are reserved for `Array` and `Vector`.
A list of lists may be named `L`.
* `Array`s should be named `xs`, `ys`, `zs`, although `as`, `bs` are encouraged when the arrays are of different types, e.g. `as : Array α` and `bs : Array β`.
An array of arrays may be named `xss`.
* `Vector`s should be named `xs`, `ys`, `zs`, although `as`, `bs` are encouraged when the vectors are of different types, e.g. `as : Vector α n` and `bs : Vector β n`.
A vector of vectors may be named `xss`.
* A common exception for `List` / `Array` / `Vector` is to use `acc` for an accumulator in a recursive function.
* `i`, `j`, `k` are preferred for numerical indices.
Descriptive names such as `start`, `stop`, `lo`, and `hi` are encouraged when they increase readability.
* `n`, `m` are preferred for sizes, e.g. in `Vector α n` or `xs.size = n`.
* `w` is preferred for the width of a `BitVec`.
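For illustration, a definition following these recommendations might look like this (the function itself is artificial):
```lean
-- `xs` for an `Array`, `i` for an index, `acc` for an accumulator, `h` for a simple hypothesis.
def sumFrom (xs : Array Nat) (i : Nat) (acc : Nat) : Nat :=
  if h : i < xs.size then
    sumFrom xs (i + 1) (acc + xs[i])
  else
    acc
termination_by xs.size - i
```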

View File

@@ -1,522 +0,0 @@
# Standard library style
Please take some time to familiarize yourself with the stylistic conventions of
the project and the specific part of the library you are planning to contribute
to. While the Lean compiler may not enforce strict formatting rules,
consistently formatted code is much easier for others to read and maintain.
Attention to formatting is more than a cosmetic concern—it reflects the same
level of precision and care required to meet the deeper standards of the Lean 4
standard library.
Below we will give specific formatting prescriptions for various language constructs. Note that this style guide only applies to the Lean standard library, even though some examples in the guide are taken from other parts of the Lean code base.
## Basic whitespace rules
Syntactic elements (like `:`, `:=`, `|`, `::`) are surrounded by single spaces, with the exception of `,` and `;`, which are followed by a space but not preceded by one. Delimiters (like `()`, `{}`) do not have spaces on the inside, with the exceptions of subtype notation and structure instance notation.
Examples of correctly formatted function parameters:
* `{α : Type u}`
* `[BEq α]`
* `(cmp : α → α → Ordering)`
* `(hab : a = b)`
* `{d : { l : List ((n : Nat) × Vector Nat n) // l.length % 2 = 0 }}`
Examples of correctly formatted terms:
* `1 :: [2, 3]`
* `letI : Ord α := ⟨cmp⟩; True`
* `(⟨2, 3⟩ : Nat × Nat)`
* `((2, 3) : Nat × Nat)`
* `{ x with fst := f (4 + f 0), snd := 4, .. }`
* `match 1 with | 0 => 0 | _ => 0`
* `fun ⟨a, b⟩ _ _ => by cases hab <;> apply id; rw [hbc]`
Configure your editor to remove trailing whitespace. If you have set up Visual Studio Code for Lean development in the recommended way then the correct setting is applied automatically.
## Splitting terms across multiple lines
When splitting a term across multiple lines, increase indentation by two spaces starting from the second line. When splitting a function application, try to split at argument boundaries. If an argument itself needs to be split, increase indentation further as appropriate.
When splitting at an infix operator, the operator goes at the end of the first line, not at the beginning of the second line. When splitting at an infix operator, you may or may not increase indentation depth, depending on what is more readable.
When splitting an `if`-`then`-`else` expression, the `then` keyword wants to stay with the condition and the `else` keyword wants to stay with the alternative term. Otherwise, indent as if the `if` and `else` keywords were arguments to the same function.
When splitting a comma-separated bracketed sequence (i.e., anonymous constructor application, list/array/vector literal, tuple) it is allowed to indent subsequent lines for alignment, but indenting by two spaces is also allowed.
Do not orphan parentheses.
Correct:
```lean
def MacroScopesView.isPrefixOf (v₁ v₂ : MacroScopesView) : Bool :=
  v₁.name.isPrefixOf v₂.name &&
  v₁.scopes == v₂.scopes &&
  v₁.mainModule == v₂.mainModule &&
  v₁.imported == v₂.imported
```
Correct:
```lean
theorem eraseP_eq_iff {p} {l : List α} :
    l.eraseP p = l' ↔
      ((∀ a ∈ l, ¬ p a) ∧ l = l') ∨
        ∃ a l₁ l₂, (∀ b ∈ l₁, ¬ p b) ∧ p a ∧
          l = l₁ ++ a :: l₂ ∧ l' = l₁ ++ l₂ :=
  sorry
```
Correct:
```lean
example : Nat :=
  functionWithAVeryLongNameSoThatSomeArgumentsWillNotFit firstArgument secondArgument
    (firstArgumentWithAnEquallyLongNameAndThatFunctionDoesHaveMoreArguments firstArgument
      secondArgument)
    secondArgument
```
Correct:
```lean
theorem size_alter [LawfulBEq α] {k : α} {f : Option (β k) → Option (β k)} (h : m.WF) :
    (m.alter k f).size =
      if m.contains k && (f (m.get? k)).isNone then
        m.size - 1
      else if !m.contains k && (f (m.get? k)).isSome then
        m.size + 1
      else
        m.size := by
  simp_to_raw using Raw₀.size_alter
```
Correct:
```lean
theorem get?_alter [LawfulBEq α] {k k' : α} {f : Option (β k) → Option (β k)} (h : m.WF) :
    (m.alter k f).get? k' =
      if h : k == k' then
        cast (congrArg (Option ∘ β) (eq_of_beq h)) (f (m.get? k))
      else m.get? k' := by
  simp_to_raw using Raw₀.get?_alter
```
Correct:
```lean
example : Nat × Nat :=
  ⟨imagineThisWasALongTerm,
    imagineThisWasAnotherLongTerm⟩
```
Correct:
```lean
example : Nat × Nat :=
  ⟨imagineThisWasALongTerm,
   imagineThisWasAnotherLongTerm⟩
```
Correct:
```lean
example : Vector Nat :=
  #v[imagineThisWasALongTerm,
     imagineThisWasAnotherLongTerm]
```
## Basic file structure
Every file should start with a copyright header, imports (in the standard library, this always includes a `prelude` declaration) and a module documentation string. There should not be a blank line between the copyright header and the imports. There should be a blank line between the imports and the module documentation string.
If you explicitly declare universe variables, do so at the top of the file, after the module documentation.
Correct:
```lean
/-
Copyright (c) 2014 Parikshit Khanna. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Parikshit Khanna, Jeremy Avigad, Leonardo de Moura, Floris van Doorn, Mario Carneiro,
Yury Kudryashov
-/
prelude
import Init.Data.List.Pairwise
import Init.Data.List.Find

/-!
# Lemmas about `List.eraseP` and `List.erase`.
-/

universe u u'
```
Syntax that is not supposed to be user-facing must be scoped. New public syntax must always be discussed explicitly in an RFC.
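For illustration, internal notation can be scoped to its namespace like this (names are placeholders):
```lean
namespace MyLib.Internal

/-- Only available after `open MyLib.Internal` or `open scoped MyLib.Internal`. -/
scoped notation "⟪" x "⟫" => Option.some x

end MyLib.Internal
```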
## Top-level commands and declarations
All top-level commands are unindented. Sectioning commands like `section` and `namespace` do not increase the indentation level.
Attributes may be placed on the same line as the rest of the command or on a separate line.
Multi-line declaration headers are indented by four spaces starting from the second line. The colon that indicates the type of a declaration may not be placed at the start of a line or on its own line.
Declaration bodies are indented by two spaces. Short declaration bodies may be placed on the same line as the declaration type.
Correct:
```lean
theorem eraseP_eq_iff {p} {l : List α} :
    l.eraseP p = l' ↔
      ((∀ a ∈ l, ¬ p a) ∧ l = l') ∨
        ∃ a l₁ l₂, (∀ b ∈ l₁, ¬ p b) ∧ p a ∧
          l = l₁ ++ a :: l₂ ∧ l' = l₁ ++ l₂ :=
  sorry
```
Correct:
```lean
@[simp] theorem eraseP_nil : [].eraseP p = [] := rfl
```
Correct:
```lean
@[simp]
theorem eraseP_nil : [].eraseP p = [] := rfl
```
### Documentation comments
Note to external contributors: this is a section where the Lean style and the mathlib style are different.
Declarations should be documented as required by the `docBlame` linter, which may be activated in a file using
`set_option linter.missingDocs true` (we allow these to stay in the file).
Single-line documentation comments should go on the same line as `/--`/`-/`, while multi-line documentation strings
should have these delimiters on their own line, with the documentation comment itself unindented.
Documentation comments must be written in the indicative mood. Use American orthography.
Correct:
```lean
/-- Carries out a monadic action on each mapping in the hash map in some order. -/
@[inline] def forM (f : (a : α) → β a → m PUnit) (b : Raw α β) : m PUnit :=
  b.buckets.forM (AssocList.forM f)
```
Correct:
```lean
/--
Monadically computes a value by folding the given function over the mappings in the hash
map in some order.
-/
@[inline] def foldM (f : δ → (a : α) → β a → m δ) (init : δ) (b : Raw α β) : m δ :=
  b.buckets.foldlM (fun acc l => l.foldlM f acc) init
```
### Where clauses
The `where` keyword should be unindented, and all declarations bound by it should be indented with two spaces.
Blank lines before and after `where` and between declarations bound by `where` are optional and should be chosen
to maximize readability.
Correct:
```lean
@[simp] theorem partition_eq_filter_filter (p : α → Bool) (l : List α) :
    partition p l = (filter p l, filter (not p) l) := by
  simp [partition, aux]
where
  aux (l) {as bs} : partition.loop p l (as, bs) =
      (as.reverse ++ filter p l, bs.reverse ++ filter (not p) l) :=
    match l with
    | [] => by simp [partition.loop, filter]
    | a :: l => by cases pa : p a <;> simp [partition.loop, pa, aux, filter, append_assoc]
```
### Termination arguments
The `termination_by`, `decreasing_by`, `partial_fixpoint` keywords should be unindented. The associated terms should be indented like declaration bodies.
Correct:
```lean
@[inline] def multiShortOption (handle : Char → m PUnit) (opt : String) : m PUnit := do
  let rec loop (p : String.Pos) := do
    if h : opt.atEnd p then
      return
    else
      handle (opt.get' p h)
      loop (opt.next' p h)
  termination_by opt.utf8ByteSize - p.byteIdx
  decreasing_by
    simp [String.atEnd] at h
    apply Nat.sub_lt_sub_left h
    simp [String.lt_next opt p]
  loop ⟨1⟩
```
Correct:
```lean
def substrEq (s1 : String) (off1 : String.Pos) (s2 : String) (off2 : String.Pos) (sz : Nat) : Bool :=
  off1.byteIdx + sz ≤ s1.endPos.byteIdx && off2.byteIdx + sz ≤ s2.endPos.byteIdx && loop off1 off2 { byteIdx := off1.byteIdx + sz }
where
  loop (off1 off2 stop1 : Pos) :=
    if _h : off1.byteIdx < stop1.byteIdx then
      let c₁ := s1.get off1
      let c₂ := s2.get off2
      c₁ == c₂ && loop (off1 + c₁) (off2 + c₂) stop1
    else true
  termination_by stop1.1 - off1.1
  decreasing_by
    have := Nat.sub_lt_sub_left _h (Nat.add_lt_add_left c₁.utf8Size_pos off1.1)
    decreasing_tactic
Correct:
```lean
theorem div_add_mod (m n : Nat) : n * (m / n) + m % n = m := by
  rw [div_eq, mod_eq]
  have h : Decidable (0 < n ∧ n ≤ m) := inferInstance
  cases h with
  | isFalse h => simp [h]
  | isTrue h =>
    simp [h]
    have ih := div_add_mod (m - n) n
    rw [Nat.left_distrib, Nat.mul_one, Nat.add_assoc, Nat.add_left_comm, ih, Nat.add_comm, Nat.sub_add_cancel h.2]
decreasing_by apply div_rec_lemma; assumption
```
### Deriving
The `deriving` clause should be unindented.
Correct:
```lean
structure Iterator where
  array : ByteArray
  idx : Nat
deriving Inhabited
```
## Notation and Unicode
We generally prefer to use notation as available. We usually prefer the Unicode versions of notations over non-Unicode alternatives.
There are some rules and exceptions regarding specific notations which are listed below:
* Sigma types: use `(a : α) × β a` instead of `Σ a, β a` or `Sigma β`.
* Function arrows: use `fun x => f x` instead of `fun x ↦ f x` or `λ x => f x` or any other variant.
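For example, the preferred spellings from the rules above look like this:
```lean
example : (n : Nat) × Vector Bool n := ⟨0, #v[]⟩  -- rather than `Σ n, Vector Bool n`
example : Nat → Nat := fun x => x + 1             -- rather than `fun x ↦ x + 1` or `λ x => x + 1`
```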
## Language constructs
### Pattern matching, induction etc.
Match arms are indented at the indentation level that the match statement would have if it was on its own line. If the match is implicit, then the arms should be indented as if the match was explicitly given. The content of match arms is indented two spaces, so that it appears on the same level as the match pattern.
Correct:
```lean
def alter [BEq α] {β : Type v} (a : α) (f : Option β → Option β) :
    AssocList α (fun _ => β) → AssocList α (fun _ => β)
  | nil => match f none with
    | none => nil
    | some b => AssocList.cons a b nil
  | cons k v l =>
    if k == a then
      match f v with
      | none => l
      | some b => cons a b l
    else
      cons k v (alter a f l)
```
Correct:
```lean
theorem eq_append_cons_of_mem {a : α} {xs : List α} (h : a ∈ xs) :
    ∃ as bs, xs = as ++ a :: bs ∧ a ∉ as := by
  induction xs with
  | nil => cases h
  | cons x xs ih =>
    simp at h
    cases h with
    | inl h => exact ⟨[], xs, by simp_all⟩
    | inr h =>
      by_cases h' : a = x
      · subst h'
        exact ⟨[], xs, by simp⟩
      · obtain ⟨as, bs, rfl, h⟩ := ih h
        exact ⟨x :: as, bs, rfl, by simp_all⟩
```
Aligning match arms is allowed, but not required.
Correct:
```lean
def mkEqTrans? (h₁? h₂? : Option Expr) : MetaM (Option Expr) :=
  match h₁?, h₂? with
  | none, none => return none
  | none, some h => return h
  | some h, none => return h
  | some h₁, some h₂ => mkEqTrans h₁ h₂
```
Correct:
```lean
def mkEqTrans? (h₁? h₂? : Option Expr) : MetaM (Option Expr) :=
  match h₁?, h₂? with
  | none, none       => return none
  | none, some h     => return h
  | some h, none     => return h
  | some h₁, some h₂ => mkEqTrans h₁ h₂
```
Correct:
```lean
def mkEqTrans? (h₁? h₂? : Option Expr) : MetaM (Option Expr) :=
  match h₁?, h₂? with
  | none,    none    => return none
  | none,    some h  => return h
  | some h,  none    => return h
  | some h₁, some h₂ => mkEqTrans h₁ h₂
```
### Structures
Note to external contributors: this is a section where the Lean style and the mathlib style are different.
When using structure instance syntax over multiple lines, the opening brace should go on the preceding line, while the closing brace should go on its own line. The rest of the syntax should be indented by one level. During structure updates, the `with` clause goes on the same line as the opening brace. Aligning at the assignment symbol is allowed but not required.
Correct:
```lean
def addConstAsync (env : Environment) (constName : Name) (kind : ConstantKind) (reportExts := true) :
    IO AddConstAsyncResult := do
  let sigPromise ← IO.Promise.new
  let infoPromise ← IO.Promise.new
  let extensionsPromise ← IO.Promise.new
  let checkedEnvPromise ← IO.Promise.new
  let asyncConst := {
    constInfo := {
      name := constName
      kind
      sig := sigPromise.result
      constInfo := infoPromise.result
    }
    exts? := guard reportExts *> some extensionsPromise.result
  }
  return {
    constName, kind
    mainEnv := { env with
      asyncConsts := env.asyncConsts.add asyncConst
      checked := checkedEnvPromise.result }
    asyncEnv := { env with
      asyncCtx? := some { declPrefix := privateToUserName constName.eraseMacroScopes }
    }
    sigPromise, infoPromise, extensionsPromise, checkedEnvPromise
  }
```
Correct:
```lean
instance [Inhabited α] : Inhabited (Descr α β σ) where
  default := {
    name := default
    mkInitial := default
    ofOLeanEntry := default
    toOLeanEntry := default
    addEntry := fun s _ => s
  }
```
### Declaring structures
When defining structure types, do not parenthesize structure fields.
When declaring a structure type with a custom constructor name, put the custom name on its own line, indented like the
structure fields, and add a documentation comment.
Correct:
```lean
/--
A bitvector of the specified width.
This is represented as the underlying `Nat` number in both the runtime
and the kernel, inheriting all the special support for `Nat`.
-/
structure BitVec (w : Nat) where
  /--
  Constructs a `BitVec w` from a number less than `2^w`.
  O(1), because we use `Fin` as the internal representation of a bitvector.
  -/
  ofFin ::
  /--
  Interprets a bitvector as a number less than `2^w`.
  O(1), because we use `Fin` as the internal representation of a bitvector.
  -/
  toFin : Fin (2 ^ w)
```
## Tactic proofs
Tactic proofs are the most common thing to break during any kind of upgrade, so it is important to write them in a way that minimizes the likelihood of proofs breaking and that makes it easy to debug breakages if they do occur.
If there are multiple goals, either use a tactic combinator (like `all_goals`) to operate on all of them or a clearly specified subset, or use focus dots to work on goals one at a time. Using structured proofs (e.g., `induction … with`) is encouraged but not mandatory.
Squeeze non-terminal `simp`s (i.e., calls to `simp` which do not close the goal). Squeezing terminal `simp`s is generally discouraged, although there are exceptions (for example if squeezing yields a noticeable performance improvement).
Do not over-golf proofs in ways that are likely to lead to hard-to-debug breakage. Examples of things to avoid include complex multi-goal manipulation using lots of tactic combinators, complex uses of the substitution operator (`▸`) and clever point-free expressions (possibly involving anonymous function notation for multiple arguments).
Do not under-golf proofs: for routine tasks, use the most powerful tactics available.
Do not use `erw`. Avoid using `rfl` after `simp` or `rw`, as this usually indicates a missing lemma that should be used instead of `rfl`.
Use `(d)simp` or `rw` instead of `delta` or `unfold`. Use `refine` instead of `refine'`. Use `haveI` and `letI` only if they are actually required.
Prefer highly automated tactics (like `grind` and `omega`) over low-level proofs, unless the automated tactic requires unacceptable additional imports or has bad performance. If you decide against using a highly automated tactic, leave a comment explaining the decision.
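The following artificial proof illustrates several of these conventions: focus dots for separate goals and automation (`simp at`, `omega`) for routine goals.
```lean
example (n : Nat) (xs : List Nat) (h : n ∈ xs) : xs ≠ [] ∧ n + 0 = n := by
  constructor
  · -- focus dot: work on the first goal only
    intro heq
    subst heq
    simp at h
  · -- routine arithmetic is a good fit for `omega`
    omega
```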
## `do` notation
The `do` keyword goes on the same line as the corresponding `:=` (or `=>`, or similar). `Id.run do` should be treated as if it was a bare `do`.
Use early `return` statements to reduce nesting depth and make the non-exceptional control flow of a function easier to see.
Alternatives for `let` matches may be placed in the same line or in the next line, indented by two spaces. If the term that is
being matched on is itself more than one line and there is an alternative present, consider breaking immediately after `←` and indent
as far as necessary to ensure readability.
Correct:
```lean
def getFunDecl (fvarId : FVarId) : CompilerM FunDecl := do
  let some decl ← findFunDecl? fvarId | throwError "unknown local function {fvarId.name}"
  return decl
```
Correct:
```lean
def getFunDecl (fvarId : FVarId) : CompilerM FunDecl := do
  let some decl ←
      findFunDecl? fvarId
    | throwError "unknown local function {fvarId.name}"
  return decl
```
Correct:
```lean
def getFunDecl (fvarId : FVarId) : CompilerM FunDecl := do
  let some decl ← findFunDecl?
        fvarId
    | throwError "unknown local function {fvarId.name}"
  return decl
```
Correct:
```lean
def tagUntaggedGoals (parentTag : Name) (newSuffix : Name) (newGoals : List MVarId) : TacticM Unit := do
  let mctx ← getMCtx
  let mut numAnonymous := 0
  for g in newGoals do
    if mctx.isAnonymousMVar g then
      numAnonymous := numAnonymous + 1
  modifyMCtx fun mctx => Id.run do
    let mut mctx := mctx
    let mut idx := 1
    for g in newGoals do
      if mctx.isAnonymousMVar g then
        if numAnonymous == 1 then
          mctx := mctx.setMVarUserName g parentTag
        else
          mctx := mctx.setMVarUserName g (parentTag ++ newSuffix.appendIndexAfter idx)
          idx := idx + 1
    pure mctx
```

View File

@@ -1,98 +0,0 @@
# The Lean 4 standard library
Maintainer team (in alphabetical order): Henrik Böving, Markus Himmel
(community contact & external contribution coordinator), Kim Morrison, Paul
Reichert, Sofia Rodrigues.
The Lean 4 standard library is a core part of the Lean distribution, providing
essential building blocks for functional programming, verified software
development, and software verification. Unlike the standard libraries of most
other languages, many of its components are formally verified and can be used
as part of verified applications.
The standard library is a public API that contains the components listed in the
standard library outline below. Not all public APIs in the Lean distribution
are part of the standard library, and the standard library does not correspond
to a certain directory within the Lean source repository (like `Std`). For
example, the metaprogramming framework is not part of the standard library, but
basic types like `True` and `Nat` are.
The standard library is under active development. Our guiding principles are:
* Provide comprehensive, verified building blocks for real-world software.
* Build a public API of the highest quality with excellent internal consistency.
* Carefully optimize components that may be used in performance-critical software.
* Ensure smooth adoption and maintenance for users.
* Offer excellent documentation, example projects, and guides.
* Provide a reliable and extensible basis that libraries for software
development, software verification and mathematics can build on.
The standard library is principally developed by the Lean FRO. Community
contributions are welcome. If you would like to contribute, please refer to the
call for contributions below.
### Standard library outline
1. Core types and operations
1. Basic types
2. Numeric types, including floating point numbers
3. Containers
4. Strings and formatting
2. Language constructs
1. Ranges and iterators
2. Comparison, ordering, hashing and related type classes
3. Basic monad infrastructure
3. Libraries
1. Random numbers
2. Dates and times
4. Operating system abstractions
1. Concurrency and parallelism primitives
2. Asynchronous I/O
3. FFI helpers
4. Environment, file system, processes
5. Locales
The material covered in the first three sections (core types and operations,
language constructs and libraries) will be verified, with the exception of
floating point numbers and the parts of the libraries that interface with the
operating system (e.g., sources of operating system randomness or time zone
database access).
### Call for contributions
Thank you for taking interest in contributing to the Lean standard library!
There are two main ways for community members to contribute to the Lean
standard library: by contributing experience reports or by contributing code
and lemmas.
**If you are using Lean for software verification or verified software
development:** hearing about your experiences using Lean and its standard
library for software verification is extremely valuable to us. We are committed
to building a standard library suitable for real-world applications and your
input will directly influence the continued evolution of the Lean standard
library. Please reach out to the standard library maintainer team via Zulip
(either in a public thread in the #lean4 channel or via direct message). Even
just a link to your code helps. Thanks!
**If you have code that you believe could enhance the Lean 4 standard
library:** we encourage you to initiate a discussion in the #lean4 channel on
Zulip. This is the most effective way to receive preliminary feedback on your
contribution. The Lean standard library has a very precise scope and it has
very high quality standards, so at the moment we are mostly interested in
contributions that expand upon existing material rather than introducing novel
concepts.
**If you would like to contribute code to the standard library but don't know
what to work on:** we are always excited to meet motivated community members
who would like to contribute, and there is always impactful work that is
suitable for new contributors. Please reach out to Markus Himmel on Zulip to
discuss possible contributions.
As laid out in the [project-wide External Contribution
Guidelines](../../CONTRIBUTING.md),
PRs are much more likely to be merged if they are preceded by an RFC or if you
discussed your planned contribution with a member of the standard library
maintainer team. When in doubt, introducing yourself is always a good idea.
All code in the standard library is expected to strictly adhere to the
[standard library coding conventions](./style.md).

View File

@@ -18,15 +18,14 @@
# An old nixpkgs for creating releases with an old glibc
pkgsDist-old-aarch = import inputs.nixpkgs-old { localSystem.config = "aarch64-unknown-linux-gnu"; };
llvmPackages = pkgs.llvmPackages_15;
lean-packages = pkgs.callPackage (./nix/packages.nix) { src = ./.; };
devShellWithDist = pkgsDist: pkgs.mkShell.override {
stdenv = pkgs.overrideCC pkgs.stdenv llvmPackages.clang;
stdenv = pkgs.overrideCC pkgs.stdenv lean-packages.llvmPackages.clang;
} ({
buildInputs = with pkgs; [
cmake gmp libuv ccache pkg-config
llvmPackages.bintools # wrapped lld
llvmPackages.llvm # llvm-symbolizer for asan/lsan
lean-packages.llvmPackages.llvm # llvm-symbolizer for asan/lsan
gdb
tree # for CI
];
@@ -61,6 +60,12 @@
GDB = pkgsDist.gdb;
});
in {
packages.${system} = {
# to be removed when Nix CI is not needed anymore
inherit (lean-packages) cacheRoots test update-stage0-commit ciShell;
deprecated = lean-packages;
};
devShells.${system} = {
# The default development shell for working on lean itself
default = devShellWithDist pkgs;

View File

@@ -8,15 +8,9 @@
},
{
"path": "tests"
},
{
"path": "script"
}
],
"settings": {
// Open terminal at root, not current workspace folder
// (there is not way to directly refer to the root folder included as `.` above)
"terminal.integrated.cwd": "${workspaceFolder:src}/..",
"files.insertFinalNewline": true,
"files.trimTrailingWhitespace": true,
"cmake.buildDirectory": "${workspaceFolder}/build/release",
@@ -43,15 +37,6 @@
"isDefault": true
}
},
{
"label": "build-old",
"type": "shell",
"command": "make -C build/release -j$(nproc 2>/dev/null || sysctl -n hw.logicalcpu 2>/dev/null || echo 4) LAKE_EXTRA_ARGS=--old",
"problemMatcher": [],
"group": {
"kind": "build"
}
},
{
"label": "test",
"type": "shell",

7
nix/bareStdenv/setup Normal file
View File

@@ -0,0 +1,7 @@
set -eo pipefail
for pkg in $buildInputs; do
export PATH=$PATH:$pkg/bin
done
: ${outputs:=out}

208
nix/bootstrap.nix Normal file
View File

@@ -0,0 +1,208 @@
{ src, debug ? false, stage0debug ? false, extraCMakeFlags ? [],
stdenv, lib, cmake, pkg-config, gmp, libuv, cadical, git, gnumake, bash, buildLeanPackage, writeShellScriptBin, runCommand, symlinkJoin, lndir, perl, gnused, darwin, llvmPackages, linkFarmFromDrvs,
... } @ args:
with builtins;
lib.warn "The Nix-based build is deprecated" rec {
inherit stdenv;
sourceByRegex = p: rs: lib.sourceByRegex p (map (r: "(/src/)?${r}") rs);
buildCMake = args: stdenv.mkDerivation ({
nativeBuildInputs = [ cmake pkg-config ];
buildInputs = [ gmp libuv llvmPackages.llvm ];
# https://github.com/NixOS/nixpkgs/issues/60919
hardeningDisable = [ "all" ];
dontStrip = (args.debug or debug);
postConfigure = ''
patchShebangs .
'';
} // args // {
src = args.realSrc or (sourceByRegex args.src [ "[a-z].*" "CMakeLists\.txt" ]);
cmakeFlags = ["-DSMALL_ALLOCATOR=ON" "-DUSE_MIMALLOC=OFF"] ++ (args.cmakeFlags or [ "-DSTAGE=1" "-DPREV_STAGE=./faux-prev-stage" "-DUSE_GITHASH=OFF" "-DCADICAL=${cadical}/bin/cadical" ]) ++ (args.extraCMakeFlags or extraCMakeFlags) ++ lib.optional (args.debug or debug) [ "-DCMAKE_BUILD_TYPE=Debug" ];
preConfigure = args.preConfigure or "" + ''
# ignore absence of submodule
sed -i 's!lake/Lake.lean!!' CMakeLists.txt
'';
});
lean-bin-tools-unwrapped = buildCMake {
name = "lean-bin-tools";
outputs = [ "out" "leanc_src" ];
realSrc = sourceByRegex (src + "/src") [ "CMakeLists\.txt" "[a-z].*" ".*\.in" "Leanc\.lean" ];
dontBuild = true;
installPhase = ''
mkdir $out $leanc_src
mv bin/ include/ share/ $out/
mv leanc.sh $out/bin/leanc
mv leanc/Leanc.lean $leanc_src/
substituteInPlace $out/bin/leanc --replace '$root' "$out" --replace " sed " " ${gnused}/bin/sed "
substituteInPlace $out/bin/leanmake --replace "make" "${gnumake}/bin/make"
substituteInPlace $out/share/lean/lean.mk --replace "/usr/bin/env bash" "${bash}/bin/bash"
'';
};
leancpp = buildCMake {
name = "leancpp";
src = src + "/src";
buildFlags = [ "leancpp" "leanrt" "leanrt_initial-exec" "leanshell" "leanmain" ];
installPhase = ''
mkdir -p $out
mv lib/ $out/
mv runtime/libleanrt_initial-exec.a $out/lib
'';
};
stage0 = args.stage0 or (buildCMake {
name = "lean-stage0";
realSrc = src + "/stage0/src";
debug = stage0debug;
cmakeFlags = [ "-DSTAGE=0" ];
extraCMakeFlags = [];
preConfigure = ''
ln -s ${src + "/stage0/stdlib"} ../stdlib
'';
installPhase = ''
mkdir -p $out/bin $out/lib/lean
mv bin/lean $out/bin/
mv lib/lean/*.{so,dylib} $out/lib/lean
'';
meta.mainProgram = "lean";
});
stage = { stage, prevStage, self }:
let
desc = "stage${toString stage}";
build = args: buildLeanPackage.override {
lean = prevStage;
leanc = lean-bin-tools-unwrapped;
# use same stage for retrieving dependencies
lean-leanDeps = stage0;
lean-final = self;
} ({
src = src + "/src";
roots = [ { mod = args.name; glob = "andSubmodules"; } ];
fullSrc = src;
srcPath = "$PWD/src:$PWD/src/lake";
inherit debug;
leanFlags = [ "-DwarningAsError=true" ];
} // args);
Init' = build { name = "Init"; deps = []; };
Std' = build { name = "Std"; deps = [ Init' ]; };
Lean' = build { name = "Lean"; deps = [ Std' ]; };
attachSharedLib = sharedLib: pkg: pkg // {
inherit sharedLib;
mods = mapAttrs (_: m: m // { inherit sharedLib; propagatedLoadDynlibs = []; }) pkg.mods;
};
in (all: all // all.lean) rec {
inherit (Lean) emacs-dev emacs-package vscode-dev vscode-package;
Init = attachSharedLib leanshared Init';
Std = attachSharedLib leanshared Std' // { allExternalDeps = [ Init ]; };
Lean = attachSharedLib leanshared Lean' // { allExternalDeps = [ Std ]; };
Lake = build {
name = "Lake";
sharedLibName = "Lake_shared";
src = src + "/src/lake";
deps = [ Init Lean ];
};
Lake-Main = build {
name = "LakeMain";
roots = [{ glob = "one"; mod = "LakeMain"; }];
executableName = "lake";
deps = [ Lake ];
linkFlags = lib.optional stdenv.isLinux "-rdynamic";
src = src + "/src/lake";
};
stdlib = [ Init Std Lean Lake ];
modDepsFiles = symlinkJoin { name = "modDepsFiles"; paths = map (l: l.modDepsFile) (stdlib ++ [ Leanc ]); };
depRoots = symlinkJoin { name = "depRoots"; paths = map (l: l.depRoots) stdlib; };
iTree = symlinkJoin { name = "ileans"; paths = map (l: l.iTree) stdlib; };
Leanc = build { name = "Leanc"; src = lean-bin-tools-unwrapped.leanc_src; deps = stdlib; roots = [ "Leanc" ]; };
stdlibLinkFlags = "${lib.concatMapStringsSep " " (l: "-L${l.staticLib}") stdlib} -L${leancpp}/lib/lean";
libInit_shared = runCommand "libInit_shared" { buildInputs = [ stdenv.cc ]; libName = "libInit_shared${stdenv.hostPlatform.extensions.sharedLibrary}"; } ''
mkdir $out
touch empty.c
${stdenv.cc}/bin/cc -shared -o $out/$libName empty.c
'';
leanshared_1 = runCommand "leanshared_1" { buildInputs = [ stdenv.cc ]; libName = "leanshared_1${stdenv.hostPlatform.extensions.sharedLibrary}"; } ''
mkdir $out
touch empty.c
${stdenv.cc}/bin/cc -shared -o $out/$libName empty.c
'';
leanshared = runCommand "leanshared" { buildInputs = [ stdenv.cc ]; libName = "libleanshared${stdenv.hostPlatform.extensions.sharedLibrary}"; } ''
mkdir $out
LEAN_CC=${stdenv.cc}/bin/cc ${lean-bin-tools-unwrapped}/bin/leanc -shared ${lib.optionalString stdenv.isLinux "-Wl,-Bsymbolic"} \
-Wl,--whole-archive ${leancpp}/lib/temp/libleanshell.a -lInit -lStd -lLean -lleancpp ${leancpp}/lib/libleanrt_initial-exec.a -Wl,--no-whole-archive -lstdc++ \
-lm ${stdlibLinkFlags} \
$(${llvmPackages.libllvm.dev}/bin/llvm-config --ldflags --libs) \
-o $out/$libName
'';
mods = foldl' (mods: pkg: mods // pkg.mods) {} stdlib;
print-paths = Lean.makePrintPathsFor [] mods;
leanc = writeShellScriptBin "leanc" ''
LEAN_CC=${stdenv.cc}/bin/cc ${Leanc.executable}/bin/leanc -I${lean-bin-tools-unwrapped}/include ${stdlibLinkFlags} -L${libInit_shared} -L${leanshared_1} -L${leanshared} -L${Lake.sharedLib} "$@"
'';
lean = runCommand "lean" { buildInputs = lib.optional stdenv.isDarwin darwin.cctools; } ''
mkdir -p $out/bin
${leanc}/bin/leanc ${leancpp}/lib/temp/libleanmain.a ${libInit_shared}/* ${leanshared_1}/* ${leanshared}/* -o $out/bin/lean
'';
# derivation following the directory layout of the "basic" setup, mostly useful for running tests
lean-all = stdenv.mkDerivation {
name = "lean-${desc}";
buildCommand = ''
mkdir -p $out/bin $out/lib/lean
ln -sf ${leancpp}/lib/lean/* ${lib.concatMapStringsSep " " (l: "${l.modRoot}/* ${l.staticLib}/*") (lib.reverseList stdlib)} ${libInit_shared}/* ${leanshared_1}/* ${leanshared}/* ${Lake.sharedLib}/* $out/lib/lean/
# put everything in a single final derivation so `IO.appDir` references work
cp ${lean}/bin/lean ${leanc}/bin/leanc ${Lake-Main.executable}/bin/lake $out/bin
# NOTE: `lndir` will not override existing `bin/leanc`
${lndir}/bin/lndir -silent ${lean-bin-tools-unwrapped} $out
'';
meta.mainProgram = "lean";
};
cacheRoots = linkFarmFromDrvs "cacheRoots" ([
stage0 lean leanc lean-all iTree modDepsFiles depRoots Leanc.src
] ++ map (lib: lib.oTree) stdlib);
test = buildCMake {
name = "lean-test-${desc}";
realSrc = lib.sourceByRegex src [ "src.*" "tests.*" ];
buildInputs = [ gmp libuv perl git cadical ];
preConfigure = ''
cd src
'';
extraCMakeFlags = [ "-DLLVM=OFF" ];
postConfigure = ''
patchShebangs ../../tests ../lake
rm -r bin lib include share
ln -sf ${lean-all}/* .
'';
buildPhase = ''
ctest --output-junit test-results.xml --output-on-failure -E 'leancomptest_(doc_example|foreign)|leanlaketest_reverse-ffi|leanruntest_timeIO' -j$NIX_BUILD_CORES
'';
installPhase = ''
mkdir $out
mv test-results.xml $out
'';
};
update-stage0 =
let cTree = symlinkJoin { name = "cs"; paths = map (lib: lib.cTree) (stdlib ++ [Lake-Main]); }; in
writeShellScriptBin "update-stage0" ''
CSRCS=${cTree} CP_C_PARAMS="--dereference --no-preserve=all" ${src + "/script/lib/update-stage0"}
'';
update-stage0-commit = writeShellScriptBin "update-stage0-commit" ''
set -euo pipefail
${update-stage0}/bin/update-stage0
git commit -m "chore: update stage0"
'';
link-ilean = writeShellScriptBin "link-ilean" ''
dest=''${1:-src}
rm -rf $dest/build/lib || true
mkdir -p $dest/build/lib
ln -s ${iTree}/* $dest/build/lib
'';
benchmarks =
let
entries = attrNames (readDir (src + "/tests/bench"));
leanFiles = map (n: elemAt n 0) (filter (n: n != null) (map (match "(.*)\.lean") entries));
in lib.genAttrs leanFiles (n: (buildLeanPackage {
name = n;
src = filterSource (e: _: baseNameOf e == "${n}.lean") (src + "/tests/bench");
}).executable);
};
stage1 = stage { stage = 1; prevStage = stage0; self = stage1; };
stage2 = stage { stage = 2; prevStage = stage1; self = stage2; };
stage3 = stage { stage = 3; prevStage = stage2; self = stage3; };
}

247
nix/buildLeanPackage.nix Normal file
View File

@@ -0,0 +1,247 @@
{ lean, lean-leanDeps ? lean, lean-final ? lean, leanc,
stdenv, lib, coreutils, gnused, writeShellScriptBin, bash, substituteAll, symlinkJoin, linkFarmFromDrvs,
runCommand, darwin, mkShell, ... }:
let lean-final' = lean-final; in
lib.makeOverridable (
{ name, src, fullSrc ? src, srcPrefix ? "", srcPath ? "$PWD/${srcPrefix}",
# Lean dependencies. Each entry should be an output of buildLeanPackage.
deps ? [ lean.Init lean.Std lean.Lean ],
# Static library dependencies. Each derivation `static` should contain a static library in the directory `${static}`.
staticLibDeps ? [],
# Whether to wrap static library inputs in a -Wl,--start-group [...] -Wl,--end-group to ensure dependencies are resolved.
groupStaticLibs ? false,
# Shared library dependencies included at interpretation with --load-dynlib and linked to. Each derivation `shared` should contain a
# shared library at the path `${shared}/${shared.libName or shared.name}` and a name to link to like `-l${shared.linkName or shared.name}`.
# These libs are also linked to in packages that depend on this one.
nativeSharedLibs ? [],
# Lean modules to include.
# A set of Lean modules names as strings (`"Foo.Bar"`) or attrsets (`{ name = "Foo.Bar"; glob = "one" | "submodules" | "andSubmodules"; }`);
# see Lake README for glob meanings. Dependencies of selected modules are always included.
roots ? [ name ],
# Output from `lean --deps-json` on package source files. Persist the corresponding output attribute to a file and pass it back in here to avoid IFD.
# Must be refreshed on any change in `import`s or set of source file names.
modDepsFile ? null,
# Whether to compile each module into a native shared library that is loaded whenever the module is imported in order to accelerate evaluation
precompileModules ? false,
# Whether to compile the package into a native shared library that is loaded whenever *any* of the package's modules is imported into another package.
# If `precompileModules` is also `true`, the latter only affects imports within the current package.
precompilePackage ? precompileModules,
# Lean plugin dependencies. Each derivation `plugin` should contain a plugin library at path `${plugin}/${plugin.name}`.
pluginDeps ? [],
# `overrideAttrs` for `buildMod`
overrideBuildModAttrs ? null,
debug ? false, leanFlags ? [], leancFlags ? [], linkFlags ? [], executableName ? lib.toLower name, libName ? name, sharedLibName ? libName,
srcTarget ? "..#stage0", srcArgs ? "(\${args[*]})", lean-final ? lean-final' }@args:
with builtins; let
# "Init.Core" ~> "Init/Core"
modToPath = mod: replaceStrings ["."] ["/"] mod;
modToAbsPath = mod: "${src}/${modToPath mod}";
# sanitize file name before copying to store, except when already in store
copyToStoreSafe = base: suffix: if lib.isDerivation base then base + suffix else
builtins.path { name = lib.strings.sanitizeDerivationName (baseNameOf suffix); path = base + suffix; };
modToLean = mod: copyToStoreSafe src "/${modToPath mod}.lean";
bareStdenv = ./bareStdenv;
mkBareDerivation = args: derivation (args // {
name = lib.strings.sanitizeDerivationName args.name;
stdenv = bareStdenv;
inherit (stdenv) system;
buildInputs = (args.buildInputs or []) ++ [ coreutils ];
builder = stdenv.shell;
args = [ "-c" ''
source $stdenv/setup
set -u
${args.buildCommand}
'' ];
}) // { overrideAttrs = f: mkBareDerivation (lib.fix (lib.extends f (_: args))); };
runBareCommand = name: args: buildCommand: mkBareDerivation (args // { inherit name buildCommand; });
runBareCommandLocal = name: args: buildCommand: runBareCommand name (args // {
preferLocalBuild = true;
allowSubstitutes = false;
}) buildCommand;
mkSharedLib = name: args: runBareCommand "${name}-dynlib" {
buildInputs = [ stdenv.cc ] ++ lib.optional stdenv.isDarwin darwin.cctools;
libName = "${name}${stdenv.hostPlatform.extensions.sharedLibrary}";
} ''
mkdir -p $out
${leanc}/bin/leanc -shared ${args} -o $out/$libName
'';
depRoot = name: deps: mkBareDerivation {
name = "${name}-depRoot";
inherit deps;
depRoots = map (drv: drv.LEAN_PATH) deps;
passAsFile = [ "deps" "depRoots" ];
buildCommand = ''
mkdir -p $out
for i in $(cat $depRootsPath); do
cp -dru --no-preserve=mode $i/. $out
done
for i in $(cat $depsPath); do
cp -drsu --no-preserve=mode $i/. $out
done
'';
};
srcRoot = src;
# A flattened list of Lean-module dependencies (`deps`)
allExternalDeps = lib.unique (lib.foldr (dep: allExternalDeps: allExternalDeps ++ [ dep ] ++ dep.allExternalDeps) [] deps);
allNativeSharedLibs =
lib.unique (lib.flatten (nativeSharedLibs ++ (map (dep: dep.allNativeSharedLibs or []) allExternalDeps)));
# A flattened list of all static library dependencies: this and every dep module's explicitly provided `staticLibDeps`,
# plus every dep module itself: `dep.staticLib`
allStaticLibDeps =
lib.unique (lib.flatten (staticLibDeps ++ (map (dep: [dep.staticLib] ++ dep.staticLibDeps or []) allExternalDeps)));
pathOfSharedLib = dep: dep.libPath or "${dep}/${dep.libName or dep.name}";
leanPluginFlags = lib.concatStringsSep " " (map (dep: "--plugin=${pathOfSharedLib dep}") pluginDeps);
loadDynlibsOfDeps = deps: lib.unique (concatMap (d: d.propagatedLoadDynlibs) deps);
# submodules "Init" = ["Init.List.Basic", "Init.Core", ...]
submodules = mod: let
dir = readDir (modToAbsPath mod);
f = p: t:
if t == "directory" then
submodules "${mod}.${p}"
else
let m = builtins.match "(.*)\.lean" p;
in lib.optional (m != null) "${mod}.${head m}";
in concatLists (lib.mapAttrsToList f dir);
# conservatively approximate list of source files matched by glob
expandGlobAllApprox = g:
if typeOf g == "string" then
# we can't know the required files without parsing dependencies (which is what we want this
# function for), so we approximate to the entire package.
let root = (head (split "\\." g));
in lib.optional (pathExists (src + "/${modToPath root}.lean")) root ++ lib.optionals (pathExists (modToAbsPath root)) (submodules root)
else if g.glob == "one" then expandGlobAllApprox g.mod
else if g.glob == "submodules" then submodules g.mod
else if g.glob == "andSubmodules" then [g.mod] ++ submodules g.mod
else throw "unknown glob kind '${g}'";
# list of modules that could potentially be involved in the build
candidateMods = lib.unique (concatMap expandGlobAllApprox roots);
candidateFiles = map modToLean candidateMods;
modDepsFile = args.modDepsFile or mkBareDerivation {
name = "${name}-deps.json";
candidateFiles = lib.concatStringsSep " " candidateFiles;
passAsFile = [ "candidateFiles" ];
buildCommand = ''
mkdir $out
${lean-leanDeps}/bin/lean --deps-json --stdin < $candidateFilesPath > $out/$name
'';
};
modDeps = fromJSON (
# the only possible references to store paths in the JSON should be inside errors, so no chance of missed dependencies from this
unsafeDiscardStringContext (readFile "${modDepsFile}/${modDepsFile.name}"));
# map from module name to list of imports
modDepsMap = listToAttrs (lib.zipListsWith lib.nameValuePair candidateMods modDeps.imports);
maybeOverrideAttrs = f: x: if f != null then x.overrideAttrs f else x;
# build module (.olean and .c) given derivations of all (immediate) dependencies
# TODO: make `rec` parts override-compatible?
buildMod = mod: deps: maybeOverrideAttrs overrideBuildModAttrs (mkBareDerivation rec {
name = "${mod}";
LEAN_PATH = depRoot mod deps;
LEAN_ABORT_ON_PANIC = "1";
relpath = modToPath mod;
buildInputs = [ lean ];
leanPath = relpath + ".lean";
# should be either single .lean file or directory directly containing .lean file plus dependencies
src = copyToStoreSafe srcRoot ("/" + leanPath);
outputs = [ "out" "ilean" "c" ];
oleanPath = relpath + ".olean";
ileanPath = relpath + ".ilean";
cPath = relpath + ".c";
inherit leanFlags leanPluginFlags;
leanLoadDynlibFlags = map (p: "--load-dynlib=${pathOfSharedLib p}") (loadDynlibsOfDeps deps);
buildCommand = ''
dir=$(dirname $relpath)
mkdir -p $dir $out/$dir $ilean/$dir $c/$dir
if [ -d $src ]; then cp -r $src/. .; else cp $src $leanPath; fi
lean -o $out/$oleanPath -i $out/$ileanPath -c $c/$cPath $leanPath $leanFlags $leanPluginFlags $leanLoadDynlibFlags
'';
}) // {
inherit deps;
propagatedLoadDynlibs = loadDynlibsOfDeps deps;
};
compileMod = mod: drv: mkBareDerivation {
name = "${mod}-cc";
buildInputs = [ leanc stdenv.cc ];
hardeningDisable = [ "all" ];
oPath = drv.relpath + ".o";
inherit leancFlags;
buildCommand = ''
mkdir -p $out/$(dirname ${drv.relpath})
# make local "copy" so `drv`'s Nix store path doesn't end up in ccache's hash
ln -s ${drv.c}/${drv.cPath} src.c
# on the other hand, a debug build is pretty fast anyway, so preserve the path for gdb
leanc -c -o $out/$oPath $leancFlags -fPIC ${if debug then "${drv.c}/${drv.cPath} -g" else "src.c -O3 -DNDEBUG -DLEAN_EXPORTING"}
'';
};
mkMod = mod: deps:
let drv = buildMod mod deps;
obj = compileMod mod drv;
# this attribute will only be used if any dependent module is precompiled
sharedLib = mkSharedLib mod "${obj}/${obj.oPath} ${lib.concatStringsSep " " (map (d: pathOfSharedLib d.sharedLib) deps)}";
in drv // {
inherit obj sharedLib;
} // lib.optionalAttrs precompileModules {
propagatedLoadDynlibs = [sharedLib];
};
externalModMap = lib.foldr (dep: depMap: depMap // dep.mods) {} allExternalDeps;
# map from module name to derivation
modCandidates = mapAttrs (mod: header:
let
deps = if header.errors == []
then map (m: m.module) header.result.imports
else abort "errors while parsing imports of ${mod}:\n${lib.concatStringsSep "\n" header.errors}";
in mkMod mod (map (dep: if modDepsMap ? ${dep} then modCandidates.${dep} else externalModMap.${dep}) deps)) modDepsMap;
expandGlob = g:
if typeOf g == "string" then [g]
else if g.glob == "one" then [g.mod]
else if g.glob == "submodules" then submodules g.mod
else if g.glob == "andSubmodules" then [g.mod] ++ submodules g.mod
else throw "unknown glob kind '${g}'";
# subset of `modCandidates` that is transitively reachable from `roots`
mods' = listToAttrs (map (e: { name = e.key; value = modCandidates.${e.key}; }) (genericClosure {
startSet = map (m: { key = m; }) (concatMap expandGlob roots);
operator = e: if modDepsMap ? ${e.key} then map (m: { key = m.module; }) (filter (m: modCandidates ? ${m.module}) modDepsMap.${e.key}.result.imports) else [];
}));
allLinkFlags = lib.foldr (shared: acc: acc ++ [ "-L${shared}" "-l${shared.linkName or shared.name}" ]) linkFlags allNativeSharedLibs;
objects = mapAttrs (_: m: m.obj) mods';
bintools = if stdenv.isDarwin then darwin.cctools else stdenv.cc.bintools.bintools;
staticLib = runCommand "${name}-lib" { buildInputs = [ bintools ]; } ''
mkdir -p $out
ar Trcs $out/lib${libName}.a ${lib.concatStringsSep " " (map (drv: "${drv}/${drv.oPath}") (attrValues objects))};
'';
staticLibLinkWrapper = libs: if groupStaticLibs && !stdenv.isDarwin
then "-Wl,--start-group ${libs} -Wl,--end-group"
else "${libs}";
in rec {
inherit name lean deps staticLibDeps allNativeSharedLibs allLinkFlags allExternalDeps src objects staticLib modDepsFile;
mods = mapAttrs (_: m:
m //
# if neither precompilation option was set but a dependent module wants to be precompiled, default to precompiling this package whole
lib.optionalAttrs (precompilePackage || !precompileModules) { inherit sharedLib; } //
lib.optionalAttrs precompilePackage { propagatedLoadDynlibs = [sharedLib]; })
mods';
modRoot = depRoot name (attrValues mods);
depRoots = linkFarmFromDrvs "depRoots" (map (m: m.LEAN_PATH) (attrValues mods));
cTree = symlinkJoin { name = "${name}-cTree"; paths = map (mod: mod.c) (attrValues mods); };
oTree = symlinkJoin { name = "${name}-oTree"; paths = (attrValues objects); };
iTree = symlinkJoin { name = "${name}-iTree"; paths = map (mod: mod.ilean) (attrValues mods); };
sharedLib = mkSharedLib "lib${sharedLibName}" ''
${if stdenv.isDarwin then "-Wl,-force_load,${staticLib}/lib${libName}.a" else "-Wl,--whole-archive ${staticLib}/lib${libName}.a -Wl,--no-whole-archive"} \
${lib.concatStringsSep " " (map (d: "${d.sharedLib}/*") deps)}'';
executable = lib.makeOverridable ({ withSharedStdlib ? true }: let
objPaths = map (drv: "${drv}/${drv.oPath}") (attrValues objects) ++ lib.optional withSharedStdlib "${lean-final.leanshared}/*";
in runCommand executableName { buildInputs = [ stdenv.cc leanc ]; } ''
mkdir -p $out/bin
leanc ${staticLibLinkWrapper (lib.concatStringsSep " " (objPaths ++ map (d: "${d}/*.a") allStaticLibDeps))} \
-o $out/bin/${executableName} \
${lib.concatStringsSep " " allLinkFlags}
'') {};
})

42
nix/lake-dev.in Normal file
View File

@@ -0,0 +1,42 @@
#!@bash@/bin/bash
set -euo pipefail
function pebkac() {
echo 'This is just a simple Nix adapter for `lake print-paths|serve`.'
exit 1
}
[[ $# -gt 0 ]] || pebkac
case $1 in
--version)
# minimum version for `lake serve` with fallback
echo 3.1.0
;;
print-paths)
shift
deps="$@"
root=.
# fall back to initial package if not in package
[[ ! -f "$root/flake.nix" ]] && root="@srcRoot@"
target="$root#print-paths"
args=()
# HACK: use stage 0 instead of 1 inside Lean's own `src/`
[[ -d Lean && -f ../flake.nix ]] && target="@srcTarget@print-paths" && args=@srcArgs@
for dep in $deps; do
target="$target.\"$dep\""
done
echo "Building dependencies..." >&2
# -v only has "built ...", but "-vv" is a bit too verbose
exec @nix@/bin/nix run "$target" ${args[*]} -v
;;
serve)
shift
[[ ${1:-} == "--" ]] && shift
# `link-ilean` puts them there
LEAN_PATH=${LEAN_PATH:+$LEAN_PATH:}$PWD/build/lib exec $(dirname $0)/lean --server "$@"
;;
*)
pebkac
;;
esac

28
nix/lean-dev.in Normal file
View File

@@ -0,0 +1,28 @@
#!@bash@/bin/bash
set -euo pipefail
root="."
# find package root
while [[ "$root" != / ]]; do
[ -f "$root/flake.nix" ] && break
root="$(realpath "$root/..")"
done
# fall back to initial package if not in package
[[ ! -f "$root/flake.nix" ]] && root="@srcRoot@"
# use Lean w/ package unless in server mode (which has its own LEAN_PATH logic)
target="$root#lean-package"
for arg in "$@"; do
case $arg in
--server | --worker | -v | --version)
target="$root#lean"
;;
esac
done
args=(-- "$@")
# HACK: use stage 0 instead of 1 inside Lean's own `src/`
[[ -d Lean && -f ../flake.nix ]] && target="@srcTarget@" && args=@srcArgs@
LEAN_SYSROOT="$(dirname "$0")/.." exec @nix@/bin/nix ${LEAN_NIX_ARGS:-} run "$target" ${args[*]}

52
nix/packages.nix Normal file
View File

@@ -0,0 +1,52 @@
{ src, pkgs, ... } @ args:
with pkgs;
let
# https://github.com/NixOS/nixpkgs/issues/130963
llvmPackages = if stdenv.isDarwin then llvmPackages_11 else llvmPackages_15;
cc = (ccacheWrapper.override rec {
cc = llvmPackages.clang;
extraConfig = ''
export CCACHE_DIR=/nix/var/cache/ccache
export CCACHE_UMASK=007
export CCACHE_BASE_DIR=$NIX_BUILD_TOP
# https://github.com/NixOS/nixpkgs/issues/109033
args=("$@")
for ((i=0; i<"''${#args[@]}"; i++)); do
case ''${args[i]} in
-frandom-seed=*) unset args[i]; break;;
esac
done
set -- "''${args[@]}"
[ -d $CCACHE_DIR ] || exec ${cc}/bin/$(basename "$0") "$@"
'';
}).overrideAttrs (old: {
# https://github.com/NixOS/nixpkgs/issues/119779
installPhase = builtins.replaceStrings ["use_response_file_by_default=1"] ["use_response_file_by_default=0"] old.installPhase;
});
stdenv' = if stdenv.isLinux then useGoldLinker stdenv else stdenv;
lean = callPackage (import ./bootstrap.nix) (args // {
stdenv = overrideCC stdenv' cc;
inherit src buildLeanPackage llvmPackages;
});
makeOverridableLeanPackage = f:
let newF = origArgs: f origArgs // {
overrideArgs = newArgs: makeOverridableLeanPackage f (origArgs // newArgs);
};
in lib.setFunctionArgs newF (lib.getFunctionArgs f) // {
override = args: makeOverridableLeanPackage (f.override args);
};
buildLeanPackage = makeOverridableLeanPackage (callPackage (import ./buildLeanPackage.nix) (args // {
inherit (lean) stdenv;
lean = lean.stage1;
inherit (lean.stage1) leanc;
}));
in {
inherit cc buildLeanPackage llvmPackages;
nixpkgs = pkgs;
ciShell = writeShellScriptBin "ciShell" ''
set -o pipefail
export PATH=${moreutils}/bin:$PATH
# prefix lines with cumulative and individual execution time
"$@" |& ts -i "(%.S)]" | ts -s "[%M:%S"
'';
} // lean.stage1

View File

@@ -0,0 +1 @@
#eval "Hello, world!"

View File

@@ -0,0 +1,21 @@
{
description = "My Lean package";
inputs.lean.url = "github:leanprover/lean4";
inputs.flake-utils.url = "github:numtide/flake-utils";
outputs = { self, lean, flake-utils }: flake-utils.lib.eachDefaultSystem (system:
let
leanPkgs = lean.packages.${system};
pkg = leanPkgs.buildLeanPackage {
name = "MyPackage"; # must match the name of the top-level .lean file
src = ./.;
};
in {
packages = pkg // {
inherit (leanPkgs) lean;
};
defaultPackage = pkg.modRoot;
});
}

View File

@@ -1,54 +0,0 @@
This release introduces the Lean module system, which allows files to
control the visibility of their contents for other files. In previous
releases, this feature was available as a preview when the option
`experimental.module` was set to `true`; it is now a fully supported
feature of Lean.
# Benefits
Because modules reduce the amount of information exposed to other
code, they speed up rebuilds because irrelevant changes can be
ignored, they make it possible to be deliberate about API evolution by
hiding details that may change from clients, they help proofs be
checked faster by avoiding accidentally unfolding definitions, and
they lead to smaller executable files through improved dead code
elimination.
# Visibility
A source file is a module if it begins with the `module` keyword. By
default, declarations in a module are private; the `public` modifier
exports them. Proofs of theorems and bodies of definitions are private
by default even when their signatures are public; the bodies of
definitions can be made public by adding the `@[expose]`
attribute. Theorems and opaque constants never expose their bodies.
`public section` and `@[expose] section` change the default visibility
of declarations in the section.
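As a sketch of these defaults (declaration names are illustrative):
```lean
module

/-- The signature is exported; the body stays private to this module. -/
public def cost (n : Nat) : Nat := 2 * n

/-- Both the signature and the body are exported, so clients may unfold `double`. -/
@[expose] public def double (n : Nat) : Nat := n + n

/-- Not visible to importing modules at all. -/
def helper (n : Nat) : Nat := n + 1
```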
# Imports
Modules may only import other modules. By default, `import` adds the
public information of the imported module to the private scope of the
current module. Adding the `public` modifier to an import places the
imported module's public information in the public scope of the
current module, exposing it in turn to the current module's clients.
Within a package, `import all` can be used to import another module's
private scope into the current module; this can be used to separate
lemmas or tests from definition modules without exposing details to
downstream clients.
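For illustration, a module header combining these forms might read (module names are placeholders):
```lean
module

public import MyLib.Interface  -- re-exported to this module's own clients
import MyLib.Lemmas            -- available only in this module's private scope
import all MyLib.Defs          -- private scope of another module in the same package
```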
# Meta Code
Code used in metaprograms must be marked `meta`. This ensures that the
code is compiled and available for execution when it is needed during
elaboration. Meta code may only reference other meta code. A whole
module can be made available in the meta phase using `meta import`;
this allows code to be shared across phases by importing the module in
each phase. Code that is reachable from public metaprograms must be
imported via `public meta import`, while local metaprograms can use
plain `meta import` for their dependencies.
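A minimal sketch (the helper name is illustrative):
```lean
module

public meta import Lean

/-- Metaprograms must be `meta` so they are compiled and runnable during elaboration. -/
public meta def emitGreeting : Lean.Elab.Command.CommandElabM Unit :=
  Lean.logInfo "hello from elaboration time"
```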
The module system is described in detail in [the Lean language reference](https://lean-reference-manual-review.netlify.app/find/?domain=Verso.Genre.Manual.section&name=files).

View File

@@ -1,75 +0,0 @@
import Lake.CLI.Main
/-!
Usage: `lean --run script/Modulize.lean [--meta] file1.lean file2.lean ...`
A simple script that inserts `module` and `public section` into un-modulized files and
bumps their imports to `public`.
When `--meta` is passed, `public meta section` and `public meta import` are used instead.
-/
open Lean Parser.Module
def main (args : List String) : IO Unit := do
let mut args := args
let mut doMeta := false
while !args.isEmpty && args[0]!.startsWith "-" do
match args[0]! with
| "--meta" => doMeta := true
| arg => throw <| .userError s!"unknown flag '{arg}'"
args := args.tail
for path in args do
-- Parse the input file
let mut text ← IO.FS.readFile path
let inputCtx := Parser.mkInputContext text path
let (header, parserState, msgs) ← Parser.parseHeader inputCtx
if !msgs.toList.isEmpty then -- skip this file if there are parse errors
msgs.forM fun msg => msg.toString >>= IO.println
throw <| .userError "parse errors in file"
let `(header| $[module%$moduleTk?]? $imps:import*) := header
| throw <| .userError s!"unexpected header syntax of {path}"
if moduleTk?.isSome then
continue
-- initial whitespace if empty header
let startPos := header.raw.getPos? |>.getD parserState.pos
let dummyEnv ← mkEmptyEnvironment
let (initCmd, parserState', _) :=
Parser.parseCommand inputCtx { env := dummyEnv, options := {} } parserState msgs
-- insert section if any trailing command
if !initCmd.isOfKind ``Parser.Command.eoi then
let insertPos? :=
-- put below initial module docstring if any
guard (initCmd.isOfKind ``Parser.Command.moduleDoc) *> initCmd.getTailPos? <|>
-- else below header
header.raw.getTailPos?
let insertPos := insertPos?.getD startPos -- empty header
let mut sec := if doMeta then
"public meta section"
else
"@[expose] public section"
if !imps.isEmpty then
sec := "\n\n" ++ sec
if insertPos?.isNone then
sec := sec ++ "\n\n"
text := text.extract 0 insertPos ++ sec ++ text.extract insertPos text.rawEndPos
-- prepend each import with `public `
for imp in imps.reverse do
let insertPos := imp.raw.getPos?.get!
let prfx := if doMeta then "public meta " else "public "
text := text.extract 0 insertPos ++ prfx ++ text.extract insertPos text.rawEndPos
-- insert `module` header
let mut initText := text.extract 0 startPos
if !initText.trim.isEmpty then
-- If there is a header comment, preserve it and put `module` in the line after
initText := initText.trimRight ++ "\n"
text := initText ++ "module\n\n" ++ text.extract startPos text.rawEndPos
IO.FS.writeFile path text

View File

@@ -1,821 +0,0 @@
/-
Copyright (c) 2023 Mario Carneiro. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Mario Carneiro, Sebastian Ullrich
-/
module
import Lean.Environment
import Lean.ExtraModUses
import Lake.CLI.Main
import Lean.Parser.Module
import Lake.Load.Workspace
/-! # Shake: A Lean import minimizer
This command will check the current project (or a specified target module) and all dependencies for
unused imports. This works by looking at generated `.olean` files to deduce required imports and
ensuring that every import is used to contribute some constant or other elaboration dependency
recorded by `recordExtraModUse` and friends.
-/
/-- help string for the command line interface -/
def help : String := "Lean project tree shaking tool
Usage: lake exe shake [OPTIONS] <MODULE>..
Arguments:
<MODULE>
A module path like `Mathlib`. All files transitively reachable from the
provided module(s) will be checked.
Options:
--force
Skips the `lake build --no-build` sanity check
--keep-implied
Preserves existing imports that are implied by other imports and thus not technically needed
anymore
--keep-prefix
If an import `X` would be replaced in favor of a more specific import `X.Y...` it implies,
preserves the original import instead. More generally, prefers inserting `import X` even if it
was not part of the original imports as long as it was in the original transitive import closure
of the current module.
--keep-public
Preserves all `public` imports to avoid breaking changes for external downstream modules
--add-public
Adds new imports as `public` if they have been in the original public closure of that module.
In other words, public imports will not be removed from a module unless they are unused even
in the private scope, and those that are removed will be re-added as `public` in downstream
modules even if only needed in the private scope there. Unlike `--keep-public`, this may
introduce breaking changes but will still limit the number of inserted imports.
--explain
Gives constants explaining why each module is needed
--fix
Apply the suggested fixes directly. Make sure you have a clean checkout
before running this, so you can review the changes.
--gh-style
Outputs messages that can be parsed by `gh-problem-matcher-wrap`
Annotations:
The following annotations can be added to Lean files in order to configure the behavior of
`shake`. Only the substring `shake: ` directly followed by a directive is checked for, so multiple
directives can be mixed in one line such as `-- shake: keep-downstream, shake: keep-all`, and they
can be surrounded by arbitrary comments such as `-- shake: keep (metaprogram output dependency)`.
* `module -- shake: keep-downstream`:
Preserves this module in all (current) downstream modules, adding new imports of it if needed.
* `module -- shake: keep-all`:
Preserves all existing imports in this module as is. New imports now needed because of upstream
changes may still be added.
* `import X -- shake: keep`:
Preserves this specific import in the current module. The most common use case is to preserve a
public import that will be needed in downstream modules to make sense of the output of a
metaprogram defined in this module. For example, if a tactic is defined that may synthesize a
reference to a theorem when run, there is no way for `shake` to detect this by itself and the
module of that theorem should be publicly imported and annotated with `keep` in the tactic's
module.
```
public import X -- shake: keep (metaprogram output dependency)
...
elab \"my_tactic\" : tactic => do
... mkConst ``f -- `f`, defined in `X`, may appear in the output of this tactic
```
"
open Lean
/-- The parsed CLI arguments. See `help` for more information -/
structure Args where
help : Bool := false
keepImplied : Bool := false
keepPrefix : Bool := false
keepPublic : Bool := false
addPublic : Bool := false
force : Bool := false
githubStyle : Bool := false
explain : Bool := false
trace : Bool := false
fix : Bool := false
/-- `<MODULE>..`: the list of root modules to check -/
mods : Array Name := #[]
/-- We use `Nat` as a bitset for doing efficient set operations.
The bit indexes are usually module indices. -/
structure Bitset where
toNat : Nat
deriving Inhabited, DecidableEq, Repr
namespace Bitset
instance : EmptyCollection Bitset where
emptyCollection := { toNat := 0 }
instance : Insert Nat Bitset where
insert i s := { toNat := s.toNat ||| (1 <<< i) }
instance : Singleton Nat Bitset where
singleton i := insert i ∅
instance : Inter Bitset where
inter a b := { toNat := a.toNat &&& b.toNat }
instance : Union Bitset where
union a b := { toNat := a.toNat ||| b.toNat }
instance : XorOp Bitset where
xor a b := { toNat := a.toNat ^^^ b.toNat }
def has (s : Bitset) (i : Nat) : Bool := s ∩ {i} ≠ ∅
end Bitset
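-- Illustrative sketch (hypothetical values): with the instances above, module indices are
-- collected as set bits, e.g.
--   (insert 5 (insert 2 (∅ : Bitset))).toNat = 36           -- 0b100100
--   ((insert 5 (insert 2 (∅ : Bitset))) ∪ {7}).toNat = 164  -- 0b10100100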
/-- The kind of a module dependency, corresponding to the homonymous `ExtraModUse` fields. -/
structure NeedsKind where
isExported : Bool
isMeta : Bool
deriving Inhabited, BEq, Repr, Hashable
namespace NeedsKind
@[match_pattern] abbrev priv : NeedsKind := { isExported := false, isMeta := false }
@[match_pattern] abbrev pub : NeedsKind := { isExported := true, isMeta := false }
@[match_pattern] abbrev metaPriv : NeedsKind := { isExported := false, isMeta := true }
@[match_pattern] abbrev metaPub : NeedsKind := { isExported := true, isMeta := true }
def all : Array NeedsKind := #[pub, priv, metaPub, metaPriv]
def ofImport : Lean.Import → NeedsKind
| { isExported := true, isMeta := true, .. } => .metaPub
| { isExported := true, isMeta := false, .. } => .pub
| { isExported := false, isMeta := true, .. } => .metaPriv
| { isExported := false, isMeta := false, .. } => .priv
end NeedsKind
/-- Logically, a map `NeedsKind → Set ModuleIdx`, or `Set Import`. -/
structure Needs where
pub : Bitset
priv : Bitset
metaPub : Bitset
metaPriv : Bitset
deriving Inhabited, Repr
def Needs.empty : Needs := default
def Needs.get (needs : Needs) (k : NeedsKind) : Bitset :=
match k with
| .pub => needs.pub
| .priv => needs.priv
| .metaPub => needs.metaPub
| .metaPriv => needs.metaPriv
def Needs.has (needs : Needs) (k : NeedsKind) (i : ModuleIdx) : Bool :=
needs.get k |>.has i
def Needs.set (needs : Needs) (k : NeedsKind) (s : Bitset) : Needs :=
match k with
| .pub => { needs with pub := s }
| .priv => { needs with priv := s }
| .metaPub => { needs with metaPub := s }
| .metaPriv => { needs with metaPriv := s }
def Needs.modify (needs : Needs) (k : NeedsKind) (f : Bitset → Bitset) : Needs :=
needs.set k (f (needs.get k))
def Needs.union (needs : Needs) (k : NeedsKind) (s : Bitset) : Needs :=
needs.modify k (· ∪ s)
def Needs.sub (needs : Needs) (k : NeedsKind) (s : Bitset) : Needs :=
needs.modify k (fun s' => s' ^^^ (s' ∩ s))
instance : Union Needs where
union a b := {
pub := a.pub ∪ b.pub
priv := a.priv ∪ b.priv
metaPub := a.metaPub ∪ b.metaPub
metaPriv := a.metaPriv ∪ b.metaPriv }
/-- The list of edits that will be applied in `--fix`. `edits[i] = (removed, added)` where:
* If `j ∈ removed` then we want to delete the import `j` from the imports of `i`
* If `j ∈ added` then we want to add the import `j` to the imports of `i`.
-/
abbrev Edits := Std.HashMap Name (Array Import × Array Import)
/-- The main state of the checker, containing information on all loaded modules. -/
structure State where
env : Environment
/--
`transDeps[i]` is the (non-reflexive) transitive closure of `mods[i].imports`. More specifically,
* `j ∈ transDeps[i].pub` if `i -(public import)->+ j`
* `j ∈ transDeps[i].priv` if `i -(import ...)-> _ -(public import)->* j`
* `j ∈ transDeps[i].priv` if `i -(import all)->+ i'` and `j ∈ transDeps[i'].pub/priv`
* `j ∈ transDeps[i].metaPub` if `i -(public (meta)? import)->* _ -(public meta import)-> _ -(public (meta)? import)->* j`
* `j ∈ transDeps[i].metaPriv` if `i -(meta import ...)-> _ -(public (meta)? import)->* j`
* `j ∈ transDeps[i].metaPriv` if `i -(import ...)-> i'` and `j ∈ transDeps[i'].metaPub`
* `j ∈ transDeps[i].metaPriv` if `i -(import all)->+ i'` and `j ∈ transDeps[i'].metaPub/metaPriv`
-/
transDeps : Array Needs := #[]
/--
`transDepsOrig` is the initial value of `transDeps` before changes potentially resulting from
changes to upstream headers.
-/
transDepsOrig : Array Needs := #[]
/-- Modules that should always be preserved downstream. -/
preserve : Needs := default
/-- Edits to be applied to the module imports. -/
edits : Edits := {}
def State.mods (s : State) := s.env.header.moduleData
def State.modNames (s : State) := s.env.header.moduleNames
/--
Given module `j`'s transitive dependencies, computes the union of `transImps` and the transitive
dependencies resulting from importing the module via `imp` according to the rules of
`State.transDeps`.
-/
def addTransitiveImps (transImps : Needs) (imp : Import) (j : Nat) (impTransImps : Needs) : Needs := Id.run do
let mut transImps := transImps
-- `j ∈ transDeps[i].pub` if `i -(public import)->+ j`
if imp.isExported && !imp.isMeta then
transImps := transImps.union .pub {j} |>.union .pub (impTransImps.get .pub)
if !imp.isExported && !imp.isMeta then
-- `j ∈ transDeps[i].priv` if `i -(import ...)-> _ -(public import)->* j`
transImps := transImps.union .priv {j} |>.union .priv (impTransImps.get .pub)
if imp.importAll then
-- `j ∈ transDeps[i].priv` if `i -(import all)->+ i'` and `j ∈ transDeps[i'].pub/priv`
transImps := transImps.union .priv (impTransImps.get .pub ∪ impTransImps.get .priv)
-- `j ∈ transDeps[i].metaPub` if `i -(public (meta)? import)->* _ -(public meta import)-> _ -(public (meta)? import)->* j`
if imp.isExported then
transImps := transImps.union .metaPub (impTransImps.get .metaPub)
if imp.isMeta then
transImps := transImps.union .metaPub {j} |>.union .metaPub (impTransImps.get .pub ∪ impTransImps.get .metaPub)
if !imp.isExported then
if imp.isMeta then
-- `j ∈ transDeps[i].metaPriv` if `i -(meta import ...)-> _ -(public (meta)? import)->* j`
transImps := transImps.union .metaPriv {j} |>.union .metaPriv (impTransImps.get .pub ∪ impTransImps.get .metaPub)
if imp.importAll then
-- `j ∈ transDeps[i].metaPriv` if `i -(import all)->+ i'` and `j ∈ transDeps[i'].metaPub/metaPriv`
transImps := transImps.union .metaPriv (impTransImps.get .metaPub ∪ impTransImps.get .metaPriv)
else
-- `j ∈ transDeps[i].metaPriv` if `i -(import ...)-> i'` and `j ∈ transDeps[i'].metaPub`
transImps := transImps.union .metaPriv (impTransImps.get .metaPub)
transImps
def isDeclMeta' (env : Environment) (declName : Name) : Bool :=
-- Matchers are not compiled by themselves but inlined by the compiler, so there is no IR decl
-- to be tagged as `meta`.
-- TODO: It would be better to base the entire `meta` inference on the IR only and consider module
-- references from any other context as compatible with both phases.
let inferFor :=
if declName.isStr && (declName.getString!.startsWith "match_" || declName.getString! == "_unsafe_rec") then declName.getPrefix else declName
-- `isMarkedMeta` knows about non-defs such as `meta structure`, isDeclMeta knows about decls
-- implicitly marked meta
isMarkedMeta env inferFor || isDeclMeta env inferFor
/--
Given an `Expr` reference, returns the declaration name that should be considered the reference, if
any.
-/
def getDepConstName? (env : Environment) (ref : Name) : Option Name := do
-- Ignore references to reserved names, they can be re-generated in-place
guard <| !isReservedName env ref
-- `_simp_...` constants are similar, use base decl instead
return if ref.isStr && ref.getString!.startsWith "_simp_" then
ref.getPrefix
else
ref
/-- Calculates the needs for a given module `mod` from constants and recorded extra uses. -/
def calcNeeds (s : State) (i : ModuleIdx) : Needs := Id.run do
let env := s.env
let mut needs := default
for ci in env.header.moduleData[i]!.constants do
-- Added guard for cases like `structure` that are still exported even if private
let pubCI? := guard (!isPrivateName ci.name) *> (env.setExporting true).find? ci.name
let k := { isExported := pubCI?.isSome, isMeta := isDeclMeta' env ci.name }
needs := visitExpr k ci.type needs
if let some e := ci.value? (allowOpaque := true) then
-- type and value has identical visibility under `meta`
let k := if k.isMeta then k else
if pubCI?.any (·.hasValue (allowOpaque := true)) then .pub else .priv
needs := visitExpr k e needs
for use in getExtraModUses env i do
let j := env.getModuleIdx? use.module |>.get!
needs := needs.union { use with } {j}
return needs
where
/-- Accumulate the results from expression `e` into `deps`. -/
visitExpr (k : NeedsKind) (e : Expr) (deps : Needs) : Needs :=
let env := s.env
Lean.Expr.foldConsts e deps fun c deps => Id.run do
let mut deps := deps
if let some c := getDepConstName? env c then
if let some j := env.getModuleIdxFor? c then
let k := { k with isMeta := k.isMeta && !isDeclMeta' env c }
if j != i then
deps := deps.union k {j}
for indMod in (indirectModUseExt.getState env)[c]?.getD #[] do
if s.transDeps[i]!.has k indMod then
deps := deps.union k {indMod}
return deps
/--
Calculates the same as `calcNeeds` but tracing each module to a use-def declaration pair or
`none` if merely a recorded extra use.
-/
def getExplanations (env : Environment) (i : ModuleIdx) :
Std.HashMap (ModuleIdx × NeedsKind) (Option (Name × Name)) := Id.run do
let mut deps := default
for ci in env.header.moduleData[i]!.constants do
-- Added guard for cases like `structure` that are still exported even if private
let pubCI? := guard (!isPrivateName ci.name) *> (env.setExporting true).find? ci.name
let k := { isExported := pubCI?.isSome, isMeta := isDeclMeta' env ci.name }
deps := visitExpr k ci.name ci.type deps
if let some e := ci.value? (allowOpaque := true) then
let k := if k.isMeta then k else
if pubCI?.any (·.hasValue (allowOpaque := true)) then .pub else .priv
deps := visitExpr k ci.name e deps
for use in getExtraModUses env i do
let j := env.getModuleIdx? use.module |>.get!
if !deps.contains (j, { use with }) then
deps := deps.insert (j, { use with }) none
return deps
where
/-- Accumulate the results from expression `e` into `deps`. -/
visitExpr (k : NeedsKind) name e deps :=
Lean.Expr.foldConsts e deps fun c deps => Id.run do
let mut deps := deps
if let some c := getDepConstName? env c then
if let some j := env.getModuleIdxFor? c then
let k := { k with isMeta := k.isMeta && !isDeclMeta' env c }
if
if let some (some (name', _)) := deps[(j, k)]? then
decide (name.toString.length < name'.toString.length)
else true
then
deps := deps.insert (j, k) (name, c)
return deps
partial def initStateFromEnv (env : Environment) : State := Id.run do
let mut s := { env }
for i in 0...env.header.moduleData.size do
let mod := env.header.moduleData[i]!
let mut imps := #[]
let mut transImps := Needs.empty
for imp in mod.imports do
let j := env.getModuleIdx? imp.module |>.get!
imps := imps.push j
transImps := addTransitiveImps transImps imp j s.transDeps[j]!
s := { s with transDeps := s.transDeps.push transImps }
s := { s with transDepsOrig := s.transDeps }
return s
/-- Register that we want to remove `tgt` from the imports of `src`. -/
def Edits.remove (ed : Edits) (src : Name) (tgt : Import) : Edits :=
match ed.get? src with
| none => ed.insert src (#[tgt], #[])
| some (a, b) => ed.insert src (a.push tgt, b)
/-- Register that we want to add `tgt` to the imports of `src`. -/
def Edits.add (ed : Edits) (src : Name) (tgt : Import) : Edits :=
match ed.get? src with
| none => ed.insert src (#[], #[tgt])
| some (a, b) => ed.insert src (a, b.push tgt)
/-- Parse a source file to extract the location of the import lines, for edits and error messages.
Returns `(path, inputCtx, imports, endPos)` where `imports` is the `Lean.Parser.Module.import` list
and `endPos` is the position of the end of the header.
-/
def parseHeaderFromString (text path : String) :
IO (System.FilePath × (ictx : Parser.InputContext) ×
TSyntax ``Parser.Module.header × String.Pos ictx.fileMap.source) := do
let inputCtx := Parser.mkInputContext text path
let (header, parserState, msgs) ← Parser.parseHeader inputCtx
if !msgs.toList.isEmpty then -- skip this file if there are parse errors
msgs.forM fun msg => msg.toString >>= IO.println
throw <| .userError "parse errors in file"
-- the insertion point for `add` is the first newline after the imports
let insertion := header.raw.getTailPos?.getD parserState.pos
let insertion := inputCtx.fileMap.source.pos! insertion |>.find (· == '\n') |>.next!
pure ⟨path, inputCtx, header, insertion⟩
/-- Parse a source file to extract the location of the import lines, for edits and error messages.
Returns `(path, inputCtx, imports, endPos)` where `imports` is the `Lean.Parser.Module.import` list
and `endPos` is the position of the end of the header.
-/
def parseHeader (srcSearchPath : SearchPath) (mod : Name) :
IO (System.FilePath × (ictx : Parser.InputContext) ×
TSyntax ``Parser.Module.header × String.Pos ictx.fileMap.source) := do
-- Parse the input file
let some path ← srcSearchPath.findModuleWithExt "lean" mod
| throw <| .userError s!"error: failed to find source file for {mod}"
let text ← IO.FS.readFile path
parseHeaderFromString text path.toString
def decodeHeader : TSyntax ``Parser.Module.header → Option (TSyntax `module) × Option (TSyntax `prelude) × TSyntaxArray ``Parser.Module.import
| `(Parser.Module.header| $[module%$moduleTk?]? $[prelude%$preludeTk?]? $imports*) =>
(moduleTk?.map .mk, preludeTk?.map .mk, imports)
| stx => panic! s!"unexpected header syntax {stx}"
def decodeImport : TSyntax ``Parser.Module.import → Import
| `(Parser.Module.import| $[public%$pubTk?]? $[meta%$metaTk?]? import $[all%$allTk?]? $id) =>
{ module := id.getId, isExported := pubTk?.isSome, isMeta := metaTk?.isSome, importAll := allTk?.isSome }
| stx => panic! s!"unexpected syntax {stx}"
/-- Analyze and report issues from module `i`. Arguments:
* `pkg`: the first component of the module name
* `srcSearchPath`: Used to find the path for error reporting purposes
* `i`: the module index
* `needs`: the module's calculated needs
* `addOnly`: if true, only add missing imports, do not remove unused ones
-/
def visitModule (pkg : Name) (srcSearchPath : SearchPath)
(i : Nat) (needs : Needs) (headerStx : TSyntax ``Parser.Module.header) (args : Args)
(addOnly := false) : StateT State IO Unit := do
if isExtraRevModUse (← get).env i then
modify fun s => { s with preserve := s.preserve.union (if args.addPublic then .pub else .priv) {i} }
if args.trace then
IO.eprintln s!"Preserving `{(← get).modNames[i]!}` because of recorded extra rev use"
-- only process modules in the selected package
-- TODO: should be after `keep-downstream` but core headers are not found yet?
if !pkg.isPrefixOf (← get).modNames[i]! then
return
let (module?, prelude?, imports) := decodeHeader headerStx
if module?.any (·.raw.getTrailing?.any (·.toString.contains "shake: keep-downstream")) then
modify fun s => { s with preserve := s.preserve.union (if args.addPublic then .pub else .priv) {i} }
let s ← get
let addOnly := addOnly || module?.any (·.raw.getTrailing?.any (·.toString.contains "shake: keep-all"))
let mut deps := needs
-- Add additional preserved imports
for impStx in imports do
let imp := decodeImport impStx
let j := s.env.getModuleIdx? imp.module |>.get!
let k := NeedsKind.ofImport imp
if addOnly ||
args.keepPublic && imp.isExported ||
impStx.raw.getTrailing?.any (·.toString.contains "shake: keep") then
deps := deps.union k {j}
if args.trace then
IO.eprintln s!"Adding `{imp}` as additional dependency"
for j in [0:s.mods.size] do
for k in NeedsKind.all do
-- Remove `meta` while preserving, no use-case for preserving `meta` so far.
-- Downgrade to private unless `--add-public` is used.
if s.transDepsOrig[i]!.has k j &&
(s.preserve.has { k with isMeta := false, isExported := false } j ||
s.preserve.has { k with isMeta := false, isExported := true } j) then
deps := deps.union { k with isMeta := false, isExported := k.isExported && args.addPublic } {j}
-- Do transitive reduction of `needs` in `deps`.
if !addOnly then
for j in [0:s.mods.size] do
let transDeps := s.transDeps[j]!
for k in NeedsKind.all do
if deps.has k j then
let transDeps := addTransitiveImps .empty { k with module := .anonymous } j transDeps
for k' in NeedsKind.all do
deps := deps.sub k' (transDeps.sub k' {j} |>.get k')
if prelude?.isNone then
deps := deps.union .pub {s.env.getModuleIdx? `Init |>.get!}
-- Accumulate `transDeps` which is the non-reflexive transitive closure of the still-live imports
let mut transDeps := Needs.empty
let mut alwaysAdd : Array Import := #[] -- to be added even if implied by other imports
for imp in s.mods[i]!.imports do
let j := s.env.getModuleIdx? imp.module |>.get!
let k := NeedsKind.ofImport imp
if deps.has k j || imp.importAll then
transDeps := addTransitiveImps transDeps imp j s.transDeps[j]!
deps := deps.union k {j}
-- skip folder-nested `public (meta)? import`s but remove `meta`
else if s.modNames[i]!.isPrefixOf imp.module then
let imp := { imp with isMeta := false }
let k := { k with isMeta := false }
if args.trace then
IO.eprintln s!"`{imp}` is preserved as folder-nested import"
transDeps := addTransitiveImps transDeps imp j s.transDeps[j]!
deps := deps.union k {j}
if !s.mods[i]!.imports.contains imp then
alwaysAdd := alwaysAdd.push imp
-- If `transDeps` does not cover `deps`, then we have to add back some imports until it does.
-- To minimize new imports we pick only new imports which are not transitively implied by
-- another new import, so we visit module indices in descending order.
let mut keptPrefix := false
let mut newTransDeps := transDeps
let mut toAdd : Array Import := #[]
for j in (0...s.mods.size).toArray.reverse do
for k in NeedsKind.all do
if deps.has k j && !newTransDeps.has k j && !newTransDeps.has { k with isExported := true } j then
-- `add-public/keep-prefix` may change the import and even module we're considering
let mut k := k
let mut imp : Import := { k with module := s.modNames[j]! }
let mut j := j
if args.trace then
IO.eprintln s!"`{imp}` is needed"
if args.addPublic && !k.isExported &&
-- also add as public if previously `public meta`, which could be from automatic porting
(s.transDepsOrig[i]!.has { k with isExported := true } j || s.transDepsOrig[i]!.has { k with isExported := true, isMeta := true } j) then
k := { k with isExported := true }
imp := { imp with isExported := true }
if args.trace then
IO.eprintln s!"* upgrading to `{imp}` because of `--add-public`"
if args.keepPrefix then
let rec tryPrefix : Name → Option ModuleIdx
| .str p _ => tryPrefix p <|> (do
let j' ← s.env.getModuleIdx? p
-- `j'` must be reachable from `i` (allow downgrading from `meta`)
guard <| s.transDepsOrig[i]!.has k j' || s.transDepsOrig[i]!.has { k with isMeta := true } j'
let j'transDeps := addTransitiveImps .empty p j' s.transDeps[j']!
-- `j` must be reachable from `j'` (now downgrading must be done in the other direction)
guard <| j'transDeps.has k j || j'transDeps.has { k with isMeta := false } j
return j')
| _ => none
if let some j' := tryPrefix imp.module then
imp := { imp with module := s.modNames[j']! }
j := j'
keptPrefix := true
if args.trace then
IO.eprintln s!"* upgrading to `{imp}` because of `--keep-prefix`"
if !s.mods[i]!.imports.contains imp then
toAdd := toAdd.push imp
deps := deps.union k {j}
newTransDeps := addTransitiveImps newTransDeps imp j s.transDeps[j]!
if keptPrefix then
-- if an import was replaced by `--keep-prefix`, we did not necessarily visit the modules in
-- dependency order anymore and so we have to redo the transitive closure checking
newTransDeps := transDeps
for j in (0...s.mods.size).toArray.reverse do
for k in NeedsKind.all do
if deps.has k j then
let mut imp : Import := { k with module := s.modNames[j]! }
if toAdd.contains imp && (newTransDeps.has k j || newTransDeps.has { k with isExported := true } j) then
if args.trace then
IO.eprintln s!"Removing `{imp}` from imports to be added because it is now implied"
toAdd := toAdd.erase imp
deps := deps.sub k {j}
else
newTransDeps := addTransitiveImps newTransDeps imp j s.transDeps[j]!
-- now that `toAdd` filtering is done, add `alwaysAdd`
toAdd := alwaysAdd ++ toAdd
-- Any import which is still not in `deps` was unused
let mut toRemove : Array Import := #[]
for imp in s.mods[i]!.imports do
let j := s.env.getModuleIdx? imp.module |>.get!
let k := NeedsKind.ofImport imp
if args.keepImplied && newTransDeps.has k j then
if args.trace && !deps.has k j then
IO.eprintln s!"`{imp}` is implied by other imports"
else if !deps.has k j then
if args.trace then
IO.eprintln s!"`{imp}` is now unused"
toRemove := toRemove.push imp
-- A private import should also be removed if the public version has been added
else if !k.isExported && !imp.importAll && newTransDeps.has { k with isExported := true } j then
if args.trace then
IO.eprintln s!"`{imp}` is already covered by `{ { imp with isExported := true } }`"
toRemove := toRemove.push imp
-- mark and report the removals
modify fun s => { s with
edits := toRemove.foldl (init := s.edits) fun edits imp =>
edits.remove s.modNames[i]! imp }
if !toAdd.isEmpty || !toRemove.isEmpty || args.explain then
if let some path ← srcSearchPath.findModuleWithExt "lean" s.modNames[i]! then
println! "{path}:"
else
println! "{s.modNames[i]!}:"
if !toRemove.isEmpty then
println! " remove {toRemove}"
if args.githubStyle then
try
let ⟨path, inputCtx, stx, endHeader⟩ ← parseHeader srcSearchPath s.modNames[i]!
let (_, _, imports) := decodeHeader stx
for stx in imports do
if toRemove.any fun imp => imp == decodeImport stx then
let pos := inputCtx.fileMap.toPosition stx.raw.getPos?.get!
println! "{path}:{pos.line}:{pos.column+1}: warning: unused import \
(use `lake exe shake --fix` to fix this, or `lake exe shake --update` to ignore)"
if !toAdd.isEmpty then
-- we put the insert message on the beginning of the last import line
let pos := inputCtx.fileMap.toPosition endHeader.offset
println! "{path}:{pos.line-1}:1: warning: \
add {toAdd} instead"
catch _ => pure ()
-- mark and report the additions
modify fun s => { s with
edits := toAdd.foldl (init := s.edits) fun edits imp =>
edits.add s.modNames[i]! imp }
if !toAdd.isEmpty then
println! " add {toAdd}"
-- recalculate transitive dependencies of downstream modules
let mut newTransDepsI := Needs.empty
for imp in s.mods[i]!.imports do
if !toRemove.contains imp then
let j := s.env.getModuleIdx? imp.module |>.get!
newTransDepsI := addTransitiveImps newTransDepsI imp j s.transDeps[j]!
for imp in toAdd do
let j := s.env.getModuleIdx? imp.module |>.get!
newTransDepsI := addTransitiveImps newTransDepsI imp j s.transDeps[j]!
modify fun s => { s with transDeps := s.transDeps.set! i newTransDepsI }
if args.explain then
let explanation := getExplanations s.env i
let sanitize n := if n.hasMacroScopes then (sanitizeName n).run' { options := {} } else n
let run (imp : Import) := do
let j := s.env.getModuleIdx? imp.module |>.get!
let mut k := NeedsKind.ofImport imp
if let some exp? := explanation[(j, k)]? <|> guard args.addPublic *> explanation[(j, { k with isExported := false})]? then
println! " note: `{imp}` required"
if let some (n, c) := exp? then
println! " because `{sanitize n}` refers to `{sanitize c}`"
else
println! " because of additional compile-time dependencies"
for j in s.mods[i]!.imports do
if !toRemove.contains j then
run j
for i in toAdd do run i
/-- Convert a list of module names to a bitset of module indexes -/
def toBitset (s : State) (ns : List Name) : Bitset :=
ns.foldl (init := ∅) fun c name =>
match s.env.getModuleIdxFor? name with
| some i => c ∪ {i}
| none => c
local instance : Ord Import where
compare :=
let _ := @lexOrd
compareOn fun imp => (!imp.isExported, imp.module.toString)
/-- The main entry point. See `help` for more information on arguments. -/
public def main (args : List String) : IO UInt32 := do
initSearchPath (← findSysroot)
-- Parse the arguments
let rec parseArgs (args : Args) : List String → Args
| [] => args
| "--help" :: rest => parseArgs { args with help := true } rest
| "--keep-implied" :: rest => parseArgs { args with keepImplied := true } rest
| "--keep-prefix" :: rest => parseArgs { args with keepPrefix := true } rest
| "--keep-public" :: rest => parseArgs { args with keepPublic := true } rest
| "--add-public" :: rest => parseArgs { args with addPublic := true } rest
| "--force" :: rest => parseArgs { args with force := true } rest
| "--fix" :: rest => parseArgs { args with fix := true } rest
| "--explain" :: rest => parseArgs { args with explain := true } rest
| "--trace" :: rest => parseArgs { args with trace := true } rest
| "--gh-style" :: rest => parseArgs { args with githubStyle := true } rest
| "--" :: rest => { args with mods := args.mods ++ rest.map (·.toName) }
| other :: rest => parseArgs { args with mods := args.mods.push other.toName } rest
let args := parseArgs {} args
-- Bail if `--help` is passed
if args.help then
IO.println help
IO.Process.exit 0
if !args.force then
if (← IO.Process.output { cmd := "lake", args := #["build", "--no-build"] }).exitCode != 0 then
IO.println "There are out of date oleans. Run `lake build` or `lake exe cache get` first"
IO.Process.exit 1
-- Determine default module(s) to run shake on
let defaultTargetModules : Array Name ← try
let (elanInstall?, leanInstall?, lakeInstall?) ← Lake.findInstall?
let config ← Lake.MonadError.runEIO <| Lake.mkLoadConfig { elanInstall?, leanInstall?, lakeInstall? }
let some workspace ← Lake.loadWorkspace config |>.toBaseIO
| throw <| IO.userError "failed to load Lake workspace"
let defaultTargetModules := workspace.root.defaultTargets.flatMap fun target =>
if let some lib := workspace.root.findLeanLib? target then
lib.roots
else if let some exe := workspace.root.findLeanExe? target then
#[exe.config.root]
else
#[]
pure defaultTargetModules
catch _ =>
pure #[]
let srcSearchPath ← getSrcSearchPath
-- the list of root modules
let mods := if args.mods.isEmpty then defaultTargetModules else args.mods
-- Only submodules of `pkg` will be edited or have info reported on them
let pkg := mods[0]!.components.head!
-- Load all the modules
let imps := mods.map ({ module := · })
let (_, s) ← importModulesCore imps (isExported := true) |>.run
let s := s.markAllExported
let mut env ← finalizeImport s (isModule := true) imps {} (leakEnv := false) (loadExts := false)
-- the one env ext we want to initialize
let is := indirectModUseExt.toEnvExtension.getState env
let newState ← indirectModUseExt.addImportedFn is.importedEntries { env := env, opts := {} }
env := indirectModUseExt.toEnvExtension.setState (asyncMode := .sync) env { is with state := newState }
StateT.run' (s := initStateFromEnv env) do
let s ← get
-- Run the calculation of the `needs` array in parallel
let needs := s.mods.mapIdx fun i _ =>
Task.spawn fun _ => calcNeeds s i
-- Parse headers in parallel
let headers ← s.mods.mapIdxM fun i _ =>
if !pkg.isPrefixOf s.modNames[i]! then
pure <| Task.pure <| .ok ⟨default, default, default, default⟩
else
BaseIO.asTask (parseHeader srcSearchPath s.modNames[i]! |>.toBaseIO)
if args.fix then
println! "The following changes will be made automatically:"
-- Check all selected modules
for i in [0:s.mods.size], t in needs, header in headers do
match header.get with
| .ok ⟨_, _, stx, _⟩ =>
visitModule pkg srcSearchPath i t.get stx args
| .error e =>
println! e.toString
if !args.fix then
-- return error if any issues were found
return if (← get).edits.isEmpty then 0 else 1
-- Apply the edits to existing files
let mut count := 0
for mod in s.modNames, header? in headers do
let some (remove, add) := (← get).edits[mod]? | continue
let add : Array Import := add.qsortOrd
-- Parse the input file
let .ok ⟨path, inputCtx, stx, insertion⟩ := header?.get | continue
let (_, _, imports) := decodeHeader stx
let text := inputCtx.fileMap.source
-- Calculate the edit result
let mut pos : String.Pos text := text.startPos
let mut out : String := ""
let mut seen : Std.HashSet Import := {}
for stx in imports do
let mod := decodeImport stx
if remove.contains mod || seen.contains mod then
out := out ++ text.extract pos (text.pos! stx.raw.getPos?.get!)
-- We use the end position of the syntax, but include whitespace up to the first newline
pos := text.pos! stx.raw.getTailPos?.get! |>.find '\n' |>.next!
seen := seen.insert mod
out := out ++ text.extract pos insertion
for mod in add do
if !seen.contains mod then
seen := seen.insert mod
out := out ++ s!"{mod}\n"
out := out ++ text.extract insertion text.endPos
IO.FS.writeFile path out
count := count + 1
-- Since we throw an error upon encountering issues, we can be sure that everything worked
-- if we reach this point of the script.
if count > 0 then
println! "Successfully applied {count} suggestions."
else
println! "No edits required."
return 0

View File

@@ -60,7 +60,7 @@ if (arity == fixed + {n}) \{
for j in [n:max + 1] do
let fs := mkFsArgs (j - n)
let sep := if j = n then "" else ", "
emit s!" case {j}: \{ obj* r = FN{j}(f)({fs}{sep}{args}); lean_free_object(f); return r; }\n"
emit s!" case {j}: \{ obj* r = FN{j}(f)({fs}{sep}{args}); lean_free_small_object(f); return r; }\n"
emit " }
}
switch (arity) {\n"
@@ -162,7 +162,7 @@ static obj* fix_args(obj* f, unsigned n, obj*const* as) {
for (unsigned i = 0; i < fixed; i++, source++, target++) {
*target = *source;
}
lean_free_object(f);
lean_free_small_object(f);
}
for (unsigned i = 0; i < n; i++, as++, target++) {
*target = *as;

View File

@@ -10,16 +10,6 @@ Tests language server memory use by repeatedly re-elaborate a given file.
NOTE: only works on Linux for now.
-/
def determineRSS (pid : UInt32) : IO Nat := do
let status ← IO.FS.readFile s!"/proc/{pid}/smaps_rollup"
let some rssLine := status.splitOn "\n" |>.find? (·.startsWith "Rss:")
| throw <| IO.userError "No RSS in proc status"
let rssLine := rssLine.dropPrefix "Rss:"
let rssLine := rssLine.dropWhile Char.isWhitespace
let some rssInKB := rssLine.takeWhile Char.isDigit |>.toNat?
| throw <| IO.userError "Cannot parse RSS"
return rssInKB
def main (args : List String) : IO Unit := do
let leanCmd :: file :: iters :: args := args | panic! "usage: script <lean> <file> <#iterations> <server-args>..."
let file ← IO.FS.realPath file
@@ -44,14 +34,11 @@ def main (args : List String) : IO Unit := do
let text ← IO.FS.readFile file
let (_, headerEndPos, _) ← Elab.parseImports text
let headerEndPos := FileMap.ofString text |>.leanPosToLspPos headerEndPos
let n := iters.toNat!
let mut lastRSS? : Option Nat := none
let mut totalRSSDelta : Int := 0
let mut requestNo : Nat := 1
let mut versionNo : Nat := 1
Ipc.writeNotification "textDocument/didOpen", {
textDocument := { uri := uri, languageId := "lean", version := 1, text := text } : DidOpenTextDocumentParams }
for i in [0:n] do
for i in [0:iters.toNat!] do
if i > 0 then
versionNo := versionNo + 1
let params : DidChangeTextDocumentParams := {
@@ -74,16 +61,9 @@ def main (args : List String) : IO Unit := do
IO.eprintln diag.message
requestNo := requestNo + 1
let rss ← determineRSS (← read).pid
-- The first `didChange` usually results in a significantly higher RSS increase than
-- the others, so we ignore it.
if i > 1 then
if let some lastRSS := lastRSS? then
totalRSSDelta := totalRSSDelta + ((rss : Int) - (lastRSS : Int))
lastRSS? := some rss
let avgRSSDelta := totalRSSDelta / (n - 2)
IO.println s!"avg-reelab-rss-delta: {avgRSSDelta}"
let status ← IO.FS.readFile s!"/proc/{(← read).pid}/status"
for line in status.splitOn "\n" |>.filter (·.startsWith "RssAnon") do
IO.eprintln line
let _ ← Ipc.collectDiagnostics requestNo uri versionNo
(← Ipc.stdin).writeLspMessage (Message.notification "exit" none)

View File

@@ -1,89 +0,0 @@
import Lean.Data.Lsp
import Lean.Elab.Import
open Lean
open Lean.Lsp
open Lean.JsonRpc
/-!
Tests watchdog memory use by repeatedly re-elaborating a given file.
NOTE: only works on Linux for now.
-/
def determineRSS (pid : UInt32) : IO Nat := do
let status ← IO.FS.readFile s!"/proc/{pid}/smaps_rollup"
let some rssLine := status.splitOn "\n" |>.find? (·.startsWith "Rss:")
| throw <| IO.userError "No RSS in proc status"
let rssLine := rssLine.dropPrefix "Rss:"
let rssLine := rssLine.dropWhile Char.isWhitespace
let some rssInKB := rssLine.takeWhile Char.isDigit |>.toNat?
| throw <| IO.userError "Cannot parse RSS"
return rssInKB
def main (args : List String) : IO Unit := do
let leanCmd :: file :: iters :: args := args | panic! "usage: script <lean> <file> <#iterations> <server-args>..."
let file ← IO.FS.realPath file
let uri := s!"file://{file}"
Ipc.runWith leanCmd (#["--server", "-DstderrAsMessages=false"] ++ args ++ #[uri]) do
let capabilities := {
textDocument? := some {
completion? := some {
completionItem? := some {
insertReplaceSupport? := true
}
}
}
}
Ipc.writeRequest ⟨0, "initialize", { capabilities : InitializeParams }⟩
discard <| Ipc.readResponseAs 0 InitializeResult
Ipc.writeNotification ⟨"initialized", InitializedParams.mk⟩
let text ← IO.FS.readFile file
let (_, headerEndPos, _) ← Elab.parseImports text
let headerEndPos := FileMap.ofString text |>.leanPosToLspPos headerEndPos
let n := iters.toNat!
let mut lastRSS? : Option Nat := none
let mut totalRSSDelta : Int := 0
let mut requestNo : Nat := 1
let mut versionNo : Nat := 1
Ipc.writeNotification "textDocument/didOpen", {
textDocument := { uri := uri, languageId := "lean", version := 1, text := text } : DidOpenTextDocumentParams }
for i in [0:iters.toNat!] do
if i > 0 then
versionNo := versionNo + 1
let params : DidChangeTextDocumentParams := {
textDocument := {
uri := uri
version? := versionNo
}
contentChanges := #[TextDocumentContentChangeEvent.rangeChange {
start := headerEndPos
«end» := headerEndPos
} " "]
}
let params := toJson params
Ipc.writeNotification "textDocument/didChange", params
requestNo := requestNo + 1
let diags ← Ipc.collectDiagnostics requestNo uri versionNo
if let some diags := diags then
for diag in diags.param.diagnostics do
IO.eprintln diag.message
requestNo := requestNo + 1
Ipc.waitForILeans requestNo uri versionNo
let rss ← determineRSS (← read).pid
-- The first `didChange` usually results in a significantly higher RSS increase than
-- the others, so we ignore it.
if i > 1 then
if let some lastRSS := lastRSS? then
totalRSSDelta := totalRSSDelta + ((rss : Int) - (lastRSS : Int))
lastRSS? := some rss
let avgRSSDelta := totalRSSDelta / (n - 2)
IO.println s!"avg-reelab-rss-delta: {avgRSSDelta}"
let _ ← Ipc.collectDiagnostics requestNo uri versionNo
Ipc.shutdown requestNo
discard <| Ipc.waitForExit

View File

@@ -1,441 +0,0 @@
#!/usr/bin/env python3
"""
build_artifact.py: Download pre-built CI artifacts for a Lean commit.
Usage:
build_artifact.py # Download artifact for current HEAD
build_artifact.py --sha abc1234 # Download artifact for specific commit
build_artifact.py --clear-cache # Clear artifact cache
This script downloads pre-built binaries from GitHub Actions CI runs,
which is much faster than building from source (~30s vs 2-5min).
Artifacts are cached in ~/.cache/lean_build_artifact/ for reuse.
"""
import argparse
import json
import os
import platform
import shutil
import subprocess
import sys
import urllib.request
import urllib.error
from pathlib import Path
from typing import Optional
# Constants
GITHUB_API_BASE = "https://api.github.com"
LEAN4_REPO = "leanprover/lean4"
# CI artifact cache
CACHE_DIR = Path.home() / '.cache' / 'lean_build_artifact'
ARTIFACT_CACHE = CACHE_DIR
# Sentinel value indicating CI failed (don't bother building locally)
CI_FAILED = object()
# ANSI colors for terminal output
class Colors:
RED = '\033[91m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
BLUE = '\033[94m'
BOLD = '\033[1m'
RESET = '\033[0m'
def color(text: str, c: str) -> str:
"""Apply color to text if stdout is a tty."""
if sys.stdout.isatty():
return f"{c}{text}{Colors.RESET}"
return text
def error(msg: str) -> None:
"""Print error message and exit."""
print(color(f"Error: {msg}", Colors.RED), file=sys.stderr)
sys.exit(1)
def warn(msg: str) -> None:
"""Print warning message."""
print(color(f"Warning: {msg}", Colors.YELLOW), file=sys.stderr)
def info(msg: str) -> None:
"""Print info message."""
print(color(msg, Colors.BLUE), file=sys.stderr)
def success(msg: str) -> None:
"""Print success message."""
print(color(msg, Colors.GREEN), file=sys.stderr)
# -----------------------------------------------------------------------------
# Platform detection
# -----------------------------------------------------------------------------
def get_artifact_name() -> Optional[str]:
"""Get CI artifact name for current platform."""
system = platform.system()
machine = platform.machine()
if system == 'Darwin':
if machine == 'arm64':
return 'build-macOS aarch64'
return 'build-macOS' # Intel
elif system == 'Linux':
if machine == 'aarch64':
return 'build-Linux aarch64'
return 'build-Linux release'
# Windows not supported for CI artifact download
return None
# -----------------------------------------------------------------------------
# GitHub API helpers
# -----------------------------------------------------------------------------
_github_token_warning_shown = False
def get_github_token() -> Optional[str]:
"""Get GitHub token from environment or gh CLI."""
global _github_token_warning_shown
# Check environment variable first
token = os.environ.get('GITHUB_TOKEN')
if token:
return token
# Try to get token from gh CLI
try:
result = subprocess.run(
['gh', 'auth', 'token'],
capture_output=True,
text=True,
timeout=5
)
if result.returncode == 0 and result.stdout.strip():
return result.stdout.strip()
except (FileNotFoundError, subprocess.TimeoutExpired):
pass
# Warn once if no token available
if not _github_token_warning_shown:
_github_token_warning_shown = True
warn("No GitHub authentication found. API rate limits may apply.")
warn("Run 'gh auth login' or set GITHUB_TOKEN to avoid rate limiting.")
return None
def github_api_request(url: str) -> dict:
"""Make a GitHub API request and return JSON response."""
headers = {
'Accept': 'application/vnd.github.v3+json',
'User-Agent': 'build-artifact'
}
token = get_github_token()
if token:
headers['Authorization'] = f'token {token}'
req = urllib.request.Request(url, headers=headers)
try:
with urllib.request.urlopen(req, timeout=30) as response:
return json.loads(response.read().decode())
except urllib.error.HTTPError as e:
if e.code == 403:
error(f"GitHub API rate limit exceeded. Set GITHUB_TOKEN environment variable to increase limit.")
elif e.code == 404:
error(f"GitHub resource not found: {url}")
else:
error(f"GitHub API error: {e.code} {e.reason}")
except urllib.error.URLError as e:
error(f"Network error accessing GitHub API: {e.reason}")
# -----------------------------------------------------------------------------
# CI artifact cache functions
# -----------------------------------------------------------------------------
def get_cache_path(sha: str) -> Path:
"""Get cache directory for a commit's artifact."""
return ARTIFACT_CACHE / sha[:12]
def is_cached(sha: str) -> bool:
"""Check if artifact for this commit is already cached and valid."""
cache_path = get_cache_path(sha)
return cache_path.exists() and (cache_path / 'bin' / 'lean').exists()
def check_zstd_support() -> bool:
"""Check if tar supports zstd compression."""
try:
result = subprocess.run(
['tar', '--zstd', '--version'],
capture_output=True,
timeout=5
)
return result.returncode == 0
except (subprocess.TimeoutExpired, FileNotFoundError):
return False
def check_gh_available() -> bool:
"""Check if gh CLI is available and authenticated."""
try:
result = subprocess.run(
['gh', 'auth', 'status'],
capture_output=True,
timeout=10
)
return result.returncode == 0
except (subprocess.TimeoutExpired, FileNotFoundError):
return False
def download_ci_artifact(sha: str, quiet: bool = False):
"""
Try to download CI artifact for a commit.
Returns:
- Path to extracted toolchain directory if available
- CI_FAILED sentinel if CI run failed (don't bother building locally)
- None if no artifact available but local build might work
"""
# Check cache first
if is_cached(sha):
return get_cache_path(sha)
artifact_name = get_artifact_name()
if artifact_name is None:
return None # Unsupported platform
cache_path = get_cache_path(sha)
try:
# Query for CI workflow run for this commit, including status
# Note: Query parameters must be in the URL for GET requests
result = subprocess.run(
['gh', 'api', f'repos/{LEAN4_REPO}/actions/runs?head_sha={sha}&per_page=100',
'--jq', r'.workflow_runs[] | select(.name == "CI") | "\(.id) \(.conclusion // "null")"'],
capture_output=True,
text=True,
timeout=30
)
if result.returncode != 0 or not result.stdout.strip():
return None # No CI run found (old commit?)
# Parse "run_id conclusion" format
line = result.stdout.strip().split('\n')[0]
parts = line.split(' ', 1)
run_id = parts[0]
conclusion = parts[1] if len(parts) > 1 else "null"
# Check if the desired artifact exists for this run
result = subprocess.run(
['gh', 'api', f'repos/{LEAN4_REPO}/actions/runs/{run_id}/artifacts',
'--jq', f'.artifacts[] | select(.name == "{artifact_name}") | .id'],
capture_output=True,
text=True,
timeout=30
)
if result.returncode != 0 or not result.stdout.strip():
# No artifact available
# If CI failed and no artifact, the build itself likely failed - skip
if conclusion == "failure":
return CI_FAILED
# Otherwise (in progress, expired, etc.) - fall back to local build
return None
# Download artifact
cache_path.mkdir(parents=True, exist_ok=True)
if not quiet:
print("downloading CI artifact... ", end='', flush=True)
result = subprocess.run(
['gh', 'run', 'download', run_id,
'-n', artifact_name,
'-R', LEAN4_REPO,
'-D', str(cache_path)],
capture_output=True,
text=True,
timeout=600 # 10 minutes for large downloads
)
if result.returncode != 0:
shutil.rmtree(cache_path, ignore_errors=True)
return None
# Extract tar.zst - find the file (name varies by platform/version)
tar_files = list(cache_path.glob('*.tar.zst'))
if not tar_files:
shutil.rmtree(cache_path, ignore_errors=True)
return None
tar_file = tar_files[0]
if not quiet:
print("extracting... ", end='', flush=True)
result = subprocess.run(
['tar', '--zstd', '-xf', tar_file.name],
cwd=cache_path,
capture_output=True,
timeout=300
)
if result.returncode != 0:
shutil.rmtree(cache_path, ignore_errors=True)
return None
# Move contents up from lean-VERSION-PLATFORM/ to cache_path/
# The extracted directory name varies (e.g., lean-4.15.0-linux, lean-4.15.0-darwin_aarch64)
extracted_dirs = [d for d in cache_path.iterdir() if d.is_dir() and d.name.startswith('lean-')]
if extracted_dirs:
extracted = extracted_dirs[0]
for item in extracted.iterdir():
dest = cache_path / item.name
if dest.exists():
if dest.is_dir():
shutil.rmtree(dest)
else:
dest.unlink()
shutil.move(str(item), str(cache_path / item.name))
extracted.rmdir()
# Clean up tar file
tar_file.unlink()
# Verify the extraction worked
if not (cache_path / 'bin' / 'lean').exists():
shutil.rmtree(cache_path, ignore_errors=True)
return None
return cache_path
except (subprocess.TimeoutExpired, FileNotFoundError):
shutil.rmtree(cache_path, ignore_errors=True)
return None
# -----------------------------------------------------------------------------
# Git helpers
# -----------------------------------------------------------------------------
def get_current_commit() -> str:
"""Get the current git HEAD commit SHA."""
try:
result = subprocess.run(
['git', 'rev-parse', 'HEAD'],
capture_output=True,
text=True,
timeout=5
)
if result.returncode == 0:
return result.stdout.strip()
error(f"Failed to get current commit: {result.stderr.strip()}")
except subprocess.TimeoutExpired:
error("Timeout getting current commit")
except FileNotFoundError:
error("git not found")
def resolve_sha(short_sha: str) -> str:
"""Resolve a (possibly short) SHA to full 40-character SHA using git rev-parse."""
if len(short_sha) == 40:
return short_sha
try:
result = subprocess.run(
['git', 'rev-parse', short_sha],
capture_output=True,
text=True,
timeout=5
)
if result.returncode == 0:
full_sha = result.stdout.strip()
if len(full_sha) == 40:
return full_sha
error(f"Cannot resolve SHA '{short_sha}': {result.stderr.strip() or 'not found in repository'}")
except subprocess.TimeoutExpired:
error(f"Timeout resolving SHA '{short_sha}'")
except FileNotFoundError:
error("git not found - required for SHA resolution")
# -----------------------------------------------------------------------------
# Main
# -----------------------------------------------------------------------------
def main():
parser = argparse.ArgumentParser(
description='Download pre-built CI artifacts for a Lean commit.',
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
This script downloads pre-built binaries from GitHub Actions CI runs,
which is much faster than building from source (~30s vs 2-5min).
Artifacts are cached in ~/.cache/lean_build_artifact/ for reuse.
Examples:
build_artifact.py # Download for current HEAD
build_artifact.py --sha abc1234 # Download for specific commit
build_artifact.py --clear-cache # Clear cache to free disk space
"""
)
parser.add_argument('--sha', metavar='SHA',
help='Commit SHA to download artifact for (default: current HEAD)')
parser.add_argument('--clear-cache', action='store_true',
help='Clear artifact cache and exit')
parser.add_argument('--quiet', '-q', action='store_true',
help='Suppress progress messages (still prints result path)')
args = parser.parse_args()
# Handle cache clearing
if args.clear_cache:
if ARTIFACT_CACHE.exists():
size = sum(f.stat().st_size for f in ARTIFACT_CACHE.rglob('*') if f.is_file())
shutil.rmtree(ARTIFACT_CACHE)
info(f"Cleared cache at {ARTIFACT_CACHE} ({size / 1024 / 1024:.1f} MB)")
else:
info(f"Cache directory does not exist: {ARTIFACT_CACHE}")
return
# Get commit SHA
if args.sha:
sha = resolve_sha(args.sha)
else:
sha = get_current_commit()
if not args.quiet:
info(f"Commit: {sha[:12]}")
# Check prerequisites
if not check_gh_available():
error("gh CLI not available or not authenticated. Run 'gh auth login' first.")
if not check_zstd_support():
error("tar does not support zstd compression. Install zstd or a newer tar.")
artifact_name = get_artifact_name()
if artifact_name is None:
error(f"No CI artifacts available for this platform ({platform.system()} {platform.machine()})")
if not args.quiet:
info(f"Platform: {artifact_name}")
# Check cache
if is_cached(sha):
path = get_cache_path(sha)
if not args.quiet:
success("Using cached artifact")
print(path)
return
# Download artifact
result = download_ci_artifact(sha, quiet=args.quiet)
if result is CI_FAILED:
if not args.quiet:
print() # End the "downloading..." line
error(f"CI build failed for commit {sha[:12]}")
elif result is None:
if not args.quiet:
print() # End the "downloading..." line
error(f"No CI artifact available for commit {sha[:12]}")
else:
if not args.quiet:
print(color("done", Colors.GREEN))
print(result)
if __name__ == '__main__':
main()

View File

@@ -1,11 +0,0 @@
name = "scripts"
[[lean_exe]]
name = "modulize"
root = "Modulize"
[[lean_exe]]
name = "shake"
root = "Shake"
# needed by `Lake.loadWorkspace`
supportInterpreter = true

File diff suppressed because it is too large

View File

@@ -1,307 +0,0 @@
/-
Copyright Strata Contributors
SPDX-License-Identifier: Apache-2.0 OR MIT
-/
namespace Strata
namespace Python
/-
Parser and translator for some basic regular expression patterns supported by
Python's `re` library
Ref.: https://docs.python.org/3/library/re.html
Also see
https://github.com/python/cpython/blob/759a048d4bea522fda2fe929be0fba1650c62b0e/Lib/re/_parser.py
for a reference implementation.
-/
-------------------------------------------------------------------------------
inductive ParseError where
/--
`patternError` is raised when Python's `re.patternError` exception is
raised.
[Reference: Python's re exceptions](https://docs.python.org/3/library/re.html#exceptions):
"Exception raised when a string passed to one of the functions here is not a
valid regular expression (for example, it might contain unmatched
parentheses) or when some other error occurs during compilation or matching.
It is never an error if a string contains no match for a pattern."
-/
| patternError (message : String) (pattern : String) (pos : String.Pos.Raw)
/--
`unimplemented` is raised whenever we don't support some regex operations
(e.g., lookahead assertions).
-/
| unimplemented (message : String) (pattern : String) (pos : String.Pos.Raw)
deriving Repr
def ParseError.toString : ParseError → String
| .patternError msg pat pos => s!"Pattern error at position {pos.byteIdx}: {msg} in pattern '{pat}'"
| .unimplemented msg pat pos => s!"Unimplemented at position {pos.byteIdx}: {msg} in pattern '{pat}'"
instance : ToString ParseError where
toString := ParseError.toString
-------------------------------------------------------------------------------
/--
Regular Expression Nodes
-/
inductive RegexAST where
/-- Single literal character: `a` -/
| char : Char → RegexAST
/-- Character range: `[a-z]` -/
| range : Char → Char → RegexAST
/-- Alternation: `a|b` -/
| union : RegexAST → RegexAST → RegexAST
/-- Concatenation: `ab` -/
| concat : RegexAST → RegexAST → RegexAST
/-- Any character: `.` -/
| anychar : RegexAST
/-- Zero or more: `a*` -/
| star : RegexAST → RegexAST
/-- One or more: `a+` -/
| plus : RegexAST → RegexAST
/-- Zero or one: `a?` -/
| optional : RegexAST → RegexAST
/-- Bounded repetition: `a{n,m}` -/
| loop : RegexAST → Nat → Nat → RegexAST
/-- Start of string: `^` -/
| anchor_start : RegexAST
/-- End of string: `$` -/
| anchor_end : RegexAST
/-- Grouping: `(abc)` -/
| group : RegexAST → RegexAST
/-- Empty string: `()` or `""` -/
| empty : RegexAST
/-- Complement: `[^a-z]` -/
| complement : RegexAST → RegexAST
deriving Inhabited, Repr
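-- Illustrative sketch (hypothetical examples): the Python pattern `a[0-9]+` corresponds
-- roughly to `RegexAST.concat (.char 'a') (.plus (.range '0' '9'))`, and `x|y` roughly to
-- `RegexAST.union (.char 'x') (.char 'y')`.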
-------------------------------------------------------------------------------
/-- Parse character class like [a-z], [0-9], etc. into union of ranges and
chars. Note that this parses `|` as a character. -/
def parseCharClass (s : String) (pos : String.Pos.Raw) : Except ParseError (RegexAST × String.Pos.Raw) := do
if pos.get? s != some '[' then throw (.patternError "Expected '[' at start of character class" s pos)
let mut i := pos.next s
-- Check for complement (negation) with leading ^
let isComplement := !i.atEnd s && i.get? s == some '^'
if isComplement then
i := i.next s
let mut result : Option RegexAST := none
-- Process each element in the character class.
while !i.atEnd s && i.get? s != some ']' do
-- Uncommenting this makes the code stop
--dbg_trace "Working" (pure ())
let some c1 := i.get? s | throw (.patternError "Invalid character in class" s i)
let i1 := i.next s
-- Check for range pattern: c1-c2.
if !i1.atEnd s && i1.get? s == some '-' then
let i2 := i1.next s
if !i2.atEnd s && i2.get? s != some ']' then
let some c2 := i2.get? s | throw (.patternError "Invalid character in range" s i2)
if c1 > c2 then
throw (.patternError s!"Invalid character range [{c1}-{c2}]: \
start character '{c1}' is greater than end character '{c2}'" s i)
let r := RegexAST.range c1 c2
-- Union with previous elements.
result := some (match result with | none => r | some prev => RegexAST.union prev r)
i := i2.next s
continue
-- Single character.
let r := RegexAST.char c1
result := some (match result with | none => r | some prev => RegexAST.union prev r)
i := i.next s
let some ast := result | throw (.patternError "Unterminated character set" s pos)
let finalAst := if isComplement then RegexAST.complement ast else ast
pure (finalAst, i.next s)
-------------------------------------------------------------------------------
/-- Parse numeric repeats like `{10}` or `{1,10}` into min and max bounds. -/
def parseBounds (s : String) (pos : String.Pos.Raw) : Except ParseError (Nat × Nat × String.Pos.Raw) := do
if pos.get? s != some '{' then throw (.patternError "Expected '{' at start of bounds" s pos)
let mut i := pos.next s
let mut numStr := ""
-- Parse first number.
while !i.atEnd s && (i.get? s).any Char.isDigit do
numStr := numStr.push ((i.get? s).get!)
i := i.next s
let some n := numStr.toNat? | throw (.patternError "Invalid minimum bound" s pos)
-- Check for comma (range) or closing brace (exact count).
match i.get? s with
| some '}' => pure (n, n, i.next s) -- {n} means exactly n times.
| some ',' =>
i := i.next s
-- Parse maximum bound
numStr := ""
while !i.atEnd s && (i.get? s).any Char.isDigit do
numStr := numStr.push ((i.get? s).get!)
i := i.next s
let some max := numStr.toNat? | throw (.patternError "Invalid maximum bound" s i)
if i.get? s != some '}' then throw (.patternError "Expected '}' at end of bounds" s i)
-- Validate bounds order
if max < n then
throw (.patternError s!"Invalid repeat bounds \{{n},{max}}: \
maximum {max} is less than minimum {n}" s pos)
pure (n, max, i.next s)
| _ => throw (.patternError "Invalid bounds syntax" s i)
-------------------------------------------------------------------------------
mutual
/--
Parse atom: single element (char, class, anchor, group) with optional
quantifier. Stops at the first `|`.
-/
partial def parseAtom (s : String) (pos : String.Pos.Raw) : Except ParseError (RegexAST × String.Pos.Raw) := do
if pos.atEnd s then throw (.patternError "Unexpected end of regex" s pos)
let some c := pos.get? s | throw (.patternError "Invalid position" s pos)
-- Detect invalid quantifier at start
if c == '*' || c == '+' || c == '{' || c == '?' then
throw (.patternError s!"Quantifier '{c}' at position {pos} has nothing to quantify" s pos)
-- Detect unbalanced closing parenthesis
if c == ')' then
throw (.patternError "Unbalanced parenthesis" s pos)
-- Parse base element (anchor, char class, group, anychar, escape, or single char).
let (base, nextPos) ← match c with
| '^' => pure (RegexAST.anchor_start, pos.next s)
| '$' => pure (RegexAST.anchor_end, pos.next s)
| '[' => parseCharClass s pos
| '(' => parseExplicitGroup s pos
| '.' => pure (RegexAST.anychar, pos.next s)
| '\\' =>
-- Handle escape sequence.
-- Note: Python uses a single backslash as an escape character, but Lean
-- strings need to escape that. After DDMification, we will see two
-- backslashes in Strata for every Python backslash.
let nextPos := pos.next s
if nextPos.atEnd s then throw (.patternError "Incomplete escape sequence at end of regex" s pos)
let some escapedChar := nextPos.get? s | throw (.patternError "Invalid escape position" s nextPos)
-- Check for special sequences (unsupported right now).
match escapedChar with
| 'A' | 'b' | 'B' | 'd' | 'D' | 's' | 'S' | 'w' | 'W' | 'z' | 'Z' =>
throw (.unimplemented s!"Special sequence \\{escapedChar} is not supported" s pos)
| 'a' | 'f' | 'n' | 'N' | 'r' | 't' | 'u' | 'U' | 'v' | 'x' =>
throw (.unimplemented s!"Escape sequence \\{escapedChar} is not supported" s pos)
| c =>
if c.isDigit then
throw (.unimplemented s!"Backreference \\{c} is not supported" s pos)
else
pure (RegexAST.char escapedChar, nextPos.next s)
| _ => pure (RegexAST.char c, pos.next s)
-- Check for numeric repeat suffix on base element (but not on anchors)
match base with
| .anchor_start | .anchor_end => pure (base, nextPos)
| _ =>
if !nextPos.atEnd s then
match nextPos.get? s with
| some '{' =>
let (min, max, finalPos) ← parseBounds s nextPos
pure (RegexAST.loop base min max, finalPos)
| some '*' =>
let afterStar := nextPos.next s
if !afterStar.atEnd s then
match afterStar.get? s with
| some '?' => throw (.unimplemented "Non-greedy quantifier *? is not supported" s nextPos)
| some '+' => throw (.unimplemented "Possessive quantifier *+ is not supported" s nextPos)
| _ => pure (RegexAST.star base, afterStar)
else pure (RegexAST.star base, afterStar)
| some '+' =>
let afterPlus := nextPos.next s
if !afterPlus.atEnd s then
match afterPlus.get? s with
| some '?' => throw (.unimplemented "Non-greedy quantifier +? is not supported" s nextPos)
| some '+' => throw (.unimplemented "Possessive quantifier ++ is not supported" s nextPos)
| _ => pure (RegexAST.plus base, afterPlus)
else pure (RegexAST.plus base, afterPlus)
| some '?' =>
let afterQuestion := nextPos.next s
if !afterQuestion.atEnd s then
match afterQuestion.get? s with
| some '?' => throw (.unimplemented "Non-greedy quantifier ?? is not supported" s nextPos)
| some '+' => throw (.unimplemented "Possessive quantifier ?+ is not supported" s nextPos)
| _ => pure (RegexAST.optional base, afterQuestion)
else pure (RegexAST.optional base, afterQuestion)
| _ => pure (base, nextPos)
else
pure (base, nextPos)
/-- Parse explicit group with parentheses. -/
partial def parseExplicitGroup (s : String) (pos : String.Pos.Raw) : Except ParseError (RegexAST × String.Pos.Raw) := do
if pos.get? s != some '(' then throw (.patternError "Expected '(' at start of group" s pos)
let mut i := pos.next s
-- Check for extension notation (?...
if !i.atEnd s && i.get? s == some '?' then
let i1 := i.next s
if !i1.atEnd s then
match i1.get? s with
| some '=' => throw (.unimplemented "Positive lookahead (?=...) is not supported" s pos)
| some '!' => throw (.unimplemented "Negative lookahead (?!...) is not supported" s pos)
| _ => throw (.unimplemented "Extension notation (?...) is not supported" s pos)
let (inner, finalPos) ← parseGroup s i (some ')')
pure (.group inner, finalPos)
/-- Parse group: handles alternation and concatenation at current scope. -/
partial def parseGroup (s : String) (pos : String.Pos.Raw) (endChar : Option Char) :
Except ParseError (RegexAST × String.Pos.Raw) := do
let mut alternatives : List (List RegexAST) := [[]]
let mut i := pos
-- Parse until end of string or `endChar`.
while !i.atEnd s && (endChar.isNone || i.get? s != endChar) do
if i.get? s == some '|' then
-- Push a new scope to `alternatives`.
alternatives := [] :: alternatives
i := i.next s
else
let (ast, nextPos) ← parseAtom s i
alternatives := match alternatives with
| [] => [[ast]]
| head :: tail => (ast :: head) :: tail
i := nextPos
-- Check for expected end character.
if let some ec := endChar then
if i.get? s != some ec then
throw (.patternError s!"Expected '{ec}'" s i)
i := i.next s
-- Build result: concatenate each alternative, then union them.
let concatAlts := alternatives.reverse.filterMap fun alt =>
match alt.reverse with
| [] => -- Empty regex.
some (.empty)
| [single] => some single
| head :: tail => some (tail.foldl RegexAST.concat head)
match concatAlts with
| [] => pure (.empty, i)
| [single] => pure (single, i)
| head :: tail => pure (tail.foldl RegexAST.union head, i)
end
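-- Illustrative checks for `parseAtom` (sketches only: expected results are described in comments
-- rather than pinned with `#guard_msgs`, since the exact `Repr` text is assumed, not verified).
-- A quantified atom: `a*` should parse to `RegexAST.star (RegexAST.char 'a')`.
#eval parseAtom "a*" 0
-- An escaped metacharacter: the regex `\(` (written `"\\("` in Lean) should parse to `RegexAST.char '('`.
#eval parseAtom "\\(" 0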
/-- info: Except.ok (Strata.Python.RegexAST.range 'A' 'z', { byteIdx := 5 }) -/
#guard_msgs in
#eval parseCharClass "[A-z]" 0
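-- Illustrative checks for grouping and alternation (again sketches; expected shapes are noted
-- in comments, and the exact `Repr` output is not asserted).
-- `(ab)` should parse to a `RegexAST.group` wrapping the concatenation of 'a' and 'b'.
#eval parseExplicitGroup "(ab)" 0
-- `ab|c` with no terminating character should parse to a union of `ab` and `c`.
#eval parseGroup "ab|c" 0 none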
-- Test code: Print done
#print "Done!"

View File

@@ -1 +0,0 @@
lean4

View File

@@ -5,11 +5,8 @@ set -euo pipefail
[ $# -eq 1 ] || (echo "usage: $0 <lean4 PR #>"; exit 1)
echo "Warning: the speedcenter is probably not listening on mathlib4-nightly-testing yet."
echo "If you're using this script, please contact @kim-em or @Kha to get this set up, and then remove this notice."
LEAN_PR=$1
PR_RESPONSE=$(gh api repos/leanprover-community/mathlib4-nightly-testing/pulls -X POST -f head=lean-pr-testing-$LEAN_PR -f base=nightly-testing -f title="leanprover/lean4#$LEAN_PR benchmarking" -f draft=true -f body="ignore me")
PR_RESPONSE=$(gh api repos/leanprover-community/mathlib4/pulls -X POST -f head=lean-pr-testing-$LEAN_PR -f base=nightly-testing -f title="leanprover/lean4#$LEAN_PR benchmarking" -f draft=true -f body="ignore me")
PR_NUMBER=$(echo "$PR_RESPONSE" | jq '.number')
echo "opened https://github.com/leanprover-community/mathlib4-nightly-testing/pull/$PR_NUMBER"
gh api repos/leanprover-community/mathlib4-nightly-testing/issues/$PR_NUMBER/comments -X POST -f body="!bench" > /dev/null
echo "opened https://github.com/leanprover-community/mathlib4/pull/$PR_NUMBER"
gh api repos/leanprover-community/mathlib4/issues/$PR_NUMBER/comments -X POST -f body="!bench" > /dev/null

View File

@@ -5,7 +5,6 @@ Merge a tag into a branch on a GitHub repository.
This script checks if a specified tag can be merged cleanly into a branch and performs
the merge if possible. If the merge cannot be done cleanly, it prints a helpful message.
Merge conflicts in the lean-toolchain file are automatically resolved by accepting the incoming changes.
Usage:
python3 merge_remote.py <org/repo> <branch> <tag>
@@ -59,32 +58,6 @@ def clone_repo(repo, temp_dir):
return True
def get_conflicted_files():
"""Get list of files with merge conflicts."""
result = run_command("git diff --name-only --diff-filter=U", check=False)
if result.returncode == 0:
return result.stdout.strip().split('\n') if result.stdout.strip() else []
return []
def resolve_lean_toolchain_conflict(tag):
"""Resolve lean-toolchain conflict by accepting incoming (tag) changes."""
print("Resolving lean-toolchain conflict by accepting incoming changes...")
# Accept theirs (incoming) version for lean-toolchain
result = run_command(f"git checkout --theirs lean-toolchain", check=False)
if result.returncode != 0:
print("Failed to resolve lean-toolchain conflict")
return False
# Add the resolved file
add_result = run_command("git add lean-toolchain", check=False)
if add_result.returncode != 0:
print("Failed to stage resolved lean-toolchain")
return False
return True
def check_and_merge(repo, branch, tag, temp_dir):
"""Check if tag can be merged into branch and perform the merge if possible."""
# Change to the temporary directory
@@ -125,37 +98,12 @@ def check_and_merge(repo, branch, tag, temp_dir):
# Try merging the tag directly
print(f"Merging {tag} into {branch}...")
merge_result = run_command(f"git merge {tag} --no-edit", check=False)
if merge_result.returncode != 0:
# Check which files have conflicts
conflicted_files = get_conflicted_files()
if conflicted_files == ['lean-toolchain']:
# Only lean-toolchain has conflicts, resolve it
print("Merge conflict detected only in lean-toolchain.")
if resolve_lean_toolchain_conflict(tag):
# Continue the merge with the resolved conflict
print("Continuing merge with resolved lean-toolchain...")
continue_result = run_command(f"git commit --no-edit", check=False)
if continue_result.returncode != 0:
print("Failed to complete merge after resolving lean-toolchain")
run_command("git merge --abort")
return False
else:
print("Failed to resolve lean-toolchain conflict")
run_command("git merge --abort")
return False
else:
# Other files have conflicts, or unable to determine
if conflicted_files:
print(f"Cannot merge {tag} cleanly into {branch}.")
print(f"Merge conflicts in: {', '.join(conflicted_files)}")
else:
print(f"Cannot merge {tag} cleanly into {branch}.")
print("Merge conflicts would occur.")
print("Aborting merge.")
run_command("git merge --abort")
return False
print(f"Cannot merge {tag} cleanly into {branch}.")
print("Merge conflicts would occur. Aborting merge.")
run_command("git merge --abort")
return False
print(f"Pushing changes to remote...")
push_result = run_command(f"git push origin {branch}")

View File

@@ -58,11 +58,7 @@ OPTIONS=()
# We build cadical using the custom toolchain on Linux to avoid glibc versioning issues
echo -n " -DLEAN_STANDALONE=ON -DCADICAL_USE_CUSTOM_CXX=ON"
echo -n " -DCMAKE_CXX_COMPILER=$PWD/llvm-host/bin/clang++ -DLEAN_CXX_STDLIB='-Wl,-Bstatic -lc++ -lc++abi -Wl,-Bdynamic'"
# these should also be used for cadical, so do not use `LEAN_EXTRA_CXX_FLAGS` here
echo -n " -DCMAKE_CXX_FLAGS='--sysroot $PWD/llvm -idirafter $GLIBC_DEV/include ${EXTRA_FLAGS:-}'"
# the above does not include linker flags which will be added below based on context, so skip the
# generic check by cmake
echo -n " -DCMAKE_C_COMPILER_WORKS=1 -DCMAKE_CXX_COMPILER_WORKS=1"
echo -n " -DLEAN_EXTRA_CXX_FLAGS='--sysroot $PWD/llvm -idirafter $GLIBC_DEV/include ${EXTRA_FLAGS:-}'"
# use target compiler directly when not cross-compiling
if [[ -L llvm-host ]]; then
echo -n " -DCMAKE_C_COMPILER=$PWD/stage1/bin/clang"

View File

@@ -50,4 +50,5 @@ echo -n " -DLEANC_INTERNAL_LINKER_FLAGS='--sysroot ROOT -L ROOT/lib -Wl,-Bstatic
# when not using the above flags, link GMP dynamically/as usual. Always link ICU dynamically.
echo -n " -DLEAN_EXTRA_LINKER_FLAGS='-lgmp $(pkg-config --libs libuv) -lucrtbase'"
# do not set `LEAN_CC` for tests
echo -n " -DAUTO_THREAD_FINALIZATION=OFF -DSTAGE0_AUTO_THREAD_FINALIZATION=OFF"
echo -n " -DLEAN_TEST_VARS=''"

View File

@@ -3,32 +3,6 @@ import sys
import subprocess
import requests
def check_gh_auth():
"""Check if GitHub CLI is properly authenticated."""
try:
result = subprocess.run(["gh", "auth", "status"], capture_output=True, text=True)
if result.returncode != 0:
return False, result.stderr
return True, None
except FileNotFoundError:
return False, "GitHub CLI (gh) is not installed. Please install it first."
except Exception as e:
return False, f"Error checking authentication: {e}"
def handle_gh_error(error_output):
"""Handle GitHub CLI errors and provide helpful messages."""
if "Not Found (HTTP 404)" in error_output:
return "Repository not found or access denied. Please check:\n" \
"1. The repository name is correct\n" \
"2. You have access to the repository\n" \
"3. Your GitHub CLI authentication is valid"
elif "Bad credentials" in error_output or "invalid" in error_output.lower():
return "Authentication failed. Please run 'gh auth login' to re-authenticate."
elif "rate limit" in error_output.lower():
return "GitHub API rate limit exceeded. Please try again later."
else:
return f"GitHub API error: {error_output}"
def main():
if len(sys.argv) != 4:
print("Usage: ./push_repo_release_tag.py <repo> <branch> <version_tag>")
@@ -40,13 +14,6 @@ def main():
print(f"Error: Branch '{branch}' is not 'master' or 'main'.")
sys.exit(1)
# Check GitHub CLI authentication first
auth_ok, auth_error = check_gh_auth()
if not auth_ok:
print(f"Authentication error: {auth_error}")
print("\nTo fix this, run: gh auth login")
sys.exit(1)
# Get the `lean-toolchain` file content
lean_toolchain_url = f"https://raw.githubusercontent.com/{repo}/{branch}/lean-toolchain"
try:
@@ -76,23 +43,12 @@ def main():
for tag in existing_tags:
print(tag.replace("refs/tags/", ""))
sys.exit(1)
elif list_tags_output.returncode != 0:
# Handle API errors when listing tags
error_msg = handle_gh_error(list_tags_output.stderr)
print(f"Error checking existing tags: {error_msg}")
sys.exit(1)
# Get the SHA of the branch
get_sha_cmd = [
"gh", "api", f"repos/{repo}/git/ref/heads/{branch}", "--jq", ".object.sha"
]
sha_result = subprocess.run(get_sha_cmd, capture_output=True, text=True)
if sha_result.returncode != 0:
error_msg = handle_gh_error(sha_result.stderr)
print(f"Error getting branch SHA: {error_msg}")
sys.exit(1)
sha_result = subprocess.run(get_sha_cmd, capture_output=True, text=True, check=True)
sha = sha_result.stdout.strip()
# Create the tag
@@ -102,20 +58,11 @@ def main():
"-F", f"ref=refs/tags/{version_tag}",
"-F", f"sha={sha}"
]
create_result = subprocess.run(create_tag_cmd, capture_output=True, text=True)
if create_result.returncode != 0:
error_msg = handle_gh_error(create_result.stderr)
print(f"Error creating tag: {error_msg}")
sys.exit(1)
subprocess.run(create_tag_cmd, capture_output=True, text=True, check=True)
print(f"Successfully created and pushed tag '{version_tag}' to {repo}.")
except subprocess.CalledProcessError as e:
error_msg = handle_gh_error(e.stderr.strip() if e.stderr else str(e))
print(f"Error while creating/pushing tag: {error_msg}")
sys.exit(1)
except Exception as e:
print(f"Unexpected error: {e}")
print(f"Error while creating/pushing tag: {e.stderr.strip() if e.stderr else e}")
sys.exit(1)
if __name__ == "__main__":

Some files were not shown because too many files have changed in this diff.