chore: migrate pkg tests (#12889)

Also refactor util.sh in the process, so test scripts become easier to
write (inspired in part by lake's test suite).
This commit is contained in:
Garmelon
2026-03-11 19:55:46 +01:00
committed by GitHub
parent 4740e044c8
commit 6a2a884372
88 changed files with 400 additions and 557 deletions

View File

@@ -7,7 +7,7 @@ Helpful links
 -------
 * [Development Setup](./doc/dev/index.md)
-* [Testing](./doc/dev/testing.md)
+* [Testing](./tests/README.md)
 * [Commit convention](./doc/dev/commit_convention.md)
 
 Before You Submit a Pull Request (PR):

View File

@@ -1,7 +1,9 @@
 # Development Workflow
 
 If you want to make changes to Lean itself, start by [building Lean](../make/index.md) from a clean checkout to make sure that everything is set up correctly.
-After that, read on below to find out how to set up your editor for changing the Lean source code, followed by further sections of the development manual where applicable such as on the [test suite](testing.md) and [commit convention](commit_convention.md).
+After that, read on below to find out how to set up your editor for changing the Lean source code,
+followed by further sections of the development manual where applicable
+such as on the [test suite](../../tests/README.md) and [commit convention](commit_convention.md).
 
 If you are planning to make any changes that may affect the compilation of Lean itself, e.g. changes to the parser, elaborator, or compiler, you should first read about the [bootstrapping pipeline](bootstrap.md).
 You should not edit the `stage0` directory except using the commands described in that section when necessary.

View File

@@ -1,142 +0,0 @@
# Test Suite
**Warning:** This document is partially outdated.
It describes the old test suite, which is currently in the process of being replaced.
The new test suite's documentation can be found at [`tests/README.md`](../../tests/README.md).
After [building Lean](../make/index.md) you can run all the tests using
```
cd build/release
make test ARGS=-j4
```
Change the 4 to the maximum number of parallel tests you want to
allow. The best choice is the number of CPU cores on your machine, as
the tests are mostly CPU bound. You can find the number of processors
on Linux using `nproc`; on Windows it is the `NUMBER_OF_PROCESSORS`
environment variable.
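A run script could compute this choice itself; a minimal sketch, assuming bash (the `jobs` variable and the fallback of 4 are illustrative, not part of the build system):

```bash
#!/usr/bin/env bash
# Prefer NUMBER_OF_PROCESSORS (Windows), then nproc (Linux),
# then fall back to an arbitrary 4.
jobs="${NUMBER_OF_PROCESSORS:-$(nproc 2>/dev/null || echo 4)}"
echo "make test ARGS=-j$jobs"
```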
You can run tests after [building a specific stage](bootstrap.md) by
adding the `-C stageN` argument. The default when run as above is stage 1.
The Lean tests will automatically use that stage's corresponding Lean
executables.
Running `make test` will not pick up new test files; run
```bash
cmake build/release/stage1
```
to update the list of tests.
You can also use `ctest` directly if you are in the right folder. So,
to run stage1 tests with a 300-second timeout, run this:
```bash
cd build/release/stage1
ctest -j 4 --output-on-failure --timeout 300
```
Useful `ctest` flags are `-R <name of test>` to run a single test, and
`--rerun-failed` to run all tests that failed during the last run.
You can also pass `ctest` flags via `make test ARGS="--rerun-failed"`.
To get verbose output from ctest pass the `--verbose` command line
option. Test output is normally suppressed and only summary
information is displayed. This option will show all test output.
## Test Suite Organization
All these tests are included by [src/shell/CMakeLists.txt](https://github.com/leanprover/lean4/blob/master/src/shell/CMakeLists.txt):
- [`tests/lean`](https://github.com/leanprover/lean4/tree/master/tests/lean/): contains tests that come equipped with a
.lean.expected.out file. The driver script [`test_single.sh`](https://github.com/leanprover/lean4/tree/master/tests/lean/test_single.sh) runs
each test and compares the actual output (*.produced.out) against the
checked-in expected output.
- [`tests/lean/run`](https://github.com/leanprover/lean4/tree/master/tests/lean/run/): contains tests that are run through the lean
command line one file at a time. These tests only check the error
code; any output that is produced is ignored.
**Note:** Tests in this directory run with `-Dlinter.all=false` to reduce noise.
If your test needs to verify linter behavior (e.g., deprecation warnings),
explicitly enable the relevant linter with `set_option linter.<name> true`.
- [`tests/lean/interactive`](https://github.com/leanprover/lean4/tree/master/tests/lean/interactive/): are designed to test server requests at a
given position in the input file. Each .lean file contains comments
that indicate how to simulate a client request at that position,
using a `--^` marker that points at the position. Example:
```lean,ignore
open Foo in
theorem tst2 (h : a ≤ b) : a + 2 ≤ b + 2 :=
Bla.
--^ completion
```
In this example, the test driver [`test_single.sh`](https://github.com/leanprover/lean4/tree/master/tests/lean/interactive/test_single.sh) will simulate an
auto-completion request at `Bla.`. The expected output is stored in
a .lean.expected.out file in the JSON format defined by the
[Language Server
Protocol](https://microsoft.github.io/language-server-protocol/).
This can also be used to test the following additional requests:
```
--^ textDocument/hover
--^ textDocument/typeDefinition
--^ textDocument/definition
--^ $/lean/plainGoal
--^ $/lean/plainTermGoal
--^ insert: ...
--^ collectDiagnostics
```
- [`tests/lean/server`](https://github.com/leanprover/lean4/tree/master/tests/lean/server/): Tests more of the Lean `--server` protocol.
There are just a few of them, and they use .log files containing
JSON.
- [`tests/compiler`](https://github.com/leanprover/lean4/tree/master/tests/compiler/): contains tests that run the Lean compiler to
build an executable, which is then executed and its output compared to
the .lean.expected.out file. This directory also contains a subfolder
[`foreign`](https://github.com/leanprover/lean4/tree/master/tests/compiler/foreign/) which shows how to extend Lean using C++.
- [`tests/lean/trust0`](https://github.com/leanprover/lean4/tree/master/tests/lean/trust0): tests that run Lean in a mode where it doesn't
even trust the .olean files (i.e., trust 0).
- [`tests/bench`](https://github.com/leanprover/lean4/tree/master/tests/bench/): contains performance tests.
- [`tests/plugin`](https://github.com/leanprover/lean4/tree/master/tests/plugin/): tests that compiled Lean code can be loaded into
`lean` via the `--plugin` command line option.
## Writing Good Tests
Every test file should contain:
* an initial `/-! -/` module docstring summarizing the test's purpose
* a module docstring for each test section that describes what is tested
and, if not 100% clear, why that is the desirable behavior
At the time of writing, most tests do not follow these new guidelines yet.
For an example of a conforming test, see [`tests/lean/1971.lean`](https://github.com/leanprover/lean4/tree/master/tests/lean/1971.lean).
## Fixing Tests
When the Lean source code or the standard library is modified, some of the
tests break because the produced output is slightly different, and we have
to reflect the changes in the `.lean.expected.out` files.
We should not blindly copy the new produced output since we may accidentally
miss a bug introduced by recent changes.
The test suite contains commands that allow us to see what changed in a convenient way.
First, we must install [meld](http://meldmerge.org/). On Ubuntu, we can do it by simply executing
```
sudo apt-get install meld
```
Now, suppose the `bad_class.lean` test is broken. We can see the problem by going to the [`tests/lean`](https://github.com/leanprover/lean4/tree/master/tests/lean) directory and
executing
```
./test_single.sh -i bad_class.lean
```
When the `-i` option is provided, `meld` is automatically invoked
whenever there is a discrepancy between the produced and expected
outputs. `meld` can also be used to repair the problems.
In Emacs, we can also execute `M-x lean4-diff-test-file` to check/diff the file of the current buffer.
To mass-copy all `.produced.out` files to the respective `.expected.out` file, use `tests/lean/copy-produced`.
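What such a mass-copy does can be sketched as a short loop (illustrative only; the `copy_produced` name is hypothetical and this is not the actual contents of `tests/lean/copy-produced`):

```bash
#!/usr/bin/env bash
# Hypothetical sketch: copy each *.produced.out over its *.expected.out twin.
copy_produced() {
  for f in *.produced.out; do
    [ -e "$f" ] || continue  # no matches: the glob stayed literal
    cp -- "$f" "${f%.produced.out}.expected.out"
  done
}
```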

View File

@@ -1,11 +1,7 @@
 #!/usr/bin/env bash
 source ../../../tests/env_test.sh
-source "$TEST_DIR/util.sh"
 
 leanmake --always-make bin
-exec_capture test.lean \
-    ./build/bin/test hello world
-check_exit test.lean
-check_out test.lean
+capture ./build/bin/test hello world
+check_out_contains "[hello, world]"

View File

@@ -1,9 +1,7 @@
 #!/usr/bin/env bash
 source ../../tests/env_test.sh
-source "$TEST_DIR/util.sh"
 
-exec_capture "$1" \
+capture_only "$1" \
     lean -Dlinter.all=false "$1"
-check_exit "$1"
-check_out "$1"
+check_exit_is_success
+check_out_file

View File

@@ -50,6 +50,24 @@ endblock()
 ## Helper functions ##
 ######################
 
+function(glob_except OUT DIR GLOB)
+  cmake_parse_arguments(ARGS "" "" EXCEPT ${ARGN})
+  cmake_path(ABSOLUTE_PATH DIR NORMALIZE)
+
+  file(GLOB ALL_FILES "${DIR}/${GLOB}")
+  set(RETAINED_FILES "")
+  foreach(FILE IN LISTS ALL_FILES)
+    cmake_path(RELATIVE_PATH FILE BASE_DIRECTORY "${DIR}" OUTPUT_VARIABLE FILE_IN_DIR)
+    if(NOT FILE_IN_DIR IN_LIST ARGS_EXCEPT)
+      list(APPEND RETAINED_FILES "${FILE}")
+    endif()
+  endforeach()
+
+  # Replace with return(PROPAGATE) if we ever update to cmake 3.25+
+  set("${OUT}" "${RETAINED_FILES}" PARENT_SCOPE)
+endfunction()
+
 function(check_test_bench_scripts DIR)
   set(RUN_TEST "${DIR}/run_test")
   set(RUN_BENCH "${DIR}/run_bench")
@@ -117,7 +135,7 @@ endfunction()
 # a single `measurements.jsonl` file in the pile directory, whose path will be
 # added to the list specified by the BENCH argument.
 function(add_test_pile DIR GLOB)
-  cmake_parse_arguments(ARGS "" BENCH "" ${ARGN})
+  cmake_parse_arguments(ARGS "" BENCH EXCEPT ${ARGN})
   cmake_path(ABSOLUTE_PATH DIR NORMALIZE)
 
   check_test_bench_scripts("${DIR}")
@@ -128,7 +146,7 @@ function(add_test_pile DIR GLOB)
   set(MEASUREMENTS_FILES "")
 
   # Iterate over all files matching the glob
-  file(GLOB TEST_FILES "${DIR}/${GLOB}")
+  glob_except(TEST_FILES "${DIR}" "${GLOB}" EXCEPT "${ARGS_EXCEPT}")
   foreach(FILE IN LISTS TEST_FILES)
     cmake_path(RELATIVE_PATH FILE OUTPUT_VARIABLE FILE_IN_SOURCE_DIR)
     cmake_path(RELATIVE_PATH FILE BASE_DIRECTORY "${DIR}" OUTPUT_VARIABLE FILE_IN_DIR)
@@ -204,8 +222,10 @@ function(add_test_dir DIR)
 endfunction()
 
 function(add_dir_of_test_dirs DIR)
+  cmake_parse_arguments(ARGS "" "" EXCEPT ${ARGN})
   cmake_path(ABSOLUTE_PATH DIR NORMALIZE)
-  file(GLOB TEST_DIRS "${DIR}/*")
+
+  glob_except(TEST_DIRS "${DIR}" "*" EXCEPT "${ARGS_EXCEPT}")
   foreach(TEST_DIR IN LISTS TEST_DIRS)
     if(IS_DIRECTORY "${TEST_DIR}")
       add_test_dir("${TEST_DIR}")
@@ -222,16 +242,6 @@ set(PART2 "")
 ## Tests and benchmarks ##
 ##########################
 
-# LEAN PACKAGE TESTS
-file(GLOB LEANPKGTESTS "${LEAN_SOURCE_DIR}/../tests/pkg/*")
-foreach(T ${LEANPKGTESTS})
-  if(EXISTS "${T}/test.sh")
-    get_filename_component(T_NAME ${T} NAME)
-    cmake_path(RELATIVE_PATH T BASE_DIRECTORY ${LEAN_SOURCE_DIR}/.. OUTPUT_VARIABLE T_PATH)
-    add_test(NAME "${T_PATH}" WORKING_DIRECTORY "${T}" COMMAND bash -c "${TEST_VARS} ./test.sh")
-  endif()
-endforeach(T)
-
 # Create a lake test for each test and examples subdirectory of `lake`
 # which contains a `test.sh` file, excluding the following test(s):
 #   bootstrap: too slow
@@ -293,6 +303,13 @@ add_test_dir(bench/size BENCH PART1)
 add_test_dir(lake_bench/inundation BENCH PART2)
 
 add_dir_of_test_dirs(misc_dir)
+add_dir_of_test_dirs(
+  pkg
+  EXCEPT
+    signal      # Test appears to be broken
+    test_extern # Flaky
+    user_ext    # Test started being nondeterministic
+)
 
 #######################
 ## Benchmark targets ##

View File

@@ -1,9 +1,6 @@
 # Test suite
 
 This directory contains the lean test and benchmark suite.
-It is currently in the process of being migrated to the framework described in this file.
-Some tests still use the previous framework,
-which is partially documented in [testing.md](../doc/dev/testing.md).
 
 The test suite consists of two types of directories: Test directories and test piles.
@@ -45,12 +42,17 @@ Benchmarks belonging to the old framework are not included in this description.
   These are also executed as part of the test suite, and `.out.expected` files are ignored when benchmarking.
 - `server`, `server_interactive`:
   Test LSP server requests.
+- `lake`:
+  Test suite for lake.
+  It is mostly isolated from the rest of the test suite and follows its own conventions.
 - `lake_bench`:
   Benchmark directories that measure lake performance.
 - `misc`:
   A collection of miscellaneous small test scripts.
 - `misc_bench`:
   A collection of miscellaneous small benchmark scripts.
+- `pkg`:
+  Tests that run in lake packages.
 
 ## How to run the test suite?
@@ -179,7 +181,7 @@ The most notable ones are:
 - `SCRIPT_DIR`: Absolute path to the `script` directory.
 - `TEST_BENCH`: Set to `1` if we're currently executing a benchmark, unset otherwise.
 
-Finally, the run script should source `"$TEST_DIR/util.sh"`,
+It also sources `"$TEST_DIR/util.sh"`,
 which provides a few utility functions and also uses `set` to set sensible bash defaults.
 See `util.sh` for the available utility functions.
@@ -218,16 +220,15 @@ These files are available to configure a test:
 - `<file>.out.ignored`:
   Ignore the test's output entirely; don't compare it to `<file>.out.expected`.
-- `<file>.exit.expected`:
-  The test fails if its exit code doesn't match this file's contents.
-  If this file isn't present, the pile's default exit code is used instead.
-  If this file contains the text `nonzero`, the test's exit code must not be 0.
 
 These bash variables (set via `<file>.init.sh`) are used by the run script:
 - `TEST_LEAN_ARGS`:
   A bash array of additional arguments to the `lean` command.
+- `TEST_EXIT`:
+  A bash variable containing the expected exit code of the program.
+  When set to `nonzero` instead of a numerical value, the exit code must not be 0.
 
 ## The `compile*` test pile
 
 These files are available to configure a test:
@@ -260,11 +261,6 @@ These files are available to configure a test:
 - `<file>.out.ignored`:
   Ignore the test's output entirely; don't compare it to `<file>.out.expected`.
-- `<file>.exit.expected`:
-  The test fails if its exit code doesn't match this file's contents.
-  If this file isn't present, the test's exit code must be 0.
-  If this file contains the text `nonzero`, the test's exit code must not be 0.
 
 These bash variables (set via `<file>.init.sh`) are used by the run script:
 - `TEST_LEAN_ARGS`:
@@ -280,6 +276,10 @@ These bash variables (set via `<file>.init.sh`) are used by the run script:
   A bash array of arguments to the compiled (or interpreted) program.
   Check `TEST_BENCH` if you want to specify more intense parameters for benchmarks.
+- `TEST_EXIT`:
+  A bash variable containing the expected exit code of the program.
+  When set to `nonzero` instead of a numerical value, the exit code must not be 0.
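For example, an init file for a program that is expected to crash might look like this (a hypothetical `crash.lean.init.sh` with a made-up argument, using the `TEST_EXIT` and `TEST_ARGS` variables described above):

```bash
# crash.lean.init.sh (hypothetical): the program under test
# must exit with a nonzero code.
TEST_EXIT=nonzero
# Extra arguments passed to the compiled (or interpreted) program.
TEST_ARGS=(--simulate-crash)
```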
 ## The `interactive` test pile
 
 These tests are designed to test LSP server requests at a given position in the input file.

View File

@@ -1,6 +1,5 @@
 #!/usr/bin/env bash
 source ../../env_bench.sh
-source "$TEST_DIR/util.sh"
 
 STAGE_THIS="stage$STAGE"
 STAGE_NEXT="stage$((STAGE + 1))"

View File

@@ -1,6 +1,5 @@
 #!/usr/bin/env bash
 source ../../env_bench.sh
-source "$TEST_DIR/util.sh"
 
 # This should run in the same environment as run_bench, otherwise `lakeprof`
 # will use the `lake` from the global system `elan` install and not the one from

View File

@@ -1,6 +1,5 @@
 #!/usr/bin/env bash
 source ../../../env_bench.sh
-source "$TEST_DIR/util.sh"
 
 rm -f measurements.jsonl

View File

@@ -1,6 +1,5 @@
 #!/usr/bin/env bash
 source ../../../env_test.sh
-source "$TEST_DIR/util.sh"
 
 # Limit threads to avoid exhausting memory with the large thread stack.
 LEAN_NUM_THREADS=1 lake test

View File

@@ -1,6 +1,5 @@
 #!/usr/bin/env bash
 source ../../env_bench.sh
-source "$TEST_DIR/util.sh"
 
 make -C "$BUILD_DIR" install DESTDIR="$(realpath install)"
 python measure_sizes.py "$SRC_DIR" "$BUILD_DIR" install measurements.jsonl

View File

@@ -1,107 +0,0 @@
set -euo pipefail
ulimit -s ${MAIN_STACK_SIZE:-8192}
DIFF=diff
if diff --color --help >/dev/null 2>&1; then
DIFF="diff --color";
fi
function fail {
echo $1
exit 1
}
INTERACTIVE=no
if [ $1 == "-i" ]; then
INTERACTIVE=yes
shift
fi
f="$1"
shift
[ $# -eq 0 ] || fail "Usage: test_single.sh [-i] test-file.lean"
function lean_has_llvm_support {
lean --features | grep -q "LLVM"
}
function compile_lean_c_backend {
lean --c="$f.c" "$f" || fail "Failed to compile $f into C file"
leanc ${LEANC_OPTS-} -O3 -DNDEBUG -o "$f.out" "$@" "$f.c" || fail "Failed to compile C file $f.c"
}
function compile_lean_llvm_backend {
set -o xtrace
rm "*.ll" || true # remove debugging files.
rm "*.bc" || true # remove bitcode files
rm "*.o" || true # remove object files
lean --bc="$f.linked.bc" "$f" || fail "Failed to compile $f into bitcode file"
leanc ${LEANC_OPTS-} -O3 -DNDEBUG -o "$f.out" "$@" "$f.linked.bc" || fail "Failed to link object file '$f.linked.bc'"
set +o xtrace
}
function exec_capture_raw {
# backtraces are system-specific, strip them (might be captured in `#guard_msgs`)
LEAN_BACKTRACE=0 "$@" 2>&1 > "$f.produced.out"
}
# produces filtered output intended for usage with `diff_produced`
function exec_capture {
# backtraces are system-specific, strip them
# mvar suffixes like in `?m.123` are deterministic but prone to change on minor changes, so strip them
# similarly, links to the language reference may have URL components depending on the toolchain, so normalize those
LEAN_BACKTRACE=0 "$@" 2>&1 \
| perl -pe 's/(\?(\w|_\w+))\.[0-9]+/\1/g' \
| perl -pe 's/https:\/\/lean-lang\.org\/doc\/reference\/(v?[0-9.]+(-rc[0-9]+)?|latest)/REFERENCE/g' > "$f.produced.out"
}
# Remark: `${var+x}` is a parameter expansion which evaluates to nothing if `var` is unset, and substitutes the string `x` otherwise.
function check_ret {
[ -n "${expected_ret+x}" ] || expected_ret=0
[ -f "$f.expected.ret" ] && expected_ret=$(< "$f.expected.ret")
if [ -n "$expected_ret" ] && [ $ret -ne $expected_ret ]; then
echo "Unexpected return code $ret executing '$@'; expected $expected_ret. Output:"
cat "$f.produced.out"
exit 1
fi
}
function exec_check_raw {
ret=0
exec_capture_raw "$@" || ret=$?
check_ret "$@"
}
# produces filtered output intended for usage with `diff_produced`
function exec_check {
ret=0
exec_capture "$@" || ret=$?
check_ret "$@"
}
function diff_produced {
if test -f "$f.expected.out"; then
if $DIFF -au --strip-trailing-cr -I "executing external script" "$f.expected.out" "$f.produced.out"; then
:
else
echo "ERROR: file $f.produced.out does not match $f.expected.out"
if [ $INTERACTIVE == "yes" ]; then
meld "$f.produced.out" "$f.expected.out"
if diff -I "executing external script" "$f.expected.out" "$f.produced.out"; then
echo "-- mismatch was fixed"
fi
fi
exit 1
fi
else
echo "ERROR: file $f.expected.out does not exist"
if [ $INTERACTIVE == "yes" ]; then
read -p "copy $f.produced.out (y/n)? "
if [ $REPLY == "y" ]; then
cp -- "$f.produced.out" "$f.expected.out"
echo "-- copied $f.produced.out --> $f.expected.out"
fi
fi
exit 1
fi
}

View File

@@ -1 +0,0 @@
nonzero

View File

@@ -0,0 +1 @@
TEST_EXIT=nonzero

View File

@@ -1 +0,0 @@
nonzero

View File

@@ -0,0 +1 @@
TEST_EXIT=nonzero

View File

@@ -1 +0,0 @@
1

View File

@@ -0,0 +1 @@
TEST_EXIT=1

View File

@@ -1,6 +1,5 @@
 #!/usr/bin/env bash
 source ../env_bench.sh
-source "$TEST_DIR/util.sh"
 
 source_init "$1"
 
 if [[ -f "$1.do_compile_bench" ]]; then DO_COMPILE=1

View File

@@ -1,6 +1,5 @@
 #!/usr/bin/env bash
 source ../env_test.sh
-source "$TEST_DIR/util.sh"
 
 source_init "$1"
 
 if [[ -f "$1.do_compile_test" ]]; then DO_COMPILE=1
@@ -23,11 +22,12 @@ if [[ -n $DO_COMPILE ]]; then
     lean --c="$1.c" "${TEST_LEAN_ARGS[@]}" "$1" || fail "Failed to compile $1 into $1.c"
     leanc ${LEANC_OPTS-} -O3 -DNDEBUG -o "$1.out" "${TEST_LEANC_ARGS[@]}" "$1.c" || fail "Failed to compile $1.c"
-    exec_capture "$1" "./$1.out" "${TEST_ARGS[@]}"
-    normalize_measurements "$1"
-    check_exit "$1"
-    check_out "$1"
+    capture_only "$1" \
+        "./$1.out" "${TEST_ARGS[@]}"
+    check_exit_is "${TEST_EXIT:-0}"
+    normalize_measurements
+    check_out_file
 
     run_after "$1"
 fi
@@ -36,12 +36,11 @@ if [[ -n $DO_INTERPRET ]]; then
     echo "Interpreting lean file"
     run_before "$1"
-    exec_capture "$1" \
+    capture_only "$1" \
        lean -Dlinter.all=false "${TEST_LEANI_ARGS[@]}" --run "$1" "${TEST_ARGS[@]}"
-    normalize_measurements "$1"
-    check_exit "$1"
-    check_out "$1"
+    check_exit_is "${TEST_EXIT:-0}"
+    normalize_measurements
+    check_out_file
 
     run_after "$1"
 fi

View File

@@ -1,6 +1,5 @@
 #!/usr/bin/env bash
 source ../env_bench.sh
-source "$TEST_DIR/util.sh"
 
 source_init "$1"
 
 if [[ -f "$1.do_compile_bench" ]]; then DO_COMPILE=1
@@ -27,18 +26,18 @@ if [[ -n $DO_COMPILE ]]; then
     lean --c="$1.c" "${TEST_LEAN_ARGS[@]}" "$1" || fail "Failed to compile $1 into $1.c"
     leanc ${LEANC_OPTS-} -O3 -DNDEBUG -o "$1.out" "${TEST_LEANC_ARGS[@]}" "$1.c" || fail "Failed to compile $1.c"
-    exec_capture "$1" \
+    capture_only "$1" \
        "$TEST_DIR/measure.py" -t "$TOPIC" -o "$1.measurements.jsonl" -a -d -- \
        "./$1.out" "${TEST_ARGS[@]}"
+    check_exit_is "${TEST_EXIT:-0}"
+    extract_measurements "$TOPIC"
 
     # Measure .out binary size
     wc -c "$1.out" \
        | jq -R 'split(" ")[0] | tonumber | {metric: "size/compile/.out//bytes", value: ., unit: "B"}' -c \
        >> "$1.measurements.jsonl"
-    extract_measurements "$1" "$TOPIC"
-    check_exit "$1"
 
     run_after "$1"
 fi
@@ -48,12 +47,11 @@ if [[ -n $DO_INTERPRET ]]; then
TOPIC="interpreted/$(basename "$1" .lean)" TOPIC="interpreted/$(basename "$1" .lean)"
exec_capture "$1" \ capture_only "$1" \
"$TEST_DIR/measure.py" -t "$TOPIC" -o "$1.measurements.jsonl" -a -d -- \ "$TEST_DIR/measure.py" -t "$TOPIC" -o "$1.measurements.jsonl" -a -d -- \
lean -Dlinter.all=false "${TEST_LEANI_ARGS[@]}" --run "$1" "${TEST_ARGS[@]}" lean -Dlinter.all=false "${TEST_LEANI_ARGS[@]}" --run "$1" "${TEST_ARGS[@]}"
check_exit_is "${TEST_EXIT:-0}"
extract_measurements "$1" "$TOPIC" extract_measurements "$TOPIC"
check_exit "$1"
run_after "$1" run_after "$1"
fi fi

View File

@@ -1,13 +1,11 @@
 #!/usr/bin/env bash
 source ../env_test.sh
-source "$TEST_DIR/util.sh"
 
 # IO.Process.exit (used by the file worker) seems to be incompatible with LSAN
 # TODO: investigate or work around
 export ASAN_OPTIONS=detect_leaks=0
 
-exec_capture "$1" \
+capture_only "$1" \
     lean -Dlinter.all=false --run run_test.lean "$1"
-check_exit "$1"
-check_out "$1"
+check_exit_is_success
+check_out_file

View File

@@ -0,0 +1 @@
export LEAN_BACKTRACE=0

View File

@@ -0,0 +1 @@
export LEAN_BACKTRACE=0

View File

@@ -0,0 +1 @@
export LEAN_BACKTRACE=0

View File

@@ -1,19 +1,17 @@
 #!/usr/bin/env bash
 source ../env_test.sh
-source "$TEST_DIR/util.sh"
 
 source_init "$1"
 run_before "$1"
 
 # `--root` to infer same private names as in the server
 # Elab.inServer to allow for arbitrary `#eval`
-exec_capture "$1" \
+capture_only "$1" \
     lean --root=.. -DprintMessageEndPos=true -Dlinter.all=false -DElab.inServer=true "${TEST_LEAN_ARGS[@]}" "$1"
-normalize_mvar_suffixes "$1"
-normalize_reference_urls "$1"
-normalize_measurements "$1"
-check_exit "$1"
-check_out "$1"
+check_exit_is_success
+normalize_mvar_suffixes
+normalize_reference_urls
+normalize_measurements
+check_out_file
 
 run_after "$1"
View File

@@ -1,19 +1,20 @@
 #!/usr/bin/env bash
 source ../env_bench.sh
-source "$TEST_DIR/util.sh"
 
 source_init "$1"
 run_before "$1"
 
 TOPIC="elab/$(basename "$1" .lean)"
-exec_capture "$1" \
+# `--root` to infer same private names as in the server
+# Elab.inServer to allow for arbitrary `#eval`
+capture_only "$1" \
     "$TEST_DIR/measure.py" -t "$TOPIC" -o "$1.measurements.jsonl" -d -- \
     lean --root=.. -DprintMessageEndPos=true -Dlinter.all=false -DElab.inServer=true "${TEST_LEAN_ARGS[@]}" "$1"
-normalize_mvar_suffixes "$1"
-normalize_reference_urls "$1"
-extract_measurements "$1" "$TOPIC"
-check_exit "$1"
+check_exit_is_success
+normalize_mvar_suffixes
+normalize_reference_urls
+extract_measurements "$TOPIC"
+check_out_file
 
 run_after "$1"

View File

@@ -1,19 +1,17 @@
 #!/usr/bin/env bash
 source ../env_test.sh
-source "$TEST_DIR/util.sh"
 
 source_init "$1"
 run_before "$1"
 
 # `--root` to infer same private names as in the server
 # Elab.inServer to allow for arbitrary `#eval`
-exec_capture "$1" \
+capture_only "$1" \
     lean --root=.. -DprintMessageEndPos=true -Dlinter.all=false -DElab.inServer=true "${TEST_LEAN_ARGS[@]}" "$1"
-normalize_mvar_suffixes "$1"
-normalize_reference_urls "$1"
-normalize_measurements "$1"
-check_exit "$1" 1
-check_out "$1"
+check_exit_is_fail
+normalize_mvar_suffixes
+normalize_reference_urls
+normalize_measurements
+check_out_file
 
 run_after "$1"

View File

@@ -1,2 +1,3 @@
 #!/usr/bin/env bash
 export ${TEST_VARS}
+source "$TEST_DIR/util.sh"

View File

@@ -1,6 +1,5 @@
 #!/usr/bin/env bash
 source ../../env_bench.sh
-source "$TEST_DIR/util.sh"
 
 PREFIX="lake/inundation"
 rm -f measurements.jsonl

View File

@@ -1,2 +1,2 @@
-lean --githash
-lean -g
+run lean --githash
+run lean -g

View File

@@ -1,2 +1,2 @@
-lean --help
-lean -h
+run lean --help
+run lean -h

View File

@@ -1 +1 @@
-fail_if_success lean lean_unknown_file.sh.doesnotexist.lean
+run_fail lean lean_unknown_file.sh.doesnotexist.lean

View File

@@ -1 +1 @@
-fail_if_success lean -z
+run_fail lean -z

View File

@@ -1,3 +1,3 @@
-lean --version
-lean -v
-lean --v # "v" is unambiguous prefix of "version", so opt parser allows it
+run lean --version
+run lean -v
+run lean --v # "v" is unambiguous prefix of "version", so opt parser allows it

View File

@@ -1,6 +1,5 @@
 #!/usr/bin/env bash
 source ../env_test.sh
-source "$TEST_DIR/util.sh"
 
 NAME="$1"
 FILE="$(realpath "$1")"

View File

@@ -1,7 +1,7 @@
 cd ../../src
-exec_capture "$FILE" \
+capture_only "$FILE" \
   "$TEST_DIR/measure.py" -t "$TOPIC" -d -o "$OUT" -- \
   lean --run "$SCRIPT_DIR/benchReelabRss.lean" lean Init/Data/BitVec/Lemmas.lean 3 -j4
-extract_measurements "$FILE" "$TOPIC"
+check_exit_is_success
+extract_measurements "$TOPIC"

View File

@@ -1,8 +1,8 @@
 # This benchmark uncovered the promise cycle in `realizeConst` (#11328)
 cd ../../src
-exec_capture "$FILE" \
+capture_only "$FILE" \
   "$TEST_DIR/measure.py" -t "$TOPIC" -d -o "$OUT" -- \
   lean --run "$SCRIPT_DIR/benchReelabRss.lean" lean Init/Data/List/Basic.lean 10 -j4
-extract_measurements "$FILE" "$TOPIC"
+check_exit_is_success
+extract_measurements "$TOPIC"

View File

@@ -1,7 +1,7 @@
 cd ../../src
-exec_capture "$FILE" \
+capture_only "$FILE" \
   "$TEST_DIR/measure.py" -t "$TOPIC" -d -o "$OUT" -- \
   lean --run "$SCRIPT_DIR/benchReelabRss.lean" lean Init/Data/List/Sublist.lean 10 -j4
-extract_measurements "$FILE" "$TOPIC"
+check_exit_is_success
+extract_measurements "$TOPIC"

View File

@@ -1,7 +1,7 @@
 cd ../../src
-exec_capture "$FILE" \
+capture_only "$FILE" \
   "$TEST_DIR/measure.py" -t "$TOPIC" -d -o "$OUT" -- \
   lean --run "$SCRIPT_DIR/benchReelabWatchdogRss.lean" lean Init/Data/List/Sublist.lean 10 -j4
-extract_measurements "$FILE" "$TOPIC"
+check_exit_is_success
+extract_measurements "$TOPIC"

View File

@@ -1,6 +1,5 @@
 #!/usr/bin/env bash
 source ../env_bench.sh
-source "$TEST_DIR/util.sh"
 NAME="$1"
 FILE="$(realpath "$1")"

View File

@@ -1,11 +1,11 @@
 #!/usr/bin/env bash
 source ../../env_test.sh
-source "$TEST_DIR/util.sh"
 # LEAN_EXPORTING needs to be defined for .c files included in shared libraries
 lean --c=SnakeLinter.c SnakeLinter.lean
 leanc ${LEANC_OPTS-} -O3 -DNDEBUG -DLEAN_EXPORTING -shared -o SnakeLinter.so SnakeLinter.c
-exec_capture SnakeLinter.lean lean -Dlinter.all=false --plugin=SnakeLinter.so SnakeLinter.lean
-check_exit SnakeLinter.lean 1
-check_out SnakeLinter.lean
+capture_only SnakeLinter.lean \
+  lean -Dlinter.all=false --plugin=SnakeLinter.so SnakeLinter.lean
+check_exit_is_fail
+check_out_file

View File

@@ -1,6 +1,5 @@
 #!/usr/bin/env bash
 source ../../env_test.sh
-source "$TEST_DIR/util.sh"
 # IO.Process.exit (used by the file worker) seems to be incompatible with LSAN
 # TODO: investigate or work around
@@ -9,6 +8,7 @@ export ASAN_OPTIONS=detect_leaks=0
 lake build
 FILE=InverseModuleHierarchy/BasicTest.lean
-exec_capture "$FILE" lean -Dlinter.all=false --run run_test.lean -p "$(realpath "$FILE")"
-check_exit "$FILE"
-check_out "$FILE"
+capture_only "$FILE" \
+  lean -Dlinter.all=false --run run_test.lean -p "$(realpath "$FILE")"
+check_exit_is_success
+check_out_file

View File

@@ -1,2 +1 @@
 lake-manifest.json
-produced.out

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env bash
+source ../../env_test.sh
 rm -rf .lake/build
 lake build

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env bash
+source ../../env_test.sh
 rm -rf .lake/build
 lake build

tests/pkg/debug/run_test Executable file
View File

@@ -0,0 +1,7 @@
+#!/usr/bin/env bash
+source ../../env_test.sh
+rm -rf .lake/build
+
+run lake exe release
+run_fail lake exe debug
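For orientation, the `run`/`run_fail` pair used above replaces the old `|| exit 1` / `&& exit 1` idiom by asserting on the `$EXIT` variable. A minimal self-contained sketch of that behavior (helper names as in util.sh, bodies simplified here; the real helpers also trace the command with `set -x`), with `true`/`false` standing in for the commands under test:

```shell
#!/usr/bin/env bash
# Simplified sketch of the util.sh run helpers (illustration only).
function fail { echo "TEST FAILED: $1"; exit 1; }
function run_only { EXIT=0; "$@" || EXIT="$?"; }
function run { run_only "$@"; [[ $EXIT == 0 ]] || fail "Expected command to succeed, got exit code $EXIT"; }
function run_fail { run_only "$@"; [[ $EXIT != 0 ]] || fail "Expected command to fail, got exit code $EXIT"; }

run true        # command succeeds, script continues
run_fail false  # command fails, which is exactly what we expect
echo "ok"
```

Because `run_only` records the exit code instead of letting `set -e` kill the script, a single test script can mix expected-success and expected-failure commands.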

View File

@@ -1,8 +0,0 @@
-#!/usr/bin/env bash
-rm -rf .lake/build
-
-# should succeed
-lake exe release || exit 1
-# should fail
-lake exe debug && exit 1
-exit 0

View File

@@ -1,2 +1,2 @@
-rm -rf lake-manifest.json .lake/build produced.out
+rm -rf lake-manifest.json .lake/build
 rm -rf deps/fooA/.lake/build deps/fooB/.lake/build deps/useA/.lake/build deps/useB/.lake/build

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env bash
+source ../../env_test.sh
 # The test covers the behavior of transitively importing multiple modules
 # that have a definition with the same name.
@@ -18,18 +19,19 @@
 # Modules `UseBarA` and `UseBarB` use them (privately), and `TestLocalUse`
 # imports both.
-source ../../lake/tests/common.sh
 ./clean.sh
 # Test the behavior when multiple copies of the same definition (`foo`)
 # are seen by Lean (e.g., via importing two modules which define them).
-test_err "environment already contains 'foo'" build TestFoo
+capture_fail lake build TestFoo
+check_out_contains "environment already contains 'foo'"
 # Test the behavior when multiple copies of the same definition (`foo`) exist
 # in different packages but are not visible to any one module.
-test_out "fooA; fooB" exe TestUse
+capture lake exe TestUse
+check_out_contains "fooA; fooB"
 # Test the behavior when multiple copies of the same definition (`foo`) exist
 # in the same package but are not visible to any one module.
-test_err "lp_test_bar" build TestLocalUse
+capture_fail lake build TestLocalUse
+check_out_contains "lp_test_bar"

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env bash
+source ../../env_test.sh
 rm -rf .lake/build
 lake build

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env bash
+source ../../env_test.sh
 rm -rf .lake/build
 lake build
@@ -12,20 +13,20 @@ lake build
 # implemented in the same project into a goal state from the file.
 # This already worked prior to lean4#2423.
-lake exe frontend_with_import2 Frontend.Import1 &&
+lake exe frontend_with_import2 Frontend.Import1
 # Check that if we don't import `Frontend.Import2`, we can successfully run
 # #eval main ["Frontend.Import1"]
 # in the interpreter
 # This already worked prior to lean4#2423.
-lake build Frontend.Main_with_eval &&
+lake build Frontend.Main_with_eval
 # However if `Main` has imported `Frontend.Import2`, then
 # #eval main ["Frontend.Import1"]
 # fails to compile `Frontend.Import1` in the interpreter
 # prior to lean4#2423.
-lake build Frontend.Main_with_Import2_and_eval &&
+lake build Frontend.Main_with_Import2_and_eval
 # Compiling multiple files with a shared import with an initializer
 # failed prior to lean4#2423:

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env bash
+source ../../env_test.sh
 rm -rf .lake/build
 lake build

tests/pkg/leanchecker/run_test Executable file
View File

@@ -0,0 +1,26 @@
+#!/usr/bin/env bash
+source ../../env_test.sh
+rm -rf .lake/build
+lake build
+
+for f in LeanCheckerTests/*.lean; do
+  module="LeanCheckerTests.$(basename "$f" .lean)"
+  # Check for --fresh mode test
+  if [[ -f "$f.fresh" ]]; then
+    capture_only "$f" \
+      lake env leanchecker --fresh "$module"
+    check_exit_is_fail
+    check_out_file
+  # Check for normal mode test
+  elif [[ -f "$f.out.expected" ]]; then
+    # Expect failure with specific output
+    capture_only "$f" \
+      lake env leanchecker "$module"
+    check_exit_is_fail
+    check_out_file
+  else
+    # No expected output files - expect success (exit 0)
+    run lake env leanchecker "$module"
+  fi
+done

View File

@@ -1,35 +0,0 @@
-#!/usr/bin/env bash
-set -euo pipefail
-rm -rf .lake/build
-lake build
-
-for f in LeanCheckerTests/*.lean; do
-  # It would be nicer if `common.sh` did not hardcode a single input file
-  set -- "$f"
-  source ../../common.sh
-  module="LeanCheckerTests.$(basename "$f" .lean)"
-  # Check for --fresh mode test
-  if [ -f "$f.fresh.expected.out" ]; then
-    # Test --fresh mode (expect failure)
-    expected_ret=1
-    exec_check lake env leanchecker --fresh "$module"
-    # Use fresh expected output for comparison
-    mv "$f.produced.out" "$f.fresh.produced.out"
-    f_save="$f"
-    f="$f.fresh"
-    diff_produced
-    f="$f_save"
-  # Check for normal mode test
-  elif [ -f "$f.expected.out" ]; then
-    # Expect failure with specific output
-    expected_ret=1
-    exec_check lake env leanchecker "$module"
-    diff_produced
-  else
-    # No expected output files - expect success (exit 0)
-    expected_ret=0
-    exec_check_raw lake env leanchecker "$module"
-  fi
-done

tests/pkg/linter_set/run_test Executable file
View File

@@ -0,0 +1,5 @@
+#!/usr/bin/env bash
+source ../../env_test.sh
+
+rm -rf .lake/build
+lake build

View File

@@ -1,4 +0,0 @@
-#!/usr/bin/env bash
-
-rm -rf .lake/build
-lake build

tests/pkg/misc/run_test Executable file
View File

@@ -0,0 +1,5 @@
+#!/usr/bin/env bash
+source ../../env_test.sh
+
+rm -rf .lake/build
+lake build

View File

@@ -1,4 +0,0 @@
-#!/usr/bin/env bash
-
-rm -rf .lake/build
-lake build

View File

@@ -1,17 +1,18 @@
 #!/usr/bin/env bash
-source ../../lake/tests/common.sh
+source ../../env_test.sh
 # This test covers importing modules which are defined in multiple packages
 # (with the same original package name).
 ./clean.sh
-test_run resolve-deps
+run lake resolve-deps
 # Test that importing a module with multiple identical candidates works
-test_run build Test.ImportSame
+run lake build Test.ImportSame
 # Test that importing a module with multiple sufficiently similar candidates works
-test_run build Test.ImportSimilar
+run lake build Test.ImportSimilar
 # Test that importing a module with multiple distinct candidates fails
-test_err 'could not disambiguate the module `Diff`' build Test.ImportDiff
+capture_fail lake build Test.ImportDiff
+check_out_contains 'could not disambiguate the module `Diff`'

tests/pkg/module/run_test Executable file
View File

@@ -0,0 +1,8 @@
+#!/usr/bin/env bash
+source ../../env_test.sh
+rm -rf .lake/build
+LEAN_ABORT_ON_PANIC=1 lake build
+
+capture_fail lake lean Module/ConflictingImported.lean
+check_out_contains "already contains 'f'"

View File

@@ -1,6 +0,0 @@
-#!/usr/bin/env bash
-set -e
-rm -rf .lake/build
-LEAN_ABORT_ON_PANIC=1 lake build
-
-lake lean Module/ConflictingImported.lean | grep "already contains 'f'"

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env bash
+source ../../env_test.sh
 rm -rf .lake/build
 lake exe "path with spaces"

tests/pkg/prv/run_test Executable file
View File

@@ -0,0 +1,5 @@
+#!/usr/bin/env bash
+source ../../env_test.sh
+
+rm -rf .lake/build
+lake build

View File

@@ -1,4 +0,0 @@
-#!/usr/bin/env bash
-
-rm -rf .lake/build
-lake build

View File

@@ -1,13 +1,11 @@
 #!/usr/bin/env bash
-set -euxo pipefail
-source ../../lake/tests/common.sh
+source ../../env_test.sh
 rm -rf .lake/build
 mkdir -p Rebuild
 cat <<EOF > Rebuild/Basic.lean
--- File autocreated by test.sh
+-- File autocreated by run_test
 module
@@ -43,7 +41,7 @@ echo "-- test" >> Rebuild/Basic.lean
 test_unchanged
 # Closed terms and doc comments do not matter.
-sed_i 's/world/wodd/' Rebuild/Basic.lean
+perl -p -i -e 's/world/wodd/' Rebuild/Basic.lean
 test_unchanged
 # Private declarations do not matter.
@@ -51,7 +49,7 @@ echo 'theorem priv : True := .intro' >> Rebuild/Basic.lean
 test_unchanged
 # Lambdas do not matter.
-sed_i 's/"wodd"/dbg_trace "typo"; "wodd"/' Rebuild/Basic.lean
+perl -p -i -e 's/"wodd"/dbg_trace "typo"; "wodd"/' Rebuild/Basic.lean
 test_unchanged
 # Private definitions do not matter.
@@ -59,7 +57,7 @@ echo 'def privd : Nat := 0' >> Rebuild/Basic.lean
 test_unchanged
 # Specializations do not matter.
-sed_i 's/x + 1/x + 2/' Rebuild/Basic.lean
+perl -p -i -e 's/x + 1/x + 2/' Rebuild/Basic.lean
 test_unchanged
 # private `match`es do not matter.

View File

@@ -1,5 +1,5 @@
 #!/usr/bin/env bash
-set -euo pipefail
+source ../../env_test.sh
 # Test that Lean will use the specified olean from `setup.json`
 lean Dep.lean -o Dep.olean

View File

@@ -1,10 +1,11 @@
 #!/usr/bin/env bash
+source ../../env_test.sh
 rm -rf .lake/build
 lake build release
 # Create a named pipe for communication
-PIPE=$(mktemp -u)
+PIPE="$TMP_DIR/pipe"
 mkfifo "$PIPE"
 # Run release in the background, redirect stdout to the pipe
@@ -36,13 +37,10 @@ echo "Started process with PID: $PID"
   fi
 }
-# Clean up the pipe
-rm -f "$PIPE"
 # Wait for process to finish
 echo "Waiting for process $PID to finish..."
 if wait "$PID"; then
   echo "Process completed successfully"
 else
   echo "Process exited with code $?"
 fi

View File

@@ -1,5 +1,5 @@
 #!/usr/bin/env bash
-set -e
+source ../../env_test.sh
 rm -rf .lake/build
 LEAN_ABORT_ON_PANIC=1 lake build

View File

@@ -1,6 +1,5 @@
 #!/usr/bin/env bash
+source ../../env_test.sh
-exit 0 # TODO: flaky test disabled
 # We need a package test because we need multiple files with imports.
 # Currently the other package tests all succeed,
@@ -12,6 +11,8 @@ exit 0 # TODO: flaky test disabled
 rm -rf .lake/build
+
+# TODO Use and/or emulate the helper functions from util.sh?
 # Function to process the output
 verify_output() {
   awk '/error: .*lean:/, /error: Lean exited/' |

tests/pkg/user_attr/run_test Executable file
View File

@@ -0,0 +1,5 @@
+#!/usr/bin/env bash
+source ../../env_test.sh
+
+rm -rf .lake/build
+lake build

View File

@@ -1,4 +0,0 @@
-#!/usr/bin/env bash
-
-rm -rf .lake/build
-lake build

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env bash
+source ../../env_test.sh
 rm -rf .lake/build
 lake exe user_attr

tests/pkg/user_ext/run_test Executable file
View File

@@ -0,0 +1,7 @@
+#!/usr/bin/env bash
+source ../../env_test.sh
+rm -rf .lake/build
+
+capture lake build -v
+check_out_contains 'hello, test, world'

View File

@@ -1,6 +0,0 @@
-#!/usr/bin/env bash
-exit 0 # TODO: test started being nondeterministic
-
-rm -rf .lake/build
-lake build -v 2>&1 | grep 'hello, test, world'

tests/pkg/user_opt/run_test Executable file
View File

@@ -0,0 +1,5 @@
+#!/usr/bin/env bash
+source ../../env_test.sh
+
+rm -rf .lake/build
+lake build

View File

@@ -1,4 +0,0 @@
-#!/usr/bin/env bash
-
-rm -rf .lake/build
-lake build

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env bash
+source ../../env_test.sh
 set -euo pipefail
 # Determine shared library extension
@@ -51,8 +52,7 @@ lake env lean --run testEnv.lean $ENV_PLUGIN 2>&1 | diff <(echo "$ENV_EXPECTED_O
 # Test failure to load environment plugin without `withImporting`
 lean --run test.lean $ENV_PLUGIN >/dev/null 2>&1 && {
-  echo "Loading environment plugin without importing succeeded unexpectedly."
-  exit 1
+  fail "Loading environment plugin without importing succeeded unexpectedly."
 } || true
 # Print success

View File

@@ -1,6 +1,5 @@
 #!/usr/bin/env bash
-source ../../lake/tests/common.sh
+source ../../env_test.sh
-./clean.sh
 # This directory contains a unified version of the "ring example"
 # developed by Kim across the following 4 repositories:
@@ -25,9 +24,32 @@
 # Since committing a Git repository to a Git repository is not well-supported,
 # We reinitialize the repositories on each test.
+# Stolen from the lake test suite
+UNAME="$(uname)"
+if [ "$UNAME" = Darwin ] || [ "$UNAME" = FreeBSD ]; then
+  sed_i() { sed -i '' "$@"; }
+else
+  sed_i() { sed -i "$@"; }
+fi
+
+# Stolen from the lake test suite
+init_git() {
+  echo "# initialize test repository"
+  set -x
+  git init
+  git checkout -b master
+  git config user.name test
+  git config user.email test@example.com
+  git add --all
+  git commit -m "initial commit"
+  set +x
+}
+
+./clean.sh
 pushd DiamondExample-A
 sed_i s/add_left_comm/poorly_named_lemma/ DiamondExampleA/Ring/Lemmas.lean
-$LAKE update
+lake update
 init_git
 git tag v1
 sed_i s/poorly_named_lemma/add_left_comm/ DiamondExampleA/Ring/Lemmas.lean
@@ -36,30 +58,30 @@ git tag v2
 popd
 pushd DiamondExample-B
-$LAKE update
+lake update
 init_git
 popd
 pushd DiamondExample-C
 sed_i s/v2/v1/ lakefile.toml
 sed_i s/add_left_comm/poorly_named_lemma/ DiamondExampleC/MainResult.lean
-$LAKE update
+lake update
 init_git
 git tag v1
 sed_i s/v1/v2/ lakefile.toml
 sed_i s/poorly_named_lemma/add_left_comm/ DiamondExampleC/MainResult.lean
-$LAKE update
+lake update
 git commit -am "use v2"
 git tag v2
 popd
 pushd DiamondExample-D
 sed_i s/v2/v1/ lakefile.toml
-$LAKE update
+lake update
 init_git
 git tag v1
 sed_i s/v1/v2/ lakefile.toml
-$LAKE update
+lake update
 git commit -am "use v2"
 git tag v2
 popd
@@ -72,20 +94,23 @@ pushd DiamondExample-D
 # Test build succeeds on v1
 git switch v1 --detach
-test_run build
+run lake build
 # Test build fails on v2
 git switch v2 --detach
-test_err 'Unknown identifier `poorly_named_lemma`' build
+capture_fail lake build
+check_out_contains 'Unknown identifier `poorly_named_lemma`'
 # Test build with different package names
 sed_i '/name/ s/A/A-v1/' .lake/packages/DiamondExample-B/lakefile.toml
 sed_i '/name/ s/A/A-v2/' .lake/packages/DiamondExample-C/lakefile.toml
-test_run update
+run lake update
-test_err 'could not disambiguate the module `DiamondExampleA.Ring.Lemmas`' build
+capture_fail lake build
+check_out_contains 'could not disambiguate the module `DiamondExampleA.Ring.Lemmas`'
 popd
 # Cleanup
 rm -rf DiamondExample-*/.git
-rm -f produced.out
+rm -rf DiamondExample-*/.lake

View File

@@ -1,13 +1,11 @@
 #!/usr/bin/env bash
 source ../env_test.sh
-source "$TEST_DIR/util.sh"
 run_before "$1"
-exec_capture "$1" \
+capture_only "$1" \
   lean -Dlinter.all=false --run "$1"
-check_exit "$1"
-check_out "$1"
+check_exit_is_success
+check_out_file
 run_after "$1"

View File

@@ -1,16 +1,14 @@
 #!/usr/bin/env bash
 source ../env_test.sh
-source "$TEST_DIR/util.sh"
 # IO.Process.exit (used by the file worker) seems to be incompatible with LSAN
 # TODO: investigate or work around
 export ASAN_OPTIONS=detect_leaks=0
 # these tests don't have to succeed
-exec_capture "$1" \
+capture_only "$1" \
   lean -Dlinter.all=false --run run_test.lean "$1"
-normalize_mvar_suffixes "$1"
-normalize_reference_urls "$1"
-check_exit "$1"
-check_out "$1"
+check_exit_is_success
+normalize_mvar_suffixes
+normalize_reference_urls
+check_out_file

View File

@@ -1,19 +1,121 @@
 set -e
-DIFF="diff -au --strip-trailing-cr --color=always"
 ulimit -S -s ${TEST_STACK_SIZE:-8192}
+DIFF="diff -au --strip-trailing-cr --color=always"
+
+# Temporary directory to store unnamed output files
+TMP_DIR="$(mktemp --tmpdir --directory lean4-test-suite.XXXXXXXXXX)"
+trap 'rm -rf "$TMP_DIR"' EXIT
+
 function fail {
-  echo "$1"
+  echo "TEST FAILED: $1"
   exit 1
 }
-function fail_if_success {
-  if "$@"; then
-    fail "unexpected success: $*"
+function check_exit_is_success {
+  if [[ $EXIT != 0 ]]; then
+    fail "Expected command to succeed, got exit code $EXIT"
   fi
 }
+function check_exit_is_fail {
+  if [[ $EXIT == 0 ]]; then
+    fail "Expected command to fail, got exit code $EXIT"
+  fi
+}
+function check_exit_is {
+  if [[ $1 == "nonzero" ]]; then
+    check_exit_is_fail
+  elif [[ $EXIT != $1 ]]; then
+    fail "Expected exit code $1, got exit code $EXIT"
+  fi
+}
+function check_out_contains {
+  if ! grep -Fq "$1" "$CAPTURED.out.produced"; then
+    fail "Expected output to contain '$1'"
+  fi
+}
+function check_out_matches {
+  if ! grep -Eq "$1" "$CAPTURED.out.produced"; then
+    fail "Expected output to match regex '$1'"
+  fi
+}
+function check_out_file {
+  if [[ -f "$CAPTURED.out.ignored" ]]; then
+    echo "Output ignored, skipping check"
+  elif [[ -f "$CAPTURED.out.expected" ]]; then
+    $DIFF -- "$CAPTURED.out.expected" "$CAPTURED.out.produced" || fail "Unexpected output"
+  else
+    echo -n | $DIFF -- - "$CAPTURED.out.produced" || fail "Unexpected output"
+  fi
+}
+
+# Run a command.
+# Sets $EXIT to the exit code.
+function run_only {
+  EXIT=0; (set -x; "${@}") || EXIT="$?"
+}
+# Run a command, failing if the command fails.
+# Sets $EXIT to the exit code.
+function run {
+  run_only "${@}"
+  check_exit_is_success
+}
+# Run a command, failing if the command succeeds.
+# Sets $EXIT to the exit code.
+function run_fail {
+  run_only "${@}"
+  check_exit_is_fail
+}
+# Run a command, failing if the exit code does not match the expected value.
+# Sets $EXIT to the exit code.
+function run_exit {
+  EXPECTED="$1"; shift
+  run_only "${@}"
+  check_exit_is "$EXPECTED"
+}
+
+# Run a command, capturing its output.
+# The first argument specifies the output file path.
+# If it is empty, a temporary file is used.
+# Sets $EXIT to the exit code and $CAPTURED to the output file path.
+function capture_only {
+  # No need to mktemp the file to prevent collisions during parallel test runs
+  # since $TMP_DIR is already specific to this test run.
+  CAPTURED="${1:-"$TMP_DIR/tmp"}"; shift
+  EXIT=0; (set -x; "${@}" > "$CAPTURED.out.produced" 2>&1) || EXIT="$?"
+}
+# Run a command, capturing its output and failing if the command fails.
+# Sets $EXIT to the exit code and $CAPTURED to the output file path.
+function capture {
+  capture_only "" "${@}"
+  check_exit_is_success
+}
+# Run a command, capturing its output and failing if the command succeeds.
+# Sets $EXIT to the exit code and $CAPTURED to the output file path.
+function capture_fail {
+  capture_only "" "${@}"
+  check_exit_is_fail
+}
+# Run a command, capturing its output and failing if the command does not match the expected exit code.
+# Sets $EXIT to the exit code and $CAPTURED to the output file path.
+function capture_exit {
+  EXPECTED="$1"; shift
+  capture_only "" "${@}"
+  check_exit_is "$EXPECTED"
+}
+
 function source_init {
   if [[ -f "$1.init.sh" ]]; then
     source "$1.init.sh"
@@ -32,63 +134,28 @@ function run_after {
   fi
 }
-function exec_capture {
-  # backtraces are system-specific, strip them (might be captured in `#guard_msgs`)
-  ERROR=0
-  LEAN_BACKTRACE=0 "${@:2}" > "$1.out.produced" 2>&1 || ERROR="$?"
-  echo "$ERROR" > "$1.exit.produced"
-}
-function check_exit {
-  if [[ -f "$1.exit.expected" ]]; then
-    EXPECTED="$(< "$1.exit.expected")"
-  else
-    EXPECTED="${2:-0}"
-  fi
-  ACTUAL="$(< "$1.exit.produced")"
-  if [[ "$EXPECTED" == "nonzero" ]]; then
-    if [[ "$ACTUAL" == "0" ]]; then
-      fail "$1: Expected nonzero exit code, got 0"
-    fi
-  elif [[ "$EXPECTED" != "$ACTUAL" ]]; then
-    fail "$1: Expected exit code $EXPECTED, got $ACTUAL"
-  fi
-}
-function check_out {
-  if [[ -f "$1.out.ignored" ]]; then
-    echo "Output ignored, skipping check"
-  elif [[ -f "$1.out.expected" ]]; then
-    $DIFF -- "$1.out.expected" "$1.out.produced" || fail "$1: Unexpected output"
-  else
-    echo -n | $DIFF -- - "$1.out.produced" || fail "$1: Unexpected output"
-  fi
-}
 # mvar suffixes like in `?m.123` are deterministic but prone to change on minor changes, so strip them
 function normalize_mvar_suffixes {
   # Sed on macOS does not support \w.
-  perl -p -i -e 's/(\?(\w|_\w+))\.[0-9]+/\1/g' "$1.out.produced"
+  perl -p -i -e 's/(\?(\w|_\w+))\.[0-9]+/\1/g' "$CAPTURED.out.produced"
 }
 # similarly, links to the language reference may have URL components depending on the toolchain, so normalize those
 function normalize_reference_urls {
-  perl -p -i -e 's#https://lean-lang\.org/doc/reference/(v?[0-9.]+(-rc[0-9]+)?|latest)#REFERENCE#g' "$1.out.produced"
+  perl -p -i -e 's#https://lean-lang\.org/doc/reference/(v?[0-9.]+(-rc[0-9]+)?|latest)#REFERENCE#g' "$CAPTURED.out.produced"
 }
 function normalize_measurements {
   # Sed on macOS does not support \S.
-  perl -p -i -e 's/^measurement: (\S+) \S+( \S+)?$/measurement: \1 .../' "$1.out.produced"
+  perl -p -i -e 's/^measurement: (\S+) \S+( \S+)?$/measurement: \1 .../' "$CAPTURED.out.produced"
 }
 function extract_measurements {
-  grep -E '^measurement: \S+ \S+( \S+)?$' "$1.out.produced" \
-    | jq -R --arg topic "$2" 'split(" ") | { metric: "\($topic)//\(.[1])", value: .[2]|tonumber, unit: .[3] }' -c \
-    >> "$1.measurements.jsonl"
-  normalize_measurements "$1"
+  grep -E '^measurement: \S+ \S+( \S+)?$' "$CAPTURED.out.produced" \
+    | jq -R --arg topic "$1" 'split(" ") | { metric: "\($topic)//\(.[1])", value: .[2]|tonumber, unit: .[3] }' -c \
+    >> "$CAPTURED.measurements.jsonl"
+  normalize_measurements "$CAPTURED"
 }
 function set_stack_size_to_maximum {
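Taken together, a migrated test script first runs a command under `capture_only`/`capture`/`capture_fail` and then applies `check_*` assertions against the `$CAPTURED` output file. The sketch below is a self-contained illustration of that flow with simplified helper bodies (the real util.sh versions also trace commands with `set -x` and use a per-run temp directory), using `echo` as a stand-in for the command under test:

```shell
#!/usr/bin/env bash
# Simplified sketch of the util.sh capture/check flow (illustration only).
TMP_DIR="$(mktemp -d)"
trap 'rm -rf "$TMP_DIR"' EXIT

function fail { echo "TEST FAILED: $1"; exit 1; }
function capture_only {
  # First argument names the output file; empty means use a temp file.
  CAPTURED="${1:-"$TMP_DIR/tmp"}"; shift
  EXIT=0; "$@" > "$CAPTURED.out.produced" 2>&1 || EXIT="$?"
}
function check_exit_is_success {
  [[ $EXIT == 0 ]] || fail "Expected command to succeed, got exit code $EXIT"
}
function check_out_contains {
  grep -Fq "$1" "$CAPTURED.out.produced" || fail "Expected output to contain '$1'"
}

capture_only "" echo "hello, test, world"
check_exit_is_success
check_out_contains "hello, test, world"
echo "ok"
```

Splitting "run" from "check" this way is what lets each test script pick exactly the assertions it needs (exit code only, a substring, or a full `.out.expected` diff) without a combinatorial set of wrapper functions.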