Commit 93037bab by Edward Thomson

tests: add benchmark tests

Add a benchmark test suite that wraps hyperfine and is suitable for
producing data about test runs of a CLI or A/B testing CLIs.
parent e48bb3b7
These are the unit and integration tests for the libgit2 project.
* `benchmarks`
These are benchmark tests that exercise the CLI.
* `clar`
This is [clar](https://github.com/clar-test/clar), the common test framework.
* `headertest`
# libgit2 benchmarks
This folder contains the individual benchmark tests for libgit2,
meant for understanding the performance characteristics of libgit2,
comparing your development code to the existing libgit2 code, or
comparing libgit2 to the git reference implementation.
## Running benchmark tests
Benchmark tests can be run in several different ways: running all
benchmarks, running one (or more) suite of benchmarks, or running a
single individual benchmark. You can target either an individual
version of a CLI, or you can A/B test a baseline CLI against a test
CLI.
### Specifying the command-line interface to test
By default, the `git` in your path is benchmarked. Use the
`-c` (or `--cli`) option to specify the command-line interface
to test.
Example: `libgit2_bench --cli git2_cli` will run the tests against
`git2_cli`.
### Running tests to compare two different implementations
You can compare a baseline command-line interface against a test
command-line interface using the `-b` (or `--baseline-cli`) option.
Example: `libgit2_bench --baseline-cli git --cli git2_cli` will
run the tests against both `git` and `git2_cli`.
### Running individual benchmark tests
Similar to how a test suite or individual test is specified in
[clar](https://github.com/clar-test/clar), the `-s` (or `--suite`)
option may be used to specify the suite or individual test to run.
Like clar, the suite and test name are separated by `::`, and like
clar, this is a prefix match.
Examples:
* `libgit2_bench -shash-object` will run the tests in the
`hash-object` suite.
* `libgit2_bench -shash-object::random_1kb` will run the
`hash-object::random_1kb` test.
* `libgit2_bench -shash-object::random` will run all the tests that
begin with `hash-object::random`.
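Under the hood, suite selection works by rewriting the suite argument into a filename prefix. The following sketch (with illustrative names, not taken from a real run) mirrors how the runner turns `::` into the `__` filename separator and prefix-matches benchmark files:

```bash
# Illustrative sketch of suite selection: "::" in the suite argument is
# rewritten to the "__" filename separator, then used as a prefix match.
SUITE="hash-object::random"
SUITE_PREFIX="${SUITE/::/__}"       # -> "hash-object__random"
TEST_FILE="hash-object__random_1kb"

if [[ "${TEST_FILE}" == "${SUITE_PREFIX}"* ]]; then
    echo "${TEST_FILE} is selected by suite ${SUITE}"
fi
```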
## Writing benchmark tests
Benchmark tests are meant to be easy to write. Each individual
benchmark is a shell script that can perform setup (for example,
creating or cloning a repository, or creating temporary files), run
the benchmark itself, and then tear down.
The `benchmark_helpers.sh` script provides many helpful utility
functions to allow for cross-platform benchmarking, as well as a
wrapper for `hyperfine` that is suited to testing libgit2.
Note that the helper script must be sourced at the beginning of the
benchmark test.
### Benchmark example
This simplistic example compares the speed of running the `git help`
command in the baseline CLI to the test CLI.
```bash
#!/bin/bash -e
# include the benchmark library
. "$(dirname "$0")/benchmark_helpers.sh"
# run the "help" command; this will benchmark `git2_cli help`
gitbench help
```
### Naming
The filename of the benchmark itself is important. A benchmark's
filename should be the name of the benchmark suite, followed by two
underscores, followed by the name of the benchmark. For example,
`hash-object__random_1kb` is the `random_1kb` test in the `hash-object`
suite.
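Given that convention, the suite and benchmark names can be recovered from a filename with ordinary shell parameter expansion; a small sketch using the example name above:

```bash
# Split a benchmark filename into suite and benchmark name at the
# first "__" separator.
FILE="hash-object__random_1kb"
SUITE_NAME="${FILE%%__*}"   # everything before the first "__"
BENCH_NAME="${FILE#*__}"    # everything after the first "__"
echo "${SUITE_NAME}::${BENCH_NAME}"   # → hash-object::random_1kb
```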
### Options
The `gitbench` function accepts several options.
* `--sandbox <path>`
The name of a test resource (in the `tests/resources` directory).
This will be copied as-is to the sandbox location before test
execution. This is copied _before_ the `prepare` script is run.
This option may be specified multiple times.
* `--repository <path>`
The name of a test resource repository (in the `tests/resources`
directory). This repository will be copied into a sandbox location
before test execution, and your test will run in this directory.
This is copied _before_ the `prepare` script is run.
* `--prepare <script>`
A script to run before each invocation of the test. It can set up
data for the test without that setup being included in the timing.
This script is run in bash on all platforms.
Several helper functions are available within the context of a prepare
script:
* `flush_disk_cache`
Calling this will flush the disk cache before each test run.
This should probably be run at the end of the `prepare` script.
* `create_random_file <path> [<size>]`
Calling this will populate a file at the given `path` with `size`
bytes of random data.
* `create_text_file <path> [<size>]`
Calling this will populate a file at the given `path` with `size`
bytes of predictable text, with the platform line endings. This
is preferred over random data as it's reproducible.
* `--warmup <n>`
Specifies that the test should run `n` times before actually measuring
the timing; useful for "warming up" a cache.
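To illustrate how a reproducible text file can be produced, here is a simplified, hypothetical sketch. This is not the real `create_text_file` helper described above (which also handles platform line endings and exact-size padding); it repeats a fixed line and trims the result, and the `truncate` call assumes GNU coreutils:

```bash
# Hypothetical simplified version of create_text_file: repeat a fixed
# line until the file is at least `size` bytes, then trim it exactly.
make_text_file() {
    local path="$1" size="${2:-1024}"
    local line="This is a reproducible text file."
    : > "${path}"
    while [ "$(wc -c < "${path}")" -lt "${size}" ]; do
        printf '%s\n' "${line}" >> "${path}"
    done
    truncate -s "${size}" "${path}"   # GNU coreutils
}

make_text_file /tmp/bench_demo.txt 100
wc -c < /tmp/bench_demo.txt   # → 100
```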
#!/bin/bash
set -eo pipefail
#
# parse the command line
#
usage() { echo "usage: $(basename "$0") [--cli <path>] [--baseline-cli <path>] [--suite <suite>] [--json <path>] [--zip <path>] [--verbose] [--debug]"; }
TEST_CLI="git"
BASELINE_CLI=
SUITE=
JSON_RESULT=
ZIP_RESULT=
OUTPUT_DIR=
VERBOSE=
DEBUG=
NEXT=
for a in "$@"; do
if [ "${NEXT}" = "cli" ]; then
TEST_CLI="${a}"
NEXT=
elif [ "${NEXT}" = "baseline-cli" ]; then
BASELINE_CLI="${a}"
NEXT=
elif [ "${NEXT}" = "suite" ]; then
SUITE="${a}"
NEXT=
elif [ "${NEXT}" = "json" ]; then
JSON_RESULT="${a}"
NEXT=
elif [ "${NEXT}" = "zip" ]; then
ZIP_RESULT="${a}"
NEXT=
elif [ "${NEXT}" = "output-dir" ]; then
OUTPUT_DIR="${a}"
NEXT=
elif [ "${a}" = "-c" ] || [ "${a}" = "--cli" ]; then
NEXT="cli"
elif [[ "${a}" == "-c"* ]]; then
TEST_CLI="${a/-c/}"
elif [ "${a}" = "-b" ] || [ "${a}" = "--baseline-cli" ]; then
NEXT="baseline-cli"
elif [[ "${a}" == "-b"* ]]; then
BASELINE_CLI="${a/-b/}"
elif [ "${a}" = "-s" ] || [ "${a}" = "--suite" ]; then
NEXT="suite"
elif [[ "${a}" == "-s"* ]]; then
SUITE="${a/-s/}"
elif [ "${a}" = "-v" ] || [ "${a}" == "--verbose" ]; then
VERBOSE=1
elif [ "${a}" == "--debug" ]; then
VERBOSE=1
DEBUG=1
elif [ "${a}" = "-j" ] || [ "${a}" == "--json" ]; then
NEXT="json"
elif [[ "${a}" == "-j"* ]]; then
JSON_RESULT="${a/-j/}"
elif [ "${a}" = "-z" ] || [ "${a}" == "--zip" ]; then
NEXT="zip"
elif [[ "${a}" == "-z"* ]]; then
ZIP_RESULT="${a/-z/}"
elif [ "${a}" = "--output-dir" ]; then
NEXT="output-dir"
else
echo "$(basename "$0"): unknown option: ${a}" 1>&2
usage 1>&2
exit 1
fi
done
if [ "${NEXT}" != "" ]; then
usage 1>&2
exit 1
fi
if [ "${OUTPUT_DIR}" = "" ]; then
OUTPUT_DIR="$(mktemp -d)"
CLEANUP_DIR=1
fi
#
# collect some information about the test environment
#
SYSTEM_OS=$(uname -s)
if [ "${SYSTEM_OS}" = "Darwin" ]; then SYSTEM_OS="macOS"; fi
SYSTEM_KERNEL=$(uname -v)
fullpath() {
if [[ "$(uname -s)" == "MINGW"* && $(cygpath -u "$1") == "/"* ]]; then
echo "$1"
elif [[ "$1" == "/"* ]]; then
echo "$1"
else
which "$1"
fi
}
cli_version() {
if [[ "$(uname -s)" == "MINGW"* ]]; then
$(cygpath -u "$1") --version
else
"$1" --version
fi
}
TEST_CLI_NAME=$(basename "${TEST_CLI}")
TEST_CLI_PATH=$(fullpath "${TEST_CLI}")
TEST_CLI_VERSION=$(cli_version "${TEST_CLI}")
if [ "${BASELINE_CLI}" != "" ]; then
BASELINE_CLI_NAME=$(basename "${BASELINE_CLI}")
BASELINE_CLI_PATH=$(fullpath "${BASELINE_CLI}")
BASELINE_CLI_VERSION=$(cli_version "${BASELINE_CLI}")
fi
#
# run the benchmarks
#
echo "##############################################################################"
if [ "${SUITE}" != "" ]; then
SUITE_PREFIX="${SUITE/::/__}"
echo "## Running ${SUITE} benchmarks"
else
echo "## Running all benchmarks"
fi
echo "##############################################################################"
echo ""
if [ "${BASELINE_CLI}" != "" ]; then
echo "# Baseline CLI: ${BASELINE_CLI} (${BASELINE_CLI_VERSION})"
fi
echo "# Test CLI: ${TEST_CLI} (${TEST_CLI_VERSION})"
echo ""
BENCHMARK_DIR=${BENCHMARK_DIR:=$(dirname "$0")}
ANY_FOUND=
ANY_FAILED=
indent() { sed "s/^/ /"; }
time_in_ms() { if [ "$(uname -s)" = "Darwin" ]; then date "+%s000"; else echo $(($(date "+%s%N") / 1000000)); fi; }
humanize_secs() {
units=('s' 'ms' 'us' 'ns')
unit=0
time="${1}"
if [ "${time}" = "" ]; then
echo ""
return
fi
# bash doesn't do floating point arithmetic. ick.
while [[ "${time}" == "0."* ]] && [ "$((unit+1))" != "${#units[*]}" ]; do
time="$(echo | awk "{ print ${time} * 1000 }")"
unit=$((unit+1))
done
echo "${time} ${units[$unit]}"
}
TIME_START=$(time_in_ms)
for TEST_PATH in "${BENCHMARK_DIR}"/*; do
TEST_FILE=$(basename "${TEST_PATH}")
if [ ! -f "${TEST_PATH}" ] || [ ! -x "${TEST_PATH}" ]; then
continue
fi
if [[ "${TEST_FILE}" != *"__"* ]]; then
continue
fi
if [[ "${TEST_FILE}" != "${SUITE_PREFIX}"* ]]; then
continue
fi
ANY_FOUND=1
TEST_NAME="${TEST_FILE/__/::}"
echo -n "${TEST_NAME}:"
if [ "${VERBOSE}" = "1" ]; then
echo ""
else
echo -n " "
fi
if [ "${DEBUG}" = "1" ]; then
SHOW_OUTPUT="--show-output"
fi
OUTPUT_FILE="${OUTPUT_DIR}/${TEST_FILE}.out"
JSON_FILE="${OUTPUT_DIR}/${TEST_FILE}.json"
ERROR_FILE="${OUTPUT_DIR}/${TEST_FILE}.err"
FAILED=
"${TEST_PATH}" --cli "${TEST_CLI}" --baseline-cli "${BASELINE_CLI}" --json "${JSON_FILE}" ${SHOW_OUTPUT} >"${OUTPUT_FILE}" 2>"${ERROR_FILE}" || FAILED=1
if [ "${FAILED}" = "1" ]; then
if [ "${VERBOSE}" != "1" ]; then
echo "failed!"
fi
indent < "${ERROR_FILE}"
ANY_FAILED=1
continue
fi
# in verbose mode, just print the hyperfine results; otherwise,
# pull the useful information out of its json and summarize it
if [ "${VERBOSE}" = "1" ]; then
indent < "${OUTPUT_FILE}"
else
jq -r '[ .results[0].mean, .results[0].stddev, .results[1].mean, .results[1].stddev ] | @tsv' < "${JSON_FILE}" | while IFS=$'\t' read -r one_mean one_stddev two_mean two_stddev; do
one_mean=$(humanize_secs "${one_mean}")
one_stddev=$(humanize_secs "${one_stddev}")
if [ "${two_mean}" != "" ]; then
two_mean=$(humanize_secs "${two_mean}")
two_stddev=$(humanize_secs "${two_stddev}")
echo "${one_mean} ± ${one_stddev} vs ${two_mean} ± ${two_stddev}"
else
echo "${one_mean} ± ${one_stddev}"
fi
done
fi
# add our metadata to the hyperfine json result
jq ". |= { \"name\": \"${TEST_NAME}\" } + ." < "${JSON_FILE}" > "${JSON_FILE}.new" && mv "${JSON_FILE}.new" "${JSON_FILE}"
done
TIME_END=$(time_in_ms)
if [ "$ANY_FOUND" != "1" ]; then
echo ""
echo "error: no benchmark suite \"${SUITE}\"."
echo ""
exit 1
fi
escape() {
echo "${1//\\/\\\\}"
}
# combine all the individual benchmark results into a single json file
if [ "${JSON_RESULT}" != "" ]; then
if [ "${VERBOSE}" = "1" ]; then
echo ""
echo "# Writing JSON results: ${JSON_RESULT}"
fi
SYSTEM_JSON="{ \"os\": \"${SYSTEM_OS}\", \"kernel\": \"${SYSTEM_KERNEL}\" }"
TIME_JSON="{ \"start\": ${TIME_START}, \"end\": ${TIME_END} }"
TEST_CLI_JSON="{ \"name\": \"${TEST_CLI_NAME}\", \"path\": \"$(escape "${TEST_CLI_PATH}")\", \"version\": \"${TEST_CLI_VERSION}\" }"
BASELINE_CLI_JSON="{ \"name\": \"${BASELINE_CLI_NAME}\", \"path\": \"$(escape "${BASELINE_CLI_PATH}")\", \"version\": \"${BASELINE_CLI_VERSION}\" }"
if [ "${BASELINE_CLI}" != "" ]; then
EXECUTOR_JSON="{ \"baseline\": ${BASELINE_CLI_JSON}, \"cli\": ${TEST_CLI_JSON} }"
else
EXECUTOR_JSON="{ \"cli\": ${TEST_CLI_JSON} }"
fi
# add our metadata to all the test results
jq -n "{ \"system\": ${SYSTEM_JSON}, \"time\": ${TIME_JSON}, \"executor\": ${EXECUTOR_JSON}, \"tests\": [inputs] }" "${OUTPUT_DIR}"/*.json > "${JSON_RESULT}"
fi
# combine all the data into a zip if requested
if [ "${ZIP_RESULT}" != "" ]; then
if [ "${VERBOSE}" = "1" ]; then
if [ "${JSON_RESULT}" = "" ]; then echo ""; fi
echo "# Writing ZIP results: ${ZIP_RESULT}"
fi
zip -jr "${ZIP_RESULT}" "${OUTPUT_DIR}" >/dev/null
fi
if [ "$CLEANUP_DIR" = "1" ]; then
rm -f "${OUTPUT_DIR}"/*.out
rm -f "${OUTPUT_DIR}"/*.err
rm -f "${OUTPUT_DIR}"/*.json
rmdir "${OUTPUT_DIR}"
fi
if [ "$ANY_FAILED" = "1" ]; then
exit 1
fi
# variables that benchmark tests can set
#
set -eo pipefail
#
# command-line parsing
#
usage() { echo "usage: $(basename "$0") [--cli <path>] [--baseline-cli <path>] [--output-style <style>] [--json <path>]"; }
NEXT=
BASELINE_CLI=
TEST_CLI="git"
JSON=
SHOW_OUTPUT=
if [ "$CI" != "" ]; then
OUTPUT_STYLE="color"
else
OUTPUT_STYLE="auto"
fi
#
# parse the arguments to the outer script that's including us; these are arguments that
# the `benchmark.sh` passes (or that a user could specify when running an individual test)
#
for a in "$@"; do
if [ "${NEXT}" = "cli" ]; then
TEST_CLI="${a}"
NEXT=
elif [ "${NEXT}" = "baseline-cli" ]; then
BASELINE_CLI="${a}"
NEXT=
elif [ "${NEXT}" = "output-style" ]; then
OUTPUT_STYLE="${a}"
NEXT=
elif [ "${NEXT}" = "json" ]; then
JSON="${a}"
NEXT=
elif [ "${a}" = "-c" ] || [ "${a}" = "--cli" ]; then
NEXT="cli"
elif [[ "${a}" == "-c"* ]]; then
TEST_CLI="${a/-c/}"
elif [ "${a}" = "-b" ] || [ "${a}" = "--baseline-cli" ]; then
NEXT="baseline-cli"
elif [[ "${a}" == "-b"* ]]; then
BASELINE_CLI="${a/-b/}"
elif [ "${a}" == "--output-style" ]; then
NEXT="output-style"
elif [ "${a}" = "-j" ] || [ "${a}" = "--json" ]; then
NEXT="json"
elif [[ "${a}" == "-j"* ]]; then
JSON="${a/-j/}"
elif [ "${a}" = "--show-output" ]; then
SHOW_OUTPUT=1
OUTPUT_STYLE=
else
echo "$(basename "$0"): unknown option: ${a}" 1>&2
usage 1>&2
exit 1
fi
done
if [ "${NEXT}" != "" ]; then
echo "$(basename "$0"): option requires a value: --${NEXT}" 1>&2
usage 1>&2
exit 1
fi
fullpath() {
FULLPATH="${1}"
if [[ "$(uname -s)" == "MINGW"* ]]; then FULLPATH="$(cygpath -u "${1}")"; fi
if [[ "${FULLPATH}" != *"/"* ]]; then
FULLPATH="$(which "${FULLPATH}")"
if [ "$?" != "0" ]; then exit 1; fi
else
FULLPATH="$(cd "$(dirname "${FULLPATH}")" && pwd)/$(basename "${FULLPATH}")"
fi
if [[ "$(uname -s)" == "MINGW"* ]]; then FULLPATH="$(cygpath -w "${FULLPATH}")"; fi
echo "${FULLPATH}"
}
resources_dir() {
cd "$(dirname "$0")/../resources" && pwd
}
temp_dir() {
if [ "$(uname -s)" == "Darwin" ]; then
mktemp -dt libgit2_bench
else
mktemp -dt libgit2_bench.XXXXXXX
fi
}
create_preparescript() {
# add some functions for users to use in preparation
cat > "${SANDBOX_DIR}/prepare.sh" << EOF
set -e
SANDBOX_DIR="${SANDBOX_DIR}"
RESOURCES_DIR="$(resources_dir)"
create_text_file() {
FILENAME="\${1}"
SIZE="\${2}"
if [ "\${FILENAME}" = "" ]; then
echo "usage: create_text_file <name> [size]" 1>&2
exit 1
fi
if [ "\${SIZE}" = "" ]; then
SIZE="1024"
fi
if [[ "\$(uname -s)" == "MINGW"* ]]; then
EOL="\r\n"
EOL_LEN="2"
CONTENTS="This is a reproducible text file. (With DOS line endings.)\r\n"
CONTENTS_LEN="60"
else
EOL="\n"
EOL_LEN="1"
CONTENTS="This is a reproducible text file. (With Unix line endings.)\n"
CONTENTS_LEN="60"
fi
rm -f "\${FILENAME:?}"
touch "\${FILENAME}"
if [ "\${SIZE}" -ge "\$((\${CONTENTS_LEN} + \${EOL_LEN}))" ]; then
# seed the file with one block of the contents, which the dd
# below replicates to fill the requested size
echo -ne "\${CONTENTS}" > "\${FILENAME}"
SIZE="\$((\${SIZE} - \${CONTENTS_LEN}))"
COUNT="\$(((\${SIZE} - \${EOL_LEN}) / \${CONTENTS_LEN}))"
if [ "\${SIZE}" -gt "\${EOL_LEN}" ]; then
dd if="\${FILENAME}" of="\${FILENAME}" bs="\${CONTENTS_LEN}" seek=1 count="\${COUNT}" 2>/dev/null
fi
SIZE="\$((\${SIZE} - (\${COUNT} * \${CONTENTS_LEN})))"
fi
while [ "\${SIZE}" -gt "\${EOL_LEN}" ]; do
echo -ne "." >> "\${FILENAME}"
SIZE="\$((\${SIZE} - 1))"
done
if [ "\${SIZE}" = "\${EOL_LEN}" ]; then
echo -ne "\${EOL}" >> "\${FILENAME}"
SIZE="\$((\${SIZE} - \${EOL_LEN}))"
else
while [ "\${SIZE}" -gt "0" ]; do
echo -ne "." >> "\${FILENAME}"
SIZE="\$((\${SIZE} - 1))"
done
fi
}
create_random_file() {
FILENAME="\${1}"
SIZE="\${2}"
if [ "\${FILENAME}" = "" ]; then
echo "usage: create_random_file <name> [size]" 1>&2
exit 1
fi
if [ "\${SIZE}" = "" ]; then
SIZE="1024"
fi
dd if="/dev/urandom" of="\${FILENAME}" bs="\${SIZE}" count=1 2>/dev/null
}
flush_disk_cache() {
if [ "\$(uname -s)" = "Darwin" ]; then
sync && sudo purge
elif [ "\$(uname -s)" = "Linux" ]; then
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null
elif [[ "\$(uname -s)" == "MINGW"* ]]; then
PurgeStandbyList
fi
}
sandbox() {
RESOURCE="\${1}"
if [ "\${RESOURCE}" = "" ]; then
echo "usage: sandbox <path>" 1>&2
exit 1
fi
if [ ! -d "\${RESOURCES_DIR}/\${RESOURCE}" ]; then
echo "sandbox: the resource \"\${RESOURCE}\" does not exist"
exit 1
fi
rm -rf "\${SANDBOX_DIR:?}/\${RESOURCE}"
cp -R "\${RESOURCES_DIR}/\${RESOURCE}" "\${SANDBOX_DIR}/"
}
sandbox_repo() {
RESOURCE="\${1}"
sandbox "\${RESOURCE}"
if [ -d "\${SANDBOX_DIR}/\${RESOURCE}/.gitted" ]; then
mv "\${SANDBOX_DIR}/\${RESOURCE}/.gitted" "\${SANDBOX_DIR}/\${RESOURCE}/.git";
fi
if [ -f "\${SANDBOX_DIR}/\${RESOURCE}/gitattributes" ]; then
mv "\${SANDBOX_DIR}/\${RESOURCE}/gitattributes" "\${SANDBOX_DIR}/\${RESOURCE}/.gitattributes";
fi
if [ -f "\${SANDBOX_DIR}/\${RESOURCE}/gitignore" ]; then
mv "\${SANDBOX_DIR}/\${RESOURCE}/gitignore" "\${SANDBOX_DIR}/\${RESOURCE}/.gitignore";
fi
}
cd "\${SANDBOX_DIR}"
EOF
# copy any requested sandbox resources and repository resource into
# place before the caller's prepare steps run
for resource in "${SANDBOX[@]}"; do
echo "sandbox \"${resource}\"" >> "${SANDBOX_DIR}/prepare.sh"
done
if [ "${REPOSITORY}" != "" ]; then
echo "sandbox_repo \"${REPOSITORY}\"" >> "${SANDBOX_DIR}/prepare.sh"
fi
if [ "${PREPARE}" != "" ]; then
echo "" >> "${SANDBOX_DIR}/prepare.sh"
echo "${PREPARE}" >> "${SANDBOX_DIR}/prepare.sh"
fi
echo "${SANDBOX_DIR}/prepare.sh"
}
create_runscript() {
SCRIPT_NAME="${1}"; shift
CLI_PATH="${1}"; shift
if [[ "${CHDIR}" = "/"* ]]; then
START_DIR="${CHDIR}"
elif [ "${CHDIR}" != "" ]; then
START_DIR="${SANDBOX_DIR}/${CHDIR}"
else
START_DIR="${SANDBOX_DIR}"
fi
# our run script starts by chdir'ing to the sandbox or repository directory
echo -n "cd \"${START_DIR}\" && \"${CLI_PATH}\"" > "${SANDBOX_DIR}/${SCRIPT_NAME}.sh"
for a in "$@"; do
echo -n " \"${a}\"" >> "${SANDBOX_DIR}/${SCRIPT_NAME}.sh"
done
echo "${SANDBOX_DIR}/${SCRIPT_NAME}.sh"
}
gitbench_usage() { echo "usage: gitbench [--sandbox <resource>] [--repository <resource>] [--prepare <script>] [--chdir <dir>] [--warmup <n>] <command>..."; }
#
# this is the function that the outer script calls to actually do the sandboxing and
# invocation of hyperfine.
#
gitbench() {
NEXT=
# this test should run the given command in preparation of the tests
# this preparation script will be run _after_ repository creation and
# _before_ flushing the disk cache
PREPARE=
# this test should run within the given directory; this is a
# relative path beneath the sandbox directory.
CHDIR=
# resources to copy into the sandbox before the test, and an
# optional repository resource to copy and run the test within
SANDBOX=()
REPOSITORY=
# this test should run `n` warmups
WARMUP=0
if [ "$*" = "" ]; then
gitbench_usage 1>&2
exit 1
fi
for a in "$@"; do
if [ "${NEXT}" = "warmup" ]; then
WARMUP="${a}"
NEXT=
elif [ "${NEXT}" = "prepare" ]; then
PREPARE="${a}"
NEXT=
elif [ "${NEXT}" = "chdir" ]; then
CHDIR="${a}"
NEXT=
elif [ "${NEXT}" = "sandbox" ]; then
SANDBOX+=("${a}")
NEXT=
elif [ "${NEXT}" = "repository" ]; then
REPOSITORY="${a}"
NEXT=
elif [ "${a}" = "--warmup" ]; then
NEXT="warmup"
elif [ "${a}" = "--prepare" ]; then
NEXT="prepare"
elif [ "${a}" = "--chdir" ]; then
NEXT="chdir"
elif [ "${a}" = "--sandbox" ]; then
NEXT="sandbox"
elif [ "${a}" = "--repository" ]; then
NEXT="repository"
elif [[ "${a}" == "--"* ]]; then
echo "unknown argument: \"${a}\"" 1>&2
gitbench_usage 1>&2
exit 1
else
break
fi
shift
done
if [ "${NEXT}" != "" ]; then
echo "$(basename "$0"): option requires a value: --${NEXT}" 1>&2
gitbench_usage 1>&2
exit 1
fi
# when a repository resource is given, run the test inside it by default
if [ "${CHDIR}" = "" ]; then
CHDIR="${REPOSITORY}"
fi
# sanity check
for a in "${SANDBOX[@]}"; do
if [ ! -d "$(resources_dir)/${a}" ]; then
echo "$0: no resource '${a}' found" 1>&2
exit 1
fi
done
if [ "$REPOSITORY" != "" ]; then
if [ ! -d "$(resources_dir)/${REPOSITORY}" ]; then
echo "$0: no repository resource '${REPOSITORY}' found" 1>&2
exit 1
fi
fi
# set up our sandboxing
SANDBOX_DIR="$(temp_dir)"
if [ "${BASELINE_CLI}" != "" ]; then
BASELINE_CLI_PATH=$(fullpath "${BASELINE_CLI}")
BASELINE_RUN_SCRIPT=$(create_runscript "baseline" "${BASELINE_CLI_PATH}" "$@")
fi
TEST_CLI_PATH=$(fullpath "${TEST_CLI}")
TEST_RUN_SCRIPT=$(create_runscript "test" "${TEST_CLI_PATH}" "$@")
PREPARE_SCRIPT="$(create_preparescript)"
ARGUMENTS=("--prepare" "bash ${PREPARE_SCRIPT}" "--warmup" "${WARMUP}")
if [ "${OUTPUT_STYLE}" != "" ]; then
ARGUMENTS+=("--style" "${OUTPUT_STYLE}")
fi
if [ "${SHOW_OUTPUT}" != "" ]; then
ARGUMENTS+=("--show-output")
fi
if [ "$JSON" != "" ]; then
ARGUMENTS+=("--export-json" "${JSON}")
fi
if [ "${BASELINE_CLI}" != "" ]; then
ARGUMENTS+=("-n" "${BASELINE_CLI} $*" "bash ${BASELINE_RUN_SCRIPT}")
fi
ARGUMENTS+=("-n" "${TEST_CLI} $*" "bash ${TEST_RUN_SCRIPT}")
hyperfine "${ARGUMENTS[@]}"
rm -rf "${SANDBOX_DIR:?}"
}
#!/bin/bash -e
. "$(dirname "$0")/benchmark_helpers.sh"
gitbench --prepare "create_text_file text_100kb 102400 &&
flush_disk_cache" \
hash-object "text_100kb"
#!/bin/bash -e
. "$(dirname "$0")/benchmark_helpers.sh"
gitbench --prepare "create_text_file text_10mb 10485760 &&
flush_disk_cache" \
hash-object "text_10mb"
#!/bin/bash -e
. "$(dirname "$0")/benchmark_helpers.sh"
gitbench --prepare "create_text_file text_1kb 1024 &&
flush_disk_cache" \
hash-object "text_1kb"
#!/bin/bash -e
. "$(dirname "$0")/benchmark_helpers.sh"
gitbench --prepare "create_text_file text_100kb 102400" \
--warmup 5 \
hash-object "text_100kb"
#!/bin/bash -e
. "$(dirname "$0")/benchmark_helpers.sh"
gitbench --prepare "create_text_file text_10mb 10485760" \
--warmup 5 \
hash-object "text_10mb"
#!/bin/bash -e
. "$(dirname "$0")/benchmark_helpers.sh"
gitbench --prepare "create_text_file text_1kb 1024" \
--warmup 5 \
hash-object "text_1kb"
#!/bin/bash -e
. "$(dirname "$0")/benchmark_helpers.sh"
gitbench --prepare "sandbox_repo empty_standard_repo &&
create_text_file text_100kb 102400 &&
flush_disk_cache" \
--chdir "empty_standard_repo" \
hash-object -w "../text_100kb"
#!/bin/bash -e
. "$(dirname "$0")/benchmark_helpers.sh"
gitbench --prepare "sandbox_repo empty_standard_repo &&
create_text_file text_10mb 10485760 &&
flush_disk_cache" \
--chdir "empty_standard_repo" \
hash-object "../text_10mb"
#!/bin/bash -e
. "$(dirname "$0")/benchmark_helpers.sh"
gitbench --prepare "sandbox_repo empty_standard_repo &&
create_text_file text_1kb 1024 &&
flush_disk_cache" \
--chdir "empty_standard_repo" \
hash-object "../text_1kb"
#!/bin/bash -e
. "$(dirname "$0")/benchmark_helpers.sh"
gitbench --prepare "sandbox_repo empty_standard_repo &&
create_text_file text_100kb 102400" \
--warmup 5 \
--chdir "empty_standard_repo" \
hash-object "../text_100kb"
#!/bin/bash -e
. "$(dirname "$0")/benchmark_helpers.sh"
gitbench --prepare "sandbox_repo empty_standard_repo &&
create_text_file text_10mb 10485760" \
--warmup 5 \
--chdir "empty_standard_repo" \
hash-object "../text_10mb"
#!/bin/bash -e
. "$(dirname "$0")/benchmark_helpers.sh"
gitbench --prepare "sandbox_repo empty_standard_repo &&
create_text_file text_1kb 1024" \
--warmup 5 \
--chdir "empty_standard_repo" \
hash-object "../text_1kb"