- 24 Feb, 2020 2 commits
In the NTLM authentication code, we accidentally use strdup(3P) and strndup(3P) instead of our own wrappers git__strdup and git__strndup, respectively. Fix the issue by using our own functions.
Patrick Steinhardt committed -
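A hedged illustration of the substitution described in the commit above; only the wrapper names come from the commit, while the declarations and call site below are assumptions:

```c
/*
 * Sketch: git__strdup/git__strndup are libgit2's internal wrappers that
 * route through the configured allocator and set the error state on OOM.
 * Declarations and the call site are illustrative, not the actual code.
 */
#include <stddef.h>

extern char *git__strdup(const char *str);            /* assumed signature */
extern char *git__strndup(const char *str, size_t n); /* assumed signature */

static char *copy_ntlm_token(const char *token, size_t len)
{
	/* Before the fix this called strndup(3P) directly, bypassing the
	 * allocator configured via git_libgit2_opts(). */
	return git__strndup(token, len);
}
```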
The ntlmclient dependency defines htonll only on Linux-based systems. As a result, non-Linux systems run into compiler and/or linker errors due to the undefined symbol. Fix this issue for FreeBSD, OpenBSD and SunOS/OpenSolaris by including the proper headers and defining the symbol accordingly.
Patrick Steinhardt committed
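A minimal sketch of such a portable htonll fallback; the per-platform header and macro availability are assumptions based on the commit description, not the exact patch:

```c
/*
 * Sketch of a portable htonll definition. Each branch assumes the
 * platform's documented byte-order headers.
 */
#include <stdint.h>

#if defined(__linux__)
# include <endian.h>          /* htobe64 */
# define htonll htobe64
#elif defined(__FreeBSD__) || defined(__OpenBSD__)
# include <sys/endian.h>      /* htobe64 */
# define htonll htobe64
#elif defined(__sun)
# include <sys/byteorder.h>   /* Solaris declares htonll itself */
#endif

/* Usage: convert a 64-bit value to network byte order. */
static inline uint64_t serialize_u64(uint64_t host_value)
{
	return htonll(host_value);
}
```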
- 19 Feb, 2020 4 commits
Release 0.99
Patrick Steinhardt committed -
Release script
Patrick Steinhardt committed -
This commit also switches our SOVERSION to be "$MAJOR.$MINOR" instead of just "$MINOR". This is in preparation for v1.0, where the previous scheme would have stopped working in an obvious way.
Edward Thomson committed -
Give the release a name, "Torschlusspanik" (the fear that time is running out to act). Indeed, the time is running out for changes to be included in v1.0.
Edward Thomson committed
- 18 Feb, 2020 8 commits
azure: fix ARM32 builds by replacing gosu(1)
Edward Thomson committed -
openssl: fix Valgrind issues in nightly builds
Patrick Steinhardt committed -
fuzzers: Fix the documentation
Patrick Steinhardt committed -
In our test case object::cache::fast_thread_rush, we're creating 100 concurrent threads opening a repository and reading objects from it. This test actually fails on ARM32 with an out-of-memory error, which isn't entirely unexpected. Work around the issue by halving the number of threads.
Patrick Steinhardt committed -
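The shape of that test, as a hedged sketch (identifiers and thread body are hypothetical; only the thread-halving idea comes from the commit):

```c
/*
 * Sketch of the test's structure: each thread opens the repository and
 * reads objects from it. Halving the thread count keeps peak memory
 * within what a 32-bit ARM address space allows.
 */
#include <pthread.h>
#include <git2.h>

#define THREAD_COUNT 50 /* previously 100; halved for ARM32 */

static void *rush_thread(void *payload)
{
	git_repository *repo;

	if (git_repository_open(&repo, (const char *)payload) < 0)
		return NULL;

	/* ... look up objects to exercise the object cache ... */

	git_repository_free(repo);
	return NULL;
}
```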
Our nightly builds are currently failing due to our ARM-based jobs. These jobs crash immediately when entering the Docker container, with an exception thrown by Go's language runtime. As we're able to successfully build the Docker images in previous steps, it's unlikely to be a bug in Docker itself. Instead, this exception is thrown by gosu(1), a Go-based utility to drop privileges that is run by our entrypoint. Fix the issue by dropping gosu(1) in favor of sudo(1).
Patrick Steinhardt committed -
Our two Docker build instructions for Xenial and Bionic have diverged a bit. Let's re-synchronize them with each other to make them as similar as possible.
Patrick Steinhardt committed -
The build step for our Docker images currently succeeds even if building the Docker image fails, as the build script is missing && chains. Fix this by adding them.
Patrick Steinhardt committed -
Since migrating to Docker containers for our build and test infrastructure, we do not use the "setup-linux.sh" script anymore. Remove it to avoid any confusion.
Patrick Steinhardt committed
- 15 Feb, 2020 1 commit
Some of the commands are now out of date.
lhchavez committed
- 11 Feb, 2020 2 commits
As OpenSSL loves using uninitialized bytes as another source of entropy, we need to mark them as defined so that Valgrind won't complain about the use of these bytes. Traditionally, we've been using the macro `VALGRIND_MAKE_MEM_DEFINED` provided by Valgrind, but starting with OpenSSL 1.1 the code doesn't compile anymore: `struct SSL` has become opaque, so we have no way of knowing its size and thus can't mark it as defined.

Let's change gears instead by swapping out OpenSSL's allocator functions with our own, with the twist that instead of calling `malloc`, we call `calloc` so that all bytes are initialized automatically. Next to soothing Valgrind, this approach has the benefit of being completely agnostic of the memory sanitizer and is neatly contained in a single place.

Note that we shouldn't do this for non-Valgrind builds: as we cannot set up memory functions for a single SSL context only, we need to swap them globally, and as `OPENSSL_set_mem_functions` may be called only once, we'd prevent users of libgit2 from setting up their own allocators.
Patrick Steinhardt committed -
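A minimal sketch of this scheme, assuming OpenSSL 1.1's allocator-hook signatures (`CRYPTO_set_mem_functions` being the public entry point; the function names below are illustrative):

```c
/*
 * Sketch: route OpenSSL allocations through calloc(3) so that every byte
 * starts out defined from Valgrind's point of view.
 */
#include <openssl/crypto.h>
#include <stdlib.h>

static void *git_openssl_malloc(size_t bytes, const char *file, int line)
{
	(void)file; (void)line;
	return calloc(1, bytes); /* zero-initialized instead of malloc'd */
}

static void *git_openssl_realloc(void *mem, size_t size, const char *file, int line)
{
	(void)file; (void)line;
	return realloc(mem, size);
}

static void git_openssl_free(void *mem, const char *file, int line)
{
	(void)file; (void)line;
	free(mem);
}

int git_openssl_set_allocators(void)
{
	/* Must run before OpenSSL allocates anything; can be set only once,
	 * globally, which is why this is restricted to Valgrind builds. */
	return CRYPTO_set_mem_functions(git_openssl_malloc,
	                                git_openssl_realloc,
	                                git_openssl_free) ? 0 : -1;
}
```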
OpenSSL doesn't initialize bytes on purpose in order to generate additional entropy. Valgrind isn't too happy about that, though, causing it to generate warnings about various uses of uninitialized bytes. We traditionally had some infrastructure to silence these errors in our OpenSSL stream implementation, where we invoke the Valgrind macro `VALGRIND_MAKE_MEM_DEFINED` in various callbacks that we provide to OpenSSL. Naturally, we only include these instructions if the preprocessor define "VALGRIND" is set, and that in turn is only set when passing "-DVALGRIND" to CMake. We do that in our usual Azure pipelines, but we forgot to do so for our nightly builds. As a result, we get a slew of warnings for the nightly builds, but not for our normal builds.

To fix this, we could just add "-DVALGRIND" to our nightly builds. But starting with commit d827b11b (tests: execute leak checker via CTest directly, 2019-06-28), we have a secondary variable that directs whether we want to use memory sanitizers for our builds. As such, every user wishing to use Valgrind for our tests needs to pass both options "VALGRIND" and "USE_LEAK_CHECKER", which is cumbersome and error prone, as can be seen by our own builds.

Let's consolidate this into a single option instead: add the preprocessor directive whenever USE_LEAK_CHECKER equals "valgrind", and remove the old "-DVALGRIND" option from our own pipelines.
Patrick Steinhardt committed
- 08 Feb, 2020 2 commits
azure: fix misleading messages being printed to stderr
Edward Thomson committed -
tests: iterator: fix iterator expecting too few items
Edward Thomson committed
- 07 Feb, 2020 13 commits
The current release process is not documented in any way. As a result, it's not obvious how releases should be done at all, e.g. which locations need adjusting. To fix this, let's introduce a new script that shall from now on be used to do all releases. As input it gets the tree that shall be released, the repository in which to do the release, credentials to authenticate against GitHub, and the new version. E.g. executing the following will create a new release v0.32:

    $ ./script/release.py 0.32.0 --user pks-t --password ****

While the password may currently be your usual GitHub password, it's recommended to use a personal access token instead. The script will then perform the following steps:

1. Verify that "include/git2/version.h" matches the new version.
2. Verify that "docs/changelog.md" has a section for the new version.
3. Extract the changelog entries for the current release from "docs/changelog.md".
4. Generate two archives in "tar.gz" and "zip" format via "git archive" from the tree passed by the user, defaulting to "HEAD" if none was given.
5. Create the GitHub release, using the extracted changelog entries as well as tag and name information derived from the version passed by the user.
6. Upload both code archives to that release.

This should cover all steps required for a new release and thus ensure that nothing is missed.
Patrick Steinhardt committed -
Python's PEP 8 specifies that one shall use spaces instead of tabs for indentation, and we currently honor that. Our EditorConfig does not special-case Python scripts, though, which is why they end up with our C coding style and thus with tabs. Special-case "*.py" files to override that default with spaces.
Patrick Steinhardt committed -
The testcase iterator::workdir::filesystem_gunk sets up quite a lot of directories, which is why it only runs in case GITTEST_INVASIVE_SPEED is set in the environment. Because we do not run our default CI with this variable, we didn't notice commit 852c83ee (refs: refuse to delete HEAD, 2020-01-15) breaking the test as it introduced a new reference to the "testrepo" repository. Fix the oversight by increasing the number of expected iterator items.
Patrick Steinhardt committed -
In order to properly tear down the test environment, we will kill git-daemon(1) if we've exercised it. As git-daemon(1) is spawned as a background process, it is still owned by the shell and thus killing it later on will print a termination message to the shell's stderr, causing Azure to report it as an error. Fix this by disowning the background process.
Patrick Steinhardt committed -
The Docker entrypoint currently creates the libgit2 user with "useradd --create-home". As we start the Docker container with two volumes pointing into "/home/libgit2/", the home directory will already exist. While useradd(1) copes with this just fine, it will print error messages to stderr which end up as failures in our Azure pipelines. Fix this by simply removing the "--create-home" parameter.
Patrick Steinhardt committed -
Without the "--silent" parameter, curl will print a progress meter to stderr. Azure has the nice feature of interpreting any output to stderr as errors with a big red warning towards the end of the build. Let's thus silence curl to not generate any misleading messages.
Patrick Steinhardt committed -
When building dependencies for our Docker images, we first download the sources to disk, unpack them and finally remove the archive again. This can be sped up by piping the downloaded archive directly into tar(1), parallelizing both tasks. Furthermore, let's silence curl(1) so that it doesn't print status information to stderr, which tends to be interpreted as errors by Azure Pipelines.
Patrick Steinhardt committed -
transports: http: fix custom headers not being applied
Patrick Steinhardt committed -
azure: fix Coverity pipeline
Patrick Steinhardt committed -
In commit b9c5b15a (http: use the new httpclient, 2019-12-22), the HTTP code got refactored to extract a generic HTTP client that operates independently of the Git protocol. Part of the refactoring was the creation of a new `git_http_request` struct that encapsulates the generation of requests. Our Git-specific HTTP transport was converted to use that in `generate_request`, but during the process we forgot to set up custom headers for the `git_http_request`, and as a result we no longer send out these headers. Fix the issue by correctly setting up the request's custom headers and add a test to verify we correctly send them.
Patrick Steinhardt committed -
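The user-visible contract the fix restores can be sketched via the public API; the header name/value and helper below are hypothetical:

```c
/*
 * Minimal sketch: headers configured on the fetch options must be sent
 * with every HTTP request the transport generates.
 */
#include <git2.h>

static int fetch_with_custom_header(git_remote *remote)
{
	char *headers[] = { "X-Custom-Header: yes" };
	git_fetch_options opts = GIT_FETCH_OPTIONS_INIT;

	opts.custom_headers.strings = headers;
	opts.custom_headers.count = 1;

	/* With the regression present, the header above was silently
	 * dropped when the request was generated. */
	return git_remote_fetch(remote, NULL, &opts, NULL);
}
```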
There are several issues with our Coverity builds, e.g. wget missing in our containers. Simplify our Coverity pipeline and fix these issues.
Patrick Steinhardt committed -
Back in commit 5a6740e7 (azure: build Docker images as part of the pipeline, 2019-08-02), we have converted our pipelines to use self-built Docker images to ease making changes to our Dockerfiles. The commit didn't adjust our Coverity pipeline, though, so let's do this now.
Patrick Steinhardt committed -
In commit bbc0b20b (azure: fix Coverity's build due to wrong container name, 2019-08-02), Coverity builds were fixed to use the correct container names. Unfortunately, the "fix" completely broke our Coverity builds due to using wrong syntax for the Docker task. Let's fix this by using "imageName" instead of the Docker dict.
Patrick Steinhardt committed
- 06 Feb, 2020 1 commit
azure: tests: re-run flaky proxy tests
Patrick Steinhardt committed
- 04 Feb, 2020 2 commits
While we already do have logic to re-run flaky tests, the FAILED variable currently does not get reset to "0". As a result, successful reruns will still cause the test to be registered as failed. Fix this by resetting the variable accordingly.
Patrick Steinhardt committed -
The proxy tests regularly fail in our CI environment. Unfortunately, this is expected due to the network layer. Thus, let's re-try the proxy tests up to five times in case they fail.
Patrick Steinhardt committed
- 01 Feb, 2020 1 commit
fetchhead: strip credentials from remote URL
Edward Thomson committed
- 31 Jan, 2020 3 commits
If fetching from an anonymous remote via its URL, the URL gets written into the FETCH_HEAD reference. This is mainly done to give valuable context to some commands, like git-merge(1), which will put the URL into the generated MERGE_MSG. As a result, what gets written into FETCH_HEAD may become public in some cases. This is especially important considering that URLs may contain credentials: when cloning 'https://foo:bar@example.com/repo', we persist the complete URL into FETCH_HEAD and put it without any kind of sanitization into the MERGE_MSG. This is obviously bad, as the login data leaks as soon as the resulting merge commit is pushed.

When writing the URL into FETCH_HEAD, upstream git strips credentials first. Let's do the same by trying to parse the remote URL as a "real" URL, removing any credentials, and then re-formatting the URL. In case this fails, e.g. when it's a file path or not a valid URL, we just fall back to using the URL as-is, without any sanitization. Add tests to verify our behaviour.
Patrick Steinhardt committed -
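A standalone sketch of that sanitization rule; note the actual fix parses the URL with libgit2's internal URL machinery rather than doing string surgery, so everything below is illustrative:

```c
/*
 * Sketch: drop any "user[:password]@" between scheme and host before
 * persisting the URL; keep non-URLs (e.g. file paths) as-is, matching
 * the fallback described in the commit message.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns a newly allocated sanitized copy, or NULL on allocation failure. */
static char *strip_url_credentials(const char *url)
{
	const char *scheme_end = strstr(url, "://");
	const char *host_start, *host_end, *at;
	size_t prefix_len;
	char *out;

	/* Not a "real" URL: keep it unchanged. */
	if (scheme_end == NULL)
		return strdup(url);

	host_start = scheme_end + 3;
	host_end = strchr(host_start, '/');
	at = strchr(host_start, '@');

	/* No userinfo present (an '@' after the first '/' is in the path). */
	if (at == NULL || (host_end != NULL && at > host_end))
		return strdup(url);

	prefix_len = (size_t)(host_start - url);
	if ((out = malloc(prefix_len + strlen(at + 1) + 1)) == NULL)
		return NULL;

	memcpy(out, url, prefix_len);
	strcpy(out + prefix_len, at + 1);
	return out;
}

int main(void)
{
	char *clean = strip_url_credentials("https://foo:bar@example.com/repo");
	if (clean != NULL)
		puts(clean); /* prints "https://example.com/repo" */
	free(clean);
	return 0;
}
```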
azure-pipelines: properly expand negotiate passwords
Edward Thomson committed -
To allow testing against a Kerberos instance, we added variables for the Kerberos password to allow authentication against LIBGIT2.ORG in commit e5fb5fe5 (ci: perform SPNEGO tests, 2019-10-20). To set up the password, we assign "GITTEST_NEGOTIATE_PASSWORD=$(GITTEST_NEGOTIATE_PASSWORD)" in the environmentVariables section, which is then passed through to a template. As the template does build-time expansion of the environment variables, it expands the above line verbatim, and because the envVar section does no further expansion, the password variable ends up with the literal value "$(GITTEST_NEGOTIATE_PASSWORD)" in the container's environment. Fix this by expanding GITTEST_NEGOTIATE_PASSWORD at build time as well.
Patrick Steinhardt committed
- 30 Jan, 2020 1 commit
cred: change enum to git_credential_t and GIT_CREDENTIAL_*
Patrick Steinhardt committed
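For illustration, a credential callback written against the new identifiers (the password literal is hypothetical; the old git_cred_t/GIT_CREDTYPE_* names remain available as deprecated aliases, an assumption based on libgit2's usual deprecation policy):

```c
/* Example credential callback using the renamed types and constants. */
#include <git2.h>

static int acquire_cred(git_credential **out, const char *url,
                        const char *username_from_url,
                        unsigned int allowed_types, void *payload)
{
	(void)url; (void)payload;

	if (allowed_types & GIT_CREDENTIAL_USERPASS_PLAINTEXT)
		return git_credential_userpass_plaintext_new(
			out,
			username_from_url ? username_from_url : "user",
			"secret" /* hypothetical password */);

	return GIT_PASSTHROUGH; /* let another credential type be tried */
}
```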