- 14 Nov, 2016 25 commits
Concurrency fixes for the reference db
Edward Thomson committed -
On Windows we can encounter locked files even when reading a reference or the packed-refs file. Bubble up the error in this case as well to allow callers on Windows to retry more intelligently.
Carlos Martín Nieto committed -
At times we may try to delete a reference which a different thread has already taken care of.
Carlos Martín Nieto committed -
It does not help us to check whether the file exists before trying to unlink it since it might be gone by the time unlink is called. Instead try to remove it and handle the resulting error if it did not exist.
Carlos Martín Nieto committed -
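A minimal sketch of the pattern this commit describes, assuming a POSIX environment (the helper name is hypothetical, not libgit2's internals): don't stat() first, just unlink and treat ENOENT as success.

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical helper: remove a file that may already be gone. */
    static int remove_if_present(const char *path)
    {
        if (unlink(path) == 0 || errno == ENOENT)
            return 0;  /* removed now, or already gone: both are fine */

        perror("unlink");
        return -1;
    }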
Checking the size before we open the file descriptor can lead to the file being replaced from under us when renames aren't quite atomic, so we can end up reading too little of the file, leading to us thinking the file is corrupted.
Carlos Martín Nieto committed -
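Roughly the safe ordering, as a hedged sketch (helper name hypothetical): open first, then size the file through the descriptor, so a concurrent rename cannot swap the file out between the two steps.

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int open_and_size(const char *path, int *fd_out, off_t *size_out)
    {
        struct stat st;
        int fd = open(path, O_RDONLY);

        if (fd < 0)
            return -1;

        if (fstat(fd, &st) < 0) {   /* sizes the file we actually opened */
            close(fd);
            return -1;
        }

        *fd_out = fd;
        *size_out = st.st_size;
        return 0;
    }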
The logic simply consists of retrying for as long as the library says the data is locked, but it eventually gets through.
Carlos Martín Nieto committed -
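GIT_ELOCKED is the real libgit2 error code; the retrying caller below is a hypothetical sketch of the loop the message describes.

    #include <git2.h>

    /* Hypothetical caller-side retry: keep trying while the refdb
     * reports that another thread or process holds the lock. */
    static int delete_ref_with_retry(git_repository *repo, const char *name)
    {
        int error;

        do {
            error = git_reference_remove(repo, name);
        } while (error == GIT_ELOCKED);

        return error;
    }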
This allows the caller to know that the error was, e.g., due to the packed-refs file being locked already, so they can try again later.
Carlos Martín Nieto committed -
We can reduce the duplication by cleaning up at the beginning of the loop, since it's something we want to do every time we continue.
Carlos Martín Nieto committed -
There might be a few threads or processes working with references concurrently, so fortify the code to ignore errors which come from concurrent access which do not stop us from continuing the work. This includes ignoring an unlinking error. Either someone else removed it or we leave the file around. In the former case the job is done, and in the latter case, the ref is still in a valid state.
Carlos Martín Nieto committed -
We need to save the errno, lest we clobber it in the giterr_set() call. Also add code for reporting that a path component is missing, which is a distinct failure mode.
Carlos Martín Nieto committed -
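A short sketch of the errno-saving pattern, with a hypothetical reporting helper standing in for the giterr_set() call site:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical sketch: capture errno immediately, since the
     * error-reporting call itself may clobber it. */
    static int unlink_and_report(const char *path)
    {
        if (unlink(path) < 0) {
            int saved = errno;   /* saved before fprintf() can change it */

            if (saved == ENOTDIR || saved == ENOENT)
                fprintf(stderr, "a component of '%s' does not exist\n", path);
            else
                fprintf(stderr, "failed to unlink '%s': %s\n",
                        path, strerror(saved));
            return -1;
        }
        return 0;
    }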
In order not to undo concurrent modifications to references, we must make sure that we only delete a loose reference if it still has the same value as when we packed it. This means we need to lock it and then compare the value with the one we put in the packed file.
Carlos Martín Nieto committed -
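A hedged, file-level sketch of the idea (paths, buffer sizes and the 40-character comparison are illustrative, not libgit2's actual locking code): take the ref's lockfile, re-read the loose value, and only unlink if it still matches what went into packed-refs.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int prune_loose_ref(const char *ref_path, const char *packed_value)
    {
        char lock_path[4096], current[64] = {0};
        int lock_fd;
        FILE *f;

        snprintf(lock_path, sizeof(lock_path), "%s.lock", ref_path);
        lock_fd = open(lock_path, O_CREAT | O_EXCL | O_WRONLY, 0666);
        if (lock_fd < 0)
            return -1;                    /* someone else holds the lock */

        if ((f = fopen(ref_path, "r")) != NULL) {
            if (fgets(current, sizeof(current), f) &&
                strncmp(current, packed_value, 40) == 0)
                unlink(ref_path);         /* unchanged since we packed it */
            fclose(f);
        }

        close(lock_fd);
        unlink(lock_path);
        return 0;
    }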
We can get useful information like GIT_ELOCKED out of this instead of just -1.
Carlos Martín Nieto committed -
We say it's going to work if you use a different repository in each thread. Let's do precisely that in our code instead of hoping re-using the refdb is going to work. This test does fail currently, surfacing existing bugs.
Carlos Martín Nieto committed -
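A minimal sketch of the test's shape, assuming a repository at a hypothetical path: each thread opens its own git_repository handle (and therefore its own refdb) instead of sharing one handle across threads.

    #include <git2.h>
    #include <pthread.h>

    static void *worker(void *path)
    {
        git_repository *repo = NULL;

        /* each thread gets its own repository, and thus its own refdb */
        if (git_repository_open(&repo, (const char *)path) == 0) {
            /* ... create, delete and pack references here ... */
            git_repository_free(repo);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[4];
        int i;

        git_libgit2_init();
        for (i = 0; i < 4; i++)
            pthread_create(&threads[i], NULL, worker, (void *)"/tmp/testrepo");
        for (i = 0; i < 4; i++)
            pthread_join(threads[i], NULL);
        git_libgit2_shutdown();
        return 0;
    }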
giterr format
Carlos Martín Nieto committed -
transports: smart: abort on early end of stream
Carlos Martín Nieto committed -
Update THREADING for OpenSSL 1.1
Carlos Martín Nieto committed -
git_repository_open_ext: fix handling of $GIT_NAMESPACE
Patrick Steinhardt committed -
fileops: fix typos in `git_futils_creat_locked{,with_path}`
Patrick Steinhardt committed -
- 11 Nov, 2016 3 commits
The existing code would set a namespace of "" (empty string) when GIT_NAMESPACE was unset. In a repository where refs/namespaces/ exists, that can produce incorrect results. Detect that case and avoid setting the namespace at all. Since that makes the last assignment to error conditional, and the previous assignment can potentially return GIT_ENOTFOUND, set error to 0 explicitly to prevent the call from incorrectly failing with GIT_ENOTFOUND.
Josh Triplett committed -
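A hedged sketch of the corrected behavior (the helper is hypothetical; git_repository_set_namespace is the real API): treat an unset and an empty GIT_NAMESPACE the same way and skip the call entirely.

    #include <stdlib.h>
    #include <git2.h>

    static int apply_env_namespace(git_repository *repo)
    {
        const char *ns = getenv("GIT_NAMESPACE");

        if (ns == NULL || *ns == '\0')
            return 0;    /* unset or empty: leave the repository untouched */

        return git_repository_set_namespace(repo, ns);
    }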
We've recently been trying to upgrade to the current master of libgit2 in Cargo, but unfortunately we're hitting a segfault in one of our tests. This particular test is just a small smoke test that https works (i.e. that it's configured in libgit2). It attempts to clone from a URL which simply drops connections immediately after they're accepted (i.e. terminates abnormally). We expect to see a standard error from libgit2, but unfortunately we see a segfault.

This segfault happens inside the `wait_for` function of `curl_stream.c` at the line `FD_SET(fd, &errfd)`, because `fd` is -1. This ends up doing an out-of-bounds array access that faults the program. I tracked this -1 back to the value returned by `CURLINFO_LASTSOCKET` and added a check to return an error.
Alex Crichton committed -
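A sketch of the added guard, assuming a hypothetical helper wrapping the libcurl query: CURLINFO_LASTSOCKET yields -1 when no socket is available (e.g. the connection was dropped), and passing -1 to FD_SET() writes out of bounds.

    #include <curl/curl.h>

    static int get_live_socket(CURL *handle, long *fd_out)
    {
        long sockfd = -1;

        if (curl_easy_getinfo(handle, CURLINFO_LASTSOCKET, &sockfd) != CURLE_OK)
            return -1;

        if (sockfd == -1)
            return -1;   /* connection already gone: error out, don't select() */

        *fd_out = sockfd;
        return 0;
    }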
- 04 Nov, 2016 4 commits
global: synchronize initialization and shutdown with pthreads
Patrick Steinhardt committed -
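One plausible shape for pthread-synchronized init/shutdown, shown as a hypothetical sketch (the commit's actual mechanism may differ): a mutex-protected reference count so only the first init and the last shutdown touch global state.

    #include <pthread.h>

    static pthread_mutex_t init_mutex = PTHREAD_MUTEX_INITIALIZER;
    static int init_count;

    int mylib_init(void)
    {
        pthread_mutex_lock(&init_mutex);
        if (init_count++ == 0) {
            /* ... one-time global setup ... */
        }
        pthread_mutex_unlock(&init_mutex);
        return 0;
    }

    int mylib_shutdown(void)
    {
        pthread_mutex_lock(&init_mutex);
        if (init_count > 0 && --init_count == 0) {
            /* ... tear down global state ... */
        }
        pthread_mutex_unlock(&init_mutex);
        return 0;
    }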
The code correctly detects that forced creation of a branch on a non-bare repo should not be able to overwrite a branch which is the HEAD reference. But there's no reason to prevent this on a bare repo, and in fact git allows it: `git branch -f master new_sha` works on a bare repo with HEAD set to master. This change fixes that problem and updates the tests so that, for this case, both the bare and non-bare cases are checked for correct behavior.
John Fultz committed -
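A hedged sketch of the corrected condition (helper name and flag are hypothetical; git_repository_is_bare is the real API): only protect HEAD's branch from a forced update when a working tree could actually be affected, i.e. in a non-bare repository.

    #include <git2.h>

    static int force_update_allowed(git_repository *repo, int branch_is_head)
    {
        if (branch_is_head && !git_repository_is_bare(repo))
            return 0;  /* would move the checked-out branch under a worktree */

        return 1;      /* bare repo, or not the current branch: allow it */
    }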
- 02 Nov, 2016 8 commits
add support for OpenSSL 1.1.0 for BIO filter
Carlos Martín Nieto committed -
We need to include the initialisation and construction functions in all backends, so we include this header when building against SecureTransport and WinHTTP as well.
Carlos Martín Nieto committed -
pack: fix race in pack_entry_find_offset
Carlos Martín Nieto committed -
For older versions we can fall back on the deprecated ASN1_STRING_data.
Carlos Martín Nieto committed -
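The fallback can be a one-line compatibility macro; a sketch assuming the usual version guard:

    #include <openssl/asn1.h>

    /* OpenSSL 1.1 introduced the const accessor ASN1_STRING_get0_data();
     * map it onto the older (now deprecated) ASN1_STRING_data() there. */
    #if OPENSSL_VERSION_NUMBER < 0x10100000L
    # define ASN1_STRING_get0_data(x) ASN1_STRING_data(x)
    #endif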
We want to program against the interface, so recreate it when we compile against pre-1.1 versions.
Carlos Martín Nieto committed -
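A sketch of recreating part of the 1.1 BIO interface for pre-1.1 builds, where BIO_METHOD was still a public struct (only two of the accessors are shown; the real shim covers more):

    #include <stdlib.h>
    #include <openssl/bio.h>

    #if OPENSSL_VERSION_NUMBER < 0x10100000L

    static BIO_METHOD *BIO_meth_new(int type, const char *name)
    {
        BIO_METHOD *meth = calloc(1, sizeof(BIO_METHOD));

        if (meth) {
            meth->type = type;
            meth->name = name;
        }
        return meth;
    }

    static int BIO_meth_set_write(BIO_METHOD *meth,
                                  int (*write)(BIO *, const char *, int))
    {
        meth->bwrite = write;
        return 1;
    }

    #endif /* OPENSSL_VERSION_NUMBER < 0x10100000L */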
In `pack_entry_find_offset`, we try to find the offset of a certain object in the pack file. To do so, we first check whether the packfile has already been opened and open it if not. Opening the packfile is guarded with a mutex, so concurrent access to this is in fact safe. What is not thread-safe, though, is our calculation of offsets inside the packfile: we first calculate the offset and index location and only then determine whether the pack has already been opened, re-calculating the offset and index address if so.

Now consider two threads calling `pack_entry_find_offset` at the same time. The first thread calculates the addresses and is subsequently suspended. The second thread then calls `pack_index_open` and initializes the pack file, calculating its addresses correctly. When the first thread resumes, it sees that the pack file has already been initialized and happily proceeds with the addresses it calculated before the check. As the pack file was not initialized at that point, these addresses are bogus.

Fix the issue by only calculating the addresses after having checked that the pack file is open.
Patrick Steinhardt committed -
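A simplified, hypothetical model of the fix (the real structures differ): perform the guarded initialization before deriving any pointers from the mapping, rather than computing them first and re-checking afterwards.

    #include <pthread.h>
    #include <stddef.h>

    struct pack {
        pthread_mutex_t lock;
        int opened;
        unsigned char *index_map;  /* valid only once opened != 0 */
    };

    static int pack_index_open(struct pack *p)
    {
        pthread_mutex_lock(&p->lock);
        if (!p->opened) {
            /* ... mmap the index file and set p->index_map ... */
            p->opened = 1;
        }
        pthread_mutex_unlock(&p->lock);
        return 0;
    }

    static const unsigned char *entry_address(struct pack *p, size_t offset)
    {
        if (pack_index_open(p) < 0)      /* initialize first ... */
            return NULL;

        return p->index_map + offset;    /* ... only then compute addresses */
    }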
When trying to receive packets from the remote, we loop until either an error distinct from `GIT_EBUFS` occurs or until we have successfully parsed the packet. This does not honor the case where we are looping over an already-closed socket which has no more data, leaving us in an infinite loop if we got a bogus packet size or if the remote hung up. Fix the issue by returning `GIT_EEOF` when we cannot read any more data from the socket.
Patrick Steinhardt committed -
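A hedged sketch of the added check (the helper is hypothetical; GIT_EEOF is the real libgit2 error code): a zero-byte read means the remote closed the connection, so surface that instead of retrying forever.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <git2.h>

    static int recv_or_eof(int fd, char *buf, size_t len)
    {
        ssize_t ret = recv(fd, buf, len, 0);

        if (ret == 0)
            return GIT_EEOF;    /* remote hung up: stop looping */

        return (ret < 0) ? -1 : (int)ret;
    }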