- 12 Jun, 2017 7 commits
-
-
Prior to pulling out and extending the fake backend, it was quite cumbersome to write tests for very specific backend scenarios. Now that it has been made more generic, doing so is much easier. As such, this commit adds multiple tests for scenarios with multiple ODB backends. The changes also include a test for one very targeted scenario: when one backend finds a matching object via `read_prefix` but the last backend returns `GIT_ENOTFOUND`, and object hash verification is turned off, we fail to reset the error code to `GIT_OK`. This later causes a segfault when we double-free the returned object.
Patrick Steinhardt committed -
Right now, the fake backend is quite limited in how it works: we pass it a single OID it is to return later, along with an error code we want it to return. While this is sufficient for existing tests, we can make the fake backend a little more generic to allow testing additional scenarios. To do so, we change the backend to accept not an error code and an OID, but instead a simple array of OIDs with their respective blob contents. On each query, the fake backend simply iterates through this array and returns the first matching object.
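A minimal sketch of what such a generic fake backend's read callback could look like, using the v0.26-era custom-backend API (the `fake_object` and `fake_backend` names are illustrative, not the actual test helpers):

```c
#include <string.h>
#include <git2.h>
#include <git2/sys/odb_backend.h>

typedef struct {
	const char *oid;      /* hex OID this backend pretends to store */
	const char *content;  /* blob contents for that OID */
} fake_object;

typedef struct {
	git_odb_backend parent;
	const fake_object *objects;  /* array terminated by a NULL oid */
} fake_backend;

static int fake_backend__read(void **buffer_p, size_t *len_p,
	git_otype *type_p, git_odb_backend *backend, const git_oid *oid)
{
	const fake_backend *fake = (const fake_backend *) backend;
	const fake_object *obj;
	git_oid current;

	/* Iterate the object array and hand back the first match. */
	for (obj = fake->objects; obj->oid; obj++) {
		if (git_oid_fromstr(&current, obj->oid) < 0)
			return -1;

		if (git_oid_equal(&current, oid)) {
			*len_p = strlen(obj->content);
			if ((*buffer_p = git_odb_backend_malloc(backend, *len_p)) == NULL)
				return -1;
			memcpy(*buffer_p, obj->content, *len_p);
			*type_p = GIT_OBJ_BLOB;
			return 0;
		}
	}

	return GIT_ENOTFOUND;
}
```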
Patrick Steinhardt committed -
In order to make the fake backend more useful, we want to enable it to hold multiple object references. To do so, we need to decouple it from the single fake OID it currently holds, which we simply move up into the calling tests.
Patrick Steinhardt committed -
The fake backend used by the test suite `odb::backend::nonrefreshing` is useful for writing low-level tests against the ODB layer. As such, we move the implementation into its own `backend_helpers` module.
Patrick Steinhardt committed -
smart_protocol: fix parsing of server ACK responses
Edward Thomson committed -
odb_read_prefix: reset error in backends loop
Patrick Steinhardt committed -
When looking for an object by prefix, we query all the backends so that we can ensure that there is no ambiguity. We need to reset the `error` value between backends; otherwise the first backend may find an object by prefix, but subsequent backends may not. If we do not reset the `error` value then it will remain at `GIT_ENOTFOUND` and `read_prefix_1` will fail, despite having actually found an object.
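The shape of the fix, sketched (a simplified stand-in for the loop in `read_prefix_1`; the real code also remembers the first hit so it can report ambiguous prefixes):

```c
#include <git2.h>
#include <git2/sys/odb_backend.h>

static int read_prefix_sketch(git_odb_backend **backends, size_t n,
	const git_oid *short_id, size_t prefix_len)
{
	void *data = NULL;
	size_t len, i;
	git_otype type;
	git_oid found;
	int error = GIT_ENOTFOUND;

	for (i = 0; i < n; ++i) {
		git_odb_backend *b = backends[i];

		if (b->read_prefix == NULL)
			continue;

		error = b->read_prefix(&found, &data, &len, &type, b,
			short_id, prefix_len);

		if (error == GIT_ENOTFOUND || error == GIT_PASSTHROUGH) {
			/* The fix: reset the error so a later backend that
			 * lacks the object cannot clobber an earlier hit. */
			error = 0;
			continue;
		}

		if (error < 0)
			return error;
	}

	return data == NULL ? GIT_ENOTFOUND : error;
}
```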
Edward Thomson committed
-
- 11 Jun, 2017 6 commits
-
-
Update version number to 0.26
Edward Thomson committed -
Edward Thomson committed
-
Edward Thomson committed
-
CHANGELOG: add various changes introduced since v0.25
Edward Thomson committed -
Ensure packfiles with different contents have different names
Edward Thomson committed -
Updates to forced checkout and untracked files
Edward Thomson committed
-
- 10 Jun, 2017 4 commits
-
-
Only ignore `EBUSY` from `rmdir` when the `GIT_RMDIR_SKIP_NONEMPTY` bit is set.
Edward Thomson committed -
When deleting a directory during checkout, do not simply delete the directory, since there may be untracked files in it. Instead, go into the iterator and examine each file.

In the original code (the code with the faulty assumption), we look to see if there's an index entry beneath the directory that we want to remove. E.g., we look to see if we have a workdir entry `foo` and an index entry `foo/bar.txt`. If this is not the case, then the working directory must have precious files in that directory. This part is okay. The part that's not okay is when there is an index entry `foo/bar.txt`: we just blow away the whole damned directory. That's not cool.

Instead, by simply pushing the directory itself onto the stack and iterating each entry, we deal with the files one by one, whether they're in the index (and can be force removed) or not (and are precious). The original code was a bad optimization: it assumed that we didn't need to `git_iterator_advance_into` if there was any index entry in the folder. That's wrong; we could only have taken that shortcut if all of the folder's entries were in the index. Instead, we need to simply dig into the directory and analyze its entries.
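In sketch form, the change looks something like this (the `index_has_entry*`, `path_is_under` and `delete_file` helpers are illustrative placeholders; `git_iterator_advance_into`/`git_iterator_advance` are the internal iterator calls named above):

```c
/* Old (faulty) shortcut: any index entry under the directory was taken
 * as license to remove the directory wholesale. */
if (index_has_entry_under(index, dir_path))
	remove_directory_recursively(dir_path);   /* loses untracked files! */

/* New approach: descend into the directory and judge each entry. */
if ((error = git_iterator_advance_into(&entry, workdir_iter)) < 0)
	return error;

while (entry != NULL && path_is_under(entry->path, dir_path)) {
	if (index_has_entry(index, entry->path))
		delete_file(entry->path);  /* tracked: may be force-removed */
	/* otherwise the file is precious (untracked): leave it alone */

	if ((error = git_iterator_advance(&entry, workdir_iter)) < 0)
		break;
}
```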
Edward Thomson committed -
If the `GIT_CHECKOUT_FORCE` flag is given to any of the `git_checkout` invocations, we remove files which were previously staged. But while doing so, we unfortunately also remove unstaged files in any directory which contains at least one staged file, resulting in potential data loss. This commit adds two tests to verify this behavior.
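For context, a forced checkout is requested through the strategy flags of the public API; the affected code path is reached with something like the following (a sketch, not the actual test code):

```c
#include <git2.h>

/* Sketch: force-checkout HEAD. With the bug, untracked files living in a
 * directory that also contains a staged file would be deleted as well. */
static int force_checkout_head(git_repository *repo)
{
	git_checkout_options opts = GIT_CHECKOUT_OPTIONS_INIT;
	opts.checkout_strategy = GIT_CHECKOUT_FORCE;

	return git_checkout_head(repo, &opts);
}
```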
Patrick Steinhardt committed -
Fix ACK parsing in the wait_while_ack() internal function. This patch handles the case where multi_ack_detailed mode sends 'ready' ACKs. The existing code would bail out too early, causing the processing of the ensuing packfile to fail if/when 'ready' ACKs were sent.
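In multi_ack_detailed mode the server sends continuation lines such as `ACK <oid> common` and `ACK <oid> ready` before its final `ACK <oid>` or `NAK`; the waiting loop must keep consuming the continuations instead of treating a 'ready' ACK as final. Roughly (a sketch of the loop, not the verbatim source; `recv_pkt` stands in for the transport's pkt-line reader):

```c
/* Keep reading pkt-lines while the server is still acknowledging. */
while (1) {
	if ((error = recv_pkt(&pkt, buf)) < 0)
		return error;

	if (pkt->type == GIT_PKT_NAK)
		break;                  /* server is done acknowledging */

	if (pkt->type == GIT_PKT_ACK) {
		git_pkt_ack *ack = (git_pkt_ack *) pkt;

		/* 'continue', 'common' and 'ready' are continuations */
		if (ack->status != GIT_ACK_CONTINUE &&
		    ack->status != GIT_ACK_COMMON &&
		    ack->status != GIT_ACK_READY)
			break;          /* bare ACK: negotiation finished */
	}
}
```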
Roger Gee committed
-
- 09 Jun, 2017 1 commit
-
-
Patrick Steinhardt committed
-
- 08 Jun, 2017 15 commits
-
-
settings: rename `GIT_OPT_ENABLE_SYNCHRONOUS_OBJECT_CREATION`
Edward Thomson committed -
Initially, this setting was used solely to enable the use of `fsync()` when creating objects. Since then, its use has been extended to also cover references and index files. As the option is not yet part of any release, we can still correct this by renaming it to something more sensible that does not suggest a connection to objects only. This commit renames the option to `GIT_OPT_ENABLE_FSYNC_GITDIR`. We also move the variable from the object to the repository source code.
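After the rename, callers enable the behavior through the global options function:

```c
#include <git2.h>

int main(void)
{
	git_libgit2_init();

	/* fsync() writes into the gitdir: loose objects, packfiles,
	 * references and the index. */
	git_libgit2_opts(GIT_OPT_ENABLE_FSYNC_GITDIR, 1);

	/* ... use the library ... */

	git_libgit2_shutdown();
	return 0;
}
```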
Patrick Steinhardt committed -
Buffer growing cleanups
Edward Thomson committed -
Coverity fixes
Edward Thomson committed -
Patrick Steinhardt committed
-
Patrick Steinhardt committed
-
Patrick Steinhardt committed
-
Patrick Steinhardt committed
-
Patrick Steinhardt committed
-
The `git_buf_init` function has a length parameter which, if non-zero, causes the buffer to be initialized and allocated in one step. This can be used instead of static initialization with `GIT_BUF_INIT` followed by a `git_buf_grow`. This patch does so for two functions where it is applicable.
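That is, the two-step pattern collapses into a single call (a sketch against the internal buffer API):

```c
git_buf a = GIT_BUF_INIT;
git_buf b;

/* before: static initialization followed by an explicit grow */
if (git_buf_grow(&a, 128) < 0)
	return -1;

/* after: initialize and allocate in one step */
if (git_buf_init(&b, 128) < 0)
	return -1;
```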
Patrick Steinhardt committed -
Both the `git_buf_init` and `git_buf_attach` functions may call `git_buf_grow` in case they were given an allocation length as parameter. As such, it is possible for these functions to fail when we run out of memory. While it probably won't be used anytime soon, it does make sense to record this fact by returning an error code from both functions. As they belong to the internal API only, this change does not break our interface.
Patrick Steinhardt committed -
The `ENSURE_SIZE` macro can be used to grow a buffer if its currently allocated size does not suffice for a required target size. While most of the code already uses this macro, the `git_buf_join` and `git_buf_join3` functions do not yet use it. Because the macro first checks whether we have to grow the buffer at all, it has the benefit of saving a function call when no growing is needed. While this is nice to have, it will probably not matter at all performance-wise; instead, this only serves consistency across the code.
Patrick Steinhardt committed -
While the `ENSURE_SIZE` macro receives references to both the buffer that is to be resized and a new size, we were not consistently referencing the passed-in buffer, but instead a variable `buf`, which is not passed in. Funnily enough, we never noticed because our buffers always seem to be named `buf` wherever the macro is used. Fix the macro by always using the passed-in buffer. While at it, add parentheses around all mentions of passed-in variables, as should be done with macros to avoid subtle errors. Found-by: Edward Thomson
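The gist of the fix, sketched (the exact macro in buffer.c may differ slightly):

```c
/* before: silently depended on the caller's variable being named `buf` */
#define ENSURE_SIZE(b, d) \
	if ((d) > buf->asize && git_buf_grow(b, (d)) < 0) \
		return -1;

/* after: reference only the macro parameters, each parenthesized */
#define ENSURE_SIZE(b, d) \
	if ((d) > (b)->asize && git_buf_grow((b), (d)) < 0) \
		return -1;
```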
Patrick Steinhardt committed -
The function `git_buf_try_grow` consistently calls `giterr_set_oom` whenever growing the buffer fails due to insufficient memory being available. So in fact, we do not have to do this ourselves when a call to any buffer-growing function has failed due to an OOM situation. But we still do so in two functions, which this patch cleans up.
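In other words, patterns like the following lose the redundant call (sketch):

```c
/* before: the out-of-memory condition was reported twice */
if (git_buf_grow(&buf, new_size) < 0) {
	giterr_set_oom();  /* redundant: git_buf_try_grow already did this */
	return -1;
}

/* after: the error is already set, just propagate the failure */
if (git_buf_grow(&buf, new_size) < 0)
	return -1;
```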
Patrick Steinhardt committed -
SHA1DC update
Edward Thomson committed
-
- 07 Jun, 2017 7 commits
-
-
The updated SHA1DC library allows us to use custom includes instead of the standard ones. Due to cross-platform requirements, we provide some custom system include files, for example "stdint.h" on Win32. Because of this, we want to make sure to avoid breaking cross-platform compatibility when SHA1DC is enabled. To use the new mechanism, we can simply define `SHA1DC_NO_STANDARD_INCLUDES`. Furthermore, we can specify custom include files via two defines, which we now use to include our "common.h" header.
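Concretely, the defines consumed by SHA1DC presumably take a shape like this (names based on the SHA1DC headers; the actual values are set by the build system):

```c
/* Suppress SHA1DC's own standard includes and inject our header into
 * both of its translation units. */
#define SHA1DC_NO_STANDARD_INCLUDES 1
#define SHA1DC_CUSTOM_INCLUDE_SHA1_C      "common.h"
#define SHA1DC_CUSTOM_INCLUDE_UBC_CHECK_C "common.h"
```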
Patrick Steinhardt committed -
This updates our version of SHA1DC to e139984 (Merge pull request #35 from lidl/master, 2017-05-30).
Patrick Steinhardt committed -
Fix path computations for compressed index entries
Edward Thomson committed -
(Temporarily) disable UNC tests
Edward Thomson committed -
(Temporarily) disable UNC path tests to work around AppVeyor issues.
Edward Thomson committed -
Fix build with libressl
Patrick Steinhardt committed -
OpenSSL v1.1 introduced a new way of initializing the library without having to call various functions of different subsystems. In libgit2, we adapted to that change in 88520151 (openssl_stream: use new initialization function on OpenSSL version >=1.1, 2017-04-07), where we added an #ifdef depending on the OpenSSL version. This change broke building with LibreSSL, though, which has not changed its API in the same way. Fix the issue by expanding the #ifdef condition so that the old way of initializing is also used with LibreSSL. Signed-off-by: Marc-Antoine Perennou <Marc-Antoine@Perennou.com>
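The expanded condition amounts to roughly the following (a sketch of the preprocessor logic in openssl_stream.c; the function name here is illustrative):

```c
#include <openssl/ssl.h>
#include <openssl/opensslv.h>

int openssl_global_init_sketch(void)
{
	/* LibreSSL reports a high OPENSSL_VERSION_NUMBER but keeps the
	 * pre-1.1 API, so it must take the legacy initialization path. */
#if OPENSSL_VERSION_NUMBER < 0x10100000L || defined(LIBRESSL_VERSION_NUMBER)
	SSL_load_error_strings();
	SSL_library_init();
#else
	OPENSSL_init_ssl(0, NULL);
#endif

	return 0;
}
```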
Marc-Antoine Perennou committed
-