- 15 Jun, 2019 1 commit
-
-
The majority of functions are named `from_something` (with an underscore) instead of `fromsomething`. Update the blob functions for consistency with the rest of the library.
Edward Thomson committed
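For illustration, a minimal sketch of the renamed blob API after this change (the `from_workdir` variant shown; a valid `repo` is assumed):

```c
#include <git2.h>

/* previously spelled git_blob_create_fromworkdir; now underscored */
static int create_readme_blob(git_oid *out, git_repository *repo)
{
	return git_blob_create_from_workdir(out, repo, "README.md");
}
```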
-
- 20 Feb, 2019 1 commit
-
-
This change frees a copy of a cached object in odb_otype_fast().
lhchavez committed
-
- 22 Jan, 2019 1 commit
-
-
Move to the `git_error` name in the internal API for error-related functions.
Edward Thomson committed
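For example (the error-class constants were renamed in the same effort):

```c
/* before: giterr_set(GITERR_ODB, "failed to read object"); */
git_error_set(GIT_ERROR_ODB, "failed to read object");
```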
-
- 20 Jan, 2019 1 commit
-
-
include/git2/odb.h states that the callback may also return a positive value, which should stop the iteration. The implementations of git_odb_foreach() and pack_backend__foreach() did not respect that.
Marijan Šuflaj committed
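A sketch of a callback that relies on this contract (a valid `odb` is assumed for the usage line):

```c
#include <git2.h>

/* per include/git2/odb.h, a non-zero (e.g. positive) return value
 * from the callback breaks the loop */
static int stop_after_first(const git_oid *id, void *payload)
{
	int *seen = payload;
	(void)id;
	(*seen)++;
	return 1; /* positive value: stop iterating */
}

/* usage:
 *   int seen = 0;
 *   git_odb_foreach(odb, stop_after_first, &seen);
 */
```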
-
- 01 Dec, 2018 1 commit
-
-
Use the new object_type enumeration names within the codebase.
Edward Thomson committed
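For example, the old `git_otype`/`GIT_OBJ_*` spellings give way to the new names:

```c
/* before: git_otype type = GIT_OBJ_BLOB; */
git_object_t type = GIT_OBJECT_BLOB;
```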
-
- 10 Jun, 2018 1 commit
-
-
Patrick Steinhardt committed
-
- 23 Mar, 2018 2 commits
-
-
In commit 7ec7aa4a (odb: assert on logic errors when writing objects, 2018-02-01), the check for whether we are trying to overflow the fake stream buffer was changed from returning an error to raising an assert. The conversion overlooked that the logic around `assert`s is inverted. Previously, if the statement `stream->written + len > stream->size` evaluated to true, we would return `-1`. Now we are asserting that this statement is true, and in case it is not we raise an error. So the conversion to the `assert` in fact changed the behaviour to the complete opposite of the intention. Fix the assert by inverting its condition again and add a regression test.
Patrick Steinhardt committed -
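A minimal sketch of the corrected condition described above:

```c
/* before (inverted: fires on every valid write):
 *     assert(stream->written + len > stream->size);
 * after (asserts that the write still fits into the buffer): */
assert(stream->written + len <= stream->size);
```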
Our mempack ODB backend has no test coverage at all right now. Add a simple test suite to at least have some coverage of the most basic operations on the ODB.
Patrick Steinhardt committed
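A sketch of the kind of basic operations such a suite covers, using the public mempack API (a valid `repo` is assumed):

```c
#include <git2.h>
#include <git2/sys/mempack.h>

/* attach the in-memory ODB backend with a high priority */
static int use_mempack(git_repository *repo)
{
	git_odb *odb = NULL;
	git_odb_backend *mempack = NULL;
	int error;

	if ((error = git_repository_odb(&odb, repo)) < 0)
		return error;

	if ((error = git_mempack_new(&mempack)) < 0 ||
	    (error = git_odb_add_backend(odb, mempack, 999)) < 0) {
		git_odb_free(odb);
		return error;
	}

	/* ... written objects now land in memory ... */

	git_mempack_reset(mempack); /* drop everything held in memory */
	git_odb_free(odb);
	return 0;
}
```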
-
- 09 Feb, 2018 1 commit
-
-
Return an error to the caller when we can't create an object header for some reason (printf failure) instead of simply asserting.
Edward Thomson committed
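A hypothetical sketch of the idea; `format_object_header` and its parameters are illustrative rather than libgit2's internal signature:

```c
#include <stdio.h>

/* format "type size\0" into out, reporting failure to the caller */
static int format_object_header(char *out, size_t out_len,
                                size_t obj_len, const char *type_name)
{
	int written = snprintf(out, out_len, "%s %zu", type_name, obj_len);

	if (written < 0 || (size_t)written >= out_len)
		return -1; /* propagate the error instead of assert()ing */

	return written + 1; /* the header includes its trailing NUL */
}
```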
-
- 02 Feb, 2018 5 commits
-
-
Only run the large file tests on 64-bit platforms. Even though we support streaming reads on objects and do not need to fit them in memory, we use `size_t` in various places to reflect the size of an object.
Edward Thomson committed -
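A minimal sketch of such a guard, assuming it runs inside a clar test:

```c
/* skip large-file tests where size_t cannot represent a > 4 GB object */
if (sizeof(size_t) < 8)
	cl_skip();
```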
Test that we can read_header on large blobs. This should succeed on all platforms since we read only a few bytes into memory to be able to parse the header.
Edward Thomson committed -
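A sketch using the public API; only the header is parsed, so this stays cheap even for multi-gigabyte blobs (valid `odb` and `oid` assumed):

```c
#include <git2.h>
#include <stdio.h>

static int print_object_size(git_odb *odb, const git_oid *oid)
{
	size_t len;
	git_object_t type;
	int error;

	if ((error = git_odb_read_header(&len, &type, odb, oid)) < 0)
		return error;

	printf("%s: %zu bytes\n", git_object_type2string(type), len);
	return 0;
}
```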
Edward Thomson committed
-
Edward Thomson committed
-
Since some test situations may have generous disk space, but limited RAM (e.g. hosted build agents), test that we can stream a large file into a loose object, and then stream it out of the loose object storage.
Edward Thomson committed
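A sketch of the write side with the public streaming API, assuming a valid `odb`; the chunked loop is what keeps memory usage flat:

```c
#include <git2.h>

/* stream `chunks` copies of `chunk` into a single loose blob */
static int stream_blob(git_oid *out, git_odb *odb,
                       const char *chunk, size_t chunk_len, size_t chunks)
{
	git_odb_stream *stream;
	size_t i;
	int error;

	if ((error = git_odb_open_wstream(&stream, odb,
	                                  chunk_len * chunks,
	                                  GIT_OBJECT_BLOB)) < 0)
		return error;

	for (i = 0; !error && i < chunks; i++)
		error = git_odb_stream_write(stream, chunk, chunk_len);

	if (!error)
		error = git_odb_stream_finalize_write(out, stream);

	git_odb_stream_free(stream);
	return error;
}
```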
-
- 26 Jan, 2018 1 commit
-
-
The null OID (hash with all zeroes) indicates a missing object in upstream git and is thus not a valid object ID. Add defensive measures to avoid writing such a hash to the object database in the very unlikely case where some data results in the null OID. Furthermore, add shortcuts when reading the null OID from the ODB to avoid ever returning an object when a faulty repository may contain the null OID.
Patrick Steinhardt committed
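A hedged sketch of such a guard on the write path; the error code is illustrative, and contemporary libgit2 spelled the predicate `git_oid_iszero`:

```c
if (git_oid_is_zero(oid)) {
	git_error_set(GIT_ERROR_ODB, "cannot write the null OID to the ODB");
	return GIT_EINVALID; /* illustrative error code */
}
```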
-
- 20 Dec, 2017 3 commits
-
-
Writing very large files may be slow, particularly on inefficient filesystems and when running instrumented code to detect invalid memory accesses (e.g. within Valgrind or similar tools). Introduce `GITTEST_SLOW` so that tests that are slow can be skipped by the CI system.
Edward Thomson committed -
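A sketch of the guard at the top of a slow test:

```c
/* getenv comes from <stdlib.h>; cl_skip() is clar's skip helper */
if (!getenv("GITTEST_SLOW"))
	cl_skip();
```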
Check the size of objects being read from the loose odb backend and reject those that would not fit in memory, with an error message that reflects the actual problem, instead of erroring out later with an unintuitive message about truncation or invalid hashes.
Edward Thomson committed -
Introduce a test for very large objects in the ODB. Write a large object (5 GB) and ensure that the write succeeds and provides us the expected object ID. Introduce a test that writes that file and ensures that we can subsequently read it.
Edward Thomson committed
-
- 13 Jun, 2017 4 commits
-
-
Introduce a new test suite "odb::backend::simple", which utilizes the fake backend to exercise the ODB abstraction layer. While such tests already exist for the case where multiple backends are put together, no direct testing for functionality with a single backend exists yet.
Patrick Steinhardt committed -
The fake backend currently implements all reading functions except for the `exists_prefix` one. Implement it to enable further testing of the ODB layer.
Patrick Steinhardt committed -
The `search_object` function takes the OID length as one of its parameters, where its maximum length is `GIT_OID_HEXSZ`. The `exists` function of the fake backend used `GIT_OID_RAWSZ` though, leading to only the first half of the OID being used when finding the correct object.
Patrick Steinhardt committed -
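For context: `GIT_OID_RAWSZ` is 20 (raw bytes) while `GIT_OID_HEXSZ` is 40 (hex digits), so bounding a hex comparison by the raw size inspects only half the OID. An illustrative fragment:

```c
char hex[GIT_OID_HEXSZ + 1];
git_oid_fmt(hex, oid);                    /* writes 40 hex characters  */
/* bug: memcmp(hex, query, GIT_OID_RAWSZ)  compares only 20 of them    */
/* fix: memcmp(hex, query, GIT_OID_HEXSZ)  compares the full OID       */
```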
In order to be able to test the ODB prefix functions, we need to be able to detect ambiguous prefixes in case multiple objects with the same prefix exist in the fake ODB. Extend `search_object` to detect ambiguous queries and have callers return its error code instead of always returning `GIT_ENOTFOUND`.
Patrick Steinhardt committed
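A hypothetical sketch of the extended lookup; `fake_object` and `fake_backend` are illustrative types, not the test helpers' real names:

```c
#include <git2.h>

typedef struct { git_oid oid; } fake_object;
typedef struct { fake_object *objects; size_t nobjects; } fake_backend;

static int search_object(const fake_object **out, fake_backend *fake,
                         const git_oid *oid, size_t len)
{
	const fake_object *found = NULL;
	size_t i;

	for (i = 0; i < fake->nobjects; i++) {
		if (git_oid_ncmp(oid, &fake->objects[i].oid, len) == 0) {
			if (found)
				return GIT_EAMBIGUOUS; /* second match: ambiguous prefix */
			found = &fake->objects[i];
		}
	}

	if (!found)
		return GIT_ENOTFOUND;

	*out = found;
	return 0;
}
```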
-
- 12 Jun, 2017 4 commits
-
-
Prior to pulling out and extending the fake backend, it was quite cumbersome to write tests for very specific backend scenarios. Now that it has been made more generic, doing so has become much easier. As such, this commit adds multiple tests for scenarios with multiple ODB backends. The changes also include a test for a very targeted scenario: when one backend finds a matching object via `read_prefix` but the last backend returns `GIT_ENOTFOUND`, and object hash verification is turned off, we fail to reset the error code to `GIT_OK`. This causes us to segfault later on, when doing a double-free on the returned object.
Patrick Steinhardt committed -
Right now, the fake backend is quite limited in how it works: we pass it an OID that it is to return later, as well as an error code we want it to return. While this is sufficient for existing tests, we can make the fake backend a little more generic to allow testing additional scenarios. To do so, we change the backend to accept not an error code and a single OID, but instead a simple array of OIDs with their respective blob contents. On each query, the fake backend simply iterates through this array and returns the first matching object.
Patrick Steinhardt committed -
In order to make the fake backend more useful, we want to enable it holding multiple object references. To do so, we need to decouple it from the single fake OID it currently holds, which we simply move up into the calling tests.
Patrick Steinhardt committed -
The fake backend used by the test suite `odb::backend::nonrefreshing` is useful to have some low-level tests for the ODB layer. As such, we move the implementation into its own `backend_helpers` module.
Patrick Steinhardt committed
-
- 08 Jun, 2017 1 commit
-
-
Initially, the setting was solely used to enable the use of `fsync()` when creating objects. Since then, its use has been extended to also cover references and index files. As the option is not yet part of any release, we can still correct this by renaming it to something more sensible that indicates it affects more than just objects. This commit renames the option to `GIT_OPT_ENABLE_FSYNC_GITDIR`. We also move the variable from the object to the repository source code.
Patrick Steinhardt committed
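Usage after the rename:

```c
#include <git2.h>

/* opt in to fsync for everything written beneath .git */
git_libgit2_opts(GIT_OPT_ENABLE_FSYNC_GITDIR, 1);
```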
-
- 28 Apr, 2017 2 commits
-
-
The upstream git.git project verifies objects when looking them up from disk. This avoids scenarios where objects have somehow become corrupt on disk, e.g. due to hardware failures or bit flips. While our mantra is usually to follow upstream behavior, we do not do so in this case, as we never check hashes of objects we have just read from disk. To fix this, we create a new error class `GIT_EMISMATCH` which denotes that we have looked up an object with a hashsum mismatch. `odb_read_1` will then, after having read the object from its backend, hash the object and compare the resulting hash to the expected hash. If the hashes do not match, it will return an error. This obviously introduces another computation of checksums and could potentially impact performance. Note though that we usually perform I/O operations directly before doing this computation, and as such the actual overhead should be drowned out by I/O. Running our test suite seems to confirm this guess: on a Linux system with best-of-five timings, we had 21.592s with the check enabled and 21.590s with the check disabled. Note though that our test suite mostly contains very small blobs; repositories with bigger blobs may see a larger hit from this check. In addition to a new test, we also had to change the odb::backend::nonrefreshing test suite, which now triggers a hashsum mismatch when looking up the commit "deadbeef...". This is expected, as the fake backend allocated inside of the test will return an empty object for the OID "deadbeef...", which will obviously not hash back to "deadbeef..." again. We can simply adjust the hash to equal the hash of the empty object here to fix this test.
Patrick Steinhardt committed -
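A sketch of the verification step after a backend read; `data`, `len`, `type` and `expected` stand in for the values `odb_read_1` has at hand:

```c
git_oid actual;

if (git_odb_hash(&actual, data, len, type) < 0)
	return -1;

if (!git_oid_equal(&actual, expected)) {
	git_error_set(GIT_ERROR_ODB, "object hash mismatch");
	return GIT_EMISMATCH; /* the error code introduced by this change */
}
```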
In the odb::backend::nonrefreshing test suite, we set up a fake backend so that we are able to determine if backend functions are called correctly. During the setup, we also parse an OID which is later on used to read out the pseudo-object. While this procedure works right now, it will create problems later when we implement hash verification for looked up objects. The current OID ("deadbeef") will not match the hash of contents we give back to the ODB layer and thus cannot be verified. Make the hash configurable so that we can simply switch the returned OID for single tests.
Patrick Steinhardt committed
-
- 05 Apr, 2017 1 commit
-
-
Instead of failing to set the timestamp of a read-only file (like any object file), set it writable temporarily to update the timestamp.
Edward Thomson committed
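A POSIX sketch of the approach (libgit2 itself goes through its `p_*` portability wrappers):

```c
#include <sys/stat.h>
#include <utime.h>

/* loose object files are read-only: make the file writable, bump its
 * timestamp, then restore the previous mode */
static int freshen_file(const char *path)
{
	struct stat st;

	if (stat(path, &st) < 0)
		return -1;

	if (utime(path, NULL) == 0)     /* fast path: already writable */
		return 0;

	if (chmod(path, st.st_mode | S_IWUSR) < 0 ||
	    utime(path, NULL) < 0)
		return -1;

	return chmod(path, st.st_mode); /* restore the read-only mode */
}
```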
-
- 03 Mar, 2017 1 commit
-
-
Freshen the tree object that a commit points to during commit time.
Edward Thomson committed
-
- 02 Mar, 2017 1 commit
-
-
Edward Thomson committed
-
- 28 Feb, 2017 2 commits
-
-
Rename `GIT_OPT_ENABLE_SYNCHRONIZED_OBJECT_CREATION` -> `GIT_OPT_ENABLE_SYNCHRONOUS_OBJECT_CREATION`.
Edward Thomson committed -
Introduce a simple counter that `p_fsync` increments. This is useful for ensuring that `p_fsync` is called when we expect it to be, for example when we have enabled an odb backend to perform `fsync`s when writing objects.
Edward Thomson committed
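A sketch of such a counting wrapper; whether the counter is always compiled in is an assumption here:

```c
#include <unistd.h>

size_t p_fsync__cnt; /* bumped on every call; tests assert on it */

int p_fsync(int fd)
{
	p_fsync__cnt++;
	return fsync(fd);
}
```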
-
- 29 Dec, 2016 1 commit
-
-
Edward Thomson committed
-
- 06 Oct, 2016 1 commit
-
-
Introduce some tests that show certain commits while hiding others that have a timestamp older than the common ancestors of these commits.
Edward Thomson committed
-
- 05 Aug, 2016 1 commit
-
-
Only provide the empty tree internally, which matches git's behavior. If we provided the empty blob as well, any user trying to write it with libgit2 would never actually have it land in the odb, which would appear to git proper as a broken repository (missing that object).
Edward Thomson committed
-
- 04 Aug, 2016 2 commits
-
-
Since the objects being written may all already exist in a single packfile, avoid freshening that packfile repeatedly in a tight loop. Instead, only freshen packfiles every 2 seconds.
Edward Thomson committed -
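A hypothetical sketch of the 2-second throttle; `my_pack` and its `last_freshen` field are illustrative, not libgit2's actual packfile struct:

```c
#include <time.h>

struct my_pack {
	const char *path;
	time_t last_freshen;
};

static int freshen_pack(struct my_pack *p)
{
	time_t now = time(NULL);

	if (p->last_freshen + 2 > now)
		return 0; /* freshened within the last 2 seconds: skip */

	p->last_freshen = now;
	/* ... bump the packfile's mtime on disk here ... */
	return 0;
}
```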
When writing an object, we calculate its OID and see if it exists in the object database. If it does, we need to freshen the file that contains it.
Edward Thomson committed
-
- 09 Mar, 2016 1 commit
-
-
The old implementation had two issues: 1. OIDs that were so short as to be ambiguous were not being handled properly. 2. If the last OID to expand in the array was missing from the ODB, we would leak a `GIT_ENOTFOUND` error code from the function.
Vicent Marti committed
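Assuming this refers to the bulk OID-expansion code behind `git_odb_expand_ids`, a usage sketch of that public API (a valid `odb` is assumed):

```c
#include <git2.h>
#include <string.h>

/* expand one abbreviated hex OID; per-entry lookup failures are
 * reported in the array rather than as the function's return value */
static int expand_one(git_odb *odb, const char *hex, size_t hexlen)
{
	git_odb_expand_id entry;
	int error;

	memset(&entry, 0, sizeof(entry));
	if ((error = git_oid_fromstrn(&entry.id, hex, hexlen)) < 0)
		return error;
	entry.length = (unsigned short)hexlen;
	entry.type = GIT_OBJECT_ANY;

	if ((error = git_odb_expand_ids(odb, &entry, 1)) < 0)
		return error;

	/* entry.length is reset to 0 if the ID could not be uniquely
	 * expanded; otherwise entry.id holds the full OID */
	return 0;
}
```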
-