- 01 Dec, 2018 1 commit
Use the new object_type enumeration names within the codebase.
Edward Thomson committed
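A minimal sketch of the renamed identifiers (the post-rename public API: `git_object_t` and `GIT_OBJECT_*`, replacing `git_otype` and `GIT_OBJ_*`):

    #include <git2.h>

    /* Illustrative only: check an object's type using the renamed
     * enumeration (GIT_OBJECT_BLOB was formerly GIT_OBJ_BLOB). */
    static int is_blob(const git_object *obj)
    {
        return git_object_type(obj) == GIT_OBJECT_BLOB;
    }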
- 10 Jun, 2018 1 commit
Patrick Steinhardt committed
- 09 Feb, 2018 1 commit
Return an error to the caller when we can't create an object header for some reason (e.g. a printf failure), instead of simply asserting.
Edward Thomson committed
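A hedged sketch of the idea; the helper name and signature here are illustrative, not the actual libgit2 internals:

    #include <stdio.h>

    /* Hypothetical helper: format a loose object header ("<type> <size>\0")
     * and return an error on printf failure or truncation rather than
     * asserting, so the caller can surface the problem. */
    static int format_object_header(char *hdr, size_t hdr_size,
            const char *type_name, unsigned long long size)
    {
        int len = snprintf(hdr, hdr_size, "%s %llu", type_name, size);

        if (len < 0 || (size_t)len + 1 > hdr_size)
            return -1;      /* report the failure to the caller */

        return len + 1;     /* header length includes the trailing NUL */
    }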
- 02 Feb, 2018 3 commits
Only run the large file tests on 64-bit platforms. Even though we support streaming reads of objects and do not need to fit them in memory, we use `size_t` in various places to reflect the size of an object.
Edward Thomson committed
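A minimal sketch of such a guard, assuming a clar-style test harness with `cl_skip()`; the test name is illustrative:

    #include <stdint.h>

    /* Assumed approach: skip the large-file tests where size_t cannot
     * represent objects larger than 4 GB. */
    void test_odb_largefiles__initialize(void)
    {
    #if SIZE_MAX <= UINT32_MAX
        cl_skip();   /* 32-bit size_t: large objects cannot be sized in memory */
    #endif
    }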
Test that we can read_header on large blobs. This should succeed on all platforms since we read only a few bytes into memory to be able to parse the header.
Edward Thomson committed -
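A short sketch using the public header-read call (recent libgit2 signature; older releases take a `git_otype` out-parameter):

    #include <git2.h>
    #include <stdio.h>

    /* Only the size and type are parsed from the header; the blob
     * content itself is never loaded into memory. */
    static int print_object_size(git_odb *odb, const git_oid *id)
    {
        size_t size;
        git_object_t type;

        if (git_odb_read_header(&size, &type, odb, id) < 0)
            return -1;

        printf("%s, %zu bytes\n", git_object_type2string(type), size);
        return 0;
    }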
Since some test environments may have generous disk space but limited RAM (e.g. hosted build agents), test that we can stream a large file into a loose object and then stream it back out of loose object storage.
Edward Thomson committed
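A sketch of the round trip being exercised, assuming the current stream API (the exact parameter types have varied across releases):

    #include <git2.h>
    #include <string.h>

    /* Stream a large object into loose storage, then stream it back out,
     * never holding the whole object in memory. */
    static int stream_roundtrip(git_odb *odb, unsigned long long total, git_oid *oid_out)
    {
        git_odb_stream *write_stream, *read_stream;
        char buf[4096];
        unsigned long long remain;
        size_t chunk, object_size;
        git_object_t type;

        memset(buf, 'x', sizeof(buf));

        if (git_odb_open_wstream(&write_stream, odb, total, GIT_OBJECT_BLOB) < 0)
            return -1;

        for (remain = total; remain > 0; remain -= chunk) {
            chunk = remain < sizeof(buf) ? (size_t)remain : sizeof(buf);
            if (git_odb_stream_write(write_stream, buf, chunk) < 0)
                goto on_error;
        }

        if (git_odb_stream_finalize_write(oid_out, write_stream) < 0)
            goto on_error;
        git_odb_stream_free(write_stream);

        if (git_odb_open_rstream(&read_stream, &object_size, &type, odb, oid_out) < 0)
            return -1;
        while (git_odb_stream_read(read_stream, buf, sizeof(buf)) > 0)
            ; /* consume (and, in the real test, verify) the data */
        git_odb_stream_free(read_stream);
        return 0;

    on_error:
        git_odb_stream_free(write_stream);
        return -1;
    }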
- 20 Dec, 2017 3 commits
Writing very large files may be slow, particularly on inefficient filesystems and when running instrumented code to detect invalid memory accesses (e.g. within Valgrind or similar tools). Introduce `GITTEST_SLOW` so that slow tests can be skipped by the CI system.
Edward Thomson committed
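A hedged sketch of how such a skip might be wired up; the test name is illustrative and `cl_skip()` assumes a clar-style harness:

    #include <stdlib.h>

    /* Assumed wiring: slow tests only run when the environment opts in
     * by setting GITTEST_SLOW. */
    void test_odb_largefiles__streamwrite(void)
    {
        if (!getenv("GITTEST_SLOW"))
            cl_skip();

        /* ... the slow large-file writes would happen here ... */
    }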
Check the size of objects being read from the loose odb backend and reject those that would not fit in memory with an error message that reflects the actual problem, instead of failing later with an unintuitive error about truncation or invalid hashes.
Edward Thomson committed
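An illustrative guard of this shape (not the actual loose-backend code):

    #include <stdint.h>

    /* Reject an object whose declared size cannot fit in size_t before
     * attempting to inflate it, so the caller sees "too large for memory"
     * rather than a later truncation or hash error.  (On 64-bit platforms
     * this check is always false.) */
    static int check_loose_size(uint64_t declared_size)
    {
        if (declared_size > SIZE_MAX)
            return -1;

        return 0;
    }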
Introduce a test for very large objects in the ODB: write a large object (5 GB) and ensure that the write succeeds and yields the expected object ID. Introduce a further test that writes that file and ensures that we can subsequently read it back.
Edward Thomson committed
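An illustrative test shape, reusing the `stream_roundtrip` sketch above; the expected hash would be computed ahead of time (e.g. with `git hash-object`) and is an assumed input here:

    #include <git2.h>

    /* Write roughly 5 GB through the write stream and confirm the
     * resulting object ID matches the precomputed one. */
    static int write_5gb_and_verify(git_odb *odb, const char *expected_hex)
    {
        git_oid oid, expected;

        if (stream_roundtrip(odb, 5ULL * 1024 * 1024 * 1024, &oid) < 0 ||
            git_oid_fromstr(&expected, expected_hex) < 0)
            return -1;

        return git_oid_equal(&oid, &expected) ? 0 : -1;
    }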