- 21 Aug, 2018 1 commit
-
-
Otherwise we return a NULL context, which will get dereferenced in apply_credentials.
Etienne Samson committed
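A minimal, self-contained sketch of the guard this describes. Every name here (`auth_context`, `lookup_context`, `find_context`) is a hypothetical stand-in rather than libgit2 internals; the point is simply that the lookup fails instead of handing a `NULL` context to `apply_credentials`:

```c
#include <stddef.h>

/* Hypothetical stand-ins, not libgit2 internals. */
typedef struct auth_context { const char *scheme; } auth_context;

/* Stub lookup that may fail to find a matching context. */
static auth_context *lookup_context(const char *scheme)
{
	(void)scheme;
	return NULL;
}

static int find_context(auth_context **out, const char *scheme)
{
	auth_context *ctx = lookup_context(scheme);

	if (ctx == NULL) {
		*out = NULL;
		return -1; /* fail here, so apply_credentials never sees NULL */
	}

	*out = ctx;
	return 0;
}
```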
-
- 20 Aug, 2018 3 commits
-
-
Otherwise we'll return stack data to the caller.
Etienne Samson committed -
In case there was nothing to parse in the buf, we'd return uninitialized stack data.
Etienne Samson committed -
At line 594 we do `if (error < 0) return error;`, but if nothing was pushed in a GIT_SORT_TIME revwalk, we'd return uninitialized stack data.
Etienne Samson committed
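A minimal, self-contained sketch of the pattern behind these fixes (the `walk_all`/`process` names are hypothetical, not the actual revwalk or parser code): the local `error` has to be initialized so that a walk which visits nothing returns 0 rather than whatever happened to be on the stack.

```c
#include <stddef.h>

struct item { struct item *next; };

static int process(struct item *it) { (void)it; return 0; }

static int walk_all(struct item *head)
{
	int error = 0; /* initialized: an empty walk now returns 0 */

	for (; head != NULL; head = head->next) {
		if ((error = process(head)) < 0)
			return error;
	}

	return error; /* previously uninitialized stack data when head == NULL */
}
```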
-
- 19 Aug, 2018 3 commits
-
-
Fix leak in index.c
Edward Thomson committed -
threads::diff: use separate git_repository objects
Edward Thomson committed -
Our thread policies state that we cannot re-use the `git_repository` across threads. Our tests cannot deviate from that. Courtesy of Ximin Luo, https://github.com/infinity0: https://github.com/libgit2/libgit2/issues/4753#issuecomment-412247757
Edward Thomson committed
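A sketch of what this looks like in test code, assuming pthreads and the public libgit2 API (`git_repository_open`/`git_repository_free`); the actual test is not reproduced here, only the one-repository-per-thread shape:

```c
#include <git2.h>
#include <pthread.h>

/* Each worker opens its own git_repository; nothing is shared across threads.
 * Assumes git_libgit2_init() has already been called by the test harness. */
static void *worker(void *path)
{
	git_repository *repo = NULL;

	if (git_repository_open(&repo, (const char *)path) == 0) {
		/* ... per-thread diff work against this private repo ... */
		git_repository_free(repo);
	}

	return NULL;
}

void run_workers(char *path)
{
	pthread_t threads[4];
	int i;

	for (i = 0; i < 4; i++)
		pthread_create(&threads[i], NULL, worker, path);
	for (i = 0; i < 4; i++)
		pthread_join(threads[i], NULL);
}
```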
-
- 17 Aug, 2018 1 commit
-
-
travis: remove Coverity cron job
Edward Thomson committed
-
- 16 Aug, 2018 4 commits
-
-
abyss7 committed
-
parse: Do not initialize the content in context to NULL
Patrick Steinhardt committed -
config_file: Don't crash on options without a section
Patrick Steinhardt committed -
With the recent addition of VSTS to our CI infrastructure, we now have two cron jobs running regular Coverity analysis. It doesn't really make sense to upload two different analyses of our sources to Coverity, though:

- in the worst case, Coverity will be repeatedly confused when different sets of sources get analyzed and uploaded
- in the best case, nothing is gained because the sources have already been analyzed via the other job

Let's just use a single cron job for Coverity. Considering that VSTS seems to be the more beefy and flexible platform, it is more likely to be our future target CI platform. Thus, we retain its support for Coverity and instead remove it from Travis.
Patrick Steinhardt committed
-
- 14 Aug, 2018 4 commits
-
-
ci: Correct the status code check so Coverity doesn't force-fail Travis
Edward Thomson committed -
Otherwise you get something like:

    Emitted 525 C/C++ compilation units (100%) successfully
    525 C/C++ compilation units (100%) are ready for analysis
    The cov-build utility completed successfully.
    Build successfully submitted.
    Received error code 200 from Coverity
    travis_time:end:14cf6373:start=1534254309066933889,finish=1534254728190974302,duration=419124040413
    The command "if [ -n "$COVERITY" ]; then ../ci/coverity.sh; fi" exited with 1.
    travis_time:start:01ed61d4
    $ if [ -z "$COVERITY" ]; then ../ci/build.sh && ../ci/test.sh; fi
    travis_time:end:01ed61d4:start=1534254728197560961,finish=1534254728202711214,duration=5150253
    The command "if [ -z "$COVERITY" ]; then ../ci/build.sh && ../ci/test.sh; fi" exited with 0.
    Done. Your build exited with 1.
Etienne Samson committed -
Nelson Elhage committed
-
Nelson Elhage committed
-
- 09 Aug, 2018 4 commits
-
-
ci: remove appveyor
Edward Thomson committed -
Edward Thomson committed
-
diff: fix OOM on AIX when finding similar deltas in empty diff
Edward Thomson committed -
The function `git_diff_find_similar` keeps a cache of similarity metric signatures, whose size depends on the number of deltas passed in via the `diff` parameter. In case the diff is empty and thus doesn't have any deltas at all, we may end up allocating this cache via a call to `git__calloc(0, sizeof(void *))`. At least on AIX, allocating 0 bytes will result in a `NULL` pointer being returned, which causes us to erroneously return an OOM error.

Fix this situation by simply returning early in case we are being passed an empty diff, as we cannot find any similarities in that case anyway.
Patrick Steinhardt committed
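A sketch of the early return described above, written as a caller-side guard against the public API rather than the actual patch; `git_diff_num_deltas` and `git_diff_find_similar` are the real libgit2 functions, while `find_similar_guarded` is just an illustrative wrapper:

```c
#include <git2.h>

/* Skip similarity detection for empty diffs so the signature cache is
 * never allocated with a zero-byte request. */
int find_similar_guarded(git_diff *diff, const git_diff_find_options *opts)
{
	if (git_diff_num_deltas(diff) == 0)
		return 0; /* nothing to compare */

	return git_diff_find_similar(diff, opts);
}
```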
-
- 06 Aug, 2018 8 commits
-
-
Edward Thomson committed
-
travis: do not execute Coverity analysis for all cron jobs
Edward Thomson committed -
ci: enable compilation with "-Werror"
Edward Thomson committed -
smart_pkt: fix potential OOB-read when processing ng packet
Patrick Steinhardt committed -
During the conversion of our CI scripts in bf418f09 (ci: refactor unix ci build/test scripts, 2018-07-14), we accidentally dropped the "-DENABLE_WERROR=ON" switch in our cmake invocation. Re-add it to help us catch compiler warnings early.
Patrick Steinhardt committed -
The `git_odb_stream` members `declared_size` and `received_bytes` are both of the type `git_off_t`, which we usually define to be a 64-bit signed integer. Thus, passing these members to "PRIdZ" formatters is not correct, as those are not guaranteed to accept big enough numbers. Instead, use the "PRId64" formatter, which is able to represent 64-bit signed integers.
Patrick Steinhardt committed -
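A self-contained sketch of the formatter change, using a stand-in `int64_t` typedef instead of the real `git_off_t`: 64-bit signed values need the `PRId64` format specifier.

```c
#include <inttypes.h>
#include <stdio.h>

typedef int64_t my_off_t; /* stand-in for git_off_t */

static void report_size_mismatch(my_off_t declared_size, my_off_t received_bytes)
{
	/* PRId64 matches a 64-bit signed integer on every platform */
	fprintf(stderr, "expected %" PRId64 " bytes, received %" PRId64 " bytes\n",
	        declared_size, received_bytes);
}
```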
Fix a double-free in config parsing
Patrick Steinhardt committed -
The new Travis cron job gets executed daily, but our current configuration will cause each job to execute our Coverity script instead of the default build and testing scripts. This cannot work, as Coverity heavily rate-limits its API, so our cron builds are doomed to always fail.

What we want to do instead is execute our normal builds, but add an additional Coverity job. This can easily be done by adding another Coverity-specific job with a conditional "type = cron", which sets the "COVERITY" environment variable. Instead of checking the build type, we then simply check whether "COVERITY" is set or not.
Patrick Steinhardt committed
-
- 05 Aug, 2018 5 commits
-
-
Nelson Elhage committed
-
Nelson Elhage committed
-
Our thread policies state that we cannot re-use the `git_repository` across threads. Our tests cannot deviate from that.
Edward Thomson committed -
Nelson Elhage committed
-
Nelson Elhage committed
-
- 04 Aug, 2018 1 commit
-
-
String operations in libgit2 are supposed to never receive `NULL`, i.e. they are not `NULL`-safe. In the case of `git__linenlen()`, invocation with `NULL` leads to undefined behavior. In a `git_parse_ctx`, however, the `content` field used in these operations was initialized to `NULL` if `git_parse_ctx_init()` was called with `NULL` for `content` or `0` for `content_len`. For the latter case, the initialization function even contained some logic for initializing `content` with `NULL`.

This commit avoids triggering undefined behavior by rewriting that logic: `content` is now always initialized to a non-null buffer, with an empty string used to denote an empty buffer instead of a null pointer.
Julian Ganz committed
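A simplified, self-contained sketch of the rewritten initialization (the struct and function here are stand-ins, not the real `git_parse_ctx`): `content` always points at a valid buffer, with `""` denoting an empty one.

```c
#include <stddef.h>

struct parse_ctx {
	const char *content;
	size_t content_len;
};

static void parse_ctx_init(struct parse_ctx *ctx, const char *content, size_t content_len)
{
	if (content && content_len) {
		ctx->content = content;
		ctx->content_len = content_len;
	} else {
		ctx->content = "";   /* never NULL, so string helpers stay defined */
		ctx->content_len = 0;
	}
}
```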
-
- 03 Aug, 2018 6 commits
-
-
Fuzzers
Patrick Steinhardt committed -
When using VSTS-based builds, we are in a different location than when doing Travis builds. Due to this, the relative path to our fuzzer corpora does not work on VSTS. Fix it by using `${SOURCE_DIR}` instead.
Patrick Steinhardt committed -
By default, libgit2 allows up to 2^32 objects when downloading a packfile from a remote. For each of these objects, libgit2 will allocate up to two small structs, which in total adds up to quite a lot of memory. As a result, our fuzzers might run out of memory rather quickly in case they receive as input a packfile with such a huge object count.

Limit the packfile object count to 10M objects. This is sufficiently big to still work with most largish repos (linux.git has around 6M objects as of now), but small enough to not cause the fuzzer to OOM.
Patrick Steinhardt committed -
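A sketch of the kind of limit described, with hypothetical names (`MAX_PACK_OBJECTS`, `check_object_count`); where exactly the real check sits in the fuzzer's packfile handling is not shown here.

```c
#include <stdint.h>

#define MAX_PACK_OBJECTS 10000000u /* 10M objects, per the commit message */

/* Reject packfiles whose header declares more objects than we are willing
 * to allocate per-object bookkeeping for. */
static int check_object_count(uint32_t declared_objects)
{
	if (declared_objects > MAX_PACK_OBJECTS)
		return -1;

	return 0;
}
```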
Convert the "download_refs" fuzzer from C++ to C. Rename the source file to have it be picked up by our build system.
Patrick Steinhardt committed -
This is a direct copy of the code from google/oss-fuzz, written by Nelson Elhage (@nelhage). Note that due to the ".cc" ending, the file will not yet be picked up by the build system. This is intended, as currently that file is partly written in C++, requiring a conversion to C.
Patrick Steinhardt committed -
The packfile_raw fuzzer is using some internal APIs from libgit2, which makes it hard to compile it as part of the oss-fuzz project. As oss-fuzz requires us to link against the C++ FuzzingEngine library, we cannot use "-DBUILD_FUZZERS=ON" directly but instead have to first compile an object from our fuzzers and then link against the C++ library. Compiling the fuzzer objects thus requires an external invocation of CC, and we certainly don't want to do further black magic by adding libgit2's private source directory to the header include path.

To fix the issue, convert the code to not use any internal APIs. Besides some headers which we have to add now, this also requires us to switch to the hashing function of the ODB. Note that this will change the hashing result, as we previously did not prepend the object header to the data that is to be hashed. But this shouldn't matter in practice, as we don't care about the hash value anyway.
Patrick Steinhardt committed
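A sketch of a fuzzer entry point that sticks to the public API (assumed shape, not the actual packfile_raw fuzzer): `git_odb_hash` replaces the internal hashing helpers, and it prepends the object header itself, which is why the resulting hash changes.

```c
#include <git2.h>
#include <stddef.h>
#include <stdint.h>

/* Assumes git_libgit2_init() was called once, e.g. in LLVMFuzzerInitialize(). */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	git_oid oid;

	/* Hash the input as a blob via the public ODB API; the object header
	 * is prepended internally before hashing. */
	git_odb_hash(&oid, data, size, GIT_OBJ_BLOB);

	return 0;
}
```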
-