- 20 Jul, 2019 14 commits
-
-
On most platforms it's fine to create symlinks to nonexistent files. Not so on Windows, where the type of a symlink (file or directory) needs to be set at creation time. So depending on whether the target file exists or not, we may end up with different symlink types.

This creates a problem when performing checkouts, where we simply iterate over all blobs that need to be updated without treating symlinks specially. If the target file of a symlink is going to be checked out after the symlink itself, then the symlink will be created as a directory symlink and not as a file symlink.

Fix the issue by iterating over blobs twice: once to perform postponed deletions and updates to non-symlink blobs, and once to perform updates to symlink blobs.
Patrick Steinhardt committed -
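A minimal sketch of the two-pass ordering described in the commit above. The types and the `apply_action` helper are hypothetical stand-ins; the real checkout code tracks far more state per entry.

```c
#include <stddef.h>

/* Hypothetical, simplified action descriptor. */
typedef struct {
	unsigned int mode;   /* 0120000 (S_IFLNK) marks a symlink entry */
	const char *path;
} deferred_action;

#define MODE_SYMLINK 0120000

/* Stub: the real code would create, update or delete the file here. */
static int apply_action(const deferred_action *action) { (void)action; return 0; }

/* Two passes: deletions and non-symlink blobs first, symlinks second, so
 * that a symlink's target already exists when the link is created and
 * Windows picks the correct (file vs. directory) symlink type. */
static int checkout_deferred(const deferred_action *actions, size_t n)
{
	size_t i;
	int error;

	for (i = 0; i < n; i++) {
		if (actions[i].mode == MODE_SYMLINK)
			continue;
		if ((error = apply_action(&actions[i])) < 0)
			return error;
	}

	for (i = 0; i < n; i++) {
		if (actions[i].mode != MODE_SYMLINK)
			continue;
		if ((error = apply_action(&actions[i])) < 0)
			return error;
	}

	return 0;
}
```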
When creating a symlink on Windows, one needs to tell Windows whether the symlink should be a file or directory symlink. To determine which flag to pass, we call `GetFileAttributesW` on the target file to see whether it is a directory and then pass the flag accordingly.

The problem, though, is that if we create a symlink with a relative target path, then we check that relative path while not necessarily being inside the working directory where the symlink is to be created. Thus, getting its attributes will either fail or return the attributes of the wrong target.

Fix this by resolving the target path relative to the directory in which the symlink is to be created.
Patrick Steinhardt committed -
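A sketch of that resolution step. `resolve_relative_to_link_dir` is a hypothetical helper; `GetFileAttributesW` and the symlink flag are the real Win32 APIs referred to above.

```c
#ifdef _WIN32
#include <windows.h>

/* Hypothetical helper: join the parent directory of `new_link` with the
 * (possibly relative) `target` into `out`. */
extern void resolve_relative_to_link_dir(wchar_t *out, size_t out_len,
                                         const wchar_t *new_link,
                                         const wchar_t *target);

/* Decide whether CreateSymbolicLinkW needs SYMBOLIC_LINK_FLAG_DIRECTORY:
 * resolve the target against the symlink's own directory first, so that
 * relative targets are looked up in the right place. */
static DWORD symlink_type_flag(const wchar_t *new_link, const wchar_t *target)
{
	wchar_t resolved[MAX_PATH];
	DWORD attrs;

	resolve_relative_to_link_dir(resolved, MAX_PATH, new_link, target);

	attrs = GetFileAttributesW(resolved);
	if (attrs != INVALID_FILE_ATTRIBUTES && (attrs & FILE_ATTRIBUTE_DIRECTORY))
		return SYMBOLIC_LINK_FLAG_DIRECTORY;

	return 0;
}
#endif
```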
Add two more tests to verify that we're not deleting symlink targets, but the symlinks themselves. Furthermore, convert several `cl_skip`s on Win32 to conditional skips depending on whether the clar sandbox supports symlinks or not. Windows is grown up now and may allow unprivileged symlinks if the machine has been configured accordingly.
Patrick Steinhardt committed -
Several calls to `p_stat` and `p_close` are not checked for whether they actually succeeded. As these functions _may_ fail, and as we also want to make sure that we're not doing anything dumb, let's check them, too.
Patrick Steinhardt committed -
When deleting a symlink on Windows, the way to delete it depends on whether it is a directory symlink or a file symlink: in the first case, we need to use `RemoveDirectory`, in the second `DeleteFile`. Right now, `p_unlink` will only ever try to use `DeleteFile`, though, and thus fails to remove directory symlinks. This mismatches how unlink(3P) is expected to behave, as it shall remove any symlink regardless of whether it is a file or directory symlink.

In order to correctly unlink a symlink, we thus need to check what kind of file this is. If we were to first query the file attributes of every file upon calling `p_unlink`, though, we would penalize the common case. Instead, we can first try to delete the file with `DeleteFile`, and only if the error returned is `ERROR_ACCESS_DENIED` will we query the file attributes and determine whether it is a directory symlink, in which case we use `RemoveDirectory` instead.
Patrick Steinhardt committed -
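A compact sketch of that fallback, using only real Win32 calls; this is not the exact `p_unlink` implementation, and error mapping is omitted.

```c
#ifdef _WIN32
#include <windows.h>

/* Try the cheap path first; only on ERROR_ACCESS_DENIED do we pay for a
 * GetFileAttributesW call to detect a directory symlink (a reparse point
 * with the directory attribute) and remove it with RemoveDirectoryW. */
static int unlink_sketch(const wchar_t *path)
{
	DWORD attrs;

	if (DeleteFileW(path))
		return 0;

	if (GetLastError() != ERROR_ACCESS_DENIED)
		return -1;

	attrs = GetFileAttributesW(path);
	if (attrs == INVALID_FILE_ATTRIBUTES ||
	    !(attrs & FILE_ATTRIBUTE_REPARSE_POINT) ||
	    !(attrs & FILE_ATTRIBUTE_DIRECTORY))
		return -1;

	return RemoveDirectoryW(path) ? 0 : -1;
}
#endif
```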
When initializing a repository, we need to check whether its working directory supports symlinks to correctly set the initial value of the "core.symlinks" config variable. The code to check the filesystem is reusable in other parts of our codebase, like for example in our tests to determine whether certain tests can be expected to succeed or not. Extract the code into a new function `git_path_supports_symlinks` to avoid duplicate implementations. Remove a duplicate implementation in the repo test helper code.
Patrick Steinhardt committed -
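Combined with the conditional skips mentioned a few commits above, a test can use the extracted helper along these lines. The helper's exact prototype and the test name are assumptions; `cl_skip` and `clar_sandbox_path` are existing clar facilities.

```c
void test_checkout_symlinks__links_are_files(void)
{
	/* Skip only when the sandbox filesystem (or an unprivileged Windows
	 * account without symlink permissions) cannot create symlinks. */
	if (!git_path_supports_symlinks(clar_sandbox_path()))
		cl_skip();

	/* ... exercise symlink behaviour ... */
}
```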
Our file utils functions all have a "futils" prefix, e.g. `git_futils_touch`. One would thus naturally guess that their definitions and implementation would live in files "futils.h" and "futils.c", respectively, but in fact they live in "fileops.h". Rename the files to match expectations.
Patrick Steinhardt committed -
patch_parse: fix segfault due to line containing static contents
Edward Thomson committed -
With commit dedf70ad (patch_parse: do not depend on parsed buffer's lifetime, 2019-07-05), all lines of the patch are allocated with `strdup` to make the lifetime of the parsed patch independent of the buffer that is currently being parsed. In patch b0893282 (patch_parse: ensure valid patch output with EOFNL, 2019-07-11), we introduced another code location where we add lines to the parsed patch. But as that one was implemented via a separate pull request, it wasn't converted to use `strdup`, either. As a consequence, we generate a segfault when trying to deallocate the potentially static buffer that now ends up in some of the lines.

Use `git__strdup` to fix the issue.
Patrick Steinhardt committed -
ignore: fix determining whether a shorter pattern negates another
Edward Thomson committed -
patch_parse: handle missing newline indicator in old file
Edward Thomson committed -
patch_parse: do not depend on parsed buffer's lifetime
Edward Thomson committed -
sha1: fix compilation of WinHTTP backend
Edward Thomson committed -
repository: do not initialize HEAD if it's provided by templates
Edward Thomson committed
-
- 19 Jul, 2019 7 commits
-
-
When using templates to initialize a git repository, git-init(1) will copy over all contents of the template directory. These are preferred over the default ones created by git-init(1). While we mostly do the same, there is an exception for "HEAD": we do copy over the template's HEAD file, but afterwards we immediately re-initialize its contents with either the default "ref: refs/heads/master" or the init option's `initial_head` field.

Let's fix the inconsistency with upstream git-init(1) by not overwriting the template's HEAD, but only if the user hasn't set `opts.initial_head`. If the `initial_head` field has been supplied, we should use it regardless of whether the template contained a HEAD file or not.

Add tests to verify that we correctly use the template directory's HEAD file and that `initial_head` overrides the template.
Patrick Steinhardt committed -
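A sketch of the resulting precedence. The surrounding function and the `write_*`/`template_provided_head` helpers are hypothetical; only the `initial_head` field comes from the real init options.

```c
#include <git2.h>

/* Hypothetical helpers, for illustration only. */
extern int write_head_ref(git_repository *repo, const char *refname);
extern int write_default_head(git_repository *repo);
extern int template_provided_head(git_repository *repo);

static int setup_head(git_repository *repo,
                      const git_repository_init_options *opts)
{
	/* 1. An explicit `initial_head` always wins. */
	if (opts->initial_head && *opts->initial_head)
		return write_head_ref(repo, opts->initial_head);

	/* 2. If the template already provided a HEAD file, leave it alone. */
	if (template_provided_head(repo))
		return 0;

	/* 3. Otherwise fall back to the built-in default. */
	return write_default_head(repo);
}
```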
Update `git_repository_init_ext` to use our typical style of error handling. The function had multiple statements that didn't `goto out` immediately on error but instead deferred the error handling to later code via `if` statements.
Patrick Steinhardt committed -
The error handling in `git_repository_create_head` completely swallows all error codes. While probably not too much of a problem, this also violates our usual coding style. Refactor the code to use a local `error` variable with the typical `goto out` statements.
Patrick Steinhardt committed -
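The `goto out` error-handling style the two commits above refer to, in a generic, self-contained form; the function and the resource helpers are placeholders, not the actual `git_repository_create_head` code.

```c
/* Illustrative stand-ins for resource-acquiring calls. */
extern int open_locked_file(void **out, const char *path);
extern int write_ref_file(void *file, const char *ref);
extern void close_locked_file(void *file);

/* The typical libgit2 pattern: a single `error` variable, an immediate
 * `goto out` on every failure, and one cleanup block at the end. */
static int create_head_example(const char *head_path, const char *ref)
{
	void *file = NULL;
	int error;

	if ((error = open_locked_file(&file, head_path)) < 0)
		goto out;

	if ((error = write_ref_file(file, ref)) < 0)
		goto out;

out:
	if (file)
		close_locked_file(file);
	return error;
}
```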
All tests in repo::template follow a common pattern of first setting up templates and then setting up a repository that makes use of those templates via several init options. Refactor this pattern into two functions `setup_templates` and `setup_repo` that handle most of that logic, to make it easier to spot what a test actually wants to check.

Furthermore, this also refactors how we clean up after the tests. Previously, it was a combination of manually calling `cl_fixture_cleanup` and `cl_set_cleanup`, which is kind of hard to read. This commit refactors the tests to instead provide the cleanup parameters in the setup functions; all cleanups are then performed in the suite's cleanup function.
Patrick Steinhardt committed -
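A sketch of how a test might read after the refactor. Only the helper names come from the commit; their signatures and the assertion helper are assumptions.

```c
void test_repo_template__head_taken_from_template(void)
{
	/* Both helpers register the necessary cleanup with the suite, so the
	 * test body is reduced to "arrange, act, assert". */
	setup_templates("template", /* cleanup after suite */ true);
	setup_repo("templated.git", GIT_REPOSITORY_INIT_BARE |
	                            GIT_REPOSITORY_INIT_EXTERNAL_TEMPLATE);

	assert_head_contents("ref: refs/heads/from-template\n"); /* hypothetical */
}
```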
The repo::template test suite makes use of quite a few local variables that could be consolidated. Do so to make the code easier to read.
Patrick Steinhardt committed -
There's quite a lot of supporting code for our templates and they are an obvious standalone feature. Thus, let's extract those tests into their own suite to also make refactoring of them easier.
Patrick Steinhardt committed -
configuration: cvar -> configmap
Patrick Steinhardt committed
-
- 18 Jul, 2019 12 commits
-
-
Evict cache items more efficiently
Patrick Steinhardt committed -
clar: fix suite count
Patrick Steinhardt committed -
With the introduction of data variants for suites, we started undercounting the number of suites, as we didn't account for those that were executed twice. This was then adjusted to count the number of initializers instead, but that fails to account for suites without any initializers at all. Fix the suite count by counting the number of initializers or, if a suite has no initializer, counting it as a single suite only.
Patrick Steinhardt committed -
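The counting rule expressed as code; the struct and its fields are stand-ins, not clar's actual internals.

```c
#include <stddef.h>

/* Hypothetical summary of a suite as seen by the counter. */
struct suite_info {
	const char *name;
	size_t initializer_count;   /* number of data-variant initializers */
};

static size_t count_suites(const struct suite_info *suites, size_t n)
{
	size_t i, count = 0;

	for (i = 0; i < n; i++)
		/* Each initializer is one suite run; a suite without any
		 * initializer still runs once, so count it as a single suite. */
		count += suites[i].initializer_count ? suites[i].initializer_count : 1;

	return count;
}
```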
Ignore VS2017 specific files and folders
Patrick Steinhardt committed -
Edward Thomson committed
-
`cvar` is an unhelpful name. Refactor its usage to `configmap` for more clarity.
Patrick Steinhardt committed -
gitattributes: ignore macros defined in subdirectories
Patrick Steinhardt committed -
We currently have no job that compiles libgit2 with the WinHTTP backend for SHA1. Due to this, a compile error was introduced and went unnoticed for several months. Change the x86 MSVC job to use the HTTPS backend for SHA1; the x86 job was chosen for no particular reason.
Patrick Steinhardt committed -
In commit bbf034ab (hash: move `git_hash_prov` into Win32 backend, 2019-02-22), the structure name of `git_hash_prov` was removed in favour of its typedef'ed name. But as we have no CI job that compiles with the WinHTTP hashing backend right now, it wasn't noticed that the implementation using this struct wasn't changed accordingly. Fix the struct type to make it compile again.
Patrick Steinhardt committed -
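Illustration of the kind of breakage (the member list is a placeholder, not the real structure): once a type only exists under its typedef'ed name, any leftover reference to the old struct tag fails to compile.

```c
/* After bbf034ab, only the typedef'ed name exists: */
typedef struct {
	void *provider_handle;   /* placeholder members */
	int   type;
} git_hash_prov;

/* struct git_hash_prov prov;   <- no such tag any more; does not compile */
git_hash_prov prov;             /* <- use the typedef'ed name instead */
```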
When selecting the SHA1 backend, we only include the respective C implementation of the selected backend. But since commit bd48bf3f (hash: introduce source files to break include circles, 2019-06-14), we have separate headers and compilation units for all hashes. So by not listing the headers, they may not be taken into account when computing whether a file needs to be recompiled, and they also will not be displayed in IDEs. Add the header files to fix this problem.
Patrick Steinhardt committed -
When computing whether we need to store a negative pattern, we iterate through all previously known patterns and check whether the negative pattern undoes any of the previous ones. In doing so, we call `wildmatch` and check its return value for negative error values. If there was a negative return, we abort and bubble up that error to the caller.

In fact, this check for negative values stems from the time when we still used `fnmatch` instead of `wildmatch`. For `fnmatch`, negative values indicate a "real" error, while for `wildmatch` a negative value may be returned if the matching was prematurely aborted. A premature abort may, for example, happen if the pattern is shorter and matches only a prefix of the haystack. Returning an error in that case is the wrong thing to do.

Fix the code to compare for equality with `WM_MATCH` only. Negative values returned by `wildmatch` are perfectly fine and thus should be ignored. Add a test that verifies we do not see the error.
Patrick Steinhardt committed -
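In code terms, the change boils down to this kind of comparison. `WM_MATCH` and `WM_PATHNAME` are the real constants from the bundled wildmatch header; the wrapper function is illustrative.

```c
#include <stdbool.h>
#include "wildmatch.h"

/* Only an exact WM_MATCH counts as a match.  Negative return values are
 * premature-abort codes, not errors, and must not be bubbled up. */
static bool pattern_matches(const char *pattern, const char *haystack)
{
	return wildmatch(pattern, haystack, WM_PATHNAME) == WM_MATCH;
}
```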
clar: correctly account for "data" suites when counting
Patrick Steinhardt committed
-
- 17 Jul, 2019 3 commits
-
-
Allocate memory more efficiently when packing objects
Edward Thomson committed -
When our object cache is full, we pick eight items (or the whole cache, if there are fewer) and evict them. For small cache sizes, this is fine, but when we're dealing with a large number of objects, we can repeatedly exhaust the cache and spend a large amount of time in git_oidmap_iterate trying to find items to evict.

Instead, let's assume that if the cache gets full, we have a large number of objects that we're handling, and be more aggressive about evicting items: remove one item for every 2048 items in the cache, but never fewer than 8. This causes us to scale our evictions in proportion to the size of the cache and significantly reduces the time we spend in git_oidmap_iterate.

Before this change, a full pack of all the non-blob objects in the Linux repository took in excess of 30 minutes and spent 62.3% of total runtime in odb_read_1 and its children, with 44.3% of the time in git_oidmap_iterate. With this change, the same operation now takes 14 minutes and 44 seconds, odb_read_1 accounts for only 35.9% of total time, and git_oidmap_iterate accounts for 6.2%. Note that we do spend a little more time inflating objects and a decent amount more time in memcmp. However, overall, the time taken is significantly improved, and time in pack building is now dominated by git_delta_create_from_index (33.7%), which is what we would expect.
brian m. carlson committed -
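The sizing rule as a tiny helper; the function name is illustrative.

```c
#include <stddef.h>

/* Evict in proportion to the cache: one entry per 2048 cached entries,
 * but never fewer than 8 at a time. */
static size_t entries_to_evict(size_t cached_entries)
{
	size_t n = cached_entries / 2048;

	return n < 8 ? 8 : n;
}
```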
The packbuilder code allocates memory in chunks. When it needs to allocate, it tries to add 1024 to the number of objects and multiply by 3/2. However, it actually multiplies by 1 instead, since it performs an integer division in the expression "3 / 2" and only then multiplies by the increased number of objects.

The current behavior causes the code to waste massive amounts of time copying memory when it reallocates, causing the insertion of all non-blob objects in the Linux repository into a new pack to take an indeterminate amount of time greater than 5 minutes instead of 52 seconds. Correct this error by first dividing by two, and only then multiplying by 3. We still check for overflow for the multiplication, which is the only part that can overflow.

This appears to be the only place in the code base which has this problem.
brian m. carlson committed
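The heart of the fix is operator ordering under integer arithmetic. A minimal before/after under illustrative names; the real code keeps its overflow check on the multiplication.

```c
#include <stdio.h>

int main(void)
{
	size_t objects = 2000;

	/* Before: "3 / 2" is evaluated first as integer division and yields 1,
	 * so the buffer barely grows and nearly every insertion reallocates. */
	size_t before = (objects + 1024) * (3 / 2);   /* == objects + 1024 == 3024 */

	/* After: divide by 2 first, then multiply by 3, for a real 1.5x growth. */
	size_t after = (objects + 1024) / 2 * 3;      /* == 4536 */

	printf("before: %zu, after: %zu\n", before, after);
	return 0;
}
```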
-
- 16 Jul, 2019 1 commit
-
-
Failing to do that makes clar miss the last of the suites, as all duplicated "data" suites would not have been accounted for.
Etienne Samson committed
-
- 12 Jul, 2019 3 commits
-
-
Signed-off-by: Sven Strickroth <email@cs-ware.de>
Sven Strickroth committed -
fileops: fix creation of directory in filesystem root
Patrick Steinhardt committed -
Right now, we are unconditionally applying all macros found in a gitattributes file. But quoting gitattributes(5):

    Custom macro attributes can be defined only in top-level gitattributes
    files ($GIT_DIR/info/attributes, the .gitattributes file at the top level
    of the working tree, or the global or system-wide gitattributes files),
    not in .gitattributes files in working tree subdirectories.

So gitattributes files in subdirectories of the working tree may explicitly _not_ contain macro definitions, but we do not currently enforce this limitation.

This patch introduces a new parameter to the gitattributes parser that tells whether macros are allowed in the current file or not. If set to `false`, we will still parse macros, but silently ignore them instead of adding them to the list of defined macros. Update all callers to correctly determine whether the to-be-parsed file may contain macros or not. Most importantly, when walking up the directory hierarchy, we will only set it to `true` once we reach the root directory of the repo itself.

Add a test that verifies that we are indeed not applying macros from subdirectories; prior to these changes, the test would've failed.
Patrick Steinhardt committed
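A sketch of what the new parameter changes in the parser; all names here are illustrative rather than the actual internal API.

```c
#include <stdbool.h>

typedef struct attr_rule { bool is_macro; /* ... */ } attr_rule;
typedef struct attr_file { int dummy;     /* ... */ } attr_file;

extern int parse_rule(attr_rule *out, const char *line);     /* hypothetical */
extern int append_rule(attr_file *file, const attr_rule *r); /* hypothetical */

/* `allow_macros` is false for .gitattributes files found in working-tree
 * subdirectories: macro definitions such as "[attr]binary -diff -merge -text"
 * are still parsed, but silently dropped instead of being registered. */
static int parse_attr_line(attr_file *file, const char *line, bool allow_macros)
{
	attr_rule rule;
	int error;

	if ((error = parse_rule(&rule, line)) < 0)
		return error;

	if (rule.is_macro && !allow_macros)
		return 0;

	return append_rule(file, &rule);
}
```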
-