When parsing the patch image from a string, we split the string by newlines to get a line-based view of it. To split, we use `memchr` on the buffer and limit the search to the original length provided by the caller. This works just fine for the first line, but for every subsequent line we would need to subtract the number of bytes already read, which the code fails to do.

The issue can easily be triggered by a source buffer with at least two lines, where the second line does _not_ end in a newline. Given the string "foo\nb", the original length is five bytes. After extracting the first line, we point at 'b' and again call `memchr(p, '\n', 5)`, resulting in an out-of-bounds read of four bytes.

Fix the issue by correctly subtracting the number of bytes already read.
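The following is a minimal sketch of the line-splitting pattern described above, not libgit2's actual parser code; the `line_reader` struct, `next_line` function, and field names are hypothetical. The key point is that the `memchr` length must be the number of unread bytes, not the caller-supplied total.

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical reader state; does not mirror libgit2's internal types. */
struct line_reader {
	const char *content;  /* start of the source buffer */
	size_t content_len;   /* total length given by the caller */
	size_t remain;        /* bytes not yet consumed */
	const char *line;     /* start of the current line */
	size_t line_len;      /* length of the current line */
};

static int next_line(struct line_reader *r)
{
	const char *eol;

	if (r->remain == 0)
		return 0; /* no more input */

	r->line = r->content + (r->content_len - r->remain);

	/*
	 * The fix: limit the search to the bytes that are still unread.
	 * Passing r->content_len here instead of r->remain reproduces the
	 * out-of-bounds read on a buffer like "foo\nb".
	 */
	eol = memchr(r->line, '\n', r->remain);

	if (eol != NULL)
		r->line_len = (size_t)(eol - r->line) + 1; /* include '\n' */
	else
		r->line_len = r->remain; /* final, unterminated line */

	r->remain -= r->line_len;
	return 1;
}

int main(void)
{
	const char buf[] = "foo\nb"; /* second line not newline-terminated */
	struct line_reader r = { buf, sizeof(buf) - 1, sizeof(buf) - 1, NULL, 0 };

	while (next_line(&r))
		printf("line of %zu byte(s)\n", r.line_len);

	return 0;
}
```

With the five-byte input "foo\nb", the first call consumes the four bytes "foo\n", leaving one unread byte; the second call therefore searches only that one byte instead of reading four bytes past the end of the buffer.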