1. 16 Jan, 2013 1 commit
  2. 15 Jan, 2013 2 commits
  3. 04 Jan, 2013 1 commit
  4. 05 Dec, 2012 1 commit
  5. 03 Dec, 2012 1 commit
  6. 30 Nov, 2012 2 commits
  7. 29 Nov, 2012 4 commits
  8. 28 Nov, 2012 2 commits
  9. 19 Nov, 2012 1 commit
  10. 14 Nov, 2012 1 commit
  11. 12 Nov, 2012 2 commits
  12. 07 Nov, 2012 1 commit
    • Add a contrib script for comparing the performance of two sets of compiler runs. · bff0e529
      
      Usage documentation is in the script.
      
      The script produces output of the form:
      
      $ compare_two_ftime_report_sets "Log0/*perf" "Log3/*perf" 
      
      Arithmetic sample for timevar log files
      "Log0/*perf"
      and selecting lines containing "TOTAL" with desired confidence 95 is 
      trial count is 4, mean is 443.022 (95% confidence in 440.234 to 445.811),
      std.deviation is 1.75264, std.error is 0.876322
      
      Arithmetic sample for timevar log files
      "Log3/*perf"
      and selecting lines containing "TOTAL" with desired confidence 95 is 
      trial count is 4, mean is 441.302 (95% confidence in 436.671 to 445.934),
      std.deviation is 2.91098, std.error is 1.45549
      
      The first sample appears to be 0.39% larger,
      with 60% confidence of being larger.
      To reach 95% confidence, you need roughly 14 trials,
      assuming the standard deviation is stable, which is iffy.
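
      For reference, the intervals printed above are consistent with
      Student's t intervals at n-1 degrees of freedom (margin =
      t-critical * std.error).  A minimal Python sketch of that
      arithmetic, illustrative only and not the script itself:

      import math
      import statistics
      from scipy.stats import t

      def summarize(times, confidence=0.95):
          """Mean, sample std.deviation, std.error and a Student's t
          confidence interval for one list of -ftime-report TOTAL times
          (one value per trial)."""
          n = len(times)
          mean = statistics.mean(times)
          sd = statistics.stdev(times)    # sample standard deviation
          se = sd / math.sqrt(n)          # standard error of the mean
          margin = t.ppf((1 + confidence) / 2, n - 1) * se
          return mean, sd, se, (mean - margin, mean + margin)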
      
      Tested on x86_64 builds.
      
      
      Index: contrib/ChangeLog
      
      2012-11-05  Lawrence Crowl  <crowl@google.com>
      
      	* compare_two_ftime_report_sets: New.
      
      From-SVN: r193277
      Lawrence Crowl committed
  13. 02 Nov, 2012 1 commit
    • Add a new option --clean_build to validate_failures.py · b436bf38
      This is useful when you have two builds of the same compiler: one with
      your changes, the other a clean build at the same revision.
      Instead of using a manifest file, --clean_build will compare the
      results it gathers from the patched build against those it gathers from
      the clean build.
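
      Conceptually this boils down to a set difference over the failure
      lines collected from each build's .sum files.  A hypothetical
      Python sketch of the idea (names and details below are
      illustrative, not the script's actual helpers):

      import os

      FAILURE_STATES = ('FAIL: ', 'XPASS: ', 'UNRESOLVED: ', 'ERROR: ')

      def failures_in(build_dir):
          """Collect failing-test lines from every .sum file under
          build_dir."""
          found = set()
          for root, _, files in os.walk(build_dir):
              for name in files:
                  if name.endswith('.sum'):
                      with open(os.path.join(root, name)) as sum_file:
                          for line in sum_file:
                              if line.startswith(FAILURE_STATES):
                                  found.add(line.strip())
          return found

      # Failures present only in the patched build are the new ones.
      new_failures = failures_in('.') - failures_in('clean/bld-gcc')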
      
      Usage
      
      $ cd /top/of/patched/gcc/bld
      $ validate_failures.py --clean_build=clean/bld-gcc
      Source directory: /usr/local/google/home/dnovillo/gcc/trunk
      Build target:     x86_64-unknown-linux-gnu
      Getting actual results from build directory .
              ./x86_64-unknown-linux-gnu/libstdc++-v3/testsuite/libstdc++.sum
              ./x86_64-unknown-linux-gnu/libffi/testsuite/libffi.sum
              ./x86_64-unknown-linux-gnu/libgomp/testsuite/libgomp.sum
              ./x86_64-unknown-linux-gnu/libgo/libgo.sum
              ./x86_64-unknown-linux-gnu/boehm-gc/testsuite/boehm-gc.sum
              ./x86_64-unknown-linux-gnu/libatomic/testsuite/libatomic.sum
              ./x86_64-unknown-linux-gnu/libmudflap/testsuite/libmudflap.sum
              ./x86_64-unknown-linux-gnu/libitm/testsuite/libitm.sum
              ./x86_64-unknown-linux-gnu/libjava/testsuite/libjava.sum
              ./gcc/testsuite/g++/g++.sum
              ./gcc/testsuite/gnat/gnat.sum
              ./gcc/testsuite/ada/acats/acats.sum
              ./gcc/testsuite/gcc/gcc.sum
              ./gcc/testsuite/gfortran/gfortran.sum
              ./gcc/testsuite/obj-c++/obj-c++.sum
              ./gcc/testsuite/go/go.sum
              ./gcc/testsuite/objc/objc.sum
      Getting actual results from build directory clean/bld-gcc
              clean/bld-gcc/x86_64-unknown-linux-gnu/libstdc++-v3/testsuite/libstdc++.sum
              clean/bld-gcc/x86_64-unknown-linux-gnu/libffi/testsuite/libffi.sum
              clean/bld-gcc/x86_64-unknown-linux-gnu/libgomp/testsuite/libgomp.sum
              clean/bld-gcc/x86_64-unknown-linux-gnu/libgo/libgo.sum
              clean/bld-gcc/x86_64-unknown-linux-gnu/boehm-gc/testsuite/boehm-gc.sum
              clean/bld-gcc/x86_64-unknown-linux-gnu/libatomic/testsuite/libatomic.sum
              clean/bld-gcc/x86_64-unknown-linux-gnu/libmudflap/testsuite/libmudflap.sum
              clean/bld-gcc/x86_64-unknown-linux-gnu/libitm/testsuite/libitm.sum
              clean/bld-gcc/x86_64-unknown-linux-gnu/libjava/testsuite/libjava.sum
              clean/bld-gcc/gcc/testsuite/g++/g++.sum
              clean/bld-gcc/gcc/testsuite/gnat/gnat.sum
              clean/bld-gcc/gcc/testsuite/ada/acats/acats.sum
              clean/bld-gcc/gcc/testsuite/gcc/gcc.sum
              clean/bld-gcc/gcc/testsuite/gfortran/gfortran.sum
              clean/bld-gcc/gcc/testsuite/obj-c++/obj-c++.sum
              clean/bld-gcc/gcc/testsuite/go/go.sum
              clean/bld-gcc/gcc/testsuite/objc/objc.sum
      
      SUCCESS: No unexpected failures.
      
      2012-11-02  Diego Novillo  <dnovillo@google.com>
      
      	* testsuite-management/validate_failures.py: Add option
      	--clean_build to compare test results against another
      	build.
      
      From-SVN: r193105
      Diego Novillo committed
  14. 01 Nov, 2012 1 commit
    • This patch renames sbitmap iterators to unify them with the bitmap iterators. · d4ac4ce2
      Remove the unused EXECUTE_IF_SET_IN_SBITMAP_REV, which has an unconventional
      interface.
      
      Rename the sbitmap_iter_* functions to match bitmap's bmp_iter_* functions.
      Add an additional parameter to the initialization and next functions to
      match the interface in bmp_iter_*.  This extra parameter is mostly hidden
      by the use of the EXECUTE_IF macros.
      
      Rename the EXECUTE_IF_SET_IN_SBITMAP macro to EXECUTE_IF_SET_IN_BITMAP.  Its
      implementation is now identical to that in bitmap.h.  To prevent redefinition
      errors, both definitions are now guarded by #ifndef.  An alternate strategy
      is to simply include bitmap.h from sbitmap.h.  As this would increase build
      time, I have elected to use the #ifndef version.  I do not have a strong
      preference here.
      
      The sbitmap_iterator type is still distinctly named because it is often
      declared in contexts where the bitmap type is not obvious.  There are fewer
      than 40 uses of this type, so the burden of updating those uses when
      changing bitmap types is not large.
      
      Tested on x86-64, config-list.mk testing.
      
      
      Index: gcc/ChangeLog
      
      2012-10-31  Lawrence Crowl  <crowl@google.com>
      
      	* sbitmap.h (sbitmap_iter_init): Rename bmp_iter_set_init and add
      	unused parameter to match bitmap iterator.  Update callers.
      	(sbitmap_iter_cond): Rename bmp_iter_set.  Update callers.
      	(sbitmap_iter_next): Rename bmp_iter_next and add unused parameter to
      	match bitmap iterator.  Update callers.
      	(EXECUTE_IF_SET_IN_SBITMAP_REV): Remove unused.
      	(EXECUTE_IF_SET_IN_SBITMAP): Rename EXECUTE_IF_SET_IN_BITMAP and
      	adjust to be identical to the definition in bitmap.h.  Conditionalize
      	the definition based on not having been defined.  Update callers.
      	* bitmap.h (EXECUTE_IF_SET_IN_BITMAP): Conditionalize the definition
      	based on not having been defined.  (To match the above.)
      
      From-SVN: r193069
      Lawrence Crowl committed
  15. 31 Oct, 2012 1 commit
  16. 29 Oct, 2012 1 commit
  17. 06 Oct, 2012 1 commit
  18. 02 Oct, 2012 1 commit
  19. 26 Sep, 2012 1 commit
  20. 11 Sep, 2012 1 commit
  21. 04 Sep, 2012 2 commits
  22. 26 Aug, 2012 2 commits
  23. 15 Aug, 2012 1 commit
    • Add an xfail manifest for x86_64-unknown-linux-gnu to trunk. · 18da4303
      This patch adds an xfail manifest for trunk for x86_64 builds. I find
      this useful for determining whether my patch has introduced new failures.
      The failures in this manifest are always present in trunk, and
      deciding which ones to ignore is not very straightforward.
      
      I will keep maintaining this manifest from clean builds. It is not
      hard to maintain. Manifest files can be generated by going to the
      top of the build directory and typing:
      
      $ cd <top-of-bld-dir>
      $ <path-to-src>/contrib/testsuite-management/validate_failures.py --produce_manifest
      
      This will generate a .xfail file named after the triple of the target
      you just built.  Once this file exists, you can run the validator again
      on the build directory with no arguments.  It should produce this
      output:
      
      $ cd <top-of-bld-dir>
      $ <path-to-src>/contrib/testsuite-management/validate_failures.py
      Source directory: <path-to-src>
      Build target:     x86_64-unknown-linux-gnu
      Manifest:         <path-to-src>/contrib/testsuite-management/x86_64-unknown-linux-gnu.xfail
      Getting actual results from build
              ./x86_64-unknown-linux-gnu/libstdc++-v3/testsuite/libstdc++.sum
              ./x86_64-unknown-linux-gnu/libffi/testsuite/libffi.sum
              ./x86_64-unknown-linux-gnu/libgomp/testsuite/libgomp.sum
              ./x86_64-unknown-linux-gnu/libgo/libgo.sum
              ./x86_64-unknown-linux-gnu/boehm-gc/testsuite/boehm-gc.sum
              ./x86_64-unknown-linux-gnu/libatomic/testsuite/libatomic.sum
              ./x86_64-unknown-linux-gnu/libmudflap/testsuite/libmudflap.sum
              ./x86_64-unknown-linux-gnu/libitm/testsuite/libitm.sum
              ./x86_64-unknown-linux-gnu/libjava/testsuite/libjava.sum
              ./gcc/testsuite/g++/g++.sum
              ./gcc/testsuite/gnat/gnat.sum
              ./gcc/testsuite/ada/acats/acats.sum
              ./gcc/testsuite/gcc/gcc.sum
              ./gcc/testsuite/gfortran/gfortran.sum
              ./gcc/testsuite/obj-c++/obj-c++.sum
              ./gcc/testsuite/go/go.sum
              ./gcc/testsuite/objc/objc.sum
      
      
      SUCCESS: No unexpected failures.
      
      
      If the output shows new failures, you investigate them. If they are
      not yours, you can add them to the xfail manifest (after reporting
      them) and then commit the modified .xfail file.
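
      For reference, the manifest itself is plain text: essentially the
      expected-failure result lines as they appear in the .sum files.
      A made-up excerpt, with invented test names:

      FAIL: gcc.dg/some-test.c execution test
      XPASS: g++.dg/other-test.C scan-assembler foo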
      
      Long term, I would like to have this script pull manifest files from
      postings made to gcc-testresults. This way, we won't have to maintain
      these .xfail files manually. In branches this is not a big problem,
      but in trunk it may be a tad annoying.
      
      From-SVN: r190404
      Diego Novillo committed
  24. 13 Aug, 2012 3 commits
  25. 26 Jul, 2012 1 commit
  26. 19 Jul, 2012 1 commit
    • Fix --produce_manifest flag in validate_failures.py. · 29476fe1
      When I added the functionality to use other summary files for
      reporting, I broke the generation of manifests. When sum files are
      passed in, we need to use that list; otherwise, we need to find summary
      files in the build directory.
      
      This patch factors out that logic into a new function and calls it
      from both the reporting and generation routines.
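
      The resulting shape is roughly as follows (parameter names and
      bodies are illustrative; only the function names come from the
      ChangeLog below):

      def CollectSumFiles(builddir):
          """Walk the build directory for .sum files (was GetSumFiles)."""
          ...

      def GetSumFiles(results, build_dir):
          """Use the explicitly listed summary files when given,
          otherwise collect them from the build directory."""
          return results.split() if results else CollectSumFiles(build_dir)

      def CheckExpectedResults(results, build_dir):
          sum_files = GetSumFiles(results, build_dir)
          ...

      def ProduceManifest(results, build_dir):
          sum_files = GetSumFiles(results, build_dir)
          ...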
      
      Committed to trunk.
      
      	* testsuite-management/validate_failures.py (CollectSumFiles):
      	Rename from GetSumFiles.
      	(GetSumFiles): Factor out of CheckExpectedResults.
      	(CheckExpectedResults): Call it.
      	(ProduceManifest): Call it.
      
      From-SVN: r189662
      Diego Novillo committed
  27. 18 Jul, 2012 1 commit
  28. 19 Jun, 2012 1 commit
  29. 06 Jun, 2012 1 commit