Commit 6a2ba183 by Aldy Hernandez, committed by Aldy Hernandez

libgomp.texi: Fix spelling and pasto problems throughout.

        * libgomp.texi: Fix spelling and pasto problems throughout.
        Adjust prototypes to match code.

From-SVN: r162538
parent edc74207
2010-07-26 Aldy Hernandez <aldyh@redhat.com>
* libgomp.texi: Fix spelling and pasto problems throughout.
Adjust prototypes to match code.
2010-07-24 Tobias Burnus <burnus@net-b.de>
* testsuite/libgomp.fortran/appendix-a/a.28.5.f90: Add -w to
@@ -137,14 +137,14 @@ Control threads, processors and the parallel environment.
 * omp_get_ancestor_thread_num:: Ancestor thread ID
 * omp_get_dynamic:: Dynamic teams setting
 * omp_get_level:: Number of parallel regions
-* omp_get_max_active_levels:: Maximal number of active regions
+* omp_get_max_active_levels:: Maximum number of active regions
-* omp_get_max_threads:: Maximal number of threads of parallel region
+* omp_get_max_threads:: Maximum number of threads of parallel region
 * omp_get_nested:: Nested parallel regions
 * omp_get_num_procs:: Number of processors online
 * omp_get_num_threads:: Size of the active team
 * omp_get_schedule:: Obtain the runtime scheduling method
 * omp_get_team_size:: Number of threads in a team
-* omp_get_thread_limit:: Maximal number of threads
+* omp_get_thread_limit:: Maximum number of threads
 * omp_get_thread_num:: Current thread ID
 * omp_in_parallel:: Whether a parallel region is active
 * omp_set_dynamic:: Enable/disable dynamic teams
@@ -187,7 +187,7 @@ which enclose the calling call.
 @item @emph{C/C++}
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_active_level();}
+@item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
 @end multitable
 @item @emph{Fortran}:
@@ -220,7 +220,7 @@ zero to @code{omp_get_level} -1 is returned; if @var{level} is
 @item @emph{Fortran}:
 @multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{integer omp_ancestor_thread_num(level)}
+@item @emph{Interface}: @tab @code{integer omp_get_ancestor_thread_num(level)}
 @item @tab @code{integer level}
 @end multitable
@@ -248,7 +248,7 @@ disabled by default.
 @item @emph{C/C++}:
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_dynamic();}
+@item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
 @end multitable
 @item @emph{Fortran}:
@@ -274,7 +274,7 @@ which enclose the calling call.
 @item @emph{C/C++}
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get level();}
+@item @emph{Prototype}: @tab @code{int omp_get_level(void);}
 @end multitable
 @item @emph{Fortran}:
@@ -292,14 +292,14 @@ which enclose the calling call.
 @node omp_get_max_active_levels
-@section @code{omp_set_max_active_levels} -- Maximal number of active regions
+@section @code{omp_get_max_active_levels} -- Maximum number of active regions
 @table @asis
 @item @emph{Description}:
-This function obtains the maximally allowed number of nested, active parallel regions.
+This function obtains the maximum allowed number of nested, active parallel regions.
 @item @emph{C/C++}
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels();}
+@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
 @end multitable
 @item @emph{Fortran}:
@@ -317,15 +317,15 @@ This function obtains the maximally allowed number of nested, active parallel re
 @node omp_get_max_threads
-@section @code{omp_get_max_threads} -- Maximal number of threads of parallel region
+@section @code{omp_get_max_threads} -- Maximum number of threads of parallel region
 @table @asis
 @item @emph{Description}:
-Return the maximal number of threads used for the current parallel region
+Return the maximum number of threads used for the current parallel region
 that does not use the clause @code{num_threads}.
 @item @emph{C/C++}:
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_max_threads();}
+@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
 @end multitable
 @item @emph{Fortran}:
@@ -357,7 +357,7 @@ disabled by default.
 @item @emph{C/C++}:
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_nested();}
+@item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
 @end multitable
 @item @emph{Fortran}:
@@ -382,7 +382,7 @@ Returns the number of processors online.
 @item @emph{C/C++}:
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_num_procs();}
+@item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
 @end multitable
 @item @emph{Fortran}:
@@ -400,7 +400,7 @@ Returns the number of processors online.
 @section @code{omp_get_num_threads} -- Size of the active team
 @table @asis
 @item @emph{Description}:
-The number of threads in the current team. In a sequential section of
+Returns the number of threads in the current team. In a sequential section of
 the program @code{omp_get_num_threads} returns 1.
 The default team size may be initialized at startup by the
@@ -412,7 +412,7 @@ one thread per CPU online is used.
 @item @emph{C/C++}:
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_num_threads();}
+@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
 @end multitable
 @item @emph{Fortran}:
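
For readers of the hunk above, a minimal usage sketch (not part of this patch) of the documented behavior: omp_get_num_threads reports 1 in the serial part and the team size inside a parallel region, assuming the program is built with -fopenmp.

    #include <omp.h>
    #include <stdio.h>

    int main (void)
    {
      /* Outside a parallel region the team consists of a single thread.  */
      printf ("serial part: %d thread(s)\n", omp_get_num_threads ());

      #pragma omp parallel
      {
        #pragma omp master
        printf ("parallel part: %d thread(s)\n", omp_get_num_threads ());
      }
      return 0;
    }
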
@@ -433,14 +433,14 @@ one thread per CPU online is used.
 @section @code{omp_get_schedule} -- Obtain the runtime scheduling method
 @table @asis
 @item @emph{Description}:
-Obtain runtime the scheduling method. The @var{kind} argument will be
+Obtain the runtime scheduling method. The @var{kind} argument will be
 set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
-@code{opm_sched_guided} or @code{auto}. The second argument, @var{modifier},
+@code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
-is set to the chunk size.
+@var{modifier}, is set to the chunk size.
 @item @emph{C/C++}
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{omp_schedule(omp_sched_t * kind, int *modifier);}
+@item @emph{Prototype}: @tab @code{omp_schedule(omp_sched_t *kind, int *modifier);}
 @end multitable
 @item @emph{Fortran}:
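
Since this hunk corrects the enumerator names, here is a short sketch (not part of the patch) of querying the runtime schedule; omp_get_schedule and the omp_sched_* constants are standard OpenMP 3.0, the rest is illustrative.

    #include <omp.h>
    #include <stdio.h>

    int main (void)
    {
      omp_sched_t kind;
      int chunk;

      /* Report the schedule chosen at runtime, e.g. via OMP_SCHEDULE.  */
      omp_get_schedule (&kind, &chunk);

      if (kind == omp_sched_static)
        printf ("static, chunk %d\n", chunk);
      else if (kind == omp_sched_dynamic)
        printf ("dynamic, chunk %d\n", chunk);
      else if (kind == omp_sched_guided)
        printf ("guided, chunk %d\n", chunk);
      else
        printf ("auto\n");
      return 0;
    }
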
@@ -465,13 +465,13 @@ is set to the chunk size.
 @item @emph{Description}:
 This function returns the number of threads in a thread team to which
 either the current thread or its ancestor belongs. For values of @var{level}
-outside zero to @code{omp_get_level} -1 is returned; if @var{level} is zero
+outside zero to @code{omp_get_level}, -1 is returned; if @var{level} is zero,
-1 is returned and for @code{omp_get_level} the result is identical
+1 is returned, and for @code{omp_get_level}, the result is identical
 to @code{omp_get_num_threads}.
 @item @emph{C/C++}:
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_time_size(int level);}
+@item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
 @end multitable
 @item @emph{Fortran}:
@@ -490,14 +490,14 @@ to @code{omp_get_num_threads}.
 @node omp_get_thread_limit
-@section @code{omp_get_thread_limit} -- Maximal number of threads
+@section @code{omp_get_thread_limit} -- Maximum number of threads
 @table @asis
 @item @emph{Description}:
-Return the maximal number of threads of the program.
+Return the maximum number of threads of the program.
 @item @emph{C/C++}:
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_thread_limit();}
+@item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
 @end multitable
 @item @emph{Fortran}:
@@ -518,7 +518,7 @@ Return the maximal number of threads of the program.
 @section @code{omp_get_thread_num} -- Current thread ID
 @table @asis
 @item @emph{Description}:
-Unique thread identification number within the current team.
+Returns a unique thread identification number within the current team.
 In a sequential parts of the program, @code{omp_get_thread_num}
 always returns 0. In parallel regions the return value varies
 from 0 to @code{omp_get_num_threads}-1 inclusive. The return
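
A minimal sketch (not part of the patch) of the thread-ID behavior documented in this hunk, assuming -fopenmp:

    #include <omp.h>
    #include <stdio.h>

    int main (void)
    {
      #pragma omp parallel
      {
        /* IDs run from 0 to omp_get_num_threads()-1; the master is 0.  */
        printf ("thread %d of %d\n",
                omp_get_thread_num (), omp_get_num_threads ());
      }
      return 0;
    }
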
@@ -526,7 +526,7 @@ value of the master thread of a team is always 0.
 @item @emph{C/C++}:
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_get_thread_num();}
+@item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
 @end multitable
 @item @emph{Fortran}:
@@ -553,7 +553,7 @@ their language-specific counterparts.
 @item @emph{C/C++}:
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_in_parallel();}
+@item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
 @end multitable
 @item @emph{Fortran}:
@@ -577,7 +577,7 @@ adjustment of team sizes and @code{false} disables it.
 @item @emph{C/C++}:
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int);}
+@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int set);}
 @end multitable
 @item @emph{Fortran}:
@@ -599,16 +599,17 @@ adjustment of team sizes and @code{false} disables it.
 @section @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
 @table @asis
 @item @emph{Description}:
-This function limits the maximally allowed number of nested, active parallel regions.
+This function limits the maximum allowed number of nested, active
+parallel regions.
 @item @emph{C/C++}
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{omp_set_max_active_levels(int max_levels);}
+@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
 @end multitable
 @item @emph{Fortran}:
 @multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{omp_max_active_levels(max_levels)}
+@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
 @item @tab @code{integer max_levels}
 @end multitable
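
To illustrate the corrected prototype above, a hedged sketch (not part of the patch) that caps nesting at two active levels; omp_set_nested and omp_get_active_level are standard OpenMP 3.0 calls.

    #include <omp.h>
    #include <stdio.h>

    int main (void)
    {
      omp_set_nested (1);              /* allow nested parallelism */
      omp_set_max_active_levels (2);   /* but no more than two active levels */

      #pragma omp parallel num_threads(2)
      {
        #pragma omp parallel num_threads(2)
        {
          #pragma omp master
          printf ("active levels: %d (max %d)\n",
                  omp_get_active_level (), omp_get_max_active_levels ());
        }
      }
      return 0;
    }
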
@@ -632,12 +633,12 @@ dynamic adjustment of team sizes and @code{false} disables it.
 @item @emph{C/C++}:
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int);}
+@item @emph{Prototype}: @tab @code{void omp_set_nested(int set);}
 @end multitable
 @item @emph{Fortran}:
 @multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(set)}
+@item @emph{Interface}: @tab @code{subroutine omp_set_nested(set)}
 @item @tab @code{integer, intent(in) :: set}
 @end multitable
@@ -660,13 +661,13 @@ argument of @code{omp_set_num_threads} shall be a positive integer.
 @item @emph{C/C++}:
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int);}
+@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int n);}
 @end multitable
 @item @emph{Fortran}:
 @multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(set)}
+@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(n)}
-@item @tab @code{integer, intent(in) :: set}
+@item @tab @code{integer, intent(in) :: n}
 @end multitable
 @item @emph{See also}:
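
A brief usage sketch (not part of the patch) matching the corrected argument name in this hunk:

    #include <omp.h>
    #include <stdio.h>

    int main (void)
    {
      omp_set_num_threads (4);   /* request four threads for later regions */

      #pragma omp parallel
      {
        #pragma omp master
        printf ("team size: %d\n", omp_get_num_threads ());
      }
      return 0;
    }
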
@@ -684,19 +685,19 @@ argument of @code{omp_set_num_threads} shall be a positive integer.
 @item @emph{Description}:
 Sets the runtime scheduling method. The @var{kind} argument can have the
 value @code{omp_sched_static}, @code{omp_sched_dynamic},
-@code{opm_sched_guided} or @code{omp_sched_auto}. Except for
+@code{omp_sched_guided} or @code{omp_sched_auto}. Except for
 @code{omp_sched_auto}, the chunk size is set to the value of
-@var{modifier} if positive or to the default value if zero or negative.
+@var{modifier} if positive, or to the default value if zero or negative.
 For @code{omp_sched_auto} the @var{modifier} argument is ignored.
 @item @emph{C/C++}
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{int omp_schedule(omp_sched_t * kind, int *modifier);}
+@item @emph{Prototype}: @tab @code{int omp_set_schedule(omp_sched_t *kind, int *modifier);}
 @end multitable
 @item @emph{Fortran}:
 @multitable @columnfractions .20 .80
-@item @emph{Interface}: @tab @code{subroutine omp_schedule(kind, modifier)}
+@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, modifier)}
 @item @tab @code{integer(kind=omp_sched_kind) kind}
 @item @tab @code{integer modifier}
 @end multitable
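
As a companion to the corrected name, a small sketch (not part of the patch). Note that the omp.h shipped with GCC declares omp_set_schedule as taking the kind and chunk size by value, and the sketch follows that declaration; the loop itself is illustrative.

    #include <omp.h>

    int main (void)
    {
      int i, sum = 0;

      /* Select guided scheduling with a minimum chunk of 4 for loops
         declared with schedule(runtime).  */
      omp_set_schedule (omp_sched_guided, 4);

      #pragma omp parallel for schedule(runtime) reduction(+:sum)
      for (i = 0; i < 1000; i++)
        sum += i;

      return sum == 499500 ? 0 : 1;
    }
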
@@ -715,7 +716,7 @@ For @code{omp_sched_auto} the @var{modifier} argument is ignored.
 @section @code{omp_init_lock} -- Initialize simple lock
 @table @asis
 @item @emph{Description}:
 Initialize a simple lock. After initialization, the lock is in
 an unlocked state.
 @item @emph{C/C++}:
@@ -805,7 +806,7 @@ does not block if the lock is not available. This function returns
 A simple lock about to be unset must have been locked by @code{omp_set_lock}
 or @code{omp_test_lock} before. In addition, the lock must be held by the
 thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
-ore more threads attempted to set the lock before, one of them is chosen to,
+or more threads attempted to set the lock before, one of them is chosen to,
 again, set the lock for itself.
 @item @emph{C/C++}:
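
To put the simple-lock routines touched here in context, a small sketch (not part of the patch) of the usual init/set/unset/destroy sequence:

    #include <omp.h>
    #include <stdio.h>

    int main (void)
    {
      omp_lock_t lock;
      int counter = 0;

      omp_init_lock (&lock);       /* lock starts out unlocked */

      #pragma omp parallel
      {
        omp_set_lock (&lock);      /* blocks until the lock is acquired */
        counter++;                 /* protected update */
        omp_unset_lock (&lock);    /* a waiting thread, if any, gets it next */
      }

      omp_destroy_lock (&lock);
      printf ("counter = %d\n", counter);
      return 0;
    }
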
@@ -837,7 +838,7 @@ in the unlocked state.
 @item @emph{C/C++}:
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *);}
+@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
 @end multitable
 @item @emph{Fortran}:
@@ -859,7 +860,7 @@ in the unlocked state.
 @section @code{omp_init_nest_lock} -- Initialize nested lock
 @table @asis
 @item @emph{Description}:
 Initialize a nested lock. After initialization, the lock is in
 an unlocked state and the nesting count is set to zero.
 @item @emph{C/C++}:
@@ -882,7 +883,7 @@ an unlocked state and the nesting count is set to zero.
 @node omp_set_nest_lock
-@section @code{omp_set_nest_lock} -- Wait for and set simple lock
+@section @code{omp_set_nest_lock} -- Wait for and set nested lock
 @table @asis
 @item @emph{Description}:
 Before setting a nested lock, the lock variable must be initialized by
@@ -1008,7 +1009,7 @@ successive clock ticks.
 @item @emph{C/C++}:
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{double omp_get_wtick();}
+@item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
 @end multitable
 @item @emph{Fortran}:
@@ -1030,13 +1031,13 @@ successive clock ticks.
 @table @asis
 @item @emph{Description}:
 Elapsed wall clock time in seconds. The time is measured per thread, no
-guarantee can bee made that two distinct threads measure the same time.
+guarantee can be made that two distinct threads measure the same time.
 Time is measured from some "time in the past". On POSIX compliant systems
 the seconds since the Epoch (00:00:00 UTC, January 1, 1970) are returned.
 @item @emph{C/C++}:
 @multitable @columnfractions .20 .80
-@item @emph{Prototype}: @tab @code{double omp_get_wtime();}
+@item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
 @end multitable
 @item @emph{Fortran}:
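
A short timing sketch (not part of the patch) using the two routines whose prototypes are corrected above:

    #include <omp.h>
    #include <stdio.h>

    int main (void)
    {
      double start = omp_get_wtime ();   /* per-thread wall clock, seconds */

      #pragma omp parallel
      {
        /* ... work to be timed ... */
      }

      double elapsed = omp_get_wtime () - start;
      printf ("elapsed: %f s, timer resolution: %g s\n",
              elapsed, omp_get_wtick ());
      return 0;
    }
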
@@ -1069,12 +1070,12 @@ extensions.
 @menu
 * OMP_DYNAMIC:: Dynamic adjustment of threads
-* OMP_MAX_ACTIVE_LEVELS:: Set the maximal number of nested parallel regions
+* OMP_MAX_ACTIVE_LEVELS:: Set the maximum number of nested parallel regions
 * OMP_NESTED:: Nested parallel regions
 * OMP_NUM_THREADS:: Specifies the number of threads to use
 * OMP_STACKSIZE:: Set default thread stack size
 * OMP_SCHEDULE:: How threads are scheduled
-* OMP_THREAD_LIMIT:: Set the maximal number of threads
+* OMP_THREAD_LIMIT:: Set the maximum number of threads
 * OMP_WAIT_POLICY:: How waiting threads are handled
 * GOMP_CPU_AFFINITY:: Bind threads to specific CPUs
 * GOMP_STACKSIZE:: Set default thread stack size
@@ -1101,11 +1102,11 @@ disabled by default.
 @node OMP_MAX_ACTIVE_LEVELS
-@section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximal number of nested parallel regions
+@section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
 @cindex Environment Variable
 @table @asis
 @item @emph{Description}:
-Specifies the initial value for the maximal number of nested parallel
+Specifies the initial value for the maximum number of nested parallel
 regions. The value of this variable shall be positive integer.
 If undefined, the number of active levels is unlimited.
@@ -1145,8 +1146,8 @@ regions are disabled by default.
 @table @asis
 @item @emph{Description}:
 Specifies the default number of threads to use in parallel regions. The
-value of this variable shall be positive integer. If undefined one thread
+value of this variable shall be a positive integer. If undefined one thread
-per CPU online is used.
+per CPU is used.
 @item @emph{See also}:
 @ref{omp_set_num_threads}
@@ -1187,7 +1188,7 @@ Set the default thread stack size in kilobytes, unless the number
 is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
 case the size is, respectively, in bytes, kilobytes, megabytes
 or gigabytes. This is different from @code{pthread_attr_setstacksize}
-which gets the number of bytes as an argument. If the stacksize can not
+which gets the number of bytes as an argument. If the stacksize cannot
 be set due to system constraints, an error is reported and the initial
 stacksize is left unchanged. If undefined, the stack size is system
 dependent.
@@ -1199,12 +1200,12 @@ dependent.
 @node OMP_THREAD_LIMIT
-@section @env{OMP_THREAD_LIMIT} -- Set the maximal number of threads
+@section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
 @cindex Environment Variable
 @table @asis
 @item @emph{Description}:
 Specifies the number of threads to use for the whole program. The
-value of this variable shall be positive integer. If undefined,
+value of this variable shall be a positive integer. If undefined,
 the number of threads is not limited.
 @item @emph{See also}:
@@ -1238,15 +1239,15 @@ they should.
 @cindex Environment Variable
 @table @asis
 @item @emph{Description}:
-Binds threads to specific CPUs. The variable should contain a space- or
+Binds threads to specific CPUs. The variable should contain a space-separated
-comma-separated list of CPUs. This list may contain different kind of
+or comma-separated list of CPUs. This list may contain different kinds of
 entries: either single CPU numbers in any order, a range of CPUs (M-N)
 or a range with some stride (M-N:S). CPU numbers are zero based. For example,
 @code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
 to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
 CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
 and 14 respectively and then start assigning back from the beginning of
 the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.
 There is no GNU OpenMP library routine to determine whether a CPU affinity
 specification is in effect. As a workaround, language-specific library
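
The documentation above notes that libgomp offers no routine to query the affinity mask. As a hedged, Linux-specific sketch (not part of the patch), each thread can inspect its own mask with sched_getaffinity; CPU_COUNT requires a reasonably recent glibc.

    #define _GNU_SOURCE
    #include <omp.h>
    #include <sched.h>
    #include <stdio.h>

    int main (void)
    {
      #pragma omp parallel
      {
        cpu_set_t set;

        /* Query the affinity mask of the calling thread (pid 0 = self).  */
        if (sched_getaffinity (0, sizeof (set), &set) == 0)
          {
            #pragma omp critical
            printf ("thread %d may run on %d CPU(s)\n",
                    omp_get_thread_num (), CPU_COUNT (&set));
          }
      }
      return 0;
    }
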
@@ -1269,7 +1270,7 @@ assignment of threads to CPUs.
 @item @emph{Description}:
 Set the default thread stack size in kilobytes. This is different from
 @code{pthread_attr_setstacksize} which gets the number of bytes as an
-argument. If the stacksize can not be set due to system constraints, an
+argument. If the stacksize cannot be set due to system constraints, an
 error is reported and the initial stacksize is left unchanged. If undefined,
 the stack size is system dependent.
@@ -1293,7 +1294,7 @@ GCC Patches Mailinglist}
 @chapter The libgomp ABI
 The following sections present notes on the external ABI as
 presented by libgomp. Only maintainers should need them.
 @menu
 * Implementing MASTER construct::
@@ -1323,7 +1324,7 @@ if (omp_get_thread_num () == 0)
 Alternately, we generate two copies of the parallel subfunction
 and only include this in the version run by the master thread.
-Surely that's not worthwhile though...
+Surely this is not worthwhile though...
@@ -1348,7 +1349,7 @@ name being transformed into a variable declared like
 @end smallexample
 Ideally the ABI would specify that all zero is a valid unlocked
-state, and so we wouldn't actually need to initialize this at
+state, and so we wouldn't need to initialize this at
 startup.
@@ -1415,14 +1416,14 @@ the semantic of new variable creation.
 @node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
 @section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
-Seems simple enough for PARALLEL blocks. Create a private
+This seems simple enough for PARALLEL blocks. Create a private
-struct for communicating between parent and subfunction.
+struct for communicating between the parent and subfunction.
 In the parent, copy in values for scalar and "small" structs;
 copy in addresses for others TREE_ADDRESSABLE types. In the
 subfunction, copy the value into the local variable.
-Not clear at all what to do with bare FOR or SECTION blocks.
+It is not clear what to do with bare FOR or SECTION blocks.
-The only thing I can figure is that we do something like
+The only thing I can figure is that we do something like:
 @smallexample
 #pragma omp for firstprivate(x) lastprivate(y)
@@ -1459,7 +1460,7 @@ broadcast would have to happen via SINGLE machinery instead.
 The private struct mentioned in the previous section should have
 a pointer to an array of the type of the variable, indexed by the
 thread's @var{team_id}. The thread stores its final value into the
-array, and after the barrier the master thread iterates over the
+array, and after the barrier, the master thread iterates over the
 array to collect the values.
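
Loosely following the collection scheme described in this hunk, a hypothetical sketch (not part of the patch, and not the actual compiler output) of how a sum could be gathered through such a per-thread array. GOMP_barrier is the libgomp ABI barrier entry point; the other names are made up for illustration.

    #include <omp.h>

    extern void GOMP_barrier (void);

    struct omp_data_s
    {
      int sum;          /* the reduction variable in the parent */
      int *partial;     /* one slot per thread, indexed by team_id */
    };

    static void
    subfunction (struct omp_data_s *data)
    {
      int tid = omp_get_thread_num ();
      int local = 0;

      /* ... each thread accumulates its share of the work into 'local' ... */

      data->partial[tid] = local;   /* store this thread's final value */
      GOMP_barrier ();

      if (tid == 0)
        {
          int i, n = omp_get_num_threads ();
          for (i = 0; i < n; i++)
            data->sum += data->partial[i];   /* master collects the values */
        }
    }
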
@@ -1564,7 +1565,7 @@ becomes
 @}
 @end smallexample
-Note that while it looks like there is trickyness to propagating
+Note that while it looks like there is trickiness to propagating
 a non-constant STEP, there isn't really. We're explicitly allowed
 to evaluate it as many times as we want, and any variables involved
 should automatically be handled as PRIVATE or SHARED like any other
@@ -1682,7 +1683,7 @@ becomes
 @chapter Reporting Bugs
 Bugs in the GNU OpenMP implementation should be reported via
-@uref{http://gcc.gnu.org/bugzilla/, bugzilla}. In all cases, please add
+@uref{http://gcc.gnu.org/bugzilla/, bugzilla}. For all cases, please add
 "openmp" to the keywords field in the bug report.