Commit 8bfd0a46, authored and committed by Benjamin Kosnik

re PR libstdc++/16614 (Excessive resource usage in __mt_alloc)


2004-09-01  Benjamin Kosnik  <bkoz@redhat.com>

	PR libstdc++/16614
	* include/ext/mt_allocator.h (__mt_base): Not type dependent,
	split into..
	(__pool): New, specialize.
	(__common_pool): New, static bits here.
	(__per_type_pool): New, and here.
	(__mt_alloc_base): New.
	(__mt_alloc): Add template parameter, inherit from it.
	* src/allocator.cc: Split this...
	* src/allocator-inst.cc: And this...
	* src/pool_allocator.cc: ...into this.
	* src/mt_allocator.cc: ... and this. Add definitions for
	__mt_base.
	* src/Makefile.am (sources): Split allocator.cc to
	pool_allocator.cc and mt_allocator.cc.
	* src/Makefile.in: Regenerate.
	* config/linker-map.gnu: Add symbols.
	* docs/html/ext/mt_allocator.html: Document new design.
	* testsuite/ext/mt_allocator/tune-1.cc: New.
	* testsuite/ext/mt_allocator/tune-2.cc: New.
	* testsuite/ext/mt_allocator/tune-3.cc: New.
	* testsuite/ext/mt_allocator/tune-4.cc: New.

	* testsuite/testsuite_allocator.h (__gnu_test::check_new): New.
	* testsuite/ext/allocators.cc: Use check_new, split into...
	* testsuite/ext/mt_allocator/check_new.cc: this.
	* testsuite/ext/pool_allocator/check_new.cc: this.
	* testsuite/ext/malloc_allocator/check_new.cc: this.
	* testsuite/ext/debug_allocator/check_new.cc: this.
	* testsuite/ext/mt_allocator/instantiate.cc: this.
	* testsuite/ext/pool_allocator/instantiate.cc: this.
	* testsuite/ext/malloc_allocator/instantiate.cc: this.
	* testsuite/ext/debug_allocator/instantiate.cc: this.

From-SVN: r86936
config/linker-map.gnu:

@@ -255,10 +255,20 @@ GLIBCXX_3.4.2 {
     _ZN9__gnu_cxx18stdio_sync_filebufI[cw]St11char_traitsI[cw]EE4fileEv;
 
+    # pool_alloc
     _ZN9__gnu_cxx17__pool_alloc_base9_M_refillE[jm];
     _ZN9__gnu_cxx17__pool_alloc_base16_M_get_free_listE[jm];
     _ZN9__gnu_cxx17__pool_alloc_base12_M_get_mutexEv;
 
+    # mt_alloc
+    _ZN9__gnu_cxx6__poolILb0EE13_M_initializeEv;
+    _ZN9__gnu_cxx6__poolILb1EE13_M_initializeEPFvPvE;
+    _ZN9__gnu_cxx6__poolILb1EE21_M_destroy_thread_keyEPv;
+    _ZN9__gnu_cxx6__poolILb1EE16_M_get_thread_idEv;
+    _ZN9__gnu_cxx6__poolILb[01]EE17_M_reserve_memoryE[jm][jm];
+    _ZN9__gnu_cxx6__poolILb[01]EE17_M_reclaim_memoryEPc[jm];
+    _ZN9__gnu_cxx20__common_pool_policyILb[01]EE11_S_get_poolEv;
+
 } GLIBCXX_3.4.1;
 
 # Symbols in the support library (libsupc++) have their own tag.
docs/html/ext/mt_allocator.html (excerpts, as revised):

@@ -42,10 +42,12 @@
initially developed specifically to suit the needs of multi threaded
applications [hereinafter referred to as an MT application]. Over time
the allocator has evolved and been improved in many ways, one of them
being that it now also does a good job in single threaded applications
[hereinafter referred to as a ST application]. (Note: In this
document, when referring to single threaded applications this also
includes applications that are compiled with gcc without thread
support enabled. This is accomplished using ifdef's on
__GTHREADS). This allocator is tunable, very flexible, and capable of
high-performance.
</p>
<p>

@@ -54,11 +56,66 @@
view - the "inner workings" of the allocator.
</p>

<h3 class="left">
  <a name="design">Design Overview</a>
</h3>
<p> There are three general components to the allocator: a datum
describing the characteristics of the memory pool, a policy class
containing this pool that links instantiation types to common or
individual pools, and a class inheriting from the policy class that is
the actual allocator.
</p>
<p>The datum describing a pool's characteristics is
<pre>
  template&lt;bool _Thread&gt;
    class __pool
</pre>
This class is parametrized on thread support, and is explicitly
specialized for both multiple threads (with <code>bool==true</code>)
and single threads (via <code>bool==false</code>).
</p>
<p> There are two distinct policy classes, each of which can be used
with either type of underlying pool datum.
</p>
<pre>
  template&lt;bool _Thread&gt;
    struct __common_pool_policy

  template&lt;typename _Tp, bool _Thread&gt;
    struct __per_type_pool_policy
</pre>
<p> The first policy, <code>__common_pool_policy</code>, implements a
common pool. This means that allocators that are instantiated with
different types, say <code>char</code> and <code>long</code>, will both
use the same pool. This is the default policy.
</p>
<p> The second policy, <code>__per_type_pool_policy</code>, implements
a separate pool for each instantiating type. Thus, <code>char</code>
and <code>long</code> will use separate pools. This allows per-type
tuning, for instance.
</p>
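<p> For illustration only (hypothetical typedef names, and assuming
thread support is enabled), the two policies could be selected
explicitly like so:
</p>
<pre>
// Common pool policy: char and long share one underlying pool.
typedef __gnu_cxx::__common_pool_policy&lt;true&gt; policy_type;
typedef __gnu_cxx::__mt_alloc&lt;char, policy_type&gt; char_alloc_type;
typedef __gnu_cxx::__mt_alloc&lt;long, policy_type&gt; long_alloc_type;

// Per-type policy: each instantiating type gets its own pool.
typedef __gnu_cxx::__mt_alloc&lt;char,
          __gnu_cxx::__per_type_pool_policy&lt;char, true&gt; &gt; char_pool_alloc_type;
</pre>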
<p> Putting this all together, the actual allocator class is
<pre>
  template&lt;typename _Tp, typename _Poolp = __default_policy&gt;
    class __mt_alloc : public __mt_alloc_base&lt;_Tp&gt;, _Poolp
</pre>
This class has the interface required for standard library allocator
classes, namely member functions <code>allocate</code> and
<code>deallocate</code>, plus others.
</p>
<h3 class="left">
  <a name="init">Tunable parameters</a>
</h3>
<p>Certain allocation parameters can be modified, or tuned. There
exists a nested <pre>struct __pool_base::_Tune</pre> that contains all
these parameters, which include settings for
</p>
<ul>
@@ -87,16 +144,16 @@ int main()
{
  typedef pod value_type;
  typedef __gnu_cxx::__mt_alloc&lt;value_type&gt; allocator_type;
  typedef __gnu_cxx::__pool_base::_Tune tune_type;

  tune_type t_default;
  tune_type t_opt(16, 5120, 32, 5120, 20, 10, false);
  tune_type t_single(16, 5120, 32, 5120, 1, 10, false);

  tune_type t;
  t = allocator_type::_M_get_options();
  allocator_type::_M_set_options(t_opt);
  t = allocator_type::_M_get_options();

  allocator_type a;
  allocator_type::pointer p1 = a.allocate(128);
@@ -119,14 +176,15 @@
are initialized as above, or are set to the global defaults.
</p>

<p>
The very first allocate() call will always call the
_S_initialize_once() function. In order to make sure that this
function is called exactly once we make use of a __gthread_once call
in MT applications and check a static bool (_S_init) in ST
applications.
</p>
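<p> In outline, the once-initialization described above follows this
pattern (a simplified sketch based on the code in this commit, not the
library's exact source):
</p>
<pre>
static bool _S_init = false;
#ifdef __GTHREADS
static __gthread_once_t _S_once = __GTHREAD_ONCE_INIT;
#endif

void
initialize_once()
{
  if (!_S_init)
    {
#ifdef __GTHREADS
      // MT application: let gthreads guarantee a single call.
      if (__gthread_active_p())
        __gthread_once(&amp;_S_once, _S_initialize);
#endif
      // ST application, or threads not active: plain flag check.
      if (!_S_init)
        _S_initialize();
    }
}
</pre>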
<p>
The _S_initialize() function:
- If the GLIBCXX_FORCE_NEW environment variable is set, it sets the bool
_S_force_new to true and then returns. This will cause subsequent calls to
allocate() to return memory directly from a new() call, and deallocate will
...
include/ext/mt_allocator.h, as rewritten (the old monolithic __mt_alloc
class is split apart; the out-of-line pool definitions move to
src/mt_allocator.cc below):

@@ -53,159 +53,127 @@ namespace __gnu_cxx
   * Further details:
   * http://gcc.gnu.org/onlinedocs/libstdc++/ext/mt_allocator.html
   */

  typedef void (*__destroy_handler)(void*);
  typedef void (*__create_handler)(void);

  class __pool_base
  {
  public:
    // Variables used to configure the behavior of the allocator,
    // assigned and explained in detail below.
    struct _Tune
    {
      // Alignment needed.
      // NB: In any case must be >= sizeof(_Block_record), that
      // is 4 on 32 bit machines and 8 on 64 bit machines.
      size_t _M_align;

      // Allocation requests (after round-up to power of 2) below
      // this value will be handled by the allocator. A raw new/
      // call will be used for requests larger than this value.
      size_t _M_max_bytes;

      // Size in bytes of the smallest bin.
      // NB: Must be a power of 2 and >= _M_align.
      size_t _M_min_bin;

      // In order to avoid fragmenting and minimize the number of
      // new() calls we always request new memory using this
      // value. Based on previous discussions on the libstdc++
      // mailing list we have chosen the value below.
      // See http://gcc.gnu.org/ml/libstdc++/2001-07/msg00077.html
      size_t _M_chunk_size;

      // The maximum number of supported threads. For
      // single-threaded operation, use one. Maximum values will
      // vary depending on details of the underlying system. (For
      // instance, Linux 2.4.18 reports 4070 in
      // /proc/sys/kernel/threads-max, while Linux 2.6.6 reports
      // 65534)
      size_t _M_max_threads;

      // Each time a deallocation occurs in a threaded application
      // we make sure that there are no more than
      // _M_freelist_headroom % of used memory on the freelist. If
      // the number of additional records is more than
      // _M_freelist_headroom % of the freelist, we move these
      // records back to the global pool.
      size_t _M_freelist_headroom;

      // Set to true forces all allocations to use new().
      bool _M_force_new;

      explicit
      _Tune()
      : _M_align(8), _M_max_bytes(128), _M_min_bin(8),
        _M_chunk_size(4096 - 4 * sizeof(void*)),
        _M_max_threads(4096), _M_freelist_headroom(10),
        _M_force_new(getenv("GLIBCXX_FORCE_NEW") ? true : false)
      { }

      explicit
      _Tune(size_t __align, size_t __maxb, size_t __minbin, size_t __chunk,
            size_t __maxthreads, size_t __headroom, bool __force)
      : _M_align(__align), _M_max_bytes(__maxb), _M_min_bin(__minbin),
        _M_chunk_size(__chunk), _M_max_threads(__maxthreads),
        _M_freelist_headroom(__headroom), _M_force_new(__force)
      { }
    };

    const _Tune&
    _M_get_options() const
    { return _M_options; }

    void
    _M_set_options(_Tune __t)
    {
      if (!_M_init)
        _M_options = __t;
    }

    bool
    _M_check_threshold(size_t __bytes)
    { return __bytes > _M_options._M_max_bytes || _M_options._M_force_new; }

    size_t
    _M_get_binmap(size_t __bytes)
    { return _M_binmap[__bytes]; }

    explicit __pool_base()
    : _M_init(false), _M_options(_Tune()), _M_binmap(NULL) { }

  protected:
    // We need to create the initial lists and set up some variables
    // before we can answer to the first request for memory.
    bool _M_init;

    // Configuration options.
    _Tune _M_options;

    // Using short int as type for the binmap implies we are never
    // caching blocks larger than 65535 with this allocator.
    typedef unsigned short int _Binmap_type;
    _Binmap_type* _M_binmap;
  };

  // Data describing the underlying memory pool, parameterized on
  // threading support.
  template<bool _Thread>
    class __pool;

  template<>
    class __pool<true>;

  template<>
    class __pool<false>;

#ifdef __GTHREADS
  // Specialization for thread enabled, via gthreads.h.
  template<>
    class __pool<true> : public __pool_base
    {
    public:
      // Each requesting thread is assigned an id ranging from 1 to
      // _S_max_threads. Thread id 0 is used as a global memory pool.
      // In order to get constant performance on the thread assignment

@@ -215,508 +183,503 @@ namespace __gnu_cxx

      // __gthread_key we specify a destructor. When this destructor
      // (i.e. the thread dies) is called, we return the thread id to
      // the front of this list.
      struct _Thread_record
      {
        // Points to next free thread id record. NULL if last record in list.
        _Thread_record* volatile _M_next;

        // Thread id ranging from 1 to _S_max_threads.
        size_t _M_id;
      };

      union _Block_record
      {
        // Points to the block_record of the next free block.
        _Block_record* volatile _M_next;

        // The thread id of the thread which has requested this block.
        size_t _M_thread_id;
      };

      struct _Bin_record
      {
        // An "array" of pointers to the first free block for each
        // thread id. Memory to this "array" is allocated in _S_initialize()
        // for _S_max_threads + global pool 0.
        _Block_record** volatile _M_first;

        // An "array" of counters used to keep track of the amount of
        // blocks that are on the freelist/used for each thread id.
        // Memory to these "arrays" is allocated in _S_initialize() for
        // _S_max_threads + global pool 0.
        size_t* volatile _M_free;
        size_t* volatile _M_used;

        // Each bin has its own mutex which is used to ensure data
        // integrity while changing "ownership" on a block. The mutex
        // is initialized in _S_initialize().
        __gthread_mutex_t* _M_mutex;
      };

      void
      _M_initialize(__destroy_handler __d);

      void
      _M_initialize_once(__create_handler __c)
      {
        // Although the test in __gthread_once() would suffice, we
        // wrap test of the once condition in our own unlocked
        // check. This saves one function call to pthread_once()
        // (which itself only tests for the once value unlocked anyway
        // and immediately returns if set)
        if (__builtin_expect(_M_init == false, false))
          {
            if (__gthread_active_p())
              __gthread_once(&_M_once, __c);
            if (!_M_init)
              __c();
          }
      }

      char*
      _M_reserve_memory(size_t __bytes, const size_t __thread_id);

      void
      _M_reclaim_memory(char* __p, size_t __bytes);

      const _Bin_record&
      _M_get_bin(size_t __which)
      { return _M_bin[__which]; }

      void
      _M_adjust_freelist(const _Bin_record& __bin, _Block_record* __block,
                         size_t __thread_id)
      {
        if (__gthread_active_p())
          {
            __block->_M_thread_id = __thread_id;
            --__bin._M_free[__thread_id];
            ++__bin._M_used[__thread_id];
          }
      }

      void
      _M_destroy_thread_key(void* __freelist_pos);

      size_t
      _M_get_thread_id();

      explicit __pool()
      : _M_bin(NULL), _M_bin_size(1), _M_thread_freelist(NULL)
      {
        // On some platforms, __gthread_once_t is an aggregate.
        __gthread_once_t __tmp = __GTHREAD_ONCE_INIT;
        _M_once = __tmp;
      }

    private:
      // An "array" of bin_records each of which represents a specific
      // power of 2 size. Memory to this "array" is allocated in
      // _M_initialize().
      _Bin_record* volatile _M_bin;

      // Actual value calculated in _M_initialize().
      size_t _M_bin_size;

      __gthread_once_t _M_once;

      _Thread_record* _M_thread_freelist;
    };
#endif

  // Specialization for single thread.
  template<>
    class __pool<false> : public __pool_base
    {
    public:
      union _Block_record
      {
        // Points to the block_record of the next free block.
        _Block_record* volatile _M_next;
      };

      struct _Bin_record
      {
        // An "array" of pointers to the first free block for each
        // thread id. Memory to this "array" is allocated in _S_initialize()
        // for _S_max_threads + global pool 0.
        _Block_record** volatile _M_first;
      };

      void
      _M_initialize_once()
      {
        if (__builtin_expect(_M_init == false, false))
          _M_initialize();
      }

      char*
      _M_reserve_memory(size_t __bytes, const size_t __thread_id);

      void
      _M_reclaim_memory(char* __p, size_t __bytes);

      size_t
      _M_get_thread_id() { return 0; }

      const _Bin_record&
      _M_get_bin(size_t __which)
      { return _M_bin[__which]; }

      void
      _M_adjust_freelist(const _Bin_record&, _Block_record*, size_t)
      { }

      explicit __pool()
      : _M_bin(NULL), _M_bin_size(1) { }

    private:
      // An "array" of bin_records each of which represents a specific
      // power of 2 size. Memory to this "array" is allocated in
      // _M_initialize().
      _Bin_record* volatile _M_bin;

      // Actual value calculated in _M_initialize().
      size_t _M_bin_size;

      void
      _M_initialize();
    };

  template<bool _Thread>
    struct __common_pool_policy
    {
      template<typename _Tp1, bool _Thread1 = _Thread>
        struct _M_rebind;

      template<typename _Tp1>
        struct _M_rebind<_Tp1, true>
        { typedef __common_pool_policy<true> other; };

      template<typename _Tp1>
        struct _M_rebind<_Tp1, false>
        { typedef __common_pool_policy<false> other; };

      typedef __pool<_Thread> __pool_type;
      static __pool_type _S_data;

      static __pool_type&
      _S_get_pool();

      static void
      _S_initialize_once()
      {
        static bool __init;
        if (__builtin_expect(__init == false, false))
          {
            _S_get_pool()._M_initialize_once();
            __init = true;
          }
      }
    };

  template<>
    struct __common_pool_policy<true>;

#ifdef __GTHREADS
  template<>
    struct __common_pool_policy<true>
    {
      template<typename _Tp1, bool _Thread1 = true>
        struct _M_rebind;

      template<typename _Tp1>
        struct _M_rebind<_Tp1, true>
        { typedef __common_pool_policy<true> other; };

      template<typename _Tp1>
        struct _M_rebind<_Tp1, false>
        { typedef __common_pool_policy<false> other; };

      typedef __pool<true> __pool_type;
      static __pool_type _S_data;

      static __pool_type&
      _S_get_pool();

      static void
      _S_destroy_thread_key(void* __freelist_pos)
      { _S_get_pool()._M_destroy_thread_key(__freelist_pos); }

      static void
      _S_initialize()
      { _S_get_pool()._M_initialize(_S_destroy_thread_key); }

      static void
      _S_initialize_once()
      {
        static bool __init;
        if (__builtin_expect(__init == false, false))
          {
            _S_get_pool()._M_initialize_once(_S_initialize);
            __init = true;
          }
      }
    };
#endif

  template<typename _Tp, bool _Thread>
    struct __per_type_pool_policy
    {
      template<typename _Tp1, bool _Thread1 = _Thread>
        struct _M_rebind;

      template<typename _Tp1>
        struct _M_rebind<_Tp1, false>
        { typedef __per_type_pool_policy<_Tp1, false> other; };

      template<typename _Tp1>
        struct _M_rebind<_Tp1, true>
        { typedef __per_type_pool_policy<_Tp1, true> other; };

      typedef __pool<_Thread> __pool_type;
      static __pool_type _S_data;

      static __pool_type&
      _S_get_pool() { return _S_data; }

      static void
      _S_initialize_once()
      {
        static bool __init;
        if (__builtin_expect(__init == false, false))
          {
            _S_get_pool()._M_initialize_once();
            __init = true;
          }
      }
    };

  template<typename _Tp, bool _Thread>
    __pool<_Thread>
    __per_type_pool_policy<_Tp, _Thread>::_S_data;

  template<typename _Tp>
    struct __per_type_pool_policy<_Tp, true>;

#ifdef __GTHREADS
  template<typename _Tp>
    struct __per_type_pool_policy<_Tp, true>
    {
      template<typename _Tp1, bool _Thread1 = true>
        struct _M_rebind;

      template<typename _Tp1>
        struct _M_rebind<_Tp1, false>
        { typedef __per_type_pool_policy<_Tp1, false> other; };

      template<typename _Tp1>
        struct _M_rebind<_Tp1, true>
        { typedef __per_type_pool_policy<_Tp1, true> other; };

      typedef __pool<true> __pool_type;
      static __pool_type _S_data;

      static __pool_type&
      _S_get_pool() { return _S_data; }

      static void
      _S_destroy_thread_key(void* __freelist_pos)
      { _S_get_pool()._M_destroy_thread_key(__freelist_pos); }

      static void
      _S_initialize()
      { _S_get_pool()._M_initialize(_S_destroy_thread_key); }

      static void
      _S_initialize_once()
      {
        static bool __init;
        if (__builtin_expect(__init == false, false))
          {
            _S_get_pool()._M_initialize_once(_S_initialize);
            __init = true;
          }
      }
    };

  template<typename _Tp>
    __pool<true>
    __per_type_pool_policy<_Tp, true>::_S_data;
#endif

#ifdef __GTHREADS
  typedef __common_pool_policy<true> __default_policy;
#else
  typedef __common_pool_policy<false> __default_policy;
#endif

  template<typename _Tp>
    class __mt_alloc_base
    {
    public:
      typedef size_t size_type;
      typedef ptrdiff_t difference_type;
      typedef _Tp* pointer;
      typedef const _Tp* const_pointer;
      typedef _Tp& reference;
      typedef const _Tp& const_reference;
      typedef _Tp value_type;

      pointer
      address(reference __x) const
      { return &__x; }

      const_pointer
      address(const_reference __x) const
      { return &__x; }

      size_type
      max_size() const throw()
      { return size_t(-1) / sizeof(_Tp); }

      // _GLIBCXX_RESOLVE_LIB_DEFECTS
      // 402. wrong new expression in [some_] allocator::construct
      void
      construct(pointer __p, const _Tp& __val)
      { ::new(__p) _Tp(__val); }

      void
      destroy(pointer __p) { __p->~_Tp(); }
    };

  template<typename _Tp, typename _Poolp = __default_policy>
    class __mt_alloc : public __mt_alloc_base<_Tp>, _Poolp
    {
    public:
      typedef size_t size_type;
      typedef ptrdiff_t difference_type;
      typedef _Tp* pointer;
      typedef const _Tp* const_pointer;
      typedef _Tp& reference;
      typedef const _Tp& const_reference;
      typedef _Tp value_type;
      typedef _Poolp __policy_type;
      typedef typename _Poolp::__pool_type __pool_type;

      template<typename _Tp1, typename _Poolp1 = _Poolp>
        struct rebind
        {
          typedef typename _Poolp1::template _M_rebind<_Tp1>::other pol_type;
          typedef __mt_alloc<_Tp1, pol_type> other;
        };

      __mt_alloc() throw()
      {
        // XXX
      }

      __mt_alloc(const __mt_alloc&) throw()
      {
        // XXX
      }

      template<typename _Tp1, typename _Poolp1>
        __mt_alloc(const __mt_alloc<_Tp1, _Poolp1>& obj) throw()
        {
          // XXX
        }

      ~__mt_alloc() throw() { }

      pointer
      allocate(size_type __n, const void* = 0);

      void
      deallocate(pointer __p, size_type __n);

      const __pool_base::_Tune
      _M_get_options()
      {
        // Return a copy, not a reference, for external consumption.
        return __pool_base::_Tune(this->_S_get_pool()._M_get_options());
      }

      void
      _M_set_options(__pool_base::_Tune __t)
      { this->_S_get_pool()._M_set_options(__t); }
    };

  template<typename _Tp, typename _Poolp>
    typename __mt_alloc<_Tp, _Poolp>::pointer
    __mt_alloc<_Tp, _Poolp>::
    allocate(size_type __n, const void*)
    {
      this->_S_initialize_once();

      // Requests larger than _M_max_bytes are handled by new/delete
      // directly.
      __pool_type& __pl = this->_S_get_pool();
      const size_t __bytes = __n * sizeof(_Tp);
      if (__pl._M_check_threshold(__bytes))
        {
          void* __ret = ::operator new(__bytes);
          return static_cast<_Tp*>(__ret);
        }

      // Round up to power of 2 and figure out which bin to use.
      const size_t __which = __pl._M_get_binmap(__bytes);
      const size_t __thread_id = __pl._M_get_thread_id();

      // Find out if we have blocks on our freelist. If so, go ahead
      // and use them directly without having to lock anything.
      char* __c;
      typedef typename __pool_type::_Bin_record _Bin_record;
      const _Bin_record& __bin = __pl._M_get_bin(__which);
      if (__bin._M_first[__thread_id])
        {
          // Already reserved.
          typedef typename __pool_type::_Block_record _Block_record;
          _Block_record* __block = __bin._M_first[__thread_id];
          __bin._M_first[__thread_id] = __bin._M_first[__thread_id]->_M_next;

          __pl._M_adjust_freelist(__bin, __block, __thread_id);
          const __pool_base::_Tune& __options = __pl._M_get_options();
          __c = reinterpret_cast<char*>(__block) + __options._M_align;
        }
      else
        {
          // Null, reserve.
          __c = __pl._M_reserve_memory(__bytes, __thread_id);
        }
      return static_cast<_Tp*>(static_cast<void*>(__c));
    }

  template<typename _Tp, typename _Poolp>
    void
    __mt_alloc<_Tp, _Poolp>::
    deallocate(pointer __p, size_type __n)
    {
      // Requests larger than _M_max_bytes are handled by operators
      // new/delete directly.
      __pool_type& __pl = this->_S_get_pool();
      const size_t __bytes = __n * sizeof(_Tp);
      if (__pl._M_check_threshold(__bytes))
        ::operator delete(__p);
      else
        __pl._M_reclaim_memory(reinterpret_cast<char*>(__p), __bytes);
    }

  template<typename _Tp, typename _Poolp>
    inline bool
    operator==(const __mt_alloc<_Tp, _Poolp>&, const __mt_alloc<_Tp, _Poolp>&)
    { return true; }

  template<typename _Tp, typename _Poolp>
    inline bool
    operator!=(const __mt_alloc<_Tp, _Poolp>&, const __mt_alloc<_Tp, _Poolp>&)
    { return false; }
} // namespace __gnu_cxx

#endif
src/Makefile.am:

@@ -96,7 +96,8 @@ basic_file.cc: ${glibcxx_srcdir}/$(BASIC_FILE_CC)
 # Sources present in the src directory.
 sources = \
-	allocator.cc \
+	pool_allocator.cc \
+	mt_allocator.cc \
 	codecvt.cc \
 	complex_io.cc \
 	ctype.cc \
src/Makefile.in:

@@ -64,14 +64,15 @@ am__objects_1 = atomicity.lo codecvt_members.lo collate_members.lo \
 	ctype_members.lo messages_members.lo monetary_members.lo \
 	numeric_members.lo time_members.lo
 am__objects_2 = basic_file.lo c++locale.lo
-am__objects_3 = allocator.lo codecvt.lo complex_io.lo ctype.lo \
-	debug.lo debug_list.lo functexcept.lo globals_locale.lo \
-	globals_io.lo ios.lo ios_failure.lo ios_init.lo ios_locale.lo \
-	limits.lo list.lo locale.lo locale_init.lo locale_facets.lo \
-	localename.lo stdexcept.lo strstream.lo tree.lo \
-	allocator-inst.lo concept-inst.lo fstream-inst.lo ext-inst.lo \
-	io-inst.lo istream-inst.lo locale-inst.lo locale-misc-inst.lo \
-	misc-inst.lo ostream-inst.lo sstream-inst.lo streambuf-inst.lo \
+am__objects_3 = pool_allocator.lo mt_allocator.lo codecvt.lo \
+	complex_io.lo ctype.lo debug.lo debug_list.lo functexcept.lo \
+	globals_locale.lo globals_io.lo ios.lo ios_failure.lo \
+	ios_init.lo ios_locale.lo limits.lo list.lo locale.lo \
+	locale_init.lo locale_facets.lo localename.lo stdexcept.lo \
+	strstream.lo tree.lo allocator-inst.lo concept-inst.lo \
+	fstream-inst.lo ext-inst.lo io-inst.lo istream-inst.lo \
+	locale-inst.lo locale-misc-inst.lo misc-inst.lo \
+	ostream-inst.lo sstream-inst.lo streambuf-inst.lo \
 	string-inst.lo valarray-inst.lo wlocale-inst.lo \
 	wstring-inst.lo $(am__objects_1) $(am__objects_2)
 am_libstdc___la_OBJECTS = $(am__objects_3)

@@ -303,7 +304,8 @@ host_sources_extra = \
 # Sources present in the src directory.
 sources = \
-	allocator.cc \
+	pool_allocator.cc \
+	mt_allocator.cc \
 	codecvt.cc \
 	complex_io.cc \
 	ctype.cc \
src/allocator-inst.cc:

@@ -32,20 +32,9 @@
 //
 
 #include <memory>
-#include <ext/mt_allocator.h>
-#include <ext/pool_allocator.h>
 
 namespace std
 {
   template class allocator<char>;
   template class allocator<wchar_t>;
 } // namespace std
-
-namespace __gnu_cxx
-{
-  template class __mt_alloc<char>;
-  template class __mt_alloc<wchar_t>;
-
-  template class __pool_alloc<char>;
-  template class __pool_alloc<wchar_t>;
-} // namespace __gnu_cxx

src/mt_allocator.cc (new file):
// Allocator details.
// Copyright (C) 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
// USA.
// As a special exception, you may use this file as part of a free software
// library without restriction. Specifically, if other files instantiate
// templates or use macros or inline functions from this file, or you compile
// this file and link it with other files to produce an executable, this
// file does not by itself cause the resulting executable to be covered by
// the GNU General Public License. This exception does not however
// invalidate any other reasons why the executable file might be covered by
// the GNU General Public License.
//
// ISO C++ 14882:
//
#include <bits/c++config.h>
#include <ext/mt_allocator.h>
#include <bits/concurrence.h>
namespace __gnu_internal
{
__glibcxx_mutex_define_initialized(freelist_mutex);
#ifdef __GTHREADS
__gthread_key_t freelist_key;
#endif
}
namespace __gnu_cxx
{
#ifdef __GTHREADS
void
__pool<true>::_M_reclaim_memory(char* __p, size_t __bytes)
{
// Round up to power of 2 and figure out which bin to use.
const size_t __which = _M_binmap[__bytes];
const _Bin_record& __bin = _M_bin[__which];
const _Tune& __options = _M_get_options();
char* __c = __p - __options._M_align;
_Block_record* __block = reinterpret_cast<_Block_record*>(__c);
if (__gthread_active_p())
{
// Calculate the number of records to remove from our freelist:
// in order to avoid too much contention we wait until the
// number of records is "high enough".
const size_t __thread_id = _M_get_thread_id();
long __remove = ((__bin._M_free[__thread_id]
* __options._M_freelist_headroom)
- __bin._M_used[__thread_id]);
if (__remove > static_cast<long>(100 * (_M_bin_size - __which)
* __options._M_freelist_headroom)
&& __remove > static_cast<long>(__bin._M_free[__thread_id]))
{
_Block_record* __tmp = __bin._M_first[__thread_id];
_Block_record* __first = __tmp;
__remove /= __options._M_freelist_headroom;
const long __removed = __remove;
--__remove;
while (__remove-- > 0)
__tmp = __tmp->_M_next;
__bin._M_first[__thread_id] = __tmp->_M_next;
__bin._M_free[__thread_id] -= __removed;
__gthread_mutex_lock(__bin._M_mutex);
__tmp->_M_next = __bin._M_first[0];
__bin._M_first[0] = __first;
__bin._M_free[0] += __removed;
__gthread_mutex_unlock(__bin._M_mutex);
}
// Return this block to our list and update counters and
// owner id as needed.
--__bin._M_used[__block->_M_thread_id];
__block->_M_next = __bin._M_first[__thread_id];
__bin._M_first[__thread_id] = __block;
++__bin._M_free[__thread_id];
}
else
{
// Not using threads, so single threaded application - return
// to global pool.
__block->_M_next = __bin._M_first[0];
__bin._M_first[0] = __block;
}
}
#endif
void
__pool<false>::_M_reclaim_memory(char* __p, size_t __bytes)
{
// Round up to power of 2 and figure out which bin to use.
const size_t __which = _M_binmap[__bytes];
const _Bin_record& __bin = _M_bin[__which];
const _Tune& __options = _M_get_options();
char* __c = __p - __options._M_align;
_Block_record* __block = reinterpret_cast<_Block_record*>(__c);
// Single threaded application - return to global pool.
__block->_M_next = __bin._M_first[0];
__bin._M_first[0] = __block;
}
#ifdef __GTHREADS
char*
__pool<true>::_M_reserve_memory(size_t __bytes, const size_t __thread_id)
{
// Round up to power of 2 and figure out which bin to use.
const size_t __which = _M_binmap[__bytes];
// If here, there are no blocks on our freelist.
const _Tune& __options = _M_get_options();
_Block_record* __block = NULL;
const _Bin_record& __bin = _M_bin[__which];
// NB: For alignment reasons, we can't use the first _M_align
// bytes, even when sizeof(_Block_record) < _M_align.
const size_t __bin_size = ((__options._M_min_bin << __which)
+ __options._M_align);
size_t __block_count = __options._M_chunk_size / __bin_size;
// Are we using threads?
// - Yes, check if there are free blocks on the global
// list. If so, grab up to __block_count blocks in one
// lock and change ownership. If the global list is
// empty, we allocate a new chunk and add those blocks
// directly to our own freelist (with us as owner).
// - No, all operations are made directly to global pool 0
// no need to lock or change ownership but check for free
// blocks on global list (and if not add new ones) and
// get the first one.
if (__gthread_active_p())
{
__gthread_mutex_lock(__bin._M_mutex);
if (__bin._M_first[0] == NULL)
{
// No need to hold the lock when we are adding a
// whole chunk to our own list.
__gthread_mutex_unlock(__bin._M_mutex);
void* __v = ::operator new(__options._M_chunk_size);
__bin._M_first[__thread_id] = static_cast<_Block_record*>(__v);
__bin._M_free[__thread_id] = __block_count;
--__block_count;
__block = __bin._M_first[__thread_id];
while (__block_count-- > 0)
{
char* __c = reinterpret_cast<char*>(__block) + __bin_size;
__block->_M_next = reinterpret_cast<_Block_record*>(__c);
__block = __block->_M_next;
}
__block->_M_next = NULL;
}
else
{
// Is the number of required blocks greater than or
// equal to the number that can be provided by the
// global free list?
__bin._M_first[__thread_id] = __bin._M_first[0];
if (__block_count >= __bin._M_free[0])
{
__bin._M_free[__thread_id] = __bin._M_free[0];
__bin._M_free[0] = 0;
__bin._M_first[0] = NULL;
}
else
{
__bin._M_free[__thread_id] = __block_count;
__bin._M_free[0] -= __block_count;
--__block_count;
__block = __bin._M_first[0];
while (__block_count-- > 0)
__block = __block->_M_next;
__bin._M_first[0] = __block->_M_next;
__block->_M_next = NULL;
}
__gthread_mutex_unlock(__bin._M_mutex);
}
}
else
{
void* __v = ::operator new(__options._M_chunk_size);
__bin._M_first[0] = static_cast<_Block_record*>(__v);
--__block_count;
__block = __bin._M_first[0];
while (__block_count-- > 0)
{
char* __c = reinterpret_cast<char*>(__block) + __bin_size;
__block->_M_next = reinterpret_cast<_Block_record*>(__c);
__block = __block->_M_next;
}
__block->_M_next = NULL;
}
__block = __bin._M_first[__thread_id];
__bin._M_first[__thread_id] = __bin._M_first[__thread_id]->_M_next;
if (__gthread_active_p())
{
__block->_M_thread_id = __thread_id;
--__bin._M_free[__thread_id];
++__bin._M_used[__thread_id];
}
return reinterpret_cast<char*>(__block) + __options._M_align;
}
#endif
char*
__pool<false>::_M_reserve_memory(size_t __bytes, const size_t __thread_id)
{
// Round up to power of 2 and figure out which bin to use.
const size_t __which = _M_binmap[__bytes];
// If here, there are no blocks on our freelist.
const _Tune& __options = _M_get_options();
_Block_record* __block = NULL;
const _Bin_record& __bin = _M_bin[__which];
// NB: For alignment reasons, we can't use the first _M_align
// bytes, even when sizeof(_Block_record) < _M_align.
const size_t __bin_size = ((__options._M_min_bin << __which)
+ __options._M_align);
size_t __block_count = __options._M_chunk_size / __bin_size;
// Not using threads.
void* __v = ::operator new(__options._M_chunk_size);
__bin._M_first[0] = static_cast<_Block_record*>(__v);
--__block_count;
__block = __bin._M_first[0];
while (__block_count-- > 0)
{
char* __c = reinterpret_cast<char*>(__block) + __bin_size;
__block->_M_next = reinterpret_cast<_Block_record*>(__c);
__block = __block->_M_next;
}
__block->_M_next = NULL;
__block = __bin._M_first[__thread_id];
__bin._M_first[__thread_id] = __bin._M_first[__thread_id]->_M_next;
return reinterpret_cast<char*>(__block) + __options._M_align;
}
#ifdef __GTHREADS
void
__pool<true>::_M_initialize(__destroy_handler __d)
{
// This method is called on the first allocation (when _M_init
// is still false) to create the bins.
// _M_force_new must not change after the first allocate(),
// which in turn calls this method, so if it's false, it's false
// forever and we don't need to return here ever again.
if (_M_options._M_force_new)
{
_M_init = true;
return;
}
// Calculate the number of bins required based on _M_max_bytes.
// _M_bin_size is statically-initialized to one.
size_t __bin_size = _M_options._M_min_bin;
while (_M_options._M_max_bytes > __bin_size)
{
__bin_size <<= 1;
++_M_bin_size;
}
// Setup the bin map for quick lookup of the relevant bin.
const size_t __j = (_M_options._M_max_bytes + 1) * sizeof(_Binmap_type);
_M_binmap = static_cast<_Binmap_type*>(::operator new(__j));
_Binmap_type* __bp = _M_binmap;
_Binmap_type __bin_max = _M_options._M_min_bin;
_Binmap_type __bint = 0;
for (_Binmap_type __ct = 0; __ct <= _M_options._M_max_bytes; ++__ct)
{
if (__ct > __bin_max)
{
__bin_max <<= 1;
++__bint;
}
*__bp++ = __bint;
}
// Initialize _M_bin and its members.
void* __v = ::operator new(sizeof(_Bin_record) * _M_bin_size);
_M_bin = static_cast<_Bin_record*>(__v);
// If __gthread_active_p() create and initialize the list of
// free thread ids. Single threaded applications use thread id 0
// directly and have no need for this.
if (__gthread_active_p())
{
const size_t __k = sizeof(_Thread_record) * _M_options._M_max_threads;
__v = ::operator new(__k);
_M_thread_freelist = static_cast<_Thread_record*>(__v);
// NOTE! The first assignable thread id is 1 since the
// global pool uses id 0
size_t __i;
for (__i = 1; __i < _M_options._M_max_threads; ++__i)
{
_Thread_record& __tr = _M_thread_freelist[__i - 1];
__tr._M_next = &_M_thread_freelist[__i];
__tr._M_id = __i;
}
// Set last record.
_M_thread_freelist[__i - 1]._M_next = NULL;
_M_thread_freelist[__i - 1]._M_id = __i;
// Initialize per thread key to hold pointer to
// _M_thread_freelist.
__gthread_key_create(&__gnu_internal::freelist_key, __d);
const size_t __max_threads = _M_options._M_max_threads + 1;
for (size_t __n = 0; __n < _M_bin_size; ++__n)
{
_Bin_record& __bin = _M_bin[__n];
__v = ::operator new(sizeof(_Block_record*) * __max_threads);
__bin._M_first = static_cast<_Block_record**>(__v);
__v = ::operator new(sizeof(size_t) * __max_threads);
__bin._M_free = static_cast<size_t*>(__v);
__v = ::operator new(sizeof(size_t) * __max_threads);
__bin._M_used = static_cast<size_t*>(__v);
__v = ::operator new(sizeof(__gthread_mutex_t));
__bin._M_mutex = static_cast<__gthread_mutex_t*>(__v);
#ifdef __GTHREAD_MUTEX_INIT
{
// Do not copy a POSIX/gthr mutex once in use.
__gthread_mutex_t __tmp = __GTHREAD_MUTEX_INIT;
*__bin._M_mutex = __tmp;
}
#else
{ __GTHREAD_MUTEX_INIT_FUNCTION(__bin._M_mutex); }
#endif
for (size_t __threadn = 0; __threadn < __max_threads;
++__threadn)
{
__bin._M_first[__threadn] = NULL;
__bin._M_free[__threadn] = 0;
__bin._M_used[__threadn] = 0;
}
}
}
else
for (size_t __n = 0; __n < _M_bin_size; ++__n)
{
_Bin_record& __bin = _M_bin[__n];
__v = ::operator new(sizeof(_Block_record*));
__bin._M_first = static_cast<_Block_record**>(__v);
__bin._M_first[0] = NULL;
}
_M_init = true;
}
#endif
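
For the threaded pool, each _Bin_record ends up owning four parallel allocations sized for _M_max_threads + 1 slots, with slot 0 serving the global pool. A hedged sketch of the shape being built; the member names below are illustrative, and the real _Bin_record holds raw arrays from ::operator new:

#include <cstddef>

struct block;  // freelist node, chained through its first word

// Rough shape of the per-bin bookkeeping that _M_initialize() builds.
struct bin_record_sketch
{
  block**      first;     // first[t]: head of thread t's freelist
  std::size_t* free_cnt;  // free_cnt[t]: blocks parked on that freelist
  std::size_t* used_cnt;  // used_cnt[t]: blocks handed out to thread t
  void*        mutex;     // stands in for __gthread_mutex_t*
};

int main()
{
  const std::size_t max_threads = 4;
  bin_record_sketch bin;
  // Slot 0 serves the global pool, slots 1..max_threads the threads.
  bin.first    = new block*[max_threads + 1]();
  bin.free_cnt = new std::size_t[max_threads + 1]();
  bin.used_cnt = new std::size_t[max_threads + 1]();
  bin.mutex    = 0;
  delete [] bin.first;
  delete [] bin.free_cnt;
  delete [] bin.used_cnt;
  return 0;
}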
void
__pool<false>::_M_initialize()
{
// This method is called on the first allocation (when _M_init
// is still false) to create the bins.
// _M_force_new must not change after the first allocate(),
// which in turn calls this method, so if it's false, it's false
// forever and we don't need to return here ever again.
if (_M_options._M_force_new)
{
_M_init = true;
return;
}
// Calculate the number of bins required based on _M_max_bytes.
// _M_bin_size is statically-initialized to one.
size_t __bin_size = _M_options._M_min_bin;
while (_M_options._M_max_bytes > __bin_size)
{
__bin_size <<= 1;
++_M_bin_size;
}
// Set up the bin map for quick lookup of the relevant bin.
const size_t __j = (_M_options._M_max_bytes + 1) * sizeof(_Binmap_type);
_M_binmap = static_cast<_Binmap_type*>(::operator new(__j));
_Binmap_type* __bp = _M_binmap;
_Binmap_type __bin_max = _M_options._M_min_bin;
_Binmap_type __bint = 0;
for (_Binmap_type __ct = 0; __ct <= _M_options._M_max_bytes; ++__ct)
{
if (__ct > __bin_max)
{
__bin_max <<= 1;
++__bint;
}
*__bp++ = __bint;
}
// Initialize _M_bin and its members.
void* __v = ::operator new(sizeof(_Bin_record) * _M_bin_size);
_M_bin = static_cast<_Bin_record*>(__v);
for (size_t __n = 0; __n < _M_bin_size; ++__n)
{
_Bin_record& __bin = _M_bin[__n];
__v = ::operator new(sizeof(_Block_record*));
__bin._M_first = static_cast<_Block_record**>(__v);
__bin._M_first[0] = NULL;
}
_M_init = true;
}
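
Both _M_initialize bodies build the same lookup table: _M_binmap[__bytes] holds the index of the smallest power-of-two bin that can satisfy the request, so allocate() needs a single array read instead of a loop. A self-contained sketch of the construction; make_binmap is an illustrative name, and a min_bin of 8 is assumed for the example:

#include <cassert>
#include <cstddef>
#include <vector>

// binmap[n] == index of the smallest bin (of size min_bin << index)
// that can hold an n-byte request.
std::vector<unsigned char>
make_binmap(std::size_t min_bin, std::size_t max_bytes)
{
  std::vector<unsigned char> binmap(max_bytes + 1);
  std::size_t bin_max = min_bin;
  unsigned char bin = 0;
  for (std::size_t n = 0; n <= max_bytes; ++n)
    {
      if (n > bin_max)
        {
          bin_max <<= 1;
          ++bin;
        }
      binmap[n] = bin;
    }
  return binmap;
}

int main()
{
  std::vector<unsigned char> m = make_binmap(8, 128);
  assert(m[1] == 0 && m[8] == 0);   // 1..8 bytes  -> bin 0 (8 bytes)
  assert(m[9] == 1 && m[16] == 1);  // 9..16 bytes -> bin 1 (16 bytes)
  assert(m[128] == 4);              // 128 bytes   -> bin 4 (8 << 4)
  return 0;
}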
#ifdef __GTHREADS
size_t
__pool<true>::_M_get_thread_id()
{
// If we have thread support and it's active, we check the thread
// key value and return its id.  If the key is not set, we take
// the first record from _M_thread_freelist, set the key, and
// return that record's id.
if (__gthread_active_p())
{
void* v = __gthread_getspecific(__gnu_internal::freelist_key);
_Thread_record* __freelist_pos = static_cast<_Thread_record*>(v);
if (__freelist_pos == NULL)
{
// Since _M_options._M_max_threads must be larger than the
// theoretical maximum number of threads of the OS, the list
// can never be empty.
{
__gnu_cxx::lock sentry(__gnu_internal::freelist_mutex);
__freelist_pos = _M_thread_freelist;
_M_thread_freelist = _M_thread_freelist->_M_next;
}
__gthread_setspecific(__gnu_internal::freelist_key,
static_cast<void*>(__freelist_pos));
}
return __freelist_pos->_M_id;
}
// Otherwise (no thread support or inactive) all requests are
// served from the global pool 0.
return 0;
}
void
__pool<true>::_M_destroy_thread_key(void* __freelist_pos)
{
// Return this thread id record to front of thread_freelist.
__gnu_cxx::lock sentry(__gnu_internal::freelist_mutex);
_Thread_record* __tr = static_cast<_Thread_record*>(__freelist_pos);
__tr->_M_next = _M_thread_freelist;
_M_thread_freelist = __tr;
}
#endif
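
The two functions above implement thread-id recycling: the first lookup in a thread pops an id from a mutex-guarded freelist and caches it in the gthread key, and the key's destructor pushes the record back when the thread exits. A simplified sketch of the same pattern using C++11 primitives in place of the gthr interface; illustrative only, since the real pool draws from a pre-built, bounded list:

#include <cstddef>
#include <mutex>

namespace
{
  std::mutex id_mutex;
  std::size_t next_id = 1;  // id 0 is reserved for the global pool

  std::size_t acquire_id()
  {
    // The real code pops a _Thread_record here and returns it to the
    // freelist via the key destructor on thread exit.
    std::lock_guard<std::mutex> sentry(id_mutex);
    return next_id++;
  }

  std::size_t thread_id()
  {
    // thread_local caching plays the role of the gthread key.
    static thread_local const std::size_t id = acquire_id();
    return id;
  }
}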
// Definitions for non-exported bits of __common_pool_policy.
#ifdef __GTHREADS
template<>
__pool<true>
__common_pool_policy<true>::_S_data = __pool<true>();

template<>
__pool<true>&
__common_pool_policy<true>::_S_get_pool() { return _S_data; }
#endif
template<>
__pool<false>
__common_pool_policy<false>::_S_data = __pool<false>();
template<>
__pool<false>&
__common_pool_policy<false>::_S_get_pool() { return _S_data; }
// Instantiations.
template class __mt_alloc<char>;
template class __mt_alloc<wchar_t>;
} // namespace __gnu_cxx
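
With the char and wchar_t instantiations exported, client code can also instantiate the allocator for its own types; minimal container use looks like this, with all tunables left at their defaults:

#include <vector>
#include <ext/mt_allocator.h>

int main()
{
  // The first allocate() call initializes the underlying __pool and
  // freezes the _Tune settings for that pool.
  std::vector<int, __gnu_cxx::__mt_alloc<int> > v;
  v.push_back(42);
  return 0;
}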
@@ -32,8 +32,7 @@
 //
 #include <bits/c++config.h>
-#include <memory>
-#include <ext/mt_allocator.h>
+#include <cstdlib>
 #include <ext/pool_allocator.h>
 
 namespace __gnu_internal
@@ -166,4 +165,8 @@ namespace __gnu_cxx
   char* __pool_alloc_base::_S_end_free = 0;
   size_t __pool_alloc_base::_S_heap_size = 0;
 
+  // Instantiations.
+  template class __pool_alloc<char>;
+  template class __pool_alloc<wchar_t>;
 } // namespace __gnu_cxx
 // 2001-11-25 Phil Edwards <pme@gcc.gnu.org>
 //
-// Copyright (C) 2001, 2003 Free Software Foundation, Inc.
+// Copyright (C) 2001, 2003, 2004 Free Software Foundation, Inc.
 //
 // This file is part of the GNU ISO C++ Library.  This library is free
 // software; you can redistribute it and/or modify it under the
@@ -21,29 +21,13 @@
 // 20.4.1.1 allocator members
 
 #include <cstdlib>
-#include <memory>
-//#include <ext/pool_allocator.h>
 #include <ext/debug_allocator.h>
 #include <ext/malloc_allocator.h>
-#include <testsuite_hooks.h>
+#include <testsuite_allocator.h>
 
 using __gnu_cxx::malloc_allocator;
 using __gnu_cxx::debug_allocator;
 
-template class malloc_allocator<int>;
-template class debug_allocator<malloc_allocator<int> >;
-
-#if 0
-using __gnu_cxx::__pool_alloc;
-template class __pool_alloc<true, 3>;
-template class __pool_alloc<false, 3>;
-#endif
-
-bool new_called;
-bool delete_called;
-std::size_t requested;
-
 void*
 operator new(std::size_t n) throw(std::bad_alloc)
 {
@@ -59,47 +43,14 @@ operator delete(void *v) throw()
   return std::free(v);
 }
 
-template<typename Alloc, bool uses_global_new_and_delete>
-void check_allocator()
-{
-  bool test __attribute__((unused)) = true;
-  new_called = false;
-  delete_called = false;
-  requested = 0;
-
-  Alloc a;
-  typename Alloc::pointer p = a.allocate(10);
-  if (uses_global_new_and_delete)
-    VERIFY( requested >= (10 * 15 * sizeof(long)) );
-  VERIFY( new_called == uses_global_new_and_delete );
-  a.deallocate(p, 10);
-  VERIFY( delete_called == uses_global_new_and_delete );
-}
-
-// These just help tracking down error messages.
-void test01()
-{ check_allocator<malloc_allocator<int>, false>(); }
-
-void test02()
-{ check_allocator<debug_allocator<malloc_allocator<int> >, false>(); }
-
-#if 0
-void test03()
-{ check_allocator<__pool_alloc<true, 3>, true>(); }
-
-void test04()
-{ check_allocator<__pool_alloc<false, 3>, true>(); }
-#endif
+bool test02()
+{
+  typedef debug_allocator<malloc_allocator<unsigned int> > allocator_type;
+  return (__gnu_test::check_new<allocator_type, false>() == false);
+}
 
 int main()
 {
-  test01();
-  test02();
-#if 0
-  test03();
-  test04();
-#endif
-  return 0;
+  return test02();
 }
// { dg-do compile }
// 2001-11-25 Phil Edwards <pme@gcc.gnu.org>
//
// Copyright (C) 2001, 2003, 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
// USA.
// 20.4.1.1 allocator members
#include <cstdlib>
#include <ext/debug_allocator.h>
#include <ext/malloc_allocator.h>
template class __gnu_cxx::debug_allocator<__gnu_cxx::malloc_allocator<int> >;
// 2001-11-25 Phil Edwards <pme@gcc.gnu.org>
//
// Copyright (C) 2001, 2003, 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
// USA.
// 20.4.1.1 allocator members
#include <cstdlib>
#include <ext/malloc_allocator.h>
#include <testsuite_allocator.h>
using __gnu_cxx::malloc_allocator;
void*
operator new(std::size_t n) throw(std::bad_alloc)
{
new_called = true;
requested = n;
return std::malloc(n);
}
void
operator delete(void *v) throw()
{
delete_called = true;
return std::free(v);
}
// These just help tracking down error messages.
bool test01()
{
typedef malloc_allocator<unsigned int> allocator_type;
return (__gnu_test::check_new<allocator_type, false>() == false);
}
int main()
{
return test01();
}
// { dg-do compile }
// 2001-11-25 Phil Edwards <pme@gcc.gnu.org>
//
// Copyright (C) 2001, 2003, 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
// USA.
// 20.4.1.1 allocator members
#include <cstdlib>
#include <ext/malloc_allocator.h>
template class __gnu_cxx::malloc_allocator<int>;
// 2001-11-25 Phil Edwards <pme@gcc.gnu.org>
//
// Copyright (C) 2001, 2003, 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
// USA.
// 20.4.1.1 allocator members
#include <cstdlib>
#include <ext/mt_allocator.h>
#include <testsuite_allocator.h>
using __gnu_cxx::__mt_alloc;
void*
operator new(std::size_t n) throw(std::bad_alloc)
{
new_called = true;
requested = n;
return std::malloc(n);
}
void
operator delete(void *v) throw()
{
delete_called = true;
return std::free(v);
}
bool test03()
{
typedef __mt_alloc<unsigned int> allocator_type;
return (__gnu_test::check_new<allocator_type, true>() == true);
}
int main()
{
// Exit zero on success so the harness records a pass.
return test03() ? 0 : 1;
}
// { dg-do compile }
// 2001-11-25 Phil Edwards <pme@gcc.gnu.org>
//
// Copyright (C) 2001, 2003, 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
// USA.
// 20.4.1.1 allocator members
#include <cstdlib>
#include <ext/mt_allocator.h>
template class __gnu_cxx::__mt_alloc<int>;
// 2004-08-25 Benjamin Kosnik <bkoz@redhat.com>
//
// Copyright (C) 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
// USA.
#include <cassert>
#include <memory>
#include <ext/mt_allocator.h>
struct pod
{
int i;
};
// Tune characteristics.
// __common_pool_policy
void test01()
{
typedef pod value_type;
#ifdef __GTHREADS
typedef __gnu_cxx::__common_pool_policy<true> policy_type;
#else
typedef __gnu_cxx::__common_pool_policy<false> policy_type;
#endif
typedef __gnu_cxx::__mt_alloc<value_type, policy_type> allocator_type;
typedef __gnu_cxx::__pool_base::_Tune tune_type;
tune_type t_default;
tune_type t_opt(16, 5120, 32, 5120, 20, 10, false);
tune_type t_single(16, 5120, 32, 5120, 1, 10, false);
allocator_type a;
tune_type t1 = a._M_get_options();
assert(t1._M_align == t_default._M_align);
a._M_set_options(t_opt);
tune_type t2 = a._M_get_options();
assert(t1._M_align != t2._M_align);
allocator_type::pointer p1 = a.allocate(128);
allocator_type::pointer p2 = a.allocate(5128);
a._M_set_options(t_single);
t1 = a._M_get_options();
assert(t1._M_max_threads != t_single._M_max_threads);
assert(t1._M_max_threads == t_opt._M_max_threads);
a.deallocate(p1, 128);
a.deallocate(p2, 5128);
}
int main()
{
test01();
return 0;
}
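
Matched against the assertions, the seven positional _Tune constructor arguments used in these tests appear to be, in order: align, max bytes, min bin, chunk size, max threads, freelist headroom, and force-new. That reading is inferred from the tests, not quoted from the header. Spelled out for t_opt:

#include <cassert>
#include <ext/mt_allocator.h>

int main()
{
  typedef __gnu_cxx::__pool_base::_Tune tune_type;
  //           align  max_bytes  min_bin  chunk  threads  headroom  force_new
  tune_type t(    16,      5120,      32,  5120,      20,       10,     false);
  assert(t._M_align == 16);
  assert(t._M_max_threads == 20);
  return 0;
}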
// 2004-08-25 Benjamin Kosnik <bkoz@redhat.com>
//
// Copyright (C) 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
// USA.
#include <cassert>
#include <memory>
#include <ext/mt_allocator.h>
struct pod
{
int i;
};
// Tune characteristics.
// __per_type_pool_policy
void test02()
{
typedef pod value_type;
#ifdef __GTHREADS
typedef __gnu_cxx::__per_type_pool_policy<value_type, true> policy_type;
#else
typedef __gnu_cxx::__per_type_pool_policy<value_type, false> policy_type;
#endif
typedef __gnu_cxx::__mt_alloc<value_type, policy_type> allocator_type;
typedef __gnu_cxx::__pool_base::_Tune tune_type;
tune_type t_default;
tune_type t_opt(16, 5120, 32, 5120, 20, 10, false);
tune_type t_single(16, 5120, 32, 5120, 1, 10, false);
allocator_type a;
tune_type t1 = a._M_get_options();
assert(t1._M_align == t_default._M_align);
a._M_set_options(t_opt);
tune_type t2 = a._M_get_options();
assert(t1._M_align != t2._M_align);
allocator_type::pointer p1 = a.allocate(128);
allocator_type::pointer p2 = a.allocate(5128);
a._M_set_options(t_single);
t1 = a._M_get_options();
assert(t1._M_max_threads != t_single._M_max_threads);
assert(t1._M_max_threads == t_opt._M_max_threads);
a.deallocate(p1, 128);
a.deallocate(p2, 5128);
}
int main()
{
test02();
return 0;
}
// 2004-08-25 Benjamin Kosnik <bkoz@redhat.com>
//
// Copyright (C) 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
// USA.
#include <cassert>
#include <memory>
#include <ext/mt_allocator.h>
struct pod
{
int i;
};
// Tune characteristics, two of same type
template<typename _Tp>
struct test_policy
{ static bool per_type() { return true; } };
template<bool _Thread>
struct test_policy<__gnu_cxx::__common_pool_policy<_Thread> >
{
typedef __gnu_cxx::__common_pool_policy<_Thread> pool_type;
static bool per_type() { return false; }
};
// Tune characteristics, two allocators of the same type.
template<typename _Tp, typename _Cp>
void test03()
{
typedef __gnu_cxx::__pool_base::_Tune tune_type;
typedef _Tp value_type;
typedef _Cp policy_type;
typedef __gnu_cxx::__mt_alloc<value_type, policy_type> allocator_type;
tune_type t_default;
tune_type t_opt(16, 5120, 32, 5120, 20, 10, false);
tune_type t_single(16, 5120, 32, 5120, 1, 10, false);
// First instances assured.
allocator_type a;
tune_type t1 = a._M_get_options();
tune_type t2;
if (test_policy<policy_type>::per_type())
{
assert(t1._M_align == t_default._M_align);
a._M_set_options(t_opt);
t2 = a._M_get_options();
assert(t1._M_align != t2._M_align);
}
else
t2 = t1;
// Lock tune settings.
typename allocator_type::pointer p1 = a.allocate(128);
allocator_type a2;
tune_type t3 = a2._M_get_options();
tune_type t4;
assert(t3._M_max_threads == t2._M_max_threads);
typename allocator_type::pointer p2 = a2.allocate(5128);
a2._M_set_options(t_single);
t4 = a2._M_get_options();
assert(t4._M_max_threads != t_single._M_max_threads);
assert(t4._M_max_threads == t3._M_max_threads);
a.deallocate(p1, 128);
a2.deallocate(p2, 5128);
}
int main()
{
#ifdef __GTHREADS
test03<int, __gnu_cxx::__per_type_pool_policy<int, true> >();
test03<int, __gnu_cxx::__common_pool_policy<true> >();
#endif
test03<int, __gnu_cxx::__common_pool_policy<false> >();
test03<int, __gnu_cxx::__per_type_pool_policy<int, false> >();
return 0;
}
// 2004-08-25 Benjamin Kosnik <bkoz@redhat.com>
//
// Copyright (C) 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
// USA.
#include <cassert>
#include <memory>
#include <ext/mt_allocator.h>
struct pod
{
int i;
};
// Tune characteristics, two of same type
template<typename _Tp>
struct test_policy
{ static bool per_type() { return true; } };
template<bool _Thread>
struct test_policy<__gnu_cxx::__common_pool_policy<_Thread> >
{
typedef __gnu_cxx::__common_pool_policy<_Thread> pool_type;
static bool per_type() { return false; }
};
struct pod2
{
int i;
int j;
};
// Tune characteristics, two of different instantiations
template<typename _Tp, typename _Cp>
void test04()
{
typedef __gnu_cxx::__pool_base::_Tune tune_type;
typedef _Tp value_type;
typedef _Cp policy_type;
typedef __gnu_cxx::__mt_alloc<value_type, policy_type> allocator_type;
tune_type t_default;
tune_type t_opt(16, 5120, 32, 5120, 20, 10, false);
tune_type t_single(16, 5120, 32, 5120, 1, 10, false);
allocator_type a;
tune_type t1 = a._M_get_options();
tune_type t2;
if (test_policy<policy_type>::per_type())
{
assert(t1._M_align == t_default._M_align);
a._M_set_options(t_opt);
t2 = a._M_get_options();
assert(t1._M_align != t2._M_align);
}
else
t2 = t1;
// Lock tune settings.
typename allocator_type::pointer p1 = a.allocate(128);
// First instance of local type assured.
typedef pod2 value2_type;
typedef typename allocator_type::template rebind<value2_type>::other rebind_type;
rebind_type a2;
tune_type t3 = a2._M_get_options();
tune_type t4;
// Both policy_type and rebind_type::policy_type have same characteristics.
if (test_policy<policy_type>::per_type())
{
assert(t3._M_align == t_default._M_align);
a2._M_set_options(t_opt);
t4 = a2._M_get_options();
assert(t3._M_align != t4._M_align);
t3 = t4;
}
else
assert(t3._M_max_threads == t2._M_max_threads);
typename rebind_type::pointer p2 = a2.allocate(5128);
a2._M_set_options(t_single);
t4 = a2._M_get_options();
assert(t4._M_max_threads != t_single._M_max_threads);
assert(t4._M_max_threads == t3._M_max_threads);
a.deallocate(p1, 128);
a2.deallocate(p2, 5128);
}
int main()
{
#ifdef __GTHREADS
test04<float, __gnu_cxx::__common_pool_policy<true> >();
test04<double, __gnu_cxx::__per_type_pool_policy<double, true> >();
#endif
test04<float, __gnu_cxx::__common_pool_policy<false> >();
test04<double, __gnu_cxx::__per_type_pool_policy<double, false> >();
return 0;
}
// 2001-11-25 Phil Edwards <pme@gcc.gnu.org>
//
// Copyright (C) 2001, 2003, 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
// USA.
// 20.4.1.1 allocator members
#include <cstdlib>
#include <ext/pool_allocator.h>
#include <testsuite_allocator.h>
using __gnu_cxx::__pool_alloc;
void*
operator new(std::size_t n) throw(std::bad_alloc)
{
new_called = true;
requested = n;
return std::malloc(n);
}
void
operator delete(void *v) throw()
{
delete_called = true;
return std::free(v);
}
bool test03()
{
typedef __pool_alloc<unsigned int> allocator_type;
return (__gnu_test::check_new<allocator_type, true>() == true);
}
int main()
{
// Exit zero on success so the harness records a pass.
return test03() ? 0 : 1;
}
// { dg-do compile }
// 2001-11-25 Phil Edwards <pme@gcc.gnu.org>
//
// Copyright (C) 2001, 2003, 2004 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2, or (at your option)
// any later version.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING. If not, write to the Free
// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
// USA.
// 20.4.1.1 allocator members
#include <cstdlib>
#include <ext/pool_allocator.h>
template class __gnu_cxx::__pool_alloc<int>;
@@ -38,6 +38,13 @@
 #include <cstddef>
 #include <limits>
 
+namespace
+{
+  bool new_called = false;
+  bool delete_called = false;
+  std::size_t requested = 0;
+};
+
 namespace __gnu_test
 {
   class allocation_tracker
@@ -170,9 +177,24 @@ namespace __gnu_test
     operator!=(const tracker_alloc<T1>&, const tracker_alloc<T2>&) throw()
     { return false; }
 
   bool
   check_construct_destroy(const char* tag, int expected_c, int expected_d);
+
+  template<typename Alloc, bool uses_global_new_and_delete>
+    bool check_new()
+    {
+      bool test __attribute__((unused)) = true;
+      Alloc a;
+      typename Alloc::pointer p = a.allocate(10);
+      if (uses_global_new_and_delete)
+        test &= ( requested >= (10 * 15 * sizeof(long)) );
+      test &= ( new_called == uses_global_new_and_delete );
+      a.deallocate(p, 10);
+      test &= ( delete_called == uses_global_new_and_delete );
+      return test;
+    }
 }; // namespace __gnu_test
 
 #endif // _GLIBCXX_TESTSUITE_ALLOCATOR_H