Re: Hard Hang with __alloc_pages: 0-order allocation failed (gfp=0x20/1) - Not out of memory

From: Roger Larsson
Date: Wed May 26 2004 - 15:06:10 EST


Modifying the subject line so you can find it, and writing a better summary...

* When little memory is left, __alloc_pages won't allow
allocations from interrupt context (even if PF_MEMALLOC is set):

if (current->flags & PF_MEMALLOC && !in_interrupt()) {
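
To see what that gate means in practice, here is a small stand-alone
model (my own toy, not kernel code; the flag bit and all helpers are
made up for illustration): with the normal free list empty, a
PF_MEMALLOC task in process context still gets a page from the
reserve, while the very same request from interrupt context gets NULL.

/* toy user-space model, not kernel code */
#include <stdbool.h>
#include <stdio.h>

#define PF_MEMALLOC 0x01                /* just a flag bit for this model */

struct task { unsigned long flags; };

static struct task current_task = { .flags = PF_MEMALLOC };
static bool interrupt_context = true;   /* pretend we were called from an ISR */
static int free_pages;                  /* normal free list is empty... */
static int reserve_pages = 4;           /* ...but the emergency reserve is not */

/* models the tail of the allocator once the normal watermarks have failed */
static void *model_alloc_page(void)
{
        if (free_pages > 0) {
                free_pages--;
                return (void *)0x1;
        }
        /* the quoted test: reserve only for PF_MEMALLOC tasks in process context */
        if ((current_task.flags & PF_MEMALLOC) && !interrupt_context &&
            reserve_pages > 0) {
                reserve_pages--;
                return (void *)0x2;
        }
        return NULL;                    /* interrupt-context caller gets nothing */
}

int main(void)
{
        printf("from interrupt context: %p\n", model_alloc_page());  /* (nil) */
        interrupt_context = false;
        printf("from process context:   %p\n", model_alloc_page());  /* non-NULL */
        return 0;
}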

* But in the call chain there is a loop in __kmem_cache_alloc when running SMP:

> static inline void * __kmem_cache_alloc (kmem_cache_t *cachep, int flags)
> {
>         unsigned long save_flags;
>         void* objp;
>
>         kmem_cache_alloc_head(cachep, flags);
> try_again:
>         local_irq_save(save_flags);
>
> #ifdef CONFIG_SMP
>         {
>                 cpucache_t *cc = cc_data(cachep);
>
>                 if (cc) {
>                         if (cc->avail) {
>                                 STATS_INC_ALLOCHIT(cachep);
>                                 objp = cc_entry(cc)[--cc->avail];
>                         } else {
>                                 STATS_INC_ALLOCMISS(cachep);
>                                 objp = kmem_cache_alloc_batch(cachep,cc,flags);
>                                 if (!objp)
>                                         goto alloc_new_slab_nolock;
>
> - - - snip - - -
> alloc_new_slab_nolock:
> #endif
>         local_irq_restore(save_flags);
>         if (kmem_cache_grow(cachep, flags))
>                 /* Someone may have stolen our objs. Doesn't matter, we'll
>                  * just come back here again.
>                  */
>                 goto try_again;
>
> But kmem_cache_grow will fail...
> -> kmem_getpages -> __get_free_pages -> alloc_pages

But alloc_pages WILL fail every time... => infinite loop in interrupt
context. It probably should not try again in this case, e.g.:

if (!in_interrupt())
        goto try_again;

Or it should check whether __GFP_WAIT is set... A _busy_ wait is also a wait...
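
Just to show the effect of such a guard, here is a stand-alone toy
(my own model, not kernel code): the slab internals are stubbed so
that the per-CPU batch never yields an object and "growing" always
claims success, i.e. the situation where the code keeps coming back
to try_again. MODEL_GFP_WAIT stands in for __GFP_WAIT, and the guard
combines both suggestions above (no retry from interrupt context, no
retry when the caller may not wait):

/* toy user-space model, not kernel code */
#include <stdbool.h>
#include <stdio.h>

#define MODEL_GFP_WAIT 0x01             /* stand-in for __GFP_WAIT, value arbitrary */

static bool interrupt_context = true;   /* pretend the caller is an ISR */

static void *alloc_batch(void) { return NULL; } /* objects always "stolen" */
static bool cache_grow(void)   { return true; } /* growing always "succeeds" */

/* guarded == false models the current retry loop, guarded == true the suggestion */
static void *model_cache_alloc(int flags, bool guarded, unsigned long *tries)
{
        for (;;) {
                void *objp;

                (*tries)++;
                objp = alloc_batch();
                if (objp)
                        return objp;

                if (!cache_grow())
                        return NULL;

                /* suggested check: only loop back when we may actually wait */
                if (guarded && (interrupt_context || !(flags & MODEL_GFP_WAIT)))
                        return NULL;

                if (*tries >= 1000000)  /* cap so the unguarded variant terminates */
                        return NULL;
        }
}

int main(void)
{
        unsigned long tries = 0;

        model_cache_alloc(0, false, &tries);
        printf("unguarded: still trying after %lu passes (would spin forever)\n", tries);

        tries = 0;
        model_cache_alloc(0, true, &tries);
        printf("guarded:   gave up after %lu pass\n", tries);
        return 0;
}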

/RogerL

--
Roger Larsson
Skellefteå
Sweden