Re: Swap over network

Jose Orlando Pereira (mesjop@di.uminho.pt)
Tue, 29 Apr 1997 18:08:00 +0100


Pavel Machek wrote:
>
> I looked. Well, I did not look too much. I'm interested in just one thing:
> How do you solve deadlocks?
>
> Imagine that there's not enough (physical) memory to receive packet. There's
> enough swap, but storm of packets just ate all physical memory. This is
> situation when deadlock is near: You still have to send/receive ARPs, and you
> must be able to emit swap-out packet(s) and receive acknowledges.
>

Yes, we also stumbled across that one. ;-) Our answer was to force
processes marked as "unswappable" (e.g. the swapper and init) to have
their priority boosted to GFP_ATOMIC when requesting free pages. This
is still an optimistic solution, as the kernel can run out of free
pages, but after increasing MIN_FREE_PAGES by 50% we never saw
a deadlock again. Just to be safe, we increased it a little further.

The tests it stood up to were *really* evil! ;-) We made a machine
with 16Mb of memory fill up to 40Mb and then called swapoff.
This causes a severe memory shortage and process killing, and we
saw no deadlocks.

> :-). You would have to pagelock whole process with its libraries.
> And libc is much too big, these days.

But the absolute minimum of libc required to page out some pages
is quite small, and with the above-mentioned patch it fits
within MIN_FREE_PAGES. Just be optimistic and tune it
as necessary! ;-)

> Hmm - I think that it is ok to assume that swapper can not fail.
> My current problem now is deadlock described above. If we solve that,
> linux will have nice, working swapping over network.

Hmm... Are you sure? What if the server gets unplugged and the kernel
requests a page from it? Won't it hang forever?

-- 
Jose Orlando Pereira
* jop@di.uminho.pt * http://gsd.di.uminho.pt/~jop *