Re: is hibernation usable?

From: Luigi Semenzato
Date: Tue Oct 22 2019 - 17:26:59 EST


Thank you for the quick reply!

On Tue, Oct 22, 2019 at 1:57 PM Rafael J. Wysocki <rafael@xxxxxxxxxx> wrote:
>
> On Tue, Oct 22, 2019 at 10:09 PM Luigi Semenzato <semenzato@xxxxxxxxxx> wrote:
> >
> > Following a thread in linux-pm
> > (https://marc.info/?l=linux-mm&m=157012300901871) I have some issues
> > that may be of general interest.
> >
> > 1. To the best of my knowledge, Linux hibernation is guaranteed to
> > fail if more than 1/2 of total RAM is in use (for instance, by
> > anonymous pages). My knowledge is based on evidence, experiments,
> > code inspection, the thread above, and a comment in
> > Documentation/swsusp.txt, copied here:
>
> So I use it on a regular basis (i.e. every day) on a system that often
> has over 50% of RAM in use and it all works.
>
> I also know about other people using it on a regular basis.
>
> For all of these users, it is usable.
>
> > "Instead, we load the image into unused memory and then atomically
> > copy it back to its original location. This implies, of course, a
> > maximum image size of half the amount of memory."
>
> That isn't right any more. An image that is loaded during resume can,
> in fact, be larger than 50% of RAM. An image that is created during
> hibernation, however, cannot.

Sorry, I don't understand this. Are you saying that, for instance,
you can resume a 30 GB image on a 32 GB device, but that image could
only have been created on a 64 GB device?

> > 2. There's no simple/general workaround. Rafael suggested on the
> > thread "Whatever doesn't fit into 50% of RAM needs to be swapped out
> > before hibernation". This is a good suggestion: I am actually close
> > to achieving this using memcgroups, but it's a fair amount of work,
> > and a fairly special case. Not everybody uses memcgroups, and I don't
> > know of other reliable ways of forcing swap from user level.
>
> I don't need to do anything like that.

Again, I don't understand. Why did you make that suggestion then?

> hibernate_preallocate_memory() manages to free a sufficient amount of
> memory on my system every time.

Unfortunately this doesn't work for me. I may have already described
this simple experiment: on a 4 GB device, create two large processes
like this:

dd if=/dev/zero bs=1100M count=1 | sleep infinity &
dd if=/dev/zero bs=1100M count=1 | sleep infinity &

so that more than 50% of MemTotal is used for anonymous pages (each
dd allocates a 1100M buffer and then blocks writing to sleep, which
never reads its input, so the buffer stays resident). Then
echo disk > /sys/power/state fails with ENOMEM.
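One way to confirm the precondition before attempting hibernation (my
own sanity check, not part of the kernel interface) is to read the
relevant fields straight out of /proc/meminfo:

```shell
# Print how much of RAM is currently anonymous memory.
# MemTotal and AnonPages are standard /proc/meminfo fields (in kB).
awk '/^MemTotal:/  {t=$2}
     /^AnonPages:/ {a=$2}
     END {printf "AnonPages: %d kB of MemTotal %d kB (%.0f%%)\n", a, t, 100*a/t}' \
    /proc/meminfo
```

With the two dd processes above running, this should report well over
50%.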

Is this supposed to work? Maybe I am doing something wrong?
Hibernation works before I create the dd processes. After I force
some of those pages to a separate swap device, hibernation works too,
so those pages aren't mlocked or anything.
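For reference, the memcgroup route I mentioned above can be sketched
roughly as below (cgroup v2; the group name, the PID variable, and the
512M figure are all made-up examples, and the real thing needs root --
shown here as a dry run that only echoes the commands):

```shell
# Sketch: push a large process's anonymous pages out to swap by
# lowering its cgroup v2 memory.high below current usage, then
# hibernate.  Remove the leading "echo" on each line to actually run it.
CG=/sys/fs/cgroup/prehibernate          # example group name

echo "mkdir $CG"
echo "echo \$BIG_PID > $CG/cgroup.procs"    # move the large process in
echo "echo 512M > $CG/memory.high"          # kernel reclaims down toward this
echo "echo disk > /sys/power/state"         # then attempt hibernation
```

Whether reclaim actually swaps rather than stalling the group depends
on swap being available and sized appropriately, of course.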

> > 3. A feature that works only when 1/2 of total RAM can be allocated
> > is, in my opinion, not usable, except possibly under special
> > circumstances, such as mine. Most of the available articles and
> > documentation do not mention this important fact (but for the excerpt
> > I mentioned, which is not in a prominent position).
>
> It can be used with over 1/2 of RAM allocated and that is quite easy
> to demonstrate.
>
> Honestly, I'm not sure what your problem is really.

I apologize if I am doing something stupid and I should know better
before I waste other people's time. I have been trying to explain
these issues as best as I can. I have a reproducible failure. I'll
be happy to provide any additional detail.

>
> > Two questions then:
> >
> > A. Should the documentation be changed to reflect this fact more
> > clearly? I feel that the current situation is a disservice to the
> > user community.
>
> Propose changes.

Sure, after we resolve the above questions.

> > B. Would it be worthwhile to improve the hibernation code to remove
> > this limitation? Is this of interest to anybody (other than me)?
>
> Again, propose specific changes.