Re: RFC [PATCH v4 2/7] Enable balloon drivers to report inflated memory

From: Alexander Atanasov
Date: Mon Oct 10 2022 - 03:31:23 EST


Hello,

On 10.10.22 9:18, Nadav Amit wrote:
On Oct 7, 2022, at 3:58 AM, Alexander Atanasov <alexander.atanasov@xxxxxxxxxxxxx> wrote:

On 7.10.22 0:07, Nadav Amit wrote:

I was looking through the series and I did not see actual users of the
notifier. Usually, it is not great to build an API without users.


You are right. I hope to get some feedback/interest from the potential users that I mentioned in the cover letter. I will probably split the notifier into a separate series. To make it useful it will require more changes.
See below for more about them.
Fair, but this is something that is more suitable for RFC. Otherwise, more
likely than not - your patches would go in as is.

Yes, I will remove the notifier and resend both as RFC. I think that every patch is an RFC; the RFC tag is used for more general changes that could affect unexpected areas, change functionality, change design, and in general can lead to bigger impact. In this case it adds functionality that is missing and could hardly affect anything else.
In essence it provides information that you cannot get without it.
But I will take your advice and push everything through RFC from now on.

Just keep the version numbers as you had before. That’s fine and better to
prevent confusion.

Sure, I will.

[snip]
+
+static int balloon_notify(unsigned long val)
+{
+ return srcu_notifier_call_chain(&balloon_chain, val, NULL);
Since you know the inflated_kb value here, why not use it as an argument
to the callback? I think casting to (void *) and back is best. But you can
also provide a pointer to the value. Doesn’t it sound better than having
potentially different notifiers reading different values?

My current idea is to have a struct with the current and previous value, maybe with the change in percent. The actual value does not matter to anyone, but the size of the change does. When a user gets notified it can act upon the change: if it is small it can ignore it; if it is above some threshold it can act; if it makes sense for some receiver, it can accumulate changes from several notifications. Another option/addition is to have si_meminfo_current(..) and totalram_pages_current(..) that return values adjusted with the balloon values.
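To make the idea concrete, here is a small userspace sketch of what such a payload could look like. The struct and function names are hypothetical (not from the patch series); it only illustrates how a receiver could judge the magnitude of a change and ignore small ones:

```c
#include <assert.h>

/* Hypothetical payload a balloon notifier could pass to callbacks:
 * the previous and current inflated sizes, so each receiver can
 * judge the magnitude of the change itself. */
struct balloon_notify_info {
	unsigned long prev_kb;	/* inflated size at last notification */
	unsigned long curr_kb;	/* inflated size now */
};

/* Change as a percentage of the previous value; a receiver can
 * simply ignore events below its own threshold. */
static unsigned long change_percent(const struct balloon_notify_info *info)
{
	unsigned long base = info->prev_kb ? info->prev_kb : 1;
	unsigned long diff = info->curr_kb > info->prev_kb
			     ? info->curr_kb - info->prev_kb
			     : info->prev_kb - info->curr_kb;

	return diff * 100 / base;
}
```

A receiver would then do something like `if (change_percent(info) < 5) return NOTIFY_OK;` and only recalculate its limits for bigger moves.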

Going further, there are a few places that calculate something based on available memory and have no sysfs/proc interface for setting limits. Most of them work in percents, so they can be converted to redo the calculation when they get a notification.

The ones that do have an interface for configuration but use absolute memory values can be handled in two ways: convert them to use percents of what is available, or extend the notifier to notify userspace, which in turn does the calculations and updates the configuration.
I really need to see code to fully understand what you have in mind.

Sure - you can check some of the users with git grep totalram_pages - it shows self-explanatory usage like:
fs/f2fs/node.c:bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type) - calculations in percents - one good example
fs/ceph/super.h: congestion_kb = (16*int_sqrt(totalram_pages())) << (PAGE_SHIFT-10);
fs/fuse/inode.c: *limit = ((totalram_pages() << PAGE_SHIFT) >> 13) / 392;
fs/nfs/write.c: nfs_congestion_kb = (16*int_sqrt(totalram_pages())) << (PAGE_SHIFT-10);
fs/nfsd/nfscache.c: unsigned long low_pages = totalram_pages() - totalhigh_pages()
mm/oom_kill.c: oc->totalpages = totalram_pages() + total_swap_pages;


So all balloon drivers give a large amount of RAM on boot, then inflate the balloon. But these places have already been initialized and believe the system has the full amount of totalram, which is not true the moment they start to operate. The result is that too much space gets used and it degrades userspace performance.
Example - fs/eventpoll.c:static int __init eventpoll_init(void) - 4% of RAM for eventpoll - when you inflate half of the RAM it becomes 8% of the remaining RAM - do you really need 8% of your RAM to be used for eventpoll?

To solve this you need to register and, when notified, update the cache sizes, limits and whatever else is calculated from the amount of memory.

Hmm.. Not sure about all of that. Most balloon drivers are manually managed,
and call adjust_managed_page_count(), and as a result might want to redo
all the calculations that are based on totalram_pages().

Yes, I would say it looks mixed: manual for large changes and automatic for small changes. VMware and Hyper-V have automatic and manual (I am not sure exactly what you can change on a running VM, but I guess you can); virtio is manual only. I do not know about dlpar/xen.

The scenario is like this: start a VM with 4GB RAM, reduce it to 2GB with the balloon - the VM can be upgraded later.

All we are talking about relates to memory hotplug/unplug (where unplug is close to nonexistent, hence the balloons are used).

All values should be recalculated on memory hotplug too, so you can use the newly available RAM.

RAM is the most valuable resource of all, so I consider using it optimally to be of great importance.

Side-note: That’s not the case for VMware balloon. I actually considered
calling adjust_managed_page_count() just to conform with other balloon
drivers. But since we use totalram_pages() to communicate to the hypervisor
the total-ram, this would create endless (and wrong) feedback loop. I am not
claiming it is not possible to VMware balloon driver to call
adjust_managed_page_count(), but the chances are that it would create more
harm than good.

Virtio does both, depending on the deflate-on-OOM option. I already suggested unifying all drivers to inflate the used memory, as it seems more logical to me since nobody expects totalram_pages() to change, but the current state is that both ways are accepted and changing it can break existing users.
See the discussion here: https://lore.kernel.org/lkml/20220809095358.2203355-1-alexander.atanasov@xxxxxxxxxxxxx/.


Back to the matter at hand. It seems that you wish that the notifiers would
be called following any changes that would be reflected in totalram_pages().
So, doesn't it make more sense to call it from adjust_managed_page_count() ?

It will hurt performance - all drivers work page by page, i.e. they update by +1/-1, and they do so under locks, which, as you already noted, can lead to bad things. The notifier will accumulate the change and let its users know how much changed, so they can decide if they have to recalculate - it can even do so asynchronously in order to not disturb the drivers.
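The accumulate-then-notify idea could be modelled roughly like this. All names and the threshold are hypothetical, and a real implementation would fire the SRCU notifier chain where the comment sits; this userspace sketch only shows how the per-page path stays cheap:

```c
#include <stdatomic.h>

/* Hypothetical accumulator: drivers keep adjusting page by page under
 * their own locks, but the pending delta is only flushed out (and the
 * notifier chain called) once it crosses a threshold. */
#define NOTIFY_THRESHOLD_PAGES 1024

static atomic_long pending_pages;

/* Returns the batch that was reported, or 0 if the change is still
 * being accumulated. */
static long balloon_page_adjusted(long delta_pages)
{
	long pending = atomic_fetch_add(&pending_pages, delta_pages)
		       + delta_pages;

	if (pending >= NOTIFY_THRESHOLD_PAGES ||
	    pending <= -NOTIFY_THRESHOLD_PAGES) {
		/* a real version would call srcu_notifier_call_chain()
		 * here with the accumulated batch */
		atomic_fetch_sub(&pending_pages, pending);
		return pending;
	}
	return 0;
}
```

So a driver inflating one page at a time triggers at most one notification per threshold's worth of pages, instead of one per page.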

The difference is here:

mm/zswap.c: return totalram_pages() * zswap_max_pool_percent / 100 <
mm/zswap.c: return totalram_pages() * zswap_accept_thr_percent / 100
It uses percents, so you can recalculate easily with:

+static inline unsigned long totalram_pages_current(void)
+{
+ unsigned long inflated = 0;
+#ifdef CONFIG_MEMORY_BALLOON
+ extern atomic_long_t mem_balloon_inflated_free_kb;
+ inflated = atomic_long_read(&mem_balloon_inflated_free_kb);
+ inflated >>= (PAGE_SHIFT - 10);
+#endif
+ return (unsigned long)atomic_long_read(&_totalram_pages) - inflated;
+}

And you are good when you switch to the _current version - si_meminfo_current is similar.
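As a userspace model of that switch (PAGE_SHIFT and the helper signatures are illustrative; the real totalram_pages_current() is the patch hunk above): the percent-based check itself stays untouched, only the total it is computed from changes.

```c
#include <assert.h>

#define PAGE_SHIFT 12	/* 4K pages, for illustration */

/* Model of the proposed helper: subtract the balloon-inflated amount
 * (tracked in KB) from the page count, as in the hunk above. */
static unsigned long totalram_pages_current(unsigned long totalram_pages,
					    unsigned long inflated_free_kb)
{
	return totalram_pages - (inflated_free_kb >> (PAGE_SHIFT - 10));
}

/* zswap-style check: is the pool above max_pool_percent of the total?
 * The caller just feeds it the adjusted total instead of the raw one. */
static int pool_over_limit(unsigned long pool_pages, unsigned long total,
			   unsigned long max_pool_percent)
{
	return pool_pages > total * max_pool_percent / 100;
}
```

With 4 GB of pages and 2 GB inflated, the adjusted total halves, so a 20% pool limit trips at half the old size - which is the behaviour the conversion is after.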

On init, (probably) all of them use some kind of fraction to calculate a limit, but when there is a value set via /proc/sys/net/ipv4/tcp_wmem, for example, it is just a value and you cannot recalculate it. And here, please, share your ideas on how to solve this.

I don’t get all of that. Now that you provided some more explanations, it
sounds that what you want is adjust_managed_page_count(), which we already
have and affects the output of totalram_pages(). Therefore, totalram_pages()
anyhow accounts for the balloon memory (excluding VMware’s). So why do we
need to take mem_balloon_inflated_free_kb into account?
Ok, you have this:
/ totalram
|----used----|b1|----free------|b2|

Drivers can inflate both b1 and b2 - with b1, free gets smaller; with b2, totalram pages get smaller. So when you need totalram_pages() for a calculation, you need to adjust it with the pages that are inflated in free/used (b1). VMware is not an exception; virtio does the same.
And according to mst and davidh it is okay like this.
So I am proposing a way to handle both cases.

Sounds to me that all you want is some notifier to be called from
adjust_managed_page_count(). What am I missing?

The notifier will act as an accumulator to report the size of the change, and it will make things easier for the drivers and users wrt locking.
The notifier is similar to the memory hotplug notifier.

--
Regards,
Alexander Atanasov