Re: RFC [PATCH v4 2/7] Enable balloon drivers to report inflated memory

From: Alexander Atanasov
Date: Fri Oct 07 2022 - 06:59:00 EST


On 7.10.22 0:07, Nadav Amit wrote:
On Oct 6, 2022, at 12:34 AM, Alexander Atanasov <alexander.atanasov@xxxxxxxxxxxxx> wrote:

Hello,


On 5.10.22 20:25, Nadav Amit wrote:
On Oct 5, 2022, at 2:01 AM, Alexander Atanasov <alexander.atanasov@xxxxxxxxxxxxx> wrote:
Add counters to be updated by the balloon drivers.
Create balloon notifier to propagate changes.
I missed the other patches before (including this one). Sorry, but next
time, please cc me.

You have been CCed on the cover letter since the first version. I will add a CC to you
on the individual patches if you want.

Thanks.

Just to clarify - I am not attacking you. It’s more of me making an excuse
for not addressing some issues in earlier versions.

OK, I am glad that you did it now, but I do not think you need to apologize.

I was looking through the series and I did not see actual users of the
notifier. Usually, it is not great to build an API without users.


You are right. I hope to get some feedback/interest from the potential users that I mentioned in the cover letter. I will probably split the notifier
into a separate series. To make it useful it will require more changes.
See below for more about them.

Fair, but this is something that is more suitable for RFC. Otherwise, more
likely than not - your patches would go in as is.

Yes, I will remove the notifier and resend both as RFC. I think of every patch as an RFC; the RFC tag is for more general changes that could affect unexpected areas, change functionality or design, and in general have a bigger impact. In this case it adds functionality that is missing and it can hardly affect anything else.
In essence it provides information that you cannot get without it.
But I will take your advice and push everything through RFC from now on.

[snip]
+
+static int balloon_notify(unsigned long val)
+{
+ return srcu_notifier_call_chain(&balloon_chain, val, NULL);
Since you know the inflated_kb value here, why not to use it as an argument
to the callback? I think casting to (void *) and back is best. But you can
also provide pointer to the value. Doesn’t it sound better than having
potentially different notifiers reading different values?

My current idea is to have a struct with the current and previous values,
and maybe the change in percent. The actual value does not matter to anyone,
but the size of the change does. When a user gets notified it can act upon
the change: if it is small it can ignore it, if it is above some threshold it can act, and if it makes sense for some receiver it can accumulate changes from several notifications. Another option/addition is to have si_meminfo_current(..) and totalram_pages_current(..) that return values adjusted with the balloon values.

Going further, there are a few places that calculate something based on available memory and do not have a sysfs/proc interface for setting limits. Most of them work in percents, so they can be converted to redo the calculations when they get a notification.

The ones that have an interface for configuration but use absolute memory values can be handled in two ways: convert them to use percents of what is available, or extend the notifier to notify userspace, which in turn does the calculations and updates the configuration.

I really need to see code to fully understand what you have in mind.

Sure - you can check some of the users with git grep totalram_pages; it shows self-explanatory usages like:
fs/f2fs/node.c:bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type) - calculations in percents - one good example
fs/ceph/super.h: congestion_kb = (16*int_sqrt(totalram_pages())) << (PAGE_SHIFT-10);
fs/fuse/inode.c: *limit = ((totalram_pages() << PAGE_SHIFT) >> 13) / 392;
fs/nfs/write.c: nfs_congestion_kb = (16*int_sqrt(totalram_pages())) << (PAGE_SHIFT-10);
fs/nfsd/nfscache.c: unsigned long low_pages = totalram_pages() - totalhigh_pages()
mm/oom_kill.c: oc->totalpages = totalram_pages() + total_swap_pages;


So with all balloon drivers the system starts with a large amount of RAM on boot, and then the balloon is inflated. But these places have already been initialized, and they assume the system has the total RAM reported at boot, which is no longer true the moment they start to operate. The result is that too much space gets used, which degrades userspace performance.
Example: fs/eventpoll.c: static int __init eventpoll_init(void) - 4% of RAM for eventpoll. When you inflate half of the RAM, that effectively becomes 8% of the remaining RAM - do you really need 8% of your RAM to be used for eventpoll?

To solve this you need to register for notification and, when notified, update the cache sizes, limits, and whatever else is calculated from the amount of memory.
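Roughly, such a user would look like this - just a sketch, using the totalram_pages_current() helper mentioned above; the registration helper name and the callback details are placeholders, not a final API:

static unsigned long my_cache_limit;

/* Recompute the limit from the balloon-adjusted total on every change. */
static int my_balloon_cb(struct notifier_block *nb,
			 unsigned long action, void *data)
{
	my_cache_limit = totalram_pages_current() / 25; /* ~4% of usable RAM */
	return NOTIFY_OK;
}

static struct notifier_block my_balloon_nb = {
	.notifier_call = my_balloon_cb,
};

static int __init my_subsys_init(void)
{
	my_cache_limit = totalram_pages_current() / 25;
	return register_balloon_notifier(&my_balloon_nb); /* placeholder name */
}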

The difference is here:

mm/zswap.c: return totalram_pages() * zswap_max_pool_percent / 100 <
mm/zswap.c: return totalram_pages() * zswap_accept_thr_percent / 100
These use percents, and you can recalculate easily with:

+static inline unsigned long totalram_pages_current(void)
+{
+ unsigned long inflated = 0;
+#ifdef CONFIG_MEMORY_BALLOON
+ extern atomic_long_t mem_balloon_inflated_free_kb;
+ inflated = atomic_long_read(&mem_balloon_inflated_free_kb);
+ inflated >>= (PAGE_SHIFT - 10);
+#endif
+ return (unsigned long)atomic_long_read(&_totalram_pages) - inflated;
+}

And you are good once you switch to the _current version - si_meminfo_current() works along the same lines.
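For reference, si_meminfo_current() could look roughly like this - a sketch only, reusing the same counter; whether freeram also needs adjusting depends on how the driver accounts for the inflated pages:

static inline void si_meminfo_current(struct sysinfo *val)
{
	unsigned long inflated = 0;

	si_meminfo(val);
#ifdef CONFIG_MEMORY_BALLOON
	extern atomic_long_t mem_balloon_inflated_free_kb;
	inflated = atomic_long_read(&mem_balloon_inflated_free_kb);
	inflated >>= (PAGE_SHIFT - 10); /* kB to pages */
#endif
	/* Hide the inflated pages from the reported total. */
	if (inflated < val->totalram)
		val->totalram -= inflated;
}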

On init (probably) all of them use some kind of fraction to calculate the value, but when a value has been set via /proc/sys/net/ipv4/tcp_wmem, for example, it is just a value and you cannot recalculate it. And here, please, share your ideas on how to solve this.


Division, as you know, is not something that we really want to do very
frequently.

Yes, that's true, but in the actual implementation there are a lot of ways to avoid it. It is just easier to explain with division.

Even if you do have to do a division, you can limit the recalculations:
struct balloon_notify {
	unsigned long last_inflated_free_kb;
	unsigned long last_inflated_used_kb;
	unsigned long inflated_free_kb;
	unsigned long inflated_used_kb;
};

So you can recalculate only if the change is more than 1GB, and do nothing if the change is just 1MB.
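A receiver could then look roughly like this - sketch only, assuming the struct above is what gets passed as the notifier data:

static int my_notifier_cb(struct notifier_block *nb,
			  unsigned long action, void *data)
{
	struct balloon_notify *bn = data;
	long delta_kb = (long)bn->inflated_free_kb -
			(long)bn->last_inflated_free_kb;

	/* Ignore small movements; recalculate only above a 1GB change. */
	if (abs(delta_kb) < SZ_1M) /* 1M kB == 1GB */
		return NOTIFY_OK;

	my_recalculate_limits(); /* subsystem-specific, placeholder */
	return NOTIFY_OK;
}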


Anyhow, without users (actual notifiers) it’s kind of hard to know how
reasonable it all is. For instance, is balloon_notify() supposed to
prevent further balloon inflating/deflating until the notifier completes?

No, we must avoid that at any cost.

Accordingly, are callers to balloon_notify() expected to relinquish locks
before calling balloon_notify() to prevent deadlocks and high latency?

My goal is to avoid any possible impact on performance. Drivers are free to delay notifications if they get in the way. (I see that I need to move the notification after the semaphore in the vmw driver - I missed that - will fix in the next iteration.)
Deadlocks depend on the users, but few to none will likely have to deal with common locks.
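For instance, a driver could push the notification to a workqueue so it never runs under the driver's own locks - sketch only; the event name is a placeholder:

static void balloon_notify_work_fn(struct work_struct *work)
{
	/* Process context, no balloon driver locks held here. */
	balloon_notify(BALLOON_CHANGED); /* placeholder event */
}
static DECLARE_WORK(balloon_notify_work, balloon_notify_work_fn);

/* In the inflate/deflate path, after dropping the driver's locks: */
schedule_work(&balloon_notify_work);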

I will need to see the next version to give better feedback. One more thing
that comes to mind though is whether saving the balloon size in multiple
places (both mem_balloon_inflated_total_kb and each balloon’s accounting) is
the right way. It does not sound very clean.

Two other options are to move *all* the accounting to your new
mem_balloon_inflated_total_kb-like interface or expose some per-balloon
function to get the balloon size (indirect-function-call would likely have
some overhead though).

Anyhow, I am not crazy about having the same data replicated. Even from
a code-reading point of view it is not intuitive.

If such an interface had existed before the drivers, it would ideally be like that - all in one place. But keeping the internal (driver) representation, which may differ from the system one, separate from the external (system) representation is a good option. If a driver can convert and use only the system counters, it can do so.

--
Regards,
Alexander Atanasov