Re: [PATCH stable 5.10] mm/memory_hotplug: extend offline_and_remove_memory() to handle more than one memory block

From: mawupeng
Date: Mon Jun 19 2023 - 03:54:05 EST




On 2023/6/19 15:41, David Hildenbrand wrote:
> On 19.06.23 09:22, mawupeng wrote:
>>
>>
>> On 2023/6/19 15:16, Greg KH wrote:
>>> On Mon, Jun 19, 2023 at 02:51:21PM +0800, Wupeng Ma wrote:
>>>> From: David Hildenbrand <david@xxxxxxxxxx>
>>>>
>>>> commit 8dc4bb58a146655eb057247d7c9d19e73928715b upstream.
>>>>
>>>> virtio-mem soon wants to use offline_and_remove_memory() memory that
>>>> exceeds a single Linux memory block (memory_block_size_bytes()). Let's
>>>> remove that restriction.
>>>>
>>>> Let's remember the old state and try to restore that if anything goes
>>>> wrong. While re-onlining can, in general, fail, it's highly unlikely to
>>>> happen (usually only when a notifier fails to allocate memory, and these
>>>> are rather rare).
>>>>
>>>> This will be used by virtio-mem to offline+remove memory ranges that are
>>>> bigger than a single memory block - for example, with a device block
>>>> size of 1 GiB (e.g., gigantic pages in the hypervisor) and a Linux memory
>>>> block size of 128MB.
>>>>
>>>> While we could compress the state into 2 bit, using 8 bit is much
>>>> easier.
>>>>
>>>> This handling is similar, but different to acpi_scan_try_to_offline():
>>>>
>>>> a) We don't try to offline twice. I am not sure if this CONFIG_MEMCG
>>>> optimization is still relevant - it should only apply to ZONE_NORMAL
>>>> (where we have no guarantees). If relevant, we can always add it.
>>>>
>>>> b) acpi_scan_try_to_offline() simply onlines all memory in case
>>>> something goes wrong. It doesn't restore previous online type. Let's do
>>>> that, so we won't overwrite what e.g., user space configured.
>>>>
>>>> Reviewed-by: Wei Yang <richard.weiyang@xxxxxxxxxxxxxxxxx>
>>>> Cc: "Michael S. Tsirkin" <mst@xxxxxxxxxx>
>>>> Cc: Jason Wang <jasowang@xxxxxxxxxx>
>>>> Cc: Pankaj Gupta <pankaj.gupta.linux@xxxxxxxxx>
>>>> Cc: Michal Hocko <mhocko@xxxxxxxxxx>
>>>> Cc: Oscar Salvador <osalvador@xxxxxxx>
>>>> Cc: Wei Yang <richard.weiyang@xxxxxxxxxxxxxxxxx>
>>>> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>>>> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
>>>> Link: https://lore.kernel.org/r/20201112133815.13332-28-david@xxxxxxxxxx
>>>> Signed-off-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
>>>> Acked-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>>>> Signed-off-by: Ma Wupeng <mawupeng1@xxxxxxxxxx>
>>>> ---
>>>>   mm/memory_hotplug.c | 105 +++++++++++++++++++++++++++++++++++++-------
>>>>   1 file changed, 89 insertions(+), 16 deletions(-)
>>>>
>>>
>>> Why is this needed in 5.10.y?  Looks like a new feature to me, what
>>> problem does it solve there?
>>>
>>> thanks,
>>>
>>> greg k-h
>>
>> It does introduce a new feature. But at the same time, it fixes a memleak
>> introduced in commit 08b3acd7a68f ("mm/memory_hotplug: Introduce
>> offline_and_remove_memory()").
>>
>> Our testing found a memleak in init_memory_block(): the memory_block is
>> never released because of an unbalanced refcount. Commit 08b3acd7a68f
>> ("mm/memory_hotplug: Introduce offline_and_remove_memory()") never drops
>> the reference taken by find_memory_block(), so the refcount cannot reach
>> zero when the memory is removed, causing the leak.
>>
>> Commit 8dc4bb58a146 ("mm/memory_hotplug: extend offline_and_remove_memory()
>> to handle more than one memory block") replaces find_memory_block() with
>> walk_memory_blocks(), which drops the reference by calling put_device()
>> after find_memory_block_by_id(). That way, the memleak is fixed.
>>
>> Here is the simplified calltrace:
>>
>>    kmem_cache_alloc_trace+0x664/0xed0
>>    init_memory_block+0x8c/0x170
>>    create_memory_block_devices+0xa4/0x150
>>    add_memory_resource+0x188/0x530
>>    __add_memory+0x78/0x104
>>    add_memory+0x6c/0xb0
>>
>
> Makes sense to me. Of course, we could think about a simplified stable fix that only drops the ref.
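
Right. If I understand correctly, such a minimal stable-only fix would just
drop the reference that find_memory_block() takes (it goes through
find_memory_block_by_id() -> subsys_find_device_by_id() -> get_device()),
roughly like this (untested sketch against 5.10, only to illustrate the idea):

	mem = find_memory_block(__pfn_to_section(PFN_DOWN(start)));
	if (mem)
		rc = device_offline(&mem->dev);
	/* Ignore if the device is already offline. */
	if (rc > 0)
		rc = 0;

	/*
	 * Drop the reference taken by find_memory_block(); without this,
	 * the struct memory_block allocated in init_memory_block() can
	 * never be freed once the device is unregistered.
	 */
	if (mem)
		put_device(&mem->dev);

	/* ... then try_remove_memory() and unlock_device_hotplug() as before */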

Since the new patch does not introduce any kABI change, maybe we can merge this one?

However, the changelog may cause the same confusion for other people, just like
the question raised above.
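
To spell out the mechanics behind the explanation above (quoting the relevant
helpers from drivers/base/memory.c from memory, lightly simplified, so treat
this as a sketch rather than the exact 5.10 code):

	static struct memory_block *find_memory_block_by_id(unsigned long block_id)
	{
		struct device *dev;

		/* subsys_find_device_by_id() returns the device with a reference held */
		dev = subsys_find_device_by_id(&memory_subsys, block_id, NULL);
		return dev ? to_memory_block(dev) : NULL;
	}

	int walk_memory_blocks(unsigned long start, unsigned long size,
			       void *arg, walk_memory_blocks_func_t func)
	{
		const unsigned long start_block_id = phys_to_block_id(start);
		const unsigned long end_block_id = phys_to_block_id(start + size - 1);
		struct memory_block *mem;
		unsigned long block_id;
		int ret = 0;

		for (block_id = start_block_id; block_id <= end_block_id; block_id++) {
			mem = find_memory_block_by_id(block_id);
			if (!mem)
				continue;

			ret = func(mem, arg);
			/*
			 * The reference from find_memory_block_by_id() is
			 * dropped here; the old find_memory_block() path in
			 * offline_and_remove_memory() had no such put_device().
			 */
			put_device(&mem->dev);
			if (ret)
				break;
		}
		return ret;
	}

The old offline_and_remove_memory() only ever called find_memory_block() and
never called put_device(), so the last reference on the memory block device is
never dropped and the struct memory_block leaks on every offline+remove.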
