Re: [PATCH 07/12] x86/virt/guest/xen: Remove use of pgd_list from the Xen guest code

From: David Vrabel
Date: Mon Jun 15 2015 - 06:30:46 EST


On 15/06/15 10:05, Ian Campbell wrote:
> On Sat, 2015-06-13 at 11:49 +0200, Ingo Molnar wrote:
>> xen_mm_pin_all()/unpin_all() are used to implement full guest instance
>> suspend/restore. It's a stop-all method that needs to iterate through
>> all allocated pgds in the system to fix them up for Xen's use.
>>
>> This code uses pgd_list, probably because it was an easy interface.
>>
>> But we want to remove the pgd_list, so convert the code over to walk
>> all tasks in the system. This is an equivalent method.

It is not equivalent, because pgd_alloc() now populates entries in
pgds that are not yet visible to xen_mm_pin_all()'s task-list walk
(note how the original code adds the pgd to pgd_list in pgd_ctor()
before calling pgd_prepopulate_pmd()). These newly allocated page
tables won't be correctly converted on suspend/resume, and the new
process will die after resume.
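
For reference, the ordering in question looks roughly like this
(lightly trimmed from arch/x86/mm/pgtable.c as of ~v4.1; error
handling and pmd preallocation elided, the arrow comments are mine):

pgd_t *pgd_alloc(struct mm_struct *mm)
{
	pgd_t *pgd;
	pmd_t *pmds[PREALLOCATED_PMDS];

	pgd = (pgd_t *)__get_free_page(PGALLOC_GFP);
	if (pgd == NULL)
		return NULL;

	mm->pgd = pgd;

	/* (pmd preallocation and paravirt setup elided) */

	/*
	 * Make sure that pre-populating the pmds is atomic with
	 * respect to anything walking the pgd_list, so that they
	 * never see a partially populated pgd.
	 */
	spin_lock(&pgd_lock);

	pgd_ctor(mm, pgd);			/* <-- pgd goes on pgd_list here... */
	pgd_prepopulate_pmd(mm, pgd, pmds);	/* <-- ...so the list walk sees these pmds */

	spin_unlock(&pgd_lock);

	return pgd;
}

With the task-list walk there is no equivalent point of visibility:
the new mm only becomes reachable via for_each_process_thread() once
it is attached to a task, well after pgd_prepopulate_pmd() has run.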

David

>>
>> (As I don't use Xen this was only build-tested.)
>
> In which case it seems extra important to copy the appropriate
> maintainers, which I've done here.
>
> Ian.
>
>>
>> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>> Cc: Andy Lutomirski <luto@xxxxxxxxxxxxxx>
>> Cc: Borislav Petkov <bp@xxxxxxxxx>
>> Cc: Brian Gerst <brgerst@xxxxxxxxx>
>> Cc: Denys Vlasenko <dvlasenk@xxxxxxxxxx>
>> Cc: H. Peter Anvin <hpa@xxxxxxxxx>
>> Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
>> Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
>> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
>> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
>> Cc: Waiman Long <Waiman.Long@xxxxxx>
>> Cc: linux-mm@xxxxxxxxx
>> Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
>> ---
>>  arch/x86/xen/mmu.c | 50 +++++++++++++++++++++++++++++++++++++-------------
>>  1 file changed, 37 insertions(+), 13 deletions(-)
>>
>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>> index dd151b2045b0..70a3df5b0b54 100644
>> --- a/arch/x86/xen/mmu.c
>> +++ b/arch/x86/xen/mmu.c
>> @@ -853,15 +853,27 @@ static void xen_pgd_pin(struct mm_struct *mm)
>>   */
>>  void xen_mm_pin_all(void)
>>  {
>> -	struct page *page;
>> +	struct task_struct *g, *p;
>>
>> -	spin_lock(&pgd_lock);
>> +	spin_lock(&pgd_lock); /* Implies rcu_read_lock() for the task list iteration: */
>>
>> -	list_for_each_entry(page, &pgd_list, lru) {
>> -		if (!PagePinned(page)) {
>> -			__xen_pgd_pin(&init_mm, (pgd_t *)page_address(page));
>> -			SetPageSavePinned(page);
>> +	for_each_process_thread(g, p) {
>> +		struct mm_struct *mm;
>> +		struct page *page;
>> +		pgd_t *pgd;
>> +
>> +		task_lock(p);
>> +		mm = p->mm;
>> +		if (mm) {
>> +			pgd = mm->pgd;
>> +			page = virt_to_page(pgd);
>> +
>> +			if (!PagePinned(page)) {
>> +				__xen_pgd_pin(&init_mm, pgd);
>> +				SetPageSavePinned(page);
>> +			}
>>  		}
>> +		task_unlock(p);
>>  	}
>>
>>  	spin_unlock(&pgd_lock);
>> @@ -967,19 +979,31 @@ static void xen_pgd_unpin(struct mm_struct *mm)
>>   */
>>  void xen_mm_unpin_all(void)
>>  {
>> -	struct page *page;
>> +	struct task_struct *g, *p;
>>
>> -	spin_lock(&pgd_lock);
>> +	spin_lock(&pgd_lock); /* Implies rcu_read_lock() for the task list iteration: */
>>
>> -	list_for_each_entry(page, &pgd_list, lru) {
>> -		if (PageSavePinned(page)) {
>> -			BUG_ON(!PagePinned(page));
>> -			__xen_pgd_unpin(&init_mm, (pgd_t *)page_address(page));
>> -			ClearPageSavePinned(page);
>> +	for_each_process_thread(g, p) {
>> +		struct mm_struct *mm;
>> +		struct page *page;
>> +		pgd_t *pgd;
>> +
>> +		task_lock(p);
>> +		mm = p->mm;
>> +		if (mm) {
>> +			pgd = mm->pgd;
>> +			page = virt_to_page(pgd);
>> +
>> +			if (PageSavePinned(page)) {
>> +				BUG_ON(!PagePinned(page));
>> +				__xen_pgd_unpin(&init_mm, pgd);
>> +				ClearPageSavePinned(page);
>> +			}
>>  		}
>> +		task_unlock(p);
>>  	}
>>
>>  	spin_unlock(&pgd_lock);
>>  }
>>
>>  static void xen_activate_mm(struct mm_struct *prev, struct mm_struct *next)
>
>
