Re: [PATCH v5 4/8] mm: Add write-protect and clean utilities for address space ranges

From: Thomas Hellström (VMware)
Date: Wed Oct 16 2019 - 02:42:24 EST


On 10/10/19 4:17 PM, Peter Zijlstra wrote:
> On Thu, Oct 10, 2019 at 03:24:47PM +0200, Thomas Hellström (VMware) wrote:
>> On 10/10/19 3:05 PM, Peter Zijlstra wrote:
>>> On Thu, Oct 10, 2019 at 02:43:10PM +0200, Thomas Hellström (VMware) wrote:
>>>> +/**
>>>> + * wp_shared_mapping_range - Write-protect all ptes in an address space range
>>>> + * @mapping: The address_space we want to write protect
>>>> + * @first_index: The first page offset in the range
>>>> + * @nr: Number of incremental page offsets to cover
>>>> + *
>>>> + * Note: This function currently skips transhuge page-table entries, since
>>>> + * it's intended for dirty-tracking on the PTE level. It will warn on
>>>> + * encountering transhuge write-enabled entries, though, and can easily be
>>>> + * extended to handle them as well.
>>>> + *
>>>> + * Return: The number of ptes actually write-protected. Note that
>>>> + * already write-protected ptes are not counted.
>>>> + */
>>>> +unsigned long wp_shared_mapping_range(struct address_space *mapping,
>>>> +                                      pgoff_t first_index, pgoff_t nr)
>>>> +{
>>>> +        struct wp_walk wpwalk = { .total = 0 };
>>>> +
>>>> +        i_mmap_lock_read(mapping);
>>>> +        WARN_ON(walk_page_mapping(mapping, first_index, nr, &wp_walk_ops,
>>>> +                                  &wpwalk));
>>>> +        i_mmap_unlock_read(mapping);
>>>> +
>>>> +        return wpwalk.total;
>>>> +}
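
(As an aside, the intended use is simply along these lines -- a
hypothetical caller, where "inode", "first" and "npages" are
illustrative names, not from the patch:

        /* Write-protect the ptes mapping pages [first, first + npages). */
        unsigned long nr_wped;

        nr_wped = wp_shared_mapping_range(inode->i_mapping, first, npages);

The return value is the number of ptes that actually changed from
write-enabled to write-protected.)
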
>>> That's a read lock; this means there's concurrency to self. What happens
>>> if someone does two concurrent wp_shared_mapping_range() on the same
>>> mapping?
>>>
>>> The thing is, because of pte_wrprotect() the iteration that starts last
>>> will see a smaller pte_write range; if it completes first and does
>>> flush_tlb_range(), it will only flush a partial range.
>>>
>>> This is exactly what {inc,dec}_tlb_flush_pending() is for, but you're
>>> not using mm_tlb_flush_nested() to detect the situation and do a bigger
>>> flush.
>>>
>>> Or if you're not needing that, then I'm missing why.
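
(For context, my understanding of the pattern you refer to, as a
minimal sketch with illustrative names -- mm, vma and the accumulated
tlbflush_start/tlbflush_end range stand in for the real state:

        inc_tlb_flush_pending(mm);      /* advertise: this mm's ptes change */

        /* ... write-protect ptes, accumulating [tlbflush_start, tlbflush_end) ... */

        if (mm_tlb_flush_nested(mm))
                /* Someone else also changed ptes: over-flush the whole vma. */
                flush_tlb_range(vma, vma->vm_start, vma->vm_end);
        else if (tlbflush_end > tlbflush_start)
                flush_tlb_range(vma, tlbflush_start, tlbflush_end);

        dec_tlb_flush_pending(mm);

inc_tlb_flush_pending(), dec_tlb_flush_pending() and
mm_tlb_flush_nested() being the existing helpers from
include/linux/mm_types.h.)
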
>> Good catch. Thanks,
>>
>> Yes, the read lock is not intended to protect against concurrent users
>> but to protect the vmas from disappearing under us. Since it
>> fundamentally makes no sense to have two concurrent threads picking up
>> dirty ptes on the same address_space range, we have an external
>> range-based lock to protect against that.
> Nothing mandates/verifies that the function you expose is used exclusively.
> Therefore you cannot make assumptions about the range lock your user holds.
>
>> However, that external lock doesn't protect other code from concurrently
>> modifying ptes and having the mm's tlb_flush_pending increased, so I
>> guess we unconditionally need to test for that and do a full range flush
>> if necessary?
> Yes, something like:
>
>         if (mm_tlb_flush_nested(mm))
>                 flush_tlb_range(walk->vma, walk->vma->vm_start, walk->vma->vm_end);
>         else if (wpwalk->tlbflush_end > wpwalk->tlbflush_start)
>                 flush_tlb_range(walk->vma, wpwalk->tlbflush_start, wpwalk->tlbflush_end);

Hi, Peter,

I've updated the patch to incorporate something similar to the above;
a rough sketch of what the updated walk now does follows below. Since
you've already looked at the patch, any chance of an R-B?
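
Roughly (a sketch, not the exact code -- the callback names are
approximations and the mmu notifier calls are omitted):

        /* Per-vma walk callbacks: bracket the pte updates. */
        static int wp_clean_pre_vma(unsigned long start, unsigned long end,
                                    struct mm_walk *walk)
        {
                struct wp_walk *wpwalk = walk->private;

                /* Start with an empty (inverted) range; ptes accumulate into it. */
                wpwalk->tlbflush_start = end;
                wpwalk->tlbflush_end = start;
                inc_tlb_flush_pending(walk->mm);
                return 0;
        }

        static void wp_clean_post_vma(struct mm_walk *walk)
        {
                struct wp_walk *wpwalk = walk->private;

                if (mm_tlb_flush_nested(walk->mm))
                        /* Concurrent pte updates: over-flush the whole vma. */
                        flush_tlb_range(walk->vma, walk->vma->vm_start,
                                        walk->vma->vm_end);
                else if (wpwalk->tlbflush_end > wpwalk->tlbflush_start)
                        flush_tlb_range(walk->vma, wpwalk->tlbflush_start,
                                        wpwalk->tlbflush_end);
                dec_tlb_flush_pending(walk->mm);
        }
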

Thanks,

Thomas