On 04/02, Konstantin Khlebnikov wrote:
> Currently kernel does not account read-only private mappings into memory commitment.
> But these mappings can be force-COW-ed in get_user_pages().
Heh. tail -n3 Documentation/vm/overcommit-accounting
Maybe you should update it then.
Can't really comment on the patch, this is not my area. Still,
> +	down_write(&mm->mmap_sem);
> +	*pvma = vma = find_vma(mm, addr);
> +	if (vma && vma->vm_start <= addr) {
> +		ret = vma->vm_end - addr;
> +		if ((vma->vm_flags & (VM_ACCOUNT | VM_NORESERVE | VM_SHARED |
> +				VM_HUGETLB | VM_MAYWRITE)) == VM_MAYWRITE) {
> +			if (!security_vm_enough_memory_mm(mm, vma_pages(vma)))
Oooooh, the whole vma. Say, gdb installs a single breakpoint into
a huge .text mapping...
I am not sure, but probably you want to check at least VM_IO/VM_PFNMAP
as well. We do not want to charge this memory and then retry with
FOLL_FORCE before vm_ops->access(). Say, /dev/mem.
Hmm. OTOH, if I am right then mprotect_fixup() should be fixed??
We drop ->mmap_sem... Say, the task does mremap() in between and
len == 2 * PAGE_SIZE. Then, for example, copy_to_user_page() can
write to the same page twice. Perhaps not a problem in practice,
I dunno.