Re: [PATCH 0/5] Volatile Ranges (v12) & LSF-MM discussion fodder

From: Johannes Weiner
Date: Wed Apr 02 2014 - 12:31:00 EST


On Tue, Apr 01, 2014 at 09:03:57PM -0700, John Stultz wrote:
> On 04/01/2014 02:21 PM, Johannes Weiner wrote:
> > [ I tried to bring this up during LSFMM but it got drowned out.
> > Trying again :) ]
> >
> > On Fri, Mar 21, 2014 at 02:17:30PM -0700, John Stultz wrote:
> >> Optimistic method:
> >> 1) Userland marks a large range of data as volatile
> >> 2) Userland continues to access the data as it needs.
> >> 3) If userland accesses a page that has been purged, the kernel will
> >> send a SIGBUS
> >> 4) Userspace can trap the SIGBUS, mark the affected pages as
> >> non-volatile, and refill the data as needed before continuing on
> > As far as I understand, if a pointer to volatile memory makes it into
> > a syscall and the fault is trapped in kernel space, there won't be a
> > SIGBUS; the syscall will just return -EFAULT.
> >
> > Handling this would mean annotating every syscall invocation to check
> > for -EFAULT, refill the data, and then restart the syscall. This is
> > complicated even before taking external libraries into account, which
> > may not propagate syscall returns properly or may not be reentrant at
> > the necessary granularity.
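[Editor's note: the annotation burden described above would look roughly like the wrapper below, which every syscall touching a possibly-volatile buffer would need. `revive_range()` is a hypothetical stand-in for the mark-non-volatile-and-refill step, not a real API; here it is stubbed out.]

```c
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* Hypothetical stand-in for "mark non-volatile and refill the data";
 * no such API exists, so this stub simply reports success. */
static int revive_range(void *addr, size_t len)
{
    (void)addr; (void)len;
    return 0;
}

/* Every syscall invocation on a possibly-volatile buffer needs this
 * check-revive-restart dance, which is what makes the optimistic
 * scheme hard to retrofit onto existing code and libraries. */
static ssize_t write_retrying(int fd, void *buf, size_t len)
{
    for (;;) {
        ssize_t n = write(fd, buf, len);
        if (n >= 0 || errno != EFAULT)
            return n;
        if (revive_range(buf, len) < 0)
            return -1;   /* could not refill; give up */
    }
}
```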
> >
> > Another option is to never pass volatile memory pointers into the
> > kernel, but that too means that knowledge of volatility has to travel
> > alongside the pointers, which will either result in more complexity
> > throughout the application or severely limited scope of volatile
> > memory usage.
> >
> > Either way, optimistic volatile pointers are nowhere near as
> > transparent to the application as the above description suggests,
> > which makes this usecase not very interesting, IMO. If we can support
> > it at little cost, why not, but I don't think we should complicate the
> > common usecases to support this one.
>
> So yeah, thanks again for all the feedback at LSF-MM! I'm trying to get
> things integrated for a v13 here shortly (although with visitors in town
> this week it may not happen until next week).
>
>
> So, maybe it's best to ignore the fact that folks want to do semi-crazy
> user-space faulting via SIGBUS, at least to start with. Let's look at the
> semantics for the "normal" case: mark volatile, never touch the pages
> until you mark them non-volatile - basically where accessing volatile
> pages is similar to a use-after-free bug.
>
> So, for the most part, I'd say the proposed SIGBUS semantics don't
> complicate things for this basic use-case, at least when compared with
> things like zero-fill. If an application accidentally accesses a
> purged volatile page, I think SIGBUS is the right thing to do. It will
> most likely crash immediately, but that's better than it carrying on
> with silent corruption because it's mucking with zero-filled pages.
>
> So between zero-fill and SIGBUS, I think SIGBUS makes the most sense. If
> you have a third option you're thinking of, I'd of course be interested
> in hearing it.

The reason I'm bringing this up again is that I see very few solid
usecases for a separate vrange() syscall once we have something
like MADV_FREE and MADV_REVIVE, which respectively clear the dirty
bits of a range of anon/tmpfs pages, and set them again and report
whether any pages in the given range were purged on revival.

So between zero-fill and SIGBUS, I'd prefer the one which results in
the simpler user interface / fewer system calls.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/