Re: libata FUA revisited

From: Ric Wheeler
Date: Thu Feb 22 2007 - 17:46:27 EST

Tejun Heo wrote:
Jens Axboe wrote:
On Wed, Feb 21 2007, Tejun Heo wrote:
[cc'ing Ric, Hannes and Dongjun. Hello, feel free to drag other people in.]

Robert Hancock wrote:
Jens Axboe wrote:
But we can't really change that, since you need the cache flushed before
issuing the FUA write. I've been advocating for an ordered bit for
years, so that we could just do:


normal operation -> barrier issued -> write barrier FUA+ORDERED
-> normal operation resumes

So we don't have to serialize everything both at the block and device
level. I would have made FUA imply this already, but apparently it's not
what MS wanted FUA for, so... The current implementations take the FUA
bit (or WRITE FUA) as a hint to boost it to head of queue, so you are
almost certainly going to jump ahead of already queued writes, which we
of course really do not want.
Yeah, I think if we have tagged write command and flush tagged (or
barrier tagged) things can be pretty efficient. Again, I'm much more
comfortable with separate opcodes for those rather than bits changing
the behavior.
ORDERED+FUA NCQ would still be preferable to an NCQ enabled flush
command, though.

I think we're talking about two different things here.

1. The barrier write (FUA write) combined with flush. I think it would
help improve performance, but issuing two commands shouldn't be much
slower than issuing one combined command unless it causes extra physical
activity (moving the head, etc...).

2. FLUSH currently flushes all writes. If we can mark certain commands
as requiring ordering, we can selectively flush or order only the
necessary writes. (No need to flush a 16M buffer all over the disk when
only the journal needs flushing.)
We can certainly (given time to play in the lab!) try to measure this with a micro-benchmark (with an analyzer or with blktrace?).

A normal flush command in my old tests seemed to be in the 20 ms range (mixed in with an occasional "freebie" cache flush which returns in 50 usecs or so - the cache must have been empty).

Another idea Dongjun talked about over drinks at LSF was a ranged
flush. Not as flexible/efficient as the previous option, but much less
intrusive, and it should help quite a bit, I think.
But that requires extensive tracking, I'm not so sure the implementation
of that for barriers would be very clean. It'd probably be good for
fsync, though.

I was mostly thinking about journal area. Using it for other purposes
would incur a lot of complexity. :-(
