Re: Kernel 3.0: Instant kernel crash when mounting CIFS (also crashes with linux-3.1-rc2)

From: Steve French
Date: Thu Aug 18 2011 - 15:02:19 EST


On Thu, Aug 18, 2011 at 1:54 PM, Justin Piszcz <jpiszcz@xxxxxxxxxxxxxxx> wrote:
>
>
> On Thu, 18 Aug 2011, Jeff Layton wrote:
>
>> On Thu, 18 Aug 2011 14:18:15 -0400 (EDT)
>> Justin Piszcz <jpiszcz@xxxxxxxxxxxxxxx> wrote:
>>
>>>
>>
>> Like I said, read performance with cifs.ko just plain sucks currently.
>>
>> Don't look for cifs.ko to achieve anywhere near NFS' performance unless
>> you jump through some very odd hoops (like multithreading your workload
>> in userspace, etc). cifs.ko just doesn't do a good job of keeping the
>> pipe "stuffed" as most calls are handled synchronously. A single task
>> can only handle one call on the wire in most cases. The exception here
>> is writes, but that just recently changed...
>>
>> Reads are done using relatively small buffers and then copied to
>> pagecache. Part of what I'm working on will be to allow for much larger
>> reads directly into the pagecache. That should also help performance
>> significantly.
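
To make the "odd hoops" Jeff mentions concrete: the usual userspace
workaround is to split one large file across several threads so that
more than one read call is on the wire at a time. A minimal sketch of
the idea (the thread count, chunk size, and file path are made-up
example values, nothing from cifs.ko itself):

/* parallel_read.c - read one big file with several threads so that
 * more than one cifs read call is in flight at a time.
 * Build: gcc -O2 -pthread parallel_read.c -o parallel_read
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

#define NTHREADS 4              /* example value */
#define CHUNK (1024 * 1024)     /* example value */

struct region { int fd; off_t start, len; };

static void *reader(void *arg)
{
    struct region *r = arg;
    char *buf = malloc(CHUNK);
    off_t off = r->start, end = r->start + r->len;

    /* pread() is thread-safe: no shared file offset to serialize on */
    while (buf && off < end) {
        ssize_t n = pread(r->fd, buf, CHUNK, off);
        if (n <= 0)
            break;              /* EOF or error */
        off += n;
    }
    free(buf);
    return NULL;
}

int main(int argc, char **argv)
{
    pthread_t tid[NTHREADS];
    struct region reg[NTHREADS];
    struct stat st;
    int i, fd;

    if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0 ||
        fstat(fd, &st) < 0) {
        fprintf(stderr, "usage: %s <file-on-cifs-mount>\n", argv[0]);
        return 1;
    }
    /* carve the file into NTHREADS contiguous regions */
    for (i = 0; i < NTHREADS; i++) {
        reg[i].fd = fd;
        reg[i].start = st.st_size / NTHREADS * i;
        reg[i].len = (i == NTHREADS - 1) ?
                st.st_size - reg[i].start : st.st_size / NTHREADS;
        pthread_create(&tid[i], NULL, reader, &reg[i]);
    }
    for (i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    close(fd);
    return 0;
}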

As Jeff notes, a large sequential read of one file at a time is going
to be the cifs kernel client's worst case. An earlier (prototype) cifs
async read patchset showed dramatic improvements, so the limitation is
not inherent in the protocol, and I am looking forward to seeing
Jeff's patches for this.
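
The win from async reads is pipelining: keeping several reads
outstanding instead of one synchronous round trip at a time. You can
approximate the same pattern from a single userspace task with POSIX
AIO - just an analogy to what an in-kernel async read path would do,
not the patchset itself, and the queue depth and chunk size below are
made-up example values:

/* aio_pipeline.c - keep QDEPTH reads outstanding from one task,
 * issuing a new one as each completes.
 * Build: gcc -O2 aio_pipeline.c -o aio_pipeline -lrt
 */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define QDEPTH 8                /* example value */
#define CHUNK (256 * 1024)      /* example value */

int main(int argc, char **argv)
{
    struct aiocb cb[QDEPTH];
    off_t off = 0;
    int i, fd, busy = 0;

    if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    /* prime the queue: QDEPTH reads in flight before waiting at all */
    memset(cb, 0, sizeof(cb));
    for (i = 0; i < QDEPTH; i++) {
        cb[i].aio_fildes = fd;
        cb[i].aio_buf = malloc(CHUNK);
        cb[i].aio_nbytes = CHUNK;
        cb[i].aio_offset = off;
        off += CHUNK;
        if (aio_read(&cb[i]) == 0)
            busy++;
        else
            cb[i].aio_fildes = -1;  /* submission failed, retire slot */
    }

    /* as each read completes, immediately refill its slot; a real
     * program would block in aio_suspend() rather than polling */
    while (busy) {
        for (i = 0; i < QDEPTH; i++) {
            if (cb[i].aio_fildes == -1 ||
                aio_error(&cb[i]) == EINPROGRESS)
                continue;
            if (aio_return(&cb[i]) <= 0) {  /* EOF or error */
                cb[i].aio_fildes = -1;
                busy--;
                continue;
            }
            cb[i].aio_offset = off;
            off += CHUNK;
            if (aio_read(&cb[i]) != 0) {
                cb[i].aio_fildes = -1;
                busy--;
            }
        }
    }
    close(fd);
    return 0;
}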

With multiple processes reading and writing, cifs does pretty well
(since the worst-case serialization is in cifs_readpages, coupled with
the relatively small buffer size on reads), as long as you are
accessing various files at one time. I have seen cifs beat nfs on
broader tests such as dbench when there is lots of activity on the
wire from multiple processes, but I have been meaning to redo a dbench
run on recent kernels. It would be interesting to compare dbench now
(with e.g. more than 30 processes) on an nfs mount versus a cifs
mount.
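
Something like the following would do it (the server name, share,
export path, and mount options are placeholders, and 60 seconds / 30
clients are just example numbers):

    mount -t cifs //server/share /mnt/cifs -o username=testuser
    mount -t nfs server:/export /mnt/nfs
    dbench -D /mnt/cifs -t 60 30
    dbench -D /mnt/nfs -t 60 30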


--
Thanks,

Steve