On Tue, 9 May 2000, Gabriel Benhanokh wrote:
> i think i didn't explain well what i'm going to do, and this caused some
> confusion. first, i'm not trying to compromise UNIX security for some
> performance gain. i'm not trying to say that DOS, or any other funny
> system, was doing right by not implementing any security model.
> what i'm trying to do is as follows:
> we are writing a filesystem layer to allow more than one machine to write
> to the same disk.
Definitely a useful feature... Not so sure about the implementation, though.
> we could have implemented a brand new FS for this reason, like the GFS
> folks are doing, but we are not ready for such a big project. so we are
> trying to hack into an existing FS and manipulate it to support this
> feature.
Personally, I'd try making mounts exclusive (i.e. preventing both machines
from mounting the same device at once), then adding a "failsafe" mount.
Then, when the two machines come up:
Machine A mounts the device, and exports the filesystem on it to the
network.
Machine B tries to mount the device, fails, and mounts the failsafe
(the network share) instead.
That way, either machine can access the device when the other is off-line,
and they both have access simultaneously.
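The mount-or-fallback logic above can be sketched in user space. This is a
toy model only (Python, with an exclusive-create lock file standing in for an
exclusive mount; the names `acquire_exclusive` and `mount_device` are
hypothetical, not anything from the kernel):

```python
import os
import tempfile

def acquire_exclusive(lock_path):
    """Try to take exclusive ownership of the device (a stand-in for an
    exclusive mount).  O_CREAT|O_EXCL is atomic: exactly one caller can
    succeed in creating the lock file."""
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

def mount_device(lock_path):
    """Mount locally if we win the race; otherwise fall back to the
    network share exported by whichever machine won."""
    if acquire_exclusive(lock_path):
        return "local"
    return "network"

lock = os.path.join(tempfile.mkdtemp(), "dev.lock")
print(mount_device(lock))   # machine A wins: "local"
print(mount_device(lock))   # machine B loses, falls back: "network"
```

The point of the sketch is just the ordering: whoever gets the exclusive
claim serves the device, and everyone else degrades to the network path.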
> since you can't just let one machine write into another machine's disk, we
> are implementing some kind of agents which communicate over the network,
> and let the local agent allocate the blocks needed for the file, so the
> remote machine can write safely to the disk, knowing that no other process
> will ever get these blocks.
Hrm. A rather high-risk approach...
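The reservation protocol being described can be modelled roughly like this
(an in-memory toy allocator; the agent/RPC layer is elided entirely and the
class and method names are hypothetical):

```python
class BlockAllocator:
    """Toy model of the local agent: it hands out block numbers that
    are then guaranteed never to be given to any other writer."""

    def __init__(self, total_blocks):
        self.free = set(range(total_blocks))
        self.reserved = {}          # owner -> set of block numbers

    def reserve(self, owner, count):
        """Reserve `count` blocks for a remote machine.  Once reserved,
        the blocks leave the free pool, so no other caller can ever be
        handed the same blocks."""
        if count > len(self.free):
            raise RuntimeError("out of space")
        blocks = {self.free.pop() for _ in range(count)}
        self.reserved.setdefault(owner, set()).update(blocks)
        return sorted(blocks)

    def release(self, owner):
        """Return a machine's unused reservations to the free pool."""
        self.free.update(self.reserved.pop(owner, set()))

alloc = BlockAllocator(total_blocks=16)
a = alloc.reserve("machineB", 4)
b = alloc.reserve("machineC", 4)
assert not set(a) & set(b)      # reservations never overlap
```

The safety property is entirely in `reserve`: disjointness of grants is what
lets the remote machine write directly to the controller without racing any
local process.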
> so it is done like this: the local agent allocates, and the remote machine
> writes directly to the disk controller.
That implies very close communication between the two, which could become
a bottleneck; perhaps better to keep the low-level stuff on one machine at
a time.
> now, no system call can allocate blocks in unix, because of all the
> security issues you all brought up, so i'm hacking into ext2fs to allocate
> the blocks for me. the problem is that when allocating a block, it is
> filled with zeros and marked DIRTY -> so it will end up being written to
> the disk. this is a problem in my case for 3 reasons:
> 1) it will write meaningless data to the DISK, which means that the
> copying process will take twice as long
> 2) since we are working with huge files (GBs), it will wipe out all other
> processes' buffer cache, resulting in an overall slowdown for the system
Why are you working with huge files? Is this a general purpose shared FS,
or aimed at some specific task?
> 3) and most important, it might overwrite blocks which were written to the
> disk directly by the remote machine
> so what i'm trying to do is eliminate all those problems by "cheating" VFS
> into thinking that the blocks were already written to the disk (marking
> the buffer_head as clean and releasing it).
> i will keep the file size at zero so no process can read from it, and from
> time to time update the size when i get notified by the remote machine.
In other words, the two machines have a slightly different, inconsistent
view of the same disk...
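The "cheat" itself — allocate a block, then mark its buffer clean so
writeback never pushes the zero fill out over the remote machine's data —
can be modelled with a toy buffer cache (this is an illustration of the
idea only, not the real ext2/VFS structures; all names here are made up):

```python
class Buffer:
    def __init__(self, block_nr):
        self.block_nr = block_nr
        self.data = b"\x00" * 512   # freshly allocated blocks are zeroed
        self.dirty = True           # ...and normally marked dirty

class ToyCache:
    def __init__(self):
        self.buffers = []
        self.written = []           # block numbers that reached the disk

    def allocate(self, block_nr, preallocated_for_remote=False):
        buf = Buffer(block_nr)
        if preallocated_for_remote:
            # The hack: pretend the block is already on disk, so
            # writeback skips it and the remote machine's direct write
            # is never clobbered by zeros.
            buf.dirty = False
        self.buffers.append(buf)
        return buf

    def writeback(self):
        for buf in self.buffers:
            if buf.dirty:
                self.written.append(buf.block_nr)
                buf.dirty = False

cache = ToyCache()
cache.allocate(1)                                  # normal allocation
cache.allocate(2, preallocated_for_remote=True)    # reserved for remote writer
cache.writeback()
print(cache.written)    # [1] -- block 2 is never overwritten with zeros
```

This also makes the inconsistency visible: the local cache believes block 2
holds zeros, while the disk holds whatever the remote machine wrote, which
is exactly why the file has to stay unreadable until the size update arrives.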
> the file must be kept in a read-only state until it has all been written,
> so as not to collide with local processes.
> so again, no user process can ever get to read those blocks until the
> remote machine overwrites them with new data.
> i hope this clears things up,
It does explain what you're aiming at; hacking the innards of ext2fs to
support it seems like a very complex way to do it, though...
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to firstname.lastname@example.org
Please read the FAQ at http://www.tux.org/lkml/