o Reset the timeout period
int ioctl(int fd, int FIFREEZE_RESET_TIMEOUT, long *timeval)
fd:file descriptor of mountpoint
FIFREEZE_RESET_TIMEOUT: request code for reset of timeout period
timeval: new timeout period in seconds
Return value: 0 if the operation succeeds. Otherwise, -1
Error number: If the filesystem has already been unfrozen,
errno is set to EINVAL.
Please avoid the use of the term `timeval' here. That term is closely
associated with `struct timeval', and these are not being used here. A
better identifier would be timeout_secs - it tells the reader what it
does, and it tells the reader what units it is in. And when it comes
to "time", it is very useful to tell people which units are being used,
because there are many different ones.
-static int ioctl_freeze(struct file *filp)
+static int ioctl_freeze(struct file *filp, unsigned long arg)
{
+ long timeout_sec;
+ long timeout_msec;
struct super_block *sb = filp->f_path.dentry->d_inode->i_sb;
+ int error;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
@@ -159,8 +163,27 @@ static int ioctl_freeze(struct file *fil
if (sb->s_op->write_super_lockfs == NULL)
return -EOPNOTSUPP;
+ /* arg(sec) to tick value. */
+ error = get_user(timeout_sec, (long __user *) arg);

uh-oh, compat problems.
A 32-bit application running under a 32-bit kernel will need to provide
a 32-bit quantity.
A 32-bit application running under a 64-bit kernel will need to provide
a 64-bit quantity.
I suggest that you go through the entire patch and redo this part of
the kernel ABI in terms of __u32, and avoid any mention of "long" in
the kernel<->userspace interface.
+ /* arg(sec) to tick value. */
+ error = get_user(timeout_sec, (long __user *) arg);
+ if (timeout_sec < 0)
+ return -EINVAL;
+ else if (timeout_sec < 2)
+ timeout_sec = 0;

The `else' is unneeded.
It would be clearer to code this all as:
if (timeout_sec < 0)
return -EINVAL;
if (timeout_sec == 1) {
/*
* If 1 is specified as the timeout period it is changed into 0
* to retain compatibility with XFS's xfs_freeze.
*/
timeout_sec = 0;
}
+ if (error)
+ return error;
+ timeout_msec = timeout_sec * 1000;
+ if (timeout_msec < 0)
+ return -EINVAL;
um, OK, but timeout_msec could have overflowed during the
multiplication and could be complete garbage. I guess it doesn't
matter much, but a cleaner approach might be:
if (timeout_sec > LONG_MAX/1000)
return -EINVAL;
or something like that.
But otoh, why do the multiplication at all? If we're able to handle
the timeout down to the millisecond level, why not offer that
resolution to userspace? Offer the increased time granularity to the
application?
+ if (sb) {
Can this happen?
+ if (test_and_set_bit(BD_FREEZE_OP, &sb->s_bdev->bd_state))
+ return -EBUSY;
+ if (sb->s_frozen == SB_UNFROZEN) {
+ clear_bit(BD_FREEZE_OP, &sb->s_bdev->bd_state);
+ return -EINVAL;
+ }
+ /* setup unfreeze timer */
+ if (timeout_msec > 0)
What does this test do? afaict it's special-casing the timeout_secs==0
case. Was this feature documented? What does it do?
+void add_freeze_timeout(struct block_device *bdev, long timeout_msec)
+{
+ s64 timeout_jiffies = msecs_to_jiffies(timeout_msec);
+
+ /* Set delayed work queue */
+ cancel_delayed_work(&bdev->bd_freeze_timeout);
Should this have used cancel_delayed_work_sync()?
+void del_freeze_timeout(struct block_device *bdev)
+{
+ if (delayed_work_pending(&bdev->bd_freeze_timeout))
Is this test needed?
--- linux-2.6.26-rc7-xfs/fs/xfs/xfs_fsops.c 2008-06-23 11:55:15.000000000 +0900
+++ linux-2.6.26-rc7-timeout/fs/xfs/xfs_fsops.c 2008-06-23 11:56:47.000000000 +0900
@@ -619,7 +619,7 @@ xfs_fs_goingdown(
{
switch (inflags) {
case XFS_FSOP_GOING_FLAGS_DEFAULT: {
- struct super_block *sb = freeze_bdev(mp->m_super->s_bdev);
+ struct super_block *sb = freeze_bdev(mp->m_super->s_bdev, 0);
Using NULL here is clearer and will, I expect, avoid a sparse warning.