[2.3.30pre2] Memory leak in loop device?

Simon Kirby (sim@stormix.com)
Sat, 27 Nov 1999 22:52:02 -0500


There appears to be a very reproducible memory leak in the loop device.
I used a loop device to verify a small CD image with the following
commands:

mount -o loop cd.img /loop
find /loop -type f | xargs cat >/dev/null
umount /loop

Afterwards, I noticed my system swapping more often than usual under
heavy memory load. To check whether the loop device I had used earlier
was responsible, I rebooted to get a clean test environment.

Here is the output of "vmstat 1" while I ran the commands, starting
just after the fresh boot. I have a program called "swap" that
malloc()s forever until it is aborted; I ran it first to push some
things out to swap and get a clearer picture. I ran the mount just
before starting "vmstat 1".

procs        memory             swap       io      system      cpu
r b w  swpd   free  buff cache  si   so    bi   bo   in    cs us sy id
0 0 0  6204 118704   368  3768   0    0     0    0  117    38  0  1 99
0 0 0  6204 118704   368  3768   0    0     0    0  119    44  0  0 99

Ran "find" command.

0 1 0  6204 113160  2768  6248   0    0   3689    0 2531  4878  1  6 92
1 0 0  6204  99596  8928 12460   0    0   9260    0 6258 12340  0 19 80
0 1 0  6204  85192 15476 19044   0    0   9844    0 6652 13126  0 24 75
0 1 0  6204  76348 19488 23088   0    0   6022    0 4103  8027  1 11 88
1 0 0  6204  68628 23008 26612   0    0   5287    0 3626  7076  0 10 90
0 1 0  6204  52608 30324 33904   0    0  10971    0 7427 14672  0 20 79
0 1 0  6204  45648 33504 37072   0    0   4759    0 3278  6385  0  8 92
1 0 0  6204  37548 37204 40756   0    0   5541    0 3808  7426  0 12 88
0 1 0  6204  27392 41828 45384   0    0   6933    0 4746  9259  0 11 89
1 0 0  6204  18448 45900 49476   0    0   6126    0 4185  8184  0 11 89
1 0 0  6204   3436 52756 56304   0    0  10270    0 6959 13735  0 19 81
0 1 0  5140   2748 54304 55376   0    0   4397    0 3036  6024  0  8 91
0 1 0  5140   2376 54508 55520   0    0   7955    0 5412 10808  1 13 86
0 1 0  5140   2988 54228 55204   0    0   5683    0 3883  7714  0 15 85
0 0 0  5140   2348 54620 55608   0    0   4436    0 3059  6017  1 10 89

Find finishes.

0 0 0  5140   2348 54620 55608   0    0     0    0  103    26  1  0 99
0 0 0  5140   2348 54620 55608   0    0     0    0  101    32  0  0 99
0 0 0  5140   2348 54620 55608   0    0     0    0  106    32  1  0 99
0 0 0  5140   2348 54620 55608   0    0     0    0  106    26  0  0 99
0 0 0  5140   2348 54620 55608   0    0     0    0  113    36  1  0 99
0 1 0  5140   3220 54056 55284   8    0    256    0  158    92  0  1 99

Ran "umount" command.

0 0 0  5140   3260 54064  1384   0    0    12   11  121    54  0  2 97
0 0 0  5140   3260 54064  1384   0    0     0    0  102    31  1  0 99
0 1 0  5140   3128 54096  1384   0    0    29    0  155    74  1  0 99

Ran "swap" program (malloc()s forever, trying to make system swap).

1 1 0 10416   2720   108  1640 144 6980   522 1744  386  1567  1 20 79
2 0 1 20284   3156   108  2080  32 8288     8 2074  305   959  0 10 90

"swap" program aborted.

0 0 0  5972  56528   112  2136  28 6088   152 1522  267   824  0  7 92
0 0 0  5972  56528   112  2136   0    0     0    0  104    26  0  0 99
1 0 0  5972  56528   112  2136   0    0     0    0  102    28  0  1 99

Note that "buffers" and "cache" are low, but there is only 56 MB free
now. Doing this process again shows only 24 MB free.

It may be worth noting that while the "buff" column shows only 112 KB,
/proc/slabinfo (columns: cache name, active objects, total allocated
objects) shows:

slabinfo - version: 1.0
kmem_cache            35     42
ip_fib_hash           13    127
nfs_wreq               0      0
nfs_dcookie            0      0
nfs_fh                 3     63
tcp_tw_bucket          0      0
tcp_bind_bucket       13    127
tcp_open_request       0      0
ip_dst_cache           5     25
arp_cache              3     31
skbuff_head_cache     32     42
sock                  44     50
filp                 475    504
inode_cache          186    528
signal_queue           0      0
kiobuf                 0      0
buffer_head        35339  81034
mm_struct             40     50
vm_area_struct       980   1008
dentry_cache         180    775
files_cache           40     45
uid_cache              4    127
size-131072            0      0
size-65536             0      0
size-32768             0      0
size-16384            17     17
size-8192              3      3
size-4096              2      4
size-2048             82     84
size-1024             21     24
size-512              50     56
size-256              28     42
size-128             414    425
size-64               99    126
size-32              215    441
slab_cache            53    126

Lots of buffer_heads? 35339 are still active (out of 81034 allocated)
even though the image is unmounted and "buff" is nearly empty.
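
To watch this happen, something like the following can be run alongside
the test (an untested sketch; it assumes the two-column "name active
total" layout of slabinfo version 1.0 as shown above):

/* Untested sketch: print the buffer_head slab counts once a second. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        for (;;) {
                FILE *f = fopen("/proc/slabinfo", "r");
                char line[256];
                unsigned long active, total;

                if (!f)
                        return 1;
                while (fgets(line, sizeof(line), f))
                        if (sscanf(line, "buffer_head %lu %lu",
                                   &active, &total) == 2)
                                printf("buffer_head: %lu active, %lu total\n",
                                       active, total);
                fclose(f);
                sleep(1);
        }
}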

Simon-

[ Stormix Technologies Inc. ][ NetNation Communcations Inc. ]
[ sim@stormix.com ][ sim@netnation.com ]
[ Opinions expressed are not necessarily those of my employers. ]
