Re: [patch] morethan900MB-2.0.36

MOLNAR Ingo (mingo@chiara.csoma.elte.hu)
Sat, 5 Dec 1998 00:17:06 +0100 (CET)


On Fri, 4 Dec 1998, Leonard N. Zubkoff wrote:

> we _definitely_ need support for more than 2G physical RAM. (eg. the
> Torrent system demo on OracleWorld was run on a box with 2.5G RAM.) Also,
> a 2G+2G split has performance disadvantages if the box in question has
> less memory (eg. 1G RAM).
>
> While allowing for more than 2GB physical memory makes sense in some
> cases, the resulting limitation on the virtual address space size is
> going to be highly problematic for many applications. For example,
> Oracle wants to use as much memory as possible for its database block
> buffer cache which is mapped by all Oracle processes as shared memory.
> [...]

Oracle has a problem with their process architecture being limited to
32-bit virtual memory on x86. They have to solve this anyway (and will), to
support NT's (and other OSes') 4M-page, 36-bit RAM extensions. There are
ways around this even with the current model: multiple instances of the
server can be used; this is perfectly possible, and many bigger Oracle
sites run this way (several isolated departmental servers on the same
box, with casual communication between them via SQL*Net over local IPC).

but having one big application limited to 32 bits doesn't mean we should
forget about other applications ...

> Similarly, large scientific applications probably need large individual
> data segments. [...]

yes. One of the early testers of the original patch had a simulation
system on a 4-way PPro box; each process had its own ~600-700M segment
which it did number crunching on. This simulation setup, with its current
architecture, could use up to ~3.2G physical memory (3.2G RAM, 4x800M
processes, one per CPU).

> [...] It looks to me like the only systems that can make use of
> more than 2GB physical memory will have a number of smaller processes
> that don't share memory.

there are lots of such applications! a typical multiuser box is such an
application. a typical fileserver/webserver/newsserver/proxy is such an
application.

> What's the performance disadvantage with a 2GB+2GB split?

we are now considering the case where someone has 1G physical memory, but
is using a 2.0G/2.0G split rather than a 1.1G/2.9G split.
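
to make the tradeoff concrete, here is a rough user-space sketch (not
kernel code; the 128M reserve for vmalloc/ioremap is just an assumed
figure for illustration) of how the size of the kernel part of the
split translates into directly mappable RAM vs. remaining user space:

/*
 * rough illustration (user-space, not kernel code): on a 32-bit box,
 * a bigger kernel part of the split means more directly mappable RAM
 * but less user virtual address space. the 128M vmalloc/ioremap
 * reserve is an assumed figure here.
 */
#include <stdio.h>

#define MB (1024ULL * 1024)
#define GB (1024 * MB)

static void show_split(unsigned long long kernel_va)
{
	unsigned long long user_va = 4 * GB - kernel_va;
	unsigned long long reserve = 128 * MB;	/* assumed vmalloc/ioremap hole */
	unsigned long long max_ram = kernel_va - reserve;

	printf("kernel %4lluM / user %4lluM  ->  ~%4lluM RAM directly mappable\n",
	       kernel_va / MB, user_va / MB, max_ram / MB);
}

int main(void)
{
	show_split(1 * GB);	/* classic 1G kernel / 3G user split: ~896M */
	show_split(1152 * MB);	/* ~1.1G kernel: can just direct-map a 1G box */
	show_split(2 * GB);	/* 2G/2G split: ~1.9G mappable, 2G user space */
	return 0;
}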

we want to 'spread out' mapped memory for several reasons, but the
single most important one is:

- near the 'saturation point' the system slows down whenever it has to
find a given-size virtual memory chunk. This is basically a
'fragmentation issue' for virtual memory, and the basic defense
against fragmentation is having more free space :) We have to walk
lots of vmas before we find some 'free space'. (see the sketch below)

(i suspect this was one of the reasons why NT went from a 2:2 split to a
3:1 split ... not only marketing reasons.)
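
to illustrate what 'walking lots of vmas' means, here is a toy
user-space sketch (the same first-fit idea, _not_ the kernel's actual
get_unmapped_area code) of a linear walk over sorted regions looking
for a big enough hole - the fuller the address space gets, the more
regions have to be visited before anything fits:

#include <stdio.h>

struct region {			/* stand-in for a vma: [start, end) */
	unsigned long start;
	unsigned long end;
	struct region *next;	/* kept sorted by start address */
};

/* first-fit: return the start of the first hole of at least len bytes
 * in [floor, ceiling), or 0 if there is none. */
static unsigned long find_hole(struct region *head, unsigned long floor,
			       unsigned long ceiling, unsigned long len)
{
	unsigned long addr = floor;
	struct region *r;

	for (r = head; r != NULL; r = r->next) {	/* O(number of mappings) */
		if (addr + len <= r->start)
			return addr;			/* hole before this region */
		if (r->end > addr)
			addr = r->end;			/* skip past this region */
	}
	return (addr + len <= ceiling) ? addr : 0;	/* hole at the end, if any */
}

int main(void)
{
	/* two mappings with only a 1M hole between them: a 16M request
	 * has to walk past both before it finds space at the end. */
	struct region b = { 0x60000000UL, 0x7f000000UL, NULL };
	struct region a = { 0x40000000UL, 0x5ff00000UL, &b };

	printf("16M hole found at %#lx\n",
	       find_hole(&a, 0x40000000UL, 0x80000000UL, 16UL << 20));
	return 0;
}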

there are some other (non-performance) arguments too:

- a.out compatibility libraries are pretty much hosed if we map
those low addresses. (not an issue for most people)

- the likelihood of 'silent data corruption' vs. 'segmentation
fault' is smaller if you have more virtual memory for the same
amount of mapped memory, obviously. We want to see a clear
segmentation fault if a bug happens, not some silent data
corruption.

- a related issue: any kind of mmap() trick to build a safer
memory architecture (like putting a 4K 'hole' between each
allocated block) becomes harder and harder if virtual memory gets
saturated. (maybe Checker or Electric Fence is affected, not
sure)
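
for illustration, a minimal sketch of that kind of guard-page trick
(roughly what Electric Fence does, though this is not its actual code,
and real allocators also deal with alignment). Every allocation burns
an extra page of virtual address space, which is exactly why it gets
painful once virtual memory is saturated:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void *guarded_alloc(size_t size)
{
	size_t page = (size_t)sysconf(_SC_PAGESIZE);
	size_t rounded = (size + page - 1) & ~(page - 1);

	/* reserve the data pages plus one extra page for the guard */
	char *base = mmap(NULL, rounded + page, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (base == MAP_FAILED)
		return NULL;

	/* make the last page inaccessible: any overrun into it faults */
	if (mprotect(base + rounded, page, PROT_NONE) != 0) {
		munmap(base, rounded + page);
		return NULL;
	}

	/* hand out a pointer whose end abuts the guard page
	 * (a real allocator would also align this) */
	return base + rounded - size;
}

int main(void)
{
	char *p = guarded_alloc(100);

	if (!p)
		return 1;
	memset(p, 0, 100);	/* fine */
	/* p[100] = 'x'; */	/* would hit the guard page -> SIGSEGV */
	puts("ok");
	return 0;
}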

basically, 'free virtual memory estate' is a system resource which we
should not arbitrarily waste; that's the main concept.

-- mingo
