Re: GGI, EGCS/PGCC, Kernel source

Torgeir Veimo
Tue, 24 Feb 1998 15:36:16 +0000

Alan Cox wrote:
> > Seriously. One cannot do direct rendering OpenGL-in-a-window under X
> > without GGI. The issue is multiple threads accessing the hw
> > simultanously with hw graphics context switching. You don't want the X
> > server to handle that.
> You do. It is far cheaper to put GLX in the server. Then worry about optimising
> the calls. You don't want every S3 chip access going via a syscall. KGI has
> some of the right ideas in bulking them. Even with a KGI MESA should go via

The point of direct rendering is to _not_ encode OpenGL calls into the
GLX protocol. You simply call the native OpenGL library on the machine.

Since it accesses the hardware directly, you need some communication
between it and the X server about _where_ it should render. If you have
decent hardware you won't need much communication at all, since gfx
context switching takes care of setting the OpenGL rendering process's
clipping window correctly, and of resetting it when X accesses the hw
again.

The hooks to do gfx hw context switching are not yet done in GGI, but
they need to be in the kernel. One cannot let the X server do gfx hw
context switching.

GLX is necessary inside the X server, but only to decode requests from
remote OpenGL/GLX clients. The GLX stub in the X server should also use
the local native OpenGL library, or else one would need to multithread
the X server to get decent performance when the hw is slow. There are
other reasons for this too, but they are irrelevant to this discussion.

Did you find the mentioned papers worthwhile?

Please see the section:

"For most OpenGL commands, the context switch overhead can be amortized
over multiple GLX requests by streaming protocol requests. The table's
glVertex3f example (whereby OpenGL sends a 3D vertex to the hardware; a
very common OpenGL operation) is handled in this way. Even with
streaming, the indirect case incurs the overhead of encoding, transport,
and decoding GLX protocol for requests and replies."

"The direct case can eliminate the overhead associated with context
switching between the OpenGL program and the X server, protocol
encoding, transport, and decoding when performing OpenGL rendering.
Direct rendering also improves cache and TLB behavior by avoiding
frequent context switches and multiple active contexts [3]."

I agree that such schemes might be impossible to implement on common
hardware, but there already exist cheap cards that do this, among
others those based on the Permedia2 chipset, which are <$250. That is
not an issue here, I believe.

The two most important papers on this subject are

[Btw. Mark Kilgard, who is the author of these papers, has just left
SGI. I wonder where he is going. Another hw company maybe?]

If you are interested in pursuing this further, I am very interested in
discussing it in more detail. There is also a need for hard figures.
Currently there is a nearly finished GLINT hw driver for GGI, but it
doesn't implement acceleration yet. As things evolve I can post hard
figures.

Torgeir Veimo, Vertech AS,

email:, mobile: 90673881, office: +47 55563755
