Re: [RFC] Fix 2.6.33 x86 regression to kgdb hw breakpoints due to perf API changes

From: Frederic Weisbecker
Date: Wed Dec 30 2009 - 13:01:37 EST


On Wed, Dec 30, 2009 at 10:53:11AM -0600, Jason Wessel wrote:
> Frederic Weisbecker wrote:
> > We could probably have a helper that allocates a disabled breakpoint
> > without reserving it.
>
> I worked around that restriction for now, in the current version of the
> kgdb patches. When kgdb registers with the die notifier in its init
> phase, it allocates the perf structures via the perf API and
> subsequently disables the breakpoints with the low-level API.



It disables the breakpoint, but the breakpoint still holds a reserved
slot, which looks like too much for an opt-in config that may or
may not be used. One slot among the four would be irremediably
unavailable to userspace once CONFIG_KGDB is set, if I understand well...
Does it reserve the slot at the beginning of a debugging session, or
at boot time?



> > But the problem remains: you'll need to take
> > locks when you eventually reserve it and when you activate it.
> >
> > The fact that it can happen from nmi is really a problem.
> >
> >
>
> I talked with Jan a bit with respect to this problem. He recommended
> possibly allowing kgdb to obtain hw breakpoints locklessly and to break
> reservations that exist, using the low level API. The current patch in
> the kgdb series does not break reservations; it only uses a slot that is
> not already in use. Let us call the scenarios A and B.
>
> A) allow kgdb to break existing reservations
> B) kgdb can use what is not reserved, without locks
>
> What is missing right now is a notification mechanism and a separate
> count for the debugger as to what is in use. I tend to think that B is
> the right default approach, but Jan was leaning towards scenario A.



A looks dangerous, in that an overflow of the possible number of
breakpoints would make them fight for the debug registers.
Only 4 of them can make it :) (on x86)
I guess that in the overflow case, the plan is to kick out one of the
running breakpoints.
I see several problems with this preemption scheme:

- which breakpoint should we preempt?
- it will bring some complexity into the current code: you'll need
to keep track of the preempted breakpoint, deactivate it, reactivate it,
etc. Moreover, activation requires taking locks.

I don't think A is a good idea.

B looks feasible, but only at the cost of a best-effort attempt
(in the NMI case): you'll need to check whether the lock that protects
the reservation data is already taken, and if so, give up.
That said, this should be fine, as breakpoint reservation is a rare
path, and I doubt anyone would set up a breakpoint in kgdb at the same
time as perf, or any other user, does.

Either way, with A or B, you'll need to take locks to activate/deactivate
the breakpoints.



> > Is there any possibility that we know the user has started a
> > kgdb session, and then reserve as much hardware breakpoints
> > as we can in kgdb at this time?
> >
> >
>
> That is the way I implemented it the first time: reserve all the slots,
> and then nothing else could use them. That didn't work out too well,
> because then userspace could not make use of hw breakpoints; granted,
> this never worked before with userspace + kernel space sharing
> between ptrace and kgdb.



Say one opens a kgdb session. A very cool thing would be to reserve
as many breakpoints as we can (without preempting existing ones)
and release them once the session is closed. It is not a problem that
userspace can't reserve new ones during this session: we are debugging
the kernel at that time. I doubt we need userspace breakpoints at the same
time. Do we?

That said, I really lack kgdb background. In which context does the
user connect to kgdb? The above would only work in a non-NMI
context.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/