On 5/27/05, Kyle Moffett <mrmacman_g4@xxxxxxx> wrote:
> ... "context switch + useful work" time, and goes to sleep if it
> thinks it has enough time to spare.
> Problems:
> You can't nest these. You also can't take a normal semaphore inside
> one. The only usable locking order for these is:
>         ..., semaphore, semaphore, spinaphore, spinlock, spinlock, ...
I don't see why very careful nesting wouldn't work. Is it because you
could run the queued count up on a lock whose holder is itself locked
out? The problems of VMS asynchronous traps :) Granted, the outer
ones would see higher hold times than the inner ones.
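To make the ordering rule concrete, a minimal sketch (sem_a, sem_b,
inner_lock, and sph2 are names I made up purely for illustration):
once spinaphore_lock() has taken its internal spinlock, the holder
must not sleep, so semaphores (and further spinaphores, whose slow
path calls cond_resched()) are only safe *outside* it:

        down(&sem_a);             /* semaphores first: they may sleep */
        spinaphore_lock(&sph);    /* atomic context from here on */
        spin_lock(&inner_lock);   /* plain spinlocks may still nest inside */
        /* down(&sem_b) or spinaphore_lock(&sph2) here would risk
         * sleeping while sph->spinlock is held -- a bug */
        spin_unlock(&inner_lock);
        spinaphore_unlock(&sph);
        up(&sem_a);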
> struct spinaphore {
>         unsigned long acceptable_wait_time; /* dynamic tuning */
>         atomic_t queued;
>         atomic_t hold_time;
>         spinlock_t spinlock;
>         unsigned long acquire_time;
> };
> void spinaphore_lock (struct spinaphore *sph) {
>         unsigned long start_time = fast_monotonic_count();
>         int queue_me = 1;
>         while (unlikely(!spin_trylock(&sph->spinlock))) {
>                 /* Get the queue count (and ensure we're queued
>                  * in the process) */
>                 unsigned int queued = queue_me ?
>                         atomic_inc_return(&sph->queued) :
>                         atomic_read(&sph->queued);
>                 queue_me = 0;
>                 /* Figure out if we should switch away */
>                 if (unlikely(CONFIG_SPINAPHORE_CONTEXT_SWITCH <
>                     (queued * atomic_read(&sph->hold_time) -
>                      (fast_monotonic_count() - start_time)
we could subtract the time the current lock has already been held
from the average lock-held time, to get an expected time until the
lock becomes free; that way we only spin when the current holder is
nearly done (a sketch follows after the function below). Hmm, what
other metrics would be easy to gather?
>                 ))) {
>                         /* Remove ourselves from the wait pool
>                          * (remember to re-add later) */
>                         atomic_dec(&sph->queued);
>                         queue_me = 1;
>                         /* Go to sleep */
>                         cond_resched();
>                 }
>         }
>         /* Dequeue ourselves and update the acquire time */
>         atomic_dec(&sph->queued);
That last decrement should be conditional, something like
"if (!queue_me) atomic_dec(&sph->queued);": when there was no
contention we never incremented, so we shouldn't decrement either.
>         sph->acquire_time = fast_monotonic_count();
> }
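Concretely, a minimal sketch of that estimate (spinaphore_eta() is a
name I just made up; it reuses the fast_monotonic_count() from above,
and a racy read of acquire_time only mis-tunes the heuristic, it
can't break correctness):

        /* Expected time until the current holder releases: the
         * average hold time minus how long it has held so far. */
        static inline long spinaphore_eta (struct spinaphore *sph)
        {
                long held = fast_monotonic_count() - sph->acquire_time;
                return (long) atomic_read(&sph->hold_time) - held;
        }

Then the switch-away test in the loop could be something like
"if (spinaphore_eta(sph) > CONFIG_SPINAPHORE_CONTEXT_SWITCH)
cond_resched();", so we keep spinning only when the holder looks
nearly done.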
> void spinaphore_unlock (struct spinaphore *sph) {
>         /* Update the running average hold time */
>         atomic_set(&sph->hold_time, (4*atomic_read(&sph->hold_time) +
>                 (fast_monotonic_count() - sph->acquire_time))/5);
These don't need to be atomic operations, since we haven't released
the lock yet. Or is there a risk that non-atomic reads and writes get
deferred? No, sorry: atomic_read/atomic_set are just the accessors
for atomic_t data, is that correct? (The spinners do read hold_time
without holding the lock, though, so those reads at least want the
atomic_t accessors.)
>         /* Actually unlock the spinlock */
>         spin_unlock(&sph->spinlock);
> }
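For what it's worth, that update is just a weighted running average
in which each new hold contributes one fifth: with hold_time at, say,
1000 counts and a hold that lasted 2000 counts, the new average is
(4*1000 + 2000)/5 = 1200.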
> Cheers,
> Kyle Moffett
Is there a schedule-that-thread-next call? The idea would be that
instead of simply yielding until later (cond_resched) we register
ourselves with the sph object on a linked list, an actual queue
instead of a bare count of queued threads -- and at unlocking time,
if there's a queue, the head of the line gets served next. That would
scale to a lot of CPUs, still with a spinlock around the setting of
the head-of-line pointer.
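A rough sketch of that handoff, with made-up names (sph_waiter, plus
queue_lock and waiters fields added to struct spinaphore); the
nearest existing call is wake_up_process(), which only makes the head
of the line runnable rather than guaranteeing it runs next:

        struct sph_waiter {
                struct list_head node;
                struct task_struct *task;
        };

        void spinaphore_unlock_queued (struct spinaphore *sph)
        {
                struct sph_waiter *next = NULL;
                spin_lock(&sph->queue_lock);        /* guards sph->waiters */
                if (!list_empty(&sph->waiters)) {
                        next = list_first_entry(&sph->waiters,
                                                struct sph_waiter, node);
                        list_del(&next->node);
                }
                spin_unlock(&sph->queue_lock);
                spin_unlock(&sph->spinlock);
                if (next)
                        wake_up_process(next->task); /* head of the line */
        }

The woken task still has to take the spinlock itself, but since it is
woken before anyone else learns the lock is free, it usually gets
served next, and the waiter list only adds a short critical section
of its own.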