Re: [PATCH v2 1/2] mbcache: decoupling the locking of local from global data

From: Linus Torvalds
Date: Thu Aug 22 2013 - 12:53:50 EST


On Thu, Aug 22, 2013 at 8:54 AM, T Makphaibulchoke <tmac@xxxxxx> wrote:
>
> +#define MB_LOCK_HASH_CHAIN(hash_head)	hlist_bl_lock(hash_head)
> +#define MB_UNLOCK_HASH_CHAIN(hash_head)	hlist_bl_unlock(hash_head)
> +#ifdef MB_CACHE_DEBUG
> +#define MB_LOCK_BLOCK_HASH(ce) do {		\

Please don't do these ugly and pointless preprocessor macro expanders
that hide what the actual operation is.

The DEBUG case seems to be just for your own testing anyway, so even
that shouldn't exist in the merged version.
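
In other words, just write the real operation at the call sites:

	/* what MB_LOCK_HASH_CHAIN()/MB_UNLOCK_HASH_CHAIN() expand to: */
	hlist_bl_lock(ce->e_block_hash_p);
	hlist_bl_unlock(ce->e_block_hash_p);

which is exactly what your own macros expand to anyway, and anybody
reading the code then sees immediately that this is a bit-spinlock on
the hash chain head.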

> +#define MAX_LOCK_RETRY 1024
> +
> +static inline int __mb_cache_lock_entry_block_hash(struct mb_cache_entry *ce)
> +{
> +	int nretry = 0;
> +	struct hlist_bl_head *block_hash_p = ce->e_block_hash_p;
> +
> +	do {
> +		MB_LOCK_HASH_CHAIN(block_hash_p);
> +		if (ce->e_block_hash_p == block_hash_p)
> +			return 0;
> +		MB_UNLOCK_HASH_CHAIN(block_hash_p);
> +		block_hash_p = ce->e_block_hash_p;
> +	} while (++nretry < MAX_LOCK_RETRY);
> +	return -1;
> +}

And this is really ugly. And again, it's hidden behind an ugly macro.

First off, the thousand-time retry seems completely excessive. Does it
actually need any retry AT ALL? If the hash entry changes, either you
should retry forever, or if you feel that can result in livelocks
(fair enough) and you need a fallback case to a bigger lock, then why
not just do the fallback immediately?
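
Something like this (totally untested, name made up, but using the
fields your patch already has) would make the immediate-fallback
version explicit:

	/* Lock the chain this entry claims to be on.  One try only. */
	static inline int mb_cache_entry_trylock_block_hash(struct mb_cache_entry *ce)
	{
		struct hlist_bl_head *head = ce->e_block_hash_p;

		hlist_bl_lock(head);
		if (ce->e_block_hash_p == head)
			return 0;	/* still on the chain we locked */
		hlist_bl_unlock(head);
		return -1;	/* entry moved, fall back to the big lock */
	}

No retry counter, no magic 1024, and the fallback path is obviously
the rare case.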

More importantly, regardless of that retry issue, this seems to be
abstracted at the wrong level, resulting in every single user of this
repeating the same complex and hard-to-understand incantation:

> +		if (MB_LOCK_ENTRY_BLOCK_HASH(ce)) {
> +			spin_lock(&mb_cache_spinlock);
> +			list_add_tail(&ce->e_lru_list, &mb_cache_lru_list);
> +			spin_unlock(&mb_cache_spinlock);
> +			continue;
> +		}
..
> +		if (MB_LOCK_ENTRY_BLOCK_HASH(ce)) {
> +			spin_lock(&mb_cache_spinlock);
> +			list_add_tail(&ce->e_lru_list, &mb_cache_lru_list);
> +			spin_unlock(&mb_cache_spinlock);
> +			continue;
> +		}
..
> +			if (MB_LOCK_ENTRY_BLOCK_HASH(ce)) {
> +				spin_lock(&mb_cache_spinlock);
> +				list_add_tail(&ce->e_lru_list,
> +					      &mb_cache_lru_list);
> +				continue;
> +			}

where the only difference is that the last one doesn't unlock
afterwards because it runs in a loop with that LRU list lock held.
Ugh.
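
If putting the entry back on the LRU list really is what every caller
wants on failure, then put that in the helper too. Untested sketch,
building on the trylock helper above:

	/*
	 * Lock the entry's hash chain, or give up: requeue the entry
	 * on the LRU list and tell the caller to skip it.  The one
	 * caller that already holds mb_cache_spinlock obviously needs
	 * a variant without the lock/unlock pair.
	 */
	static int mb_cache_entry_lock_or_requeue(struct mb_cache_entry *ce)
	{
		if (!mb_cache_entry_trylock_block_hash(ce))
			return 0;
		spin_lock(&mb_cache_spinlock);
		list_add_tail(&ce->e_lru_list, &mb_cache_lru_list);
		spin_unlock(&mb_cache_spinlock);
		return -1;
	}

and every call site collapses to

	if (mb_cache_entry_lock_or_requeue(ce))
		continue;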

The locking logic also isn't explained anywhere, making the
hard-to-read code even harder to read.

Linus