Date: Tue, 24 May 2005 11:16:17 +0530
From: Srivatsa Vaddagiri <vatsa@xxxxxxxxxx>
To: Ashok Raj <ashok.raj@xxxxxxxxx>
Cc: Andi Kleen <ak@xxxxxx>, zwane@xxxxxxxxxxxxxxxx,
discuss@xxxxxxxxxx, shaohua.li@xxxxxxxxx,
linux-kernel@xxxxxxxxxxxxxxx, rusty@xxxxxxxxxxxxxxxx
Subject: Re: [discuss] Re: [patch 0/4] CPU hot-plug support for x86_64
Message-ID: <20050524054617.GA5510@xxxxxxxxxx>
Reply-To: vatsa@xxxxxxxxxx
References: <20050520221622.124069000@xxxxxxxxxxxxxxxxxxxxxxx> <20050523164046.GB39821@xxxxxx> <20050523095450.A8193@xxxxxxxxxxxxxxxxxxxx> <20050523171212.GF39821@xxxxxx> <20050523104046.B8692@xxxxxxxxxxxxxxxxxxxx>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20050523104046.B8692@xxxxxxxxxxxxxxxxxxxx>
User-Agent: Mutt/1.4.1i
Sender: linux-kernel-owner@xxxxxxxxxxxxxxx
Precedence: bulk
X-Mailing-List: linux-kernel@xxxxxxxxxxxxxxx
On Mon, May 23, 2005 at 10:40:46AM -0700, Ashok Raj wrote:
> I am not 100% sure about the above either. If smp_call_function
> is started with 3 CPUs initially and one just came up, the counts in
> the smp_call data struct could be set to 3; since the new CPU
> receives this broadcast as well, we might quit the wait early.
True.
> Sending to only the relevant CPUs removes that ambiguity.
Or grab the 'call_lock' before setting the upcoming CPU in the online map.
That should also close the race while a CPU is coming online.
--
Thanks and Regards,
Srivatsa Vaddagiri,
Linux Technology Center,
IBM Software Labs,
Bangalore, INDIA - 560017
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/