November 27, 2007
Linux Kernel 2.6.23.9 is out. >>Linux
Linux Kernel 2.6.23.9 is out.
Other changes cover libata, reiserfs, and more. There are no CVEs.
softlockup: use cpu_clock() instead of sched_clock()
sched_clock() is not a reliable time-source, use cpu_clock() instead.
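As a hedged sketch of the resulting pattern (modeled on the 2.6.23-era kernel/softlockup.c, not the verbatim patch), the watchdog reads its per-CPU timestamps through cpu_clock(), which applies per-CPU offsets to sched_clock():

        /* Sketch: derive watchdog timestamps from cpu_clock(cpu), which
         * corrects raw sched_clock() readings per CPU, instead of calling
         * sched_clock() directly. */
        static unsigned long get_timestamp(int this_cpu)
        {
                return cpu_clock(this_cpu) >> 30LL;  /* 2^30 ~= 10^9, so ns -> ~s */
        }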
softlockup watchdog fixes and cleanups
This is a merge of commits a5f2ce3c6024a5bb895647b6bd88ecae5001020a and
43581a10075492445f65234384210492ff333eba in mainline to fix a warning in
the 2.6.23.3 kernel release.
softlockup watchdog: style cleanups
kernel/softlockup.c grew a few style problems over the past few
months; clean those up. No functional changes:
   text    data     bss     dec     hex filename
   1126      76       4    1206     4b6 softlockup.o.before
   1129      76       4    1209     4b9 softlockup.o.after
( the 3 bytes .text increase is due to the "<1>" appended to one of
the printk messages. )
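For context, "<1>" is the literal string behind the KERN_ALERT log-level macro, so spelling out a level on a printk() costs exactly those three bytes of .text. A hedged illustration (the message text is hypothetical, not necessarily the line the patch touched):

        /* "<1>" is the KERN_ALERT prefix from <linux/kernel.h>; an explicit
         * level prepends those 3 bytes to the format string in .text. */
        printk(KERN_ALERT "BUG: soft lockup detected on CPU#%d!\n", this_cpu);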
ntp: fix typo that makes sync_cmos_clock erratic
patch fa6a1a554b50cbb7763f6907e6fef927ead480d9 in mainline.
ntp: fix typo that makes sync_cmos_clock erratic
Fix a typo in ntp.c that has caused updating of the persistent (RTC)
clock when synced to NTP to behave erratically.
When debugging a freeze that arises on my AMD64 machines when I
run the ntpd service, I added a number of printk's to monitor the
sync_cmos_clock procedure. I discovered that it was not syncing to the
CMOS RTC every 11 minutes as documented, but instead would keep trying
every second for hours at a time. The reason turned out to be a typo
in sync_cmos_clock, where it attempts to ensure that
update_persistent_clock is called very close to 500 msec after a
1-second boundary (as required by the PC RTC's spec). The typo referred to
"xtime" in one spot, rather than "now", which is derived from "xtime"
but not equal to it. This makes the test erratic, creating a
"coin-flip" that decides when update_persistent_clock is called - when
it is called, which is rarely, it may be at any time during the one
second period, rather than close to 500 msec, so the value written is
needlessly incorrect, too.
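A hedged sketch of the corrected test, assuming the 2.6.23-era shape of sync_cmos_clock() in kernel/time/ntp.c (the erratic version compared xtime.tv_nsec at the marked point):

        struct timespec now;
        int fail;

        /* Only write the RTC within half a tick of the 500 ms point after a
         * second boundary; the fix tests now.tv_nsec here, where the typo
         * had xtime.tv_nsec, which tracks but does not equal "now". */
        getnstimeofday(&now);
        if (abs(now.tv_nsec - (NSEC_PER_SEC / 2)) <= tick_nsec / 2)
                fail = update_persistent_clock(now);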
Fix divide-by-zero in the 2.6.23 scheduler code
There is no corresponding patch in mainline, as this logic was removed
in 2.6.24 and the fix is not needed there.
https://bugzilla.redhat.com/show_bug.cgi?id=340161
The problem code has been removed in 2.6.24. The patch below disables
SCHED_FEAT_PRECISE_CPU_LOAD, which causes the offending code to be skipped
by default, but does not prevent the user from re-enabling it.
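A hedged sketch of that workaround (the flag list is modeled on 2.6.23's kernel/sched.c; the exact patch context may differ): the default feature mask simply clears the bit, so update_cpu_load() below takes the do_avg path unless the user flips the bit back at runtime.

        /* Sketch: SCHED_FEAT_PRECISE_CPU_LOAD defaults to off, skipping the
         * div64_64() path in update_cpu_load(); the user can still re-enable
         * it at runtime, so the offending code is not removed. */
        const_debug unsigned int sysctl_sched_features =
                        SCHED_FEAT_FAIR_SLEEPERS        * 1 |
                        SCHED_FEAT_SLEEPER_AVG          * 0 |
                        SCHED_FEAT_SLEEPER_LOAD_AVG     * 1 |
                        SCHED_FEAT_PRECISE_CPU_LOAD     * 0 |   /* was * 1 */
                        SCHED_FEAT_START_DEBIT          * 1 |
                        SCHED_FEAT_SKIP_INITIAL         * 0;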
The divide-by-zero is here in kernel/sched.c:
static void update_cpu_load(struct rq *this_rq)
{
u64 fair_delta64, exec_delta64, idle_delta64, sample_interval64, tmp64;
unsigned long total_load = this_rq->ls.load.weight;
unsigned long this_load = total_load;
struct load_stat *ls = &this_rq->ls;
int i, scale;
this_rq->nr_load_updates++;
if (unlikely(!(sysctl_sched_features & SCHED_FEAT_PRECISE_CPU_LOAD)))
goto do_avg;
/* Update delta_fair/delta_exec fields first */
update_curr_load(this_rq);
fair_delta64 = ls->delta_fair + 1;
ls->delta_fair = 0;
exec_delta64 = ls->delta_exec + 1;
ls->delta_exec = 0;
sample_interval64 = this_rq->clock - ls->load_update_last;
ls->load_update_last = this_rq->clock;
if ((s64)sample_interval64 < (s64)TICK_NSEC)
sample_interval64 = TICK_NSEC;
if (exec_delta64 > sample_interval64)
exec_delta64 = sample_interval64;
idle_delta64 = sample_interval64 - exec_delta64;
======> tmp64 = div64_64(SCHED_LOAD_SCALE * exec_delta64, fair_delta64);
tmp64 = div64_64(tmp64 * exec_delta64, sample_interval64);
this_load = (unsigned long)tmp64;
do_avg:
/* Update our load: */
for (i = 0, scale = 1; i < CPU_LOAD_IDX_MAX; i++, scale += scale) {
unsigned long old_load, new_load;
/* scale is effectively 1 << i now, and >> i divides by scale */
old_load = this_rq->cpu_load[i];
new_load = this_load;
this_rq->cpu_load[i] = (old_load*(scale-1) + new_load) >> i;
}
}
For stable only; the code has been removed in 2.6.24.