We don't clear the seek stat values in cfq_alloc_io_context(), and if
->seek_mean is unlucky enough to be set to -36 by chance, the first
invocation of cfq_update_io_seektime() will oops with a divide by zero
in do_div().
Just memset the entire cic instead of filling individual values
independently.
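Roughly, the allocation path becomes (a sketch, not the literal diff):

    static struct cfq_io_context *cfq_alloc_io_context(gfp_t gfp_mask)
    {
        struct cfq_io_context *cic;

        cic = kmem_cache_alloc(cfq_ioc_pool, gfp_mask);
        if (cic) {
            /* zero everything, including the seek stats, instead of
             * initializing fields one by one and missing some */
            memset(cic, 0, sizeof(*cic));
            cic->last_end_request = jiffies;
        }
        return cic;
    }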
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
There's a race between shutting down one io scheduler and firing up the
next, in which a new io could enter and cause the io scheduler to be
invoked with bad or NULL data.
To fix this, we need to maintain the queue lock for a bit longer.
Unfortunately we cannot do that, since the elevator init must be
run without the lock held. This isn't easily fixable without also
changing the mempool API. So split the initialization into two parts:
an alloc-init operation and an attach operation. Then we can
preallocate the io scheduler and related structures, and run the attach
inside the lock after we detach the old one.
This patch has survived 30 minutes of 1 second io scheduler switching
with a very busy io load.
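As a sketch of the flow (function names here are illustrative, not
necessarily the exact ones in the patch):

    static int elevator_switch(request_queue_t *q, struct elevator_type *new_e)
    {
        elevator_t *e;

        /* phase 1: allocate and init the new scheduler with no lock
         * held, since the init may sleep (mempool allocations etc) */
        e = elevator_alloc_and_init(q, new_e);
        if (!e)
            return -ENOMEM;

        /* phase 2: under the queue lock, detach the old scheduler and
         * attach the preallocated one, so no io can sneak in between */
        spin_lock_irq(q->queue_lock);
        elevator_detach(q);
        elevator_attach(q, e);
        spin_unlock_irq(q->queue_lock);

        return 0;
    }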
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Now that we select busy_rr for possible service, insert entries at the
back of that list instead of at the front.
Signed-off-by: Jens Axboe <axboe@suse.de>
There's a small window between when the timer fires and when we grab
the queue lock, where cfq_set_active_queue() could be rearming the
timer for us. Seen in the wild on a 12-way ppc box. Fix this by
just using mod_timer(), which will do the right thing for us.
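The fix boils down to a one-liner (a sketch; the field names follow
the cfq code):

    /*
     * mod_timer() atomically updates the expiry whether or not the
     * timer is already pending, so a concurrent rearm from
     * cfq_set_active_queue() can't make us add an already-pending timer.
     */
    mod_timer(&cfqd->idle_slice_timer, jiffies + cfqd->cfq_slice_idle);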
Signed-off-by: Jens Axboe <axboe@suse.de>
If the hardware is doing real queueing, decide that it's worthless to
idle the hardware. It does reasonable simultaneous io in that case
anyway, and the idling hurts some workloads.
Signed-off-by: Jens Axboe <axboe@suse.de>
If we are anticipating a sync request from this process and we are
waiting for that and see an async request come in, expire that slice
and move on.
Signed-off-by: Jens Axboe <axboe@suse.de>
For just one busy queue (like async write out), we often overlooked
that we could queue more io and decided we were idle instead. This causes
us quite a bit of performance loss.
Signed-off-by: Jens Axboe <axboe@suse.de>
- Drop cic from the list when seen as dead.
- Fix up the locking: just use a simple spinlock.
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
In the current code, we re-read cic->key after the dead cic->key
check. So, in theory, it may really be re-read *after* cfq_exit_queue()
has set it to NULL. To avoid the race, we copy it to the stack, then
use that. With this change, I guess gcc will keep the copy in a
register or on the stack, and it won't be re-read.
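A sketch of the pattern (the function name is illustrative):

    static void cfq_exit_single_io_context(struct cfq_io_context *cic)
    {
        /* read cic->key exactly once into a local; after this, only
         * the local copy is used, so the compiler cannot legally
         * re-read the field after the NULL check */
        struct cfq_data *cfqd = cic->key;

        if (unlikely(!cfqd))
            return;        /* the queue already exited */

        /* ... use 'cfqd' from here on, never cic->key again ... */
    }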
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Jens Axboe <axboe@suse.de>
When a queue dies, we set cic->key = NULL as a dead mark. So, when we
traverse the rbtree, we must check whether the key is still valid. If
it was invalidated, drop it, then restart the traversal from the top.
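Illustratively (the function and the free_cic() helper are made-up
names for the sketch; the rbtree calls are the stock kernel API):

    static void cfq_drop_dead_cics(struct io_context *ioc)
    {
        struct rb_node *n;

    restart:
        n = rb_first(&ioc->cic_root);
        while (n) {
            struct cfq_io_context *cic;

            cic = rb_entry(n, struct cfq_io_context, rb_node);
            if (!cic->key) {
                /* queue died under us: drop the entry, then
                 * restart since the tree has changed */
                rb_erase(&cic->rb_node, &ioc->cic_root);
                free_cic(cic);        /* hypothetical helper */
                goto restart;
            }
            n = rb_next(n);
        }
    }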
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Jens Axboe <axboe@suse.de>
On the rmmod path, cfq/as waits to make sure all io-contexts were
freed. However, it's using complete(), not wait_for_completion().
I think barrier() is not enough here. To avoid the following case,
this patch replaces barrier() with smp_wmb().
    cpu0                       visibility                cpu1
                               [ioc_gone=NULL, ioc_count=1]
    ioc_gone = &all_gone       NULL, ioc_count=1
    atomic_read(&ioc_count)    NULL, ioc_count=1
    wait_for_completion()      NULL, ioc_count=0         atomic_sub_and_test()
                               NULL, ioc_count=0         if (... && ioc_gone)
                                                         [ioc_gone==NULL,
                                                          so doesn't call complete()]
                               &all_gone, ioc_count=0
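The rmmod side then needs roughly this shape (a sketch of the idea,
not the exact diff; ioc_gone/ioc_count are the variables from the
diagram above):

    static void __exit cfq_exit(void)
    {
        DECLARE_COMPLETION(all_gone);

        /* ... unregister the elevator, etc ... */

        ioc_gone = &all_gone;
        /* publish ioc_gone before re-checking the count; barrier()
         * only constrains the compiler, while smp_wmb() also orders
         * the store for the cpu doing the final atomic_sub_and_test() */
        smp_wmb();
        if (atomic_read(&ioc_count))
            wait_for_completion(ioc_gone);
    }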
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Jens Axboe <axboe@suse.de>
Detect whether a given process is seeky and, if so, mostly disable
the idle window. We still allow just a little idle time, just enough
to allow that process to submit a new request. That is needed to maintain
fairness across priority groups.
In some cases, we could set up several async queues. This is not optimal
from a performance POV, since we want all async io in one queue to
perform good sorting on it. It also impacted sync queues, as async io got too much
slice time.
Signed-off-by: Jens Axboe <axboe@suse.de>
This is a small optimization to cfq_choose_req() in the CFQ I/O scheduler
(this function shows up fairly often in oprofile logs):
by using a bit mask variable, we can use a simple switch() to check
the various cases instead of having to query two variables for each check.
Benefit: 251 vs. 285 bytes footprint of cfq_choose_req().
Also, the common case 0 (no request wrapping) is now checked first in the code.
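The shape of the change, simplified (a sketch; s1/s2 are the request
sectors, last the current head position, d1/d2 the forward distances
for the non-wrapped case):

    #define CFQ_RQ1_WRAP    0x01    /* request 1 lies behind the head */
    #define CFQ_RQ2_WRAP    0x02    /* request 2 lies behind the head */

    static struct cfq_rq *
    cfq_choose_req(sector_t last, sector_t s1, sector_t s2,
                   sector_t d1, sector_t d2,
                   struct cfq_rq *crq1, struct cfq_rq *crq2)
    {
        unsigned int wrap = 0;

        if (s1 < last)
            wrap |= CFQ_RQ1_WRAP;
        if (s2 < last)
            wrap |= CFQ_RQ2_WRAP;

        switch (wrap) {
        case 0:                 /* common case, checked first */
            return d1 < d2 ? crq1 : crq2;
        case CFQ_RQ2_WRAP:      /* only crq2 is behind the head */
            return crq1;
        case CFQ_RQ1_WRAP:      /* only crq1 is behind the head */
            return crq2;
        default:                /* both wrapped: start with the one
                                 * further behind, so only one
                                 * backward seek is needed */
            return s1 <= s2 ? crq1 : crq2;
        }
    }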
Signed-off-by: Andreas Mohr <andi@lisas.de>
Signed-off-by: Jens Axboe <axboe@suse.de>
On setups with many disks, we spend a considerable amount of time
looking up the process-disk mapping on each queue of io. Testing with
a null-based block driver, this lookup cost a 40-50% reduction in
throughput for 1000 disks.
Signed-off-by: Jens Axboe <axboe@suse.de>
Modify well over a dozen mempool users to call mempool_create_slab_pool()
rather than calling mempool_create() with extra arguments, saving about 30
lines of code and increasing readability.
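For reference, the conversion is mechanical; something like the
following, where MIN_POOL_NR and my_cachep stand in for each caller's
own values:

    /* before: spell out the slab alloc/free helpers every time */
    pool = mempool_create(MIN_POOL_NR, mempool_alloc_slab,
                          mempool_free_slab, my_cachep);

    /* after: the wrapper does exactly the same thing */
    pool = mempool_create_slab_pool(MIN_POOL_NR, my_cachep);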
Signed-off-by: Matthew Dobson <colpatch@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
We don't need to pin ->key down; ->cfqq->cfqd will do that for us.
Incidentally, that stops the leak we had - that reference was never
dropped.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
If somebody does a hash lookup for cfq_queue while ioprio of an async queue
is elevated, they shouldn't end up stuck with lowered ioprio when we go back.
Fix is to use ->org_ioprio{,class} in hash lookups.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
During testing of SLES10, we encountered a hang in the CFQ io scheduler.
Turns out the deferred slice expiry logic is buggy: we could be left
with an idle queue that would never wake up. So kill that logic and
always expire immediately. Also fix a potential timer race condition.
The patch looks bigger than it is because it moves a function.
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The patch below marks various read-only variables in block/* as const,
so that gcc can optimize their use; e.g. gcc will now replace a use
with the value directly and can even drop the variable's storage.
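For example, using one of the cfq tunable defaults as an illustration:

    /* before: static int cfq_quantum = 4;  -- sits in writable .data */

    /* after: const lets gcc fold the value into its users and
     * potentially discard the variable's storage entirely */
    static const int cfq_quantum = 4;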
Signed-off-by: Arjan van de Ven <arjan@infradead.org>
Signed-off-by: Jens Axboe <axboe@suse.de>
Some leftover comments refer to drivers/block, which is now block/.
They don't add any information we don't already have, so kill them.
Signed-off-by: Coywolf Qi Hunt <qiyong@fc-cn.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
When cfq slice expires, remainder of slice is calculated and stored in
cfqq->slice_left. Current code calculates the opposite of remainder -
how many jiffies the cfqq has used past slice end. This patch fixes
the bug.
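In other words, in the slice expiry path the remainder should be
computed along these lines (a sketch following the field names above;
the bug was effectively slice_left = jiffies - cfqq->slice_end):

    if (time_before(jiffies, cfqq->slice_end))
        cfqq->slice_left = cfqq->slice_end - jiffies;    /* remainder */
    else
        cfqq->slice_left = 0;    /* slice fully used up */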
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
cfq forced dispatching might not return all requests on the queue.
This bug can hang elevator switching and corrupt request ordering
during the flush sequence.
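Forced dispatch has to drain everything; roughly (the two helpers are
illustrative names, not the actual functions):

    static int cfq_forced_dispatch(struct cfq_data *cfqd)
    {
        struct cfq_queue *cfqq;
        int dispatched = 0;

        /* keep going until no busy queue remains, so an elevator
         * switch never sees requests left inside the scheduler */
        while ((cfqq = cfq_get_any_queue(cfqd)) != NULL)
            dispatched += cfq_drain_cfqq(cfqq);

        BUG_ON(cfqd->busy_queues);
        return dispatched;
    }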
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
drivers/block/ is right now a mix of core and driver parts. Let's move
the core parts to a new top level directory. Al will move the fs/
related block parts to block/ next.
Signed-off-by: Jens Axboe <axboe@suse.de>
Don't clear ->elevator_data on exit; if we are switching queues, we
would be overwriting the data of the new io scheduler.
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
We're trying to get rid of as many tasklist walks as possible, or at
least move them to core code. This patch falls into the second
category.
Instead of walking the tasklist in cfq-iosched, move that into
elv_unregister. The added benefit is that with this change the as
io scheduler might be unloadable more easily as well.
The new code uses read_lock instead of read_lock_irq because the
tasklist_lock only needs irq disabling for writers.
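The walk in elv_unregister then looks roughly like this (a sketch; it
assumes a per-elevator ->trim hook for dropping per-task io-context
state, and the thread-iteration macros are the stock kernel ones):

    void elv_unregister(struct elevator_type *e)
    {
        struct task_struct *g, *p;

        /* readers of tasklist_lock don't need irqs disabled,
         * only writers do */
        if (e->ops.trim) {
            read_lock(&tasklist_lock);
            do_each_thread(g, p) {
                task_lock(p);
                if (p->io_context)
                    e->ops.trim(p->io_context);
                task_unlock(p);
            } while_each_thread(g, p);
            read_unlock(&tasklist_lock);
        }

        /* ... remove e from the registered elevator list ... */
    }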
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch updates all four ioscheds to use the generic dispatch
queue. There's one behavior change in as-iosched.
* In as-iosched, when force dispatching
(ELEVATOR_INSERT_BACK), batch_data_dir is reset to REQ_SYNC
and changed_batch and new_batch are cleared to zero. This
prevents AS from doing an incorrect update_write_batch after
the forced dispatched requests are finished.
* In cfq-iosched, cfqd->rq_in_driver currently counts the
number of activated (removed) requests to determine
whether queue-kicking is needed and cfq_max_depth has been
reached. With generic dispatch queue, I think counting
the number of dispatched requests would be more appropriate.
* cfq_max_depth can be lowered to 1 again.
Original from Tejun Heo, modified version applied.
Signed-off-by: Jens Axboe <axboe@suse.de>
The reference count fix that was merged isn't fully bug free. It doesn't
leak now, but instead it crashes due to looking at freed memory. So for
now, let's revert the change and I'll fix it for real next week.
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
I ran across a memory leak related to the cfq scheduler. The cfq
init function increments the refcnt of the associated request_queue.
This refcount gets decremented in cfq's exit function. Since blk_cleanup_queue
only calls the elevator exit function when its refcnt goes to zero, the
request_q never gets cleaned up. It didn't look like other io schedulers were
incrementing this refcnt, so I removed the refcnt increment and it fixed the
memory leak for me.
To reproduce the problem, simply use cfq and use the scsi_host scan sysfs
attribute to scan "- - -" repeatedly on a scsi host and watch the memory
vanish.
Signed-off-by: Brian King <brking@us.ibm.com>
Acked-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
One critical fix and two minor fixes for 2.6.13-rc7:
- Max depth must currently be 2 to allow barriers to function on SCSI
- Prefer sync request over async in choosing the next request
- Never allow an async request to preempt or disturb the "anticipation" for
a single cfq process context. This is as designed; the code right now
is buggy in that area.
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
CFQ will currently stall when using write barriers and the default
max_depth setting of 1, since we artificially need a depth of 2 when
prepending the first flush. So never deny the barrier request going to
the device.
This is a regression since 2.6.12, it was found in SUSE testing.
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
drivers/block/cfq-iosched.c: In function 'cfq_put_queue':
drivers/block/cfq-iosched.c:303: sorry, unimplemented: inlining failed in call to 'cfq_pending_requests': function body not available
drivers/block/cfq-iosched.c:1080: sorry, unimplemented: called from here
drivers/block/cfq-iosched.c: In function '__cfq_may_queue':
drivers/block/cfq-iosched.c:1955: warning: the address of 'cfq_cfqq_must_alloc_slice', will always evaluate as 'true'
make[1]: *** [drivers/block/cfq-iosched.o] Error 1
make: *** [drivers/block/cfq-iosched.o] Error 2
Cc: Jeff Garzik <jgarzik@pobox.com>
Cc: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
If cfq is managing a queue and a new scheduler is later selected, it is
possible for the cfqd unplug_work work to be queued after the kblockd
work struct has been flushed. The problem is the ordering of
cfq_shutdown_timer_wq() and blk_put_queue() in cfq_put_cfqd(). The
latter may rearm the work, leaving cfq_kick_queue() with dead data.
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
- Adjust slice values
- Instead of one async queue, one is defined per priority level. This
prevents kernel threads (such as reiserfs/x and others) that run at
higher io priority from conflicting with others. Previously, it was a
coin toss what io prio the async queue got; it was defined by whoever
first set up the queue.
- Let a time slice only begin when the previous slice is completely
done. Previously we could be somewhat unfair to a new sync slice, if
the previous slice was async and had several ios queued. This might
need a little tweaking if throughput suffers a little due to this,
perhaps allowing an overlap of a single request or so.
- Optimize the calling of kblockd_schedule_work() by doing it only when
it is strictly necessary (no requests in driver and work left to do).
- Correct sync vs async logic. A 'normal' process can be purely async as
well, and a flusher can be purely sync as well. Sync or async is now a
property of the class defined and requests pending. Previously writers
could be considered sync when they were really async.
- Get rid of the bit fields in cfqq and crq; use flags instead (see
the sketch after this list).
- Various other cleanups and fixes
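The flags change from the list above looks roughly like this (a
sketch assuming cfqq carries an unsigned long flags word; exact names
may differ from the patch):

    enum cfqq_state_flags {
        CFQ_CFQQ_FLAG_on_rr = 0,
        CFQ_CFQQ_FLAG_wait_request,
        CFQ_CFQQ_FLAG_idle_window,
    };

    #define CFQ_CFQQ_FNS(name)                                        \
    static inline void cfq_mark_cfqq_##name(struct cfq_queue *cfqq)   \
    {                                                                  \
        cfqq->flags |= (1 << CFQ_CFQQ_FLAG_##name);                    \
    }                                                                  \
    static inline void cfq_clear_cfqq_##name(struct cfq_queue *cfqq)  \
    {                                                                  \
        cfqq->flags &= ~(1 << CFQ_CFQQ_FLAG_##name);                   \
    }                                                                  \
    static inline int cfq_cfqq_##name(const struct cfq_queue *cfqq)   \
    {                                                                  \
        return (cfqq->flags & (1 << CFQ_CFQQ_FLAG_##name)) != 0;      \
    }

    CFQ_CFQQ_FNS(on_rr);
    CFQ_CFQQ_FNS(wait_request);
    CFQ_CFQQ_FNS(idle_window);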
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
In cfq_find_next_crq(), cfq tries to find the next request by choosing
one of two requests before and after the current one. Currently, when
choosing the next request, if there's no next request, the next
candidate is NULL, resulting in selection of the previous request. This
results in weird scheduling. Once we reach the end, we always seek
backward.
The correct behavior is using the first request as the next candidate.
cfq_choose_req() already has logic for handling wrapped requests.
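The fix is essentially (names approximate; rb_entry_crq is the crq
lookup helper in cfq-iosched):

    struct cfq_rq *next = NULL, *prev = NULL;
    struct rb_node *rbnext, *rbprev;

    rbprev = rb_prev(&last->rb_node);
    if (rbprev)
        prev = rb_entry_crq(rbprev);

    rbnext = rb_next(&last->rb_node);
    if (rbnext)
        next = rb_entry_crq(rbnext);
    else {
        /* reached the end: use the first request as the next
         * candidate instead of none at all, and let
         * cfq_choose_req() apply its wrap handling */
        rbnext = rb_first(&cfqq->sort_list);
        if (rbnext && rbnext != &last->rb_node)
            next = rb_entry_crq(rbnext);
    }

    return cfq_choose_req(cfqd, next, prev);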
Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This updates the CFQ io scheduler to the new time sliced design (cfq
v3). It provides full process fairness, while giving excellent
aggregate system throughput even for many competing processes. It
supports io priorities, either inherited from the cpu nice value or set
directly with the ioprio_get/set syscalls. The latter closely mimic
set/getpriority.
This import is based on my latest from -mm.
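From userspace, the syscalls can be driven directly. A minimal
example (the constants mirror the kernel's ioprio definitions; the
SYS_ioprio_set number assumes headers that already know the syscall):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #define IOPRIO_CLASS_SHIFT          13
    #define IOPRIO_PRIO_VALUE(cl, data) (((cl) << IOPRIO_CLASS_SHIFT) | (data))
    #define IOPRIO_WHO_PROCESS          1
    #define IOPRIO_CLASS_BE             2

    int main(void)
    {
        /* best-effort class, priority level 4, for this process */
        if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                    IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 4)) < 0) {
            perror("ioprio_set");
            return 1;
        }
        return 0;
    }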
Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
sysfs: fix drivers/block so that if an attribute doesn't implement a
show or store method, read/write will return -EIO instead of 0 or
-EINVAL.
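The per-attribute pattern is along these lines (a sketch;
queue_sysfs_entry/to_queue follow the usual sysfs_ops boilerplate in
the block code, so treat the names as illustrative):

    static ssize_t
    queue_attr_show(struct kobject *kobj, struct attribute *attr, char *page)
    {
        struct queue_sysfs_entry *entry = to_queue(attr);
        struct request_queue *q =
            container_of(kobj, struct request_queue, kobj);

        if (!entry->show)
            return -EIO;    /* previously 0 or -EINVAL */

        return entry->show(q, page);
    }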
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>