vk/allocator: Fix a data race in the state pool

The previous algorithm had a race because of the way we were using
__sync_fetch_and_add for everything.  In particular, the concept of
"returning" over-allocated states in the "next > end" case was completely
bogus.  If too many threads were hitting the state pool at the same time,
it was possible to have the following sequence:

A: Get an offset (next == end)
B: Get an offset (next > end)
A: Resize the pool (now next < end by a lot)
C: Get an offset (next < end)
B: Return the over-allocated offset
D: Get an offset

in which case D will get the same offset as C.  The solution to this race
is to get rid of the concept of "returning" over-allocated states.
Instead, the thread that gets a new block simply sets the next and end
offsets directly and threads that over-allocate don't return anything and
just futex-wait.  Since you can only ever hit the over-allocate case if
someone else hit the "next == end" case and hasn't resized yet, you're
guaranteed that the end value will get updated and the futex won't block
forever.

commit fd64598462 (parent 481122f4ac)
Author: Jason Ekstrand
Date:   2015-08-03 00:38:48 -07:00
5 insertions(+), 5 deletions(-)
@@ -424,15 +424,15 @@ anv_fixed_size_state_pool_alloc(struct anv_fixed_size_state_pool *pool,
    if (block.next < block.end) {
       return block.next;
    } else if (block.next == block.end) {
-      new.next = anv_block_pool_alloc(block_pool);
-      new.end = new.next + block_pool->block_size;
-      old.u64 = __sync_fetch_and_add(&pool->block.u64, new.u64 - block.u64);
+      offset = anv_block_pool_alloc(block_pool);
+      new.next = offset + pool->state_size;
+      new.end = offset + block_pool->block_size;
+      old.u64 = __sync_lock_test_and_set(&pool->block.u64, new.u64);
       if (old.next != block.next)
          futex_wake(&pool->block.end, INT_MAX);
-      return new.next;
+      return offset;
    } else {
       futex_wait(&pool->block.end, block.end);
-      __sync_fetch_and_add(&pool->block.u64, -pool->state_size);
       goto restart;
    }
 }