The upcoming introduction of the NIR->vec4 pass will require that some
NIR lowering passes are enabled or disabled depending on the type of
shader (scalar vs. vector).
With this patch we pass an 'is_scalar' flag into the process of
constructing the NIR, letting the calling context decide how the shader
should be handled.
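As a rough sketch (the exact entry point and parameter list here are
assumptions, not the actual Mesa prototype), the idea is:

    /* The caller, who knows which backend will consume the shader,
     * passes the scalar/vector decision down into NIR construction. */
    nir_shader *
    brw_create_nir(struct brw_context *brw,
                   const struct gl_shader_program *shader_prog,
                   struct gl_program *prog,
                   gl_shader_stage stage,
                   bool is_scalar);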
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
It allocates the registers local to a function in a nir_locals map,
then emits all of the function's control-flow blocks.
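A simplified sketch of the shape of this (helper and field names are
assumptions based on the description above, not the final code):

    void
    vec4_visitor::nir_emit_impl(nir_function_impl *impl)
    {
       /* One dst_reg per NIR register local to this function. */
       nir_locals = ralloc_array(mem_ctx, dst_reg, impl->reg_alloc);

       foreach_list_typed(nir_register, reg, node, &impl->registers) {
          unsigned array_elems =
             reg->num_array_elems == 0 ? 1 : reg->num_array_elems;
          nir_locals[reg->index] =
             dst_reg(GRF, alloc.allocate(array_elems));
       }

       /* Then walk the function's control-flow list. */
       nir_emit_cf_list(&impl->body);
    }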
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
As with the other variable setups, system values initialize the
corresponding register inside a 'nir_system_values' map, which is then
queried for the appropriate register when the various system value
intrinsics are processed.
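A minimal sketch of both halves (the enum and constructor names are
real Mesa identifiers, but the surrounding fragments are illustrative):

    /* Setup half: allocate the register once and remember it. */
    nir_system_values[SYSTEM_VALUE_VERTEX_ID] =
       *make_reg_for_system_value(SYSTEM_VALUE_VERTEX_ID,
                                  glsl_type::int_type);

    /* Intrinsic half: just look the register up again. */
    case nir_intrinsic_load_vertex_id:
       src = src_reg(nir_system_values[SYSTEM_VALUE_VERTEX_ID]);
       break;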
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
The new virtual method is more flexible; it has the signature:
dst_reg *make_reg_for_system_value(int location, const glsl_type *type);
v2 (Jason Ekstrand):
Use the new version in unit tests so make check passes again
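A hypothetical call-site sketch showing why this helps the NIR path,
which has intrinsics rather than ir_variable objects to work from:

    /* All that is needed is the SYSTEM_VALUE_* location and a type. */
    dst_reg *reg =
       make_reg_for_system_value(SYSTEM_VALUE_VERTEX_ID,
                                 glsl_type::int_type);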
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
This implementation sets up a map from input variable offsets to source
registers that are already initialized with the corresponding register
offset. The map is then queried when processing load_input intrinsics,
to obtain the source register from which the input data will be loaded.
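A simplified sketch of the setup side (names follow the description
above; the real code may differ in details):

    void
    vec4_visitor::nir_setup_inputs(nir_shader *shader)
    {
       nir_inputs = ralloc_array(mem_ctx, src_reg, shader->num_inputs);

       foreach_list_typed(nir_variable, var, node, &shader->inputs) {
          int offset = var->data.driver_location;
          unsigned size = type_size(var->type);
          for (unsigned i = 0; i < size; i++) {
             /* Pre-bake the register offset into each map entry. */
             nir_inputs[offset + i] =
                src_reg(ATTR, var->data.location + i, var->type);
          }
       }
    }

The load_input handler then only has to index this map with the
intrinsic's offset.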
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
The type_size() method is currently accessible only in the implementation
of vec4_visitor. Since we need to reuse it in the upcoming NIR->vec4 pass,
let's make it a method of the class instead.
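Sketch of the resulting declaration (the base class shown here is an
assumption):

    class vec4_visitor : public backend_shader
    {
    public:
       /* Previously a file-local helper in brw_vec4_visitor.cpp; as a
        * member it is now reachable from brw_vec4_nir.cpp as well. */
       int type_size(const struct glsl_type *type);
       /* ... */
    };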
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
The NIR->vec4 pass will be activated only if both of the following
conditions are met (see the sketch below):
* the INTEL_USE_NIR environment variable is defined and positive
  (1 or true);
* the stage is the vertex shader (support for geometry shaders and
  ARB_vertex_program will be added later).
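Illustratively, the gate could look like this (the helper itself is a
sketch; brw_env_var_as_boolean() is assumed to be the existing i965
utility for such flags):

    static bool
    use_nir_for_stage(gl_shader_stage stage)
    {
       /* Geometry shaders and ARB_vertex_program come later. */
       if (stage != MESA_SHADER_VERTEX)
          return false;

       return brw_env_var_as_boolean("INTEL_USE_NIR", false);
    }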
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
This patch adds a brw_vec4_nir.cpp file filled with entry point methods
for the main functionality, following a structure similar to
brw_fs_nir.cpp. Subsequent patches in this series will add the
implementations of these methods incrementally.
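For illustration, the stubs might look like this (bodies arrive in the
later patches):

    void
    vec4_visitor::nir_emit_if(nir_if *if_stmt)
    {
       /* Filled in by a subsequent patch in the series. */
    }

    void
    vec4_visitor::nir_emit_loop(nir_loop *loop)
    {
       /* Filled in by a subsequent patch in the series. */
    }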
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
I'm not sure what the true meaning of "The rounding mode may vary." is,
but it is the case that the IROUND() path rounds differently than the
other paths (and does it wrong, at that).
Like _mesa_roundeven{f,}(), just add and use _mesa_lroundeven{f,}(),
which has known semantics.
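A minimal sketch of the float variant (assuming the default
round-to-nearest-even FP rounding mode, which lrintf() honors):

    #include <math.h>

    static inline long
    _mesa_lroundevenf(float x)
    {
       return lrintf(x);
    }

With these semantics, _mesa_lroundevenf(2.5f) == 2 and
_mesa_lroundevenf(3.5f) == 4: ties round to the even neighbor.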
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
This circumvents a problem that occurs when the LINEAR_ALIGNED array
mode is selected on a TEXTURE_2D RAT: that configuration causes
MEM_RAT STORE_TYPED to write to incorrect locations.
The only values allowed are 0 and 1, and the value is checked before
it is assigned.
This is a copy of 8eeca7a56c, which seems to have been made to the GLSL
IR type after it was copied for use in NIR but before NIR landed.
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Read-only and write-only image arguments are recognized and
distinguished.
Attributes of the image arguments are passed to the kernel as implicit
arguments.
parse_program_resource_name() returns -1 when the index is invalid;
this needs to be tested before assigning the value to the unsigned
array_index.
In link_varyings.cpp (the other place parse_program_resource_name() is
used), the value is assigned to an unsigned variable right after the -1
check, so it seems the long return type is used only so -1 can be
returned, rather than because ridiculously large index values are
actually expected.
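The required pattern, sketched (variable names here are assumed):

    long array_index_long =
       parse_program_resource_name(name, &base_name_end);
    if (array_index_long < 0)
       return false;   /* invalid index: bail before the unsigned copy */

    unsigned array_index = array_index_long;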
Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Under GLES 3.1 it should be possible to bind a texture with the target
GL_TEXTURE_2D_MULTISAMPLE.
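For example, this must now succeed in a GLES 3.1 context instead of
raising GL_INVALID_ENUM:

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, tex);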
Signed-off-by: Marta Lofstedt <marta.lofstedt@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
The anv_block_pool data structure suffered from the exact same race as the
state pool. Namely, that the uniqueness of the blocks handed out depends
on the next_block value increasing monotonically. However, this invariant
did not hold thanks to our block "return" concept.
The previous algorithm had a race because of the way we were using
__sync_fetch_and_add for everything. In particular, the concept of
"returning" over-allocated states in the "next > end" case was completely
bogus. If too many threads were hitting the state pool at the same time,
it was possible to have the following sequence:
A: Get an offset (next == end)
B: Get an offset (next > end)
A: Resize the pool (now next < end by a lot)
C: Get an offset (next < end)
B: Return the over-allocated offset
D: Get an offset
in which case D will get the same offset as C. The solution to this race
is to get rid of the concept of "returning" over-allocated states.
Instead, the thread that gets a new block simply sets the next and end
offsets directly and threads that over-allocate don't return anything and
just futex-wait. Since you can only ever hit the over-allocate case if
someone else hit the "next == end" case and hasn't resized yet, you're
guaranteed that the end value will get updated and the futex won't block
forever.
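Greatly simplified sketch of the fixed scheme (the real code in
anv_allocator.c packs next/end into a single 64-bit word and updates it
with a CAS; grow_pool(), futex_wait() and futex_wake() are illustrative
helper names):

    for (;;) {
       uint32_t offset = __sync_fetch_and_add(&pool->next, block_size);
       if (offset + block_size <= pool->end)
          return offset;                 /* fast path: block fits */

       if (offset == pool->end) {
          /* We won the race: grow the pool, set next/end directly,
           * then wake anyone who over-allocated in the meantime. */
          grow_pool(pool);
          futex_wake(&pool->end, INT32_MAX);
          return offset;
       }

       /* Over-allocated: nothing is "returned"; wait for the grower
        * to publish the new end, then retry. */
       futex_wait(&pool->end, pool->end);
    }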
An earlier commit added an extra dup(fd) to fix a ZaphodHeads issue.
However, it did not consider the (very unlikely) case where we might
end up with a valid fd == 0.
Fixes: 28dda47ae4d ("winsys/radeon: Use dup fd as key in drm-winsys hash table to fix ZaphodHeads.")
Cc: 10.6 <mesa-stable@lists.freedesktop.org>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Mario Kleiner <mario.kleiner.de@gmail.com>
We have pools, so we should be using them. Also, I think this will help
keep valgrind from getting confused when we end up mixing in system
allocations such as those from malloc/free and mmap/munmap.