Bug Fixes for the sched-perf Branch

I just pushed a rebased version of the sched-perf branch to a new branch called sched-perf-rebase at git://anongit.freedesktop.org/~tstellar/mesa.

This new branch contains bug fixes for the old branch and has no piglit regressions vs. master on my RC410 and RV515 cards. In fact, this branch has one additional pass on both cards.

This new branch should reduce fragment shader program size by about 10-20%. Shaders with branches should see the most improvement. There are three major changes to the compiler that are driving these improvements.

The first change is that the dataflow analysis for the optimization passes has been unified in a single function: rc_get_readers(). This saves us from having to redo dataflow analysis for every pass, and it made it really easy to add the new optimization passes in this branch.
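To illustrate the idea, here is a hypothetical, much-simplified sketch of what a unified "find the readers of a write" helper in the spirit of rc_get_readers() might look like. The dict-based instruction encoding is invented for this example; the real compiler works on its own IR and also has to handle flow control, swizzles, and predication.

```python
def get_readers(program, writer_index):
    """Return indices of instructions that read the value written at
    writer_index, stopping once every written channel is overwritten.

    Instructions are dicts:  {"dst": (reg, chans), "srcs": [(reg, chans), ...]}
    where chans is a string like "x" or "xy".  (Invented encoding.)
    """
    dst_reg, dst_chans = program[writer_index]["dst"]
    live = set(dst_chans)              # channels of this write still live
    readers = []
    for i in range(writer_index + 1, len(program)):
        inst = program[i]
        # Reads happen before the instruction's own write takes effect.
        for src_reg, src_chans in inst["srcs"]:
            if src_reg == dst_reg and live & set(src_chans):
                readers.append(i)
                break
        reg, chans = inst["dst"]
        if reg == dst_reg:
            live -= set(chans)         # these channels now hold a newer value
            if not live:
                break                  # the original write is dead past here
    return readers
```

With one query like this, every optimization pass can ask "who reads this write?" instead of re-implementing its own dataflow walk.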

The second change concerns how scalar operations are handled. Fragment shader instructions for R300-R500 cards are actually composed of two sub-instructions: one vector and one scalar. The vector instruction writes to the xyz components of a register, and the scalar instruction writes to the w component. Currently, in the master branch, an instruction like MOV Temp[0].x, Temp[1].x is treated as a vector instruction, since it writes to the x component. This wastes the vector unit on what is really a scalar operation. One of the optimizations I added converts MOV Temp[0].x, Temp[1].x to MOV Temp[0].w, Temp[1].x, which lets us make use of the scalar unit and leaves the vector unit free for actual vector instructions. Since there are usually more vector instructions than scalar ones, we can usually fill the freed vector slot with another instruction, which reduces the overall program size by one instruction.
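A toy sketch of this rewrite, using the same invented dict encoding as above (single-channel sources only; a real pass would also have to verify that the w component is actually free before moving a write there, which this sketch simply assumes):

```python
def move_to_scalar_unit(program):
    """Rewrite instructions that write a single x/y/z channel so they
    write .w instead (the scalar unit), then patch later readers of
    that channel.  Assumes .w of the register is free.

    Instructions are dicts:  {"dst": (reg, chans), "srcs": [(reg, chan), ...]}
    with single-channel sources.  (Invented encoding.)
    """
    for i, inst in enumerate(program):
        reg, chans = inst["dst"]
        if len(chans) == 1 and chans in "xyz":
            old_chan = chans
            inst["dst"] = (reg, "w")       # use the scalar unit instead
            for later in program[i + 1:]:
                # Redirect reads of the old channel to .w.
                later["srcs"] = [
                    (r, "w" if (r == reg and c == old_chan) else c)
                    for (r, c) in later["srcs"]
                ]
                # Stop patching once the old channel is written again.
                if later["dst"] == (reg, old_chan):
                    break
    return program
```

After the rewrite, the vector slot of the instruction is empty and the scheduler can pair another vector operation into it.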

The third big change is converting the code to a quasi static single assignment (SSA) form prior to instruction scheduling. SSA basically means that each register is written only once. The main advantage of SSA is that it makes dataflow analysis much easier; however, in the r300 compiler we aren’t really using it for dataflow analysis. We are using it because it helps our scheduler do a better job of pairing instructions and making use of the vector and scalar units on every cycle. I say quasi-SSA because you can’t really turn vector instructions into SSA unless you break them apart into individual scalar instructions. For example, with vector instructions you might run into cases like this:

MOV Temp[4].x, Temp[5].x
MOV Temp[4].y, Temp[6].x
MOV Temp[7].xy, Temp[4].xy

In true SSA, each register is written only once, so we would need to rewrite the 2nd instruction to use a fresh register, like this:

MOV Temp[4].x, Temp[5].x
MOV Temp[8].y, Temp[6].x
MOV Temp[7].xy, Temp[4].xy

Oops, now we broke the program. Instruction 3 reads from Temp[4].y, but that component is never written. We could change instruction 3 to
MOV Temp[7].xy, Temp[8].xy, but then it would read from Temp[8].x, which isn’t written either. A single source operand can only name one register, and here the two components it needs now live in two different registers. So, in the r300 compiler we convert everything to SSA unless we see code like the example above. In that case we just ignore it and don’t bother trying to rewrite it.
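The "is it safe to rename?" check is the interesting part of a quasi-SSA pass like this. Here is a hypothetical sketch of it, using the same invented dict encoding as earlier; the actual rename step would then just give each write to a safe register a fresh name.

```python
def can_rename(program, reg):
    """A register can be put in SSA form only if no single read mixes
    channels produced by two different writes (the problem case shown
    in the example above).

    Instructions are dicts:  {"dst": (reg, chans), "srcs": [(reg, chans), ...]}
    with chans a string like "x" or "xy".  (Invented encoding.)
    """
    last_writer = {}                       # channel -> index of writing inst
    for i, inst in enumerate(program):
        for r, chans in inst["srcs"]:
            if r == reg:
                writers = {last_writer.get(c) for c in chans}
                if len(writers) > 1:
                    return False           # one read spans multiple writes
        r, chans = inst["dst"]
        if r == reg:
            for c in chans:
                last_writer[c] = i
    return True
```

Run on the three-instruction example above, this flags Temp[4] as unsafe (instruction 3 reads components written by two different instructions), so that register is simply left out of the SSA conversion.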

As I mentioned earlier, these compiler optimizations reduce program size by about 10-20%. Here is an example from the piglit test glsl-fs-atan3:

Category                    master   sched-perf-rebase   fglrx
Total Instructions             111                  93      60
Vector Instructions             81                  65      47
Scalar Instructions             27                  37      47
Flow Control Instructions       20                  20       7
Presubtract Operations           3                   4       4
Temporary Registers             10                   9       6

The fglrx results come from the AMD Shader Analyzer v1.42.

So that's about a 15% decrease in shader size for this test, but we are still quite far away from fglrx. The good news, however, is that I can see lots of areas for improvement. The big gap between the r300 compiler and fglrx is mostly due to our very inefficient use of flow control instructions, which costs us about 16 instructions in this shader. There are a few other optimizations we could be doing better too.

I’m really not a GPU performance expert, so I don’t know how smaller shader programs will translate into better performance, at least in terms of frames per second. Smaller shaders mean less data needs to be submitted to the graphics processor, which should help, but I think most of the performance bottlenecks are in other places in the driver.

I’m going to do more testing of the sched-perf-rebase branch before I merge it into master, but I feel pretty good about it now. Also, as a bonus, while working on these performance improvements I found and fixed five non-performance-related bugs, which I hope will resolve some of the outstanding r300g fdo bugs.

r300 Compiler Optimization Improvements

I just pushed a branch called sched-perf to git://anongit.freedesktop.org/~tstellar/mesa.
It contains various optimization improvements:

  • Handling of flow control instructions in dataflow analysis.
  • More aggressive use of presubtract operations.
  • Some scheduler improvements.

I’m seeing about a 10% decrease in shader program size in most piglit tests with this branch, but I haven’t done much testing with real applications. A few weeks ago I added a debug option for dumping shader stats (RADEON_DEBUG=pstat), which I’ve been using with piglit; it is helpful for comparing compiler performance between different branches.

r300 Compiler Bug Reporting

Here are some tips for filing a good bug report:

Step 1: Is it a vertex or a fragment shader?
If running the program with RADEON_NO_TCL=1 fixes the problem, then a vertex shader is probably broken; if it doesn’t, then it is probably a bad fragment shader.

Step 2: Does running with RADEON_DEBUG=noopt help?
If it does, then the bug is in one of the optimization passes. If it doesn’t help, then the bug is in the main part of the compiler.

Step 3: Collect debug output.
There are three debug options that are useful for collecting debug output: FP, VP, and PSTAT. FP dumps debug output for fragment shaders, VP dumps debug output for vertex shaders, and PSTAT dumps statistics for each compiled shader. Here are the debug logs you should attach to a bug:

Does noopt fix it?   Fragment Shader   Vertex Shader
Yes                  pstat,fp,noopt
No                   pstat,fp,noopt    pstat,vp,noopt
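The triage steps above can be sketched as a small driver script. This is just a convenience wrapper around the driver's environment switches; "./my_gl_app" is a placeholder for whatever program shows the bug.

```python
import os
import subprocess

def run_with_env(cmd, extra_env):
    """Run cmd with extra environment variables merged in; return its exit code."""
    env = dict(os.environ, **extra_env)
    return subprocess.call(cmd, env=env)

# Step 1: software vertex processing -- if this fixes it, suspect a broken
# vertex shader; otherwise suspect the fragment shader.
#   run_with_env(["./my_gl_app"], {"RADEON_NO_TCL": "1"})
#
# Step 2: disable compiler optimizations -- if this fixes it, the bug is
# in an optimization pass.
#   run_with_env(["./my_gl_app"], {"RADEON_DEBUG": "noopt"})
#
# Step 3: capture the debug output to attach to the bug report, e.g.:
#   run_with_env(["./my_gl_app"], {"RADEON_DEBUG": "pstat,fp,noopt"})
```

Redirect stderr to a file when collecting the Step 3 output so the logs are easy to attach to the bug.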