Oh, and any chance we could get the source to the sine filter shader? Is it not properly respecting alpha channels or something? I'm trying to dig into an issue where it's fetching texels that are black (i.e., the clear color) even though the corresponding pixels of the plane are empty.
Merely transforming a triangle's corners from 3D into 2D coordinates does not make its texture render with correct perspective. Affine transforms can only map the texture linearly, whereas in perspective rendering the texture is stretched more for nearer pixels.
Take for example points A and B, representing the right edge of a quad with texture coordinates (1,0) and (1,1) respectively. The points A' and B' are their projections into 2D space, and A is twice as far away from the viewer as B. The halfway point between them, P = (A+B)/2, has texture coordinate (1, 0.5) but is only 50% farther away than B. The projected point P', however, is not equal to (A'+B')/2, because the perspective transform is A' = <A.x, A.y> * (z0/A.z); P' works out to <A.x+B.x, A.y+B.y> * z0/(A.z+B.z). The way the math shakes out, P' is closer to A' than to B' because A is farther away, and therefore the halfway point of the texture appears closer to A. Because 2D affine transforms are strictly linear, transforming a textured triangle would put the texture's halfway point at the screen-space midpoint (A'+B')/2 instead of at P', breaking the 3D illusion. (A code sketch of the correct interpolation follows below.)

This is roughly the code for the wave (unbound) pixel shader:

    static const float PI = 3.14159265f;

The bound version uses a different sampler that clamps the texture at the edges instead of going to 0 (fully transparent). In the shader:

- blurWeight is the weight of each wave
- blurOffset is the phase
- waveDir is the unit vector of the wave's direction in texture space
- waveDir2 is the unit vector of the distortion (the same as waveDir when WAVE_SIDEWAYS is not set)
- wavePeriod is the period of the wave in texture space
- kernelSize is the number of waves

Up to 10 waves can be combined in the routine that processes the filters.
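To make the perspective point concrete, here is a minimal sketch of perspective-correct interpolation. This illustrates the standard technique, not anything from BYOND's renderer: interpolate u/z, v/z, and 1/z linearly in screen space, then divide per pixel.

    // Illustration only (not BYOND internals): recover perspective-correct
    // texture coordinates from attributes that interpolate linearly in 2D.
    // uvzA and uvzB hold (u/z, v/z, 1/z) at the projected endpoints A' and B'.
    float2 perspectiveUV(float3 uvzA, float3 uvzB, float t)
    {
        float3 uvz = lerp(uvzA, uvzB, t); // affine interpolation on screen
        return uvz.xy / uvz.z;            // per-pixel divide restores depth
    }

Plugging in the example above (A.z = 2*B.z, t = 0.5), the screen-space midpoint of A'B' samples texture coordinate (1, 2/3) rather than (1, 0.5), which is exactly the nonlinearity a 2D affine transform cannot reproduce.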
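Since only the first line of the shader appears above, here is a speculative reconstruction pieced together from the parameter descriptions. The cbuffer layout, register assignments, and exact wave formula are assumptions, not the actual BYOND source:

    #define MAX_WAVES 10
    static const float PI = 3.14159265f;

    Texture2D tex : register(t0);
    // unbound version: border addressing that fades to transparent black;
    // the bound version would use clamp addressing instead
    SamplerState texSampler : register(s0);

    cbuffer WaveParams : register(b0)
    {
        float2 waveDir[MAX_WAVES];    // unit direction of each wave in texture space
        float2 waveDir2[MAX_WAVES];   // unit direction of the distortion
        float  wavePeriod[MAX_WAVES]; // period of each wave in texture space
        float  blurOffset[MAX_WAVES]; // phase of each wave
        float  blurWeight[MAX_WAVES]; // weight (amplitude) of each wave
        int    kernelSize;            // number of active waves, up to 10
    };

    float4 main(float2 uv : TEXCOORD0) : SV_Target
    {
        float2 offset = float2(0, 0);
        for (int i = 0; i < kernelSize; i++)
        {
            float phase = 2 * PI * dot(uv, waveDir[i]) / wavePeriod[i] + blurOffset[i];
            offset += waveDir2[i] * (blurWeight[i] * sin(phase));
        }
        return tex.Sample(texSampler, uv + offset);
    }

If the unbound sampler's border really is transparent black, samples displaced past the texture's edge would come back as (0,0,0,0), which may be related to the black-texel question at the top of the thread.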
Good gads, I'm an idiot. I just realized that now that waveDir and waveDir2 are separated, I can fold wavePeriod directly into waveDir in the preliminary calculations.
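In terms of the sketch above, that fold would just scale each wave's direction vector by its frequency once, ahead of time. Again, this is a guess at code we cannot see:

    // Hypothetical preliminary step, run once when the filter changes:
    //     waveDir[i] *= 2 * PI / wavePeriod[i];
    // The per-pixel loop can then drop the divide entirely:
    float2 waveOffset(float2 uv)
    {
        float2 offset = float2(0, 0);
        for (int i = 0; i < kernelSize; i++)
        {
            // wavePeriod now lives inside waveDir's magnitude
            float phase = dot(uv, waveDir[i]) + blurOffset[i];
            offset += waveDir2[i] * (blurWeight[i] * sin(phase));
        }
        return offset;
    }

The fold works because the dot product is linear: dot(uv, waveDir * k) == dot(uv, waveDir) * k. It only became possible once waveDir stopped doubling as the distortion direction, since scaling it no longer changes the distortion.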
Like I said, I manually calculate the textures. It works similarly to how the old SNES mode works with regard to scanlining (but in a more static fashion).
It's still pretty ugly without anisotropic filtering, though, because the distance of the fragment to the camera is constant. (snip, figured out a workaround, not the shader's fault)
In response to Somepotato
Somepotato wrote:
It's still pretty ugly without anisotropic filtering, though, because the distance of the fragment to the camera is constant.

That's what Lummox was getting at with the perspective thing.
In response to Ter13
I know. It's also why I haven't released anything regarding it, because the texture generation is the biggest performance hurdle.
It's probably cheaper to have a per-pixel rasterizer; my 2600 emulator managed to get decent performance out of that, so what could possibly go wrong?
In response to Somepotato
Somepotato wrote:
I know. It's also why I haven't released anything regarding it, because the texture generation is the biggest performance hurdle.

I had a lot of trouble with a rasterizer. The peak performance I could work out was roughly 4,000 pixels, but the side effects weren't worth it: I basically had to set the tile size to 1x1 in order to avoid the overhead of unique pixel positions. That's running at 40fps with a 70% CPU load, and only about 3% of the CPU load is collision and raycasting; the rest is just appearance churn. I never could quite get sprite rendering to work nicely. The minute I started incorporating layers into the per-pixel appearance scheme so I could easily just blit transformed sprites, the whole thing started shitting the bed.
I wonder if strip-based rendering would be better, but it'd also increase the amount of appearance churn. Hmmmmmm.
Clearly, we just need to petition for the ability to stream entire framebuffers to clients (e.g., cheap updating of a screen-sized /icon every frame; what could go wrong?)
In response to Somepotato
Somepotato wrote:
I wonder if strip-based rendering would be better, but it'd also increase the amount of appearance churn. Hmmmmmm.

The original DawnCaster uses strip-based rendering; the reason I opted to use per-pixel blitting was so that floors and ceilings could be handled.

http://www.byond.com/games/Ter13/DawnCasterInfestation

The source code is available in the discussion thread for that project if you wanna see how it worked. Most of the code is a modified port of Lode Vandevenne's raycasting tutorial.
The textures are calculated in DM at runtime based on a standard UV-mapped texture image, so it's kind of a hack (and slow!). But it is proper perspective.
Problem is, it's pretty ugly with the low resolution and lack of anisotropic filtering. Fixing that would add an entirely new workload to implement, and at that point it'd be easier to write the softcode rasterizer into a full 3D renderer... but it'd also be much slower.
Also, every quad requires two appearances. The project is incredibly unrealistic for actual use lol, but it has cool factor. Multiplayer is implemented with Topic communication, because having multiple clients connected to this disaster crashes every client (so it's one server per client, but I just distribute a DMB for people to run).