I've started to play with the idea of using PTMs (Polynomial Texture Maps - ) to encode the offsets used in the parallax mapping technique (called offset mapping by some) that's all the rage at the moment.

To start with I stupidly had two PTMs, one for a u offset and one for a v offset. Then it dawned on me (doh) that as you already have the eye's direction, you can simply use one PTM function that gives you a scale to apply to the eye vector after it has been projected onto the texture-space plane. That's really cool, as it only takes 6 coefficients per texel, so you only need two three-component textures, for example.

It gets even more awesome when you realise that this gives you your self-shadowing as well - no need for horizon maps (Humus!). This is because, given a direction vector, this function tells you what is _occluding_ the current texel. So replace the eye vector with the light vector, and if your scale offset comes out as zero, the pixel is not self-shadowed(!). Pseudo pixel shader code would roughly be:

    Vec2 occluderOccluderUV = PTMFunc(occluderUV, tsLight);
    // if the offset comes out as zero, the pixel is not
    // self-shadowed - it can see the light!

With the cheap approximation given in the thread ( ) you could of course try to compute self-shadowing as well - but I'm not sure how well that would work out.

Some people may prefer to use Spherical Harmonics instead of PTMs. You could argue that for certain kinds of surfaces the approximated PTM wouldn't work very well, so in those cases I suggest the following as a quality improvement: you have some number of 3D textures that contain the coefficients of whatever approximating function you wish to use - the axes represent u, v, and phi respectively, where (u, v) is the current texel coordinate and phi is the azimuth angle of the eye/light vector's spherical coordinates, going from 0 to 360 degrees. Your approximating function is then a function of theta only - the second angle of the spherical coordinates, going from 0 to 180 degrees. This should give you a better approximation (your function now approximates a function with fewer parameters, and you have introduced more sample data) at the cost of increased memory usage.

It's interesting - I was surprised that the original demo mentioned in the thread worked so well, but when you think about it the approximation works because the heights are relatively small (as mentioned in the thread), and also because the texels that occlude will always be at _least_ the same height as the texel being occluded.

A future where you have some mad application that you feed your real-world sampled data to, which then goes off and decides per texel, per surface, or per vertex (for particularly bland, uninteresting surfaces) which approximation function to use, is not far away - indeed I believe the game STALKER is already leading the way in this direction, though not yet at such a fine-grained level.

This parallax effect seems like it will be the next-gen lens flare - but more useful.

---

I don't remember you joining the forum (i.e. your first post), but it's good to have another competent developer in here. Did you notice my posts in the other "offset mapping" thread? I was talking about using PTMs in the same way, as well as using the same map for shadowing. Glad to see someone else was thinking on the same wavelength, and even implemented the ideas! Did you use HP's fitter program? If so, how did you figure out the format they used in the PTM file? If not, what method did you use?

Your lighting algorithm is not quite right, though. A point that is off the ground can still be shadowed by a higher point, yet that first point will return an offset. Your pseudocode seems to describe something slightly different from what you're saying, but that also looks wrong. If you send me a private message with your email, I'll send you a description of the correct method.