Motivation
Everybody knows how to generate a polygonal sphere... but not everybody knows how to make good polygonal spheres. I mean, spheres that are usable for rendering
with a rasterizer using, say, a texture or a normal map. Often one resorts to polar coordinates to construct the sphere and the texture mapping coordinates,
since polar coordinates are the natural parametrization of a sphere. The problem is that this parametrization has
singularities at the poles, and the mapping gets quite contractive (high derivative) in the surroundings of the poles, making it useless for texture
mapping (the texture gets stretched a lot there). It also causes thin triangles to appear at the poles (unless a nonuniform sampling of the two parameters
is used), as can be noted in the image on the right. There are even more disadvantages to the polar parametrization; for example, it involves
trigonometric functions, meaning that if one wants to compute tangents to the surface (for normal mapping, for example) one has to resort to more
trigonometric functions, which are usually too expensive to be abused in realtime shaders.
So, while the natural parametrization of the sphere is the polar one, we will probably want to use something other than the mathematically obvious way
of thinking. Here we will use a piecewise algebraic parametrization (good for fast shader execution) that suffers less
parameter space distortion (good for texturing). On top of that, almost everybody who has programmed a cubemap texture fetch is familiar with it.




[Figure: a cube, and the cube after normalization]

The idea is simple: take a cube, and "normalize" it, so that each point on the surface is moved towards the center of the cube until its distance to the center becomes one. In
other words, generate a cube with many vertices, and normalize each of them. The trick will of course work with any convex shape, but the cube, while not
the simplest one (that would be the tetrahedron), is basic enough and very easy to manage.
For example, since each face of the cube has a simple (and natural) rectangular parametrization (0,1)x(0,1), we automatically get a simple parametrization
domain for the sphere too. The rest of the article is about this parametrization, and about the way one can extract the corresponding tangent space basis vectors
analytically, in an exact and cheap way, even per pixel if necessary.


The sphere description
We start by taking one of the faces of the cube (actually the whole article will deal with one of the six faces; the other faces follow identical steps). For
example, the +z face (I assume the OpenGL coordinate system, ie, the one you learn at school: x = right, y = up, z = towards you). Let's call the two parameters of
the face surface domain s and t, with values in the interval [0,1]. We want our cube to be centered at the origin, and ranging
over the [-1,1] values, so our surface points p are expressed as:

p(s,t) = ( 2s-1, 2t-1, 1 )

Next we normalize this point on the surface of the cube to get a point q on the surface of the sphere:

q = p / |p|

which can be rewritten as q = k·( x, y, 1 ) with

x = 2s-1,  y = 2t-1,  k = 1/sqrt(x² + y² + 1)

The inverse mapping (the one that goes from q back to parameter space) is equivalent to casting a
ray with direction q into the plane z=1, and takes the form

s = (q.x/q.z + 1)/2,  t = (q.y/q.z + 1)/2

and that's actually what the cubemapping units of GPUs do to fetch the texel data.

The real deal
Now that we understand what the sphere really is, it's time to compute its tangent space. Actually, we will compute a tangent space basis such that its
basis vectors follow the texture coordinate parametrization, which is what you need to do normal/bump mapping. Basically, the surface tangent vectors
will be aligned with the directional derivatives of the sphere (think of it as follows: when you slightly change s or t the
point q moves, and the difference between the new and old position of q is your tangent vector... now rewrite the statement with
infinitely small changes in s or t, and you get the derivative). The derivative with respect to s will give a tangent u, and the derivative with respect
to t will give the second tangent v (sometimes called the binormal). Now take care: these two vectors will not necessarily be orthogonal
to each other, although they will be linearly independent, will thus define a tangent plane to the sphere, and will of course be orthogonal to the normal
n to the surface.
We start the maths by writing

q = k·( x, y, 1 ),  k = 1/sqrt(x² + y² + 1)

as before, and differentiating with respect to s and t to get u and v:

u = ∂q/∂s = ∂q/∂x · ∂x/∂s,  v = ∂q/∂t = ∂q/∂y · ∂y/∂t

with

∂q/∂x = k³·( 1+y², -xy, -x ),  ∂q/∂y = k³·( -xy, 1+x², -y )

and

∂x/∂s = ∂y/∂t = 2

of course. So,

u = 2k³·( 1+y², -xy, -x )
v = 2k³·( -xy, 1+x², -y )

If only the direction of the tangent basis vectors is of interest (for example because the application needs normalized tangent space vectors),
then one can skip the factor 2k³ common to the three components. The expressions then get simpler to code:

u ∝ ( 1+y², -xy, -x )
v ∝ ( -xy, 1+x², -y )

Note that these two vectors are not perpendicular (check that

u·v = -xy·( 1 + x² + y² )

which is nonzero in general), although their cross product is

u×v = ( 1 + x² + y² )·( x, y, 1 )

which means that it indeed points in the direction of the surface normal n.
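A sketch of the simplified tangent evaluation, with the common factor 2k³ dropped (face_tangents is a hypothetical name; again only the +z face):

```c
#include <math.h>

/* Unnormalized tangents for the +z face: u follows the s direction,
   v follows the t direction. The common factor 2k³ is dropped, so
   normalize the results if unit-length vectors are needed. */
static void face_tangents(float s, float t, float u[3], float v[3])
{
    float x = 2.0f*s - 1.0f;
    float y = 2.0f*t - 1.0f;
    u[0] = 1.0f + y*y;  u[1] = -x*y;        u[2] = -x;
    v[0] = -x*y;        v[1] = 1.0f + x*x;  v[2] = -y;
}
```

One can verify numerically that u·v = -xy·(1+x²+y²) and that u×v is proportional to (x, y, 1), ie, to the surface normal.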
Implementation
Finally, the implementation is easy; very few computations are needed, none of them complex. The image on the right shows the u and
v vectors computed as just explained. Please note that these tangent vectors are exact: they are not based on differencing
vertex positions or texcoords, they are analytically correct, which means that one can get a per pixel exact tangent basis, if needed. In that case,
one can pass s and t from the vertex shader down to the pixel shader (only two floats, half an interpolator!) and evaluate
u and v there, or perhaps compute them in the vertex shader and pass them to the pixel shader (6 floats...). Take also into
account that u and v are not orthogonal to each other, so some tricks such as transposing the tangent matrix will not work (you
will have to compute the inverse, or better, do the bump lighting computations in object space!).




iñigo quilez 2008
