Vertex processing can be further subdivided into a few major blocks: notably transformation, lighting, and, to a limited extent, texture coordinate transformation. Transformation takes position data as it is stored in a vertex structure and converts it into a 'screenspace' position. 'Screenspace' refers to the 2D plane that represents the viewer's window onto the world: in the real world it is the front of the monitor; in the game world it is the viewer's viewpoint.
The pixel rasterizer takes all of the information passed through from the vertex processor and computes a final pixel color from these values. A basic example of its usage is to take the diffuse color and multiply it with the texture color (using the texture coordinates to retrieve a color from the current texture).
Here's a sample effect file containing a single vertex shader:
Technique T0
{
    Pass P0
    {
        VertexShader =
            decl
            {
                stream 0;
                float v0[4];  // Position RH
                float v7[2];  // Texture Coord1
                float v8[2];  // Texture Coord2
                float v9[2];  // Texture Coord3
                float v10[2]; // Texture Coord4
                float v11[2]; // Texture Coord5
                float v12[2]; // Texture Coord6
            }
            asm
            {
                vs.1.1
                def c0, 1, 1, 0, 0
                def c1, 2, 2, 1, 1
                sge r0.xzw, v0, c0
                slt r0.y, v0, c0
                mul r0, r0, c1
                sub r0, r0, c0
                mov oPos, r0
                mov oT0, v7
            };
    }
}