Geometry Processing

The geometry processing phase is commonly referred to simply as rendering, although technically rendering consists of both geometry processing and shading.

Geometry processing is performed by a plug-in that derives from the class MeshRenderer. The renderer is responsible for pushing geometry data into the OpenGL pipeline set up by the shader.
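
The MeshRenderer interface itself is not documented in this section, so the following C++ skeleton is only a minimal sketch of where such a plug-in would submit its geometry; the RenderMesh method and the MyRenderer class are hypothetical names, not the SDK's actual API.

    // Minimal sketch of a geometry-processing plug-in. The real MeshRenderer
    // interface is not shown in this text, so RenderMesh() is a hypothetical
    // entry point, not the SDK's actual method name.
    class MeshRenderer {
    public:
        virtual ~MeshRenderer() {}
        virtual void RenderMesh() = 0;      // hypothetical virtual hook
    };

    class MyRenderer : public MeshRenderer {
    public:
        void RenderMesh() override {
            // Bind each piece's vertex/index buffers and issue draw calls,
            // feeding geometry into the OpenGL pipeline set up by the shader.
        }
    };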

The default renderer splits the surface into smaller pieces by grouping nearby faces together; the pieces are then pushed into the pipeline one by one. Each piece may have its own material parameters and textures, and no piece spans more than one texture tile.
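
As an illustration of that splitting step, the sketch below buckets faces by texture tile (so no piece crosses a tile boundary) and then cuts each bucket into bounded-size chunks. The Face and Piece types, the maxFacesPerPiece limit, and the absence of real spatial grouping are all simplifications for the example, not the renderer's actual algorithm.

    #include <algorithm>
    #include <cstddef>
    #include <map>
    #include <utility>
    #include <vector>

    struct Face  { int tile; unsigned v[3]; };
    struct Piece { int tile; std::vector<unsigned> faceIndices; };

    std::vector<Piece> SplitIntoPieces(const std::vector<Face>& faces,
                                       std::size_t maxFacesPerPiece = 20000) {
        // Bucket faces by texture tile so no piece spans two tiles.
        std::map<int, std::vector<unsigned>> byTile;
        for (unsigned i = 0; i < faces.size(); ++i)
            byTile[faces[i].tile].push_back(i);

        // Cut each bucket into bounded-size chunks (the real renderer would
        // also group spatially nearby faces; this is a simplification).
        std::vector<Piece> pieces;
        for (auto& bucket : byTile) {
            const std::vector<unsigned>& ids = bucket.second;
            for (std::size_t start = 0; start < ids.size(); start += maxFacesPerPiece) {
                std::size_t end = std::min(start + maxFacesPerPiece, ids.size());
                Piece p;
                p.tile = bucket.first;
                p.faceIndices.assign(ids.begin() + start, ids.begin() + end);
                pieces.push_back(std::move(p));
            }
        }
        return pieces;
    }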

During rendering, pieces that are not visible on screen, or that do not touch the dirty (recently changed) area of the screen, may be skipped to speed up rendering.
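
One simple way to express that skip test is an overlap check between a piece's screen-space bounds, the screen rectangle, and the dirty rectangle. The Rect type and the idea that each piece carries screen-space bounds are assumptions made for this sketch.

    struct Rect { float x0, y0, x1, y1; };

    // Two axis-aligned rectangles overlap if they intersect on both axes.
    static bool Intersects(const Rect& a, const Rect& b) {
        return a.x0 < b.x1 && b.x0 < a.x1 && a.y0 < b.y1 && b.y0 < a.y1;
    }

    // A piece is drawn only if its screen-space bounds touch both the visible
    // screen rectangle and the dirty (recently changed) region.
    bool ShouldDrawPiece(const Rect& pieceBounds, const Rect& screen, const Rect& dirty) {
        return Intersects(pieceBounds, screen) && Intersects(pieceBounds, dirty);
    }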

If a piece has fewer than 64,000 distinct vertices, it uses 16-bit vertex indices to reduce video memory usage.
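
The choice of index width can be expressed as a small helper. The 64,000 threshold comes from the text above and stays under the 65,536 values a 16-bit index can address; ChooseIndexType is an illustrative name, not part of the SDK.

    #include <GL/glew.h>   // assumes an extension loader such as GLEW is in use
    #include <cstddef>

    // Pick the index type for a piece from its vertex count. The 64,000
    // threshold quoted in the text stays under the 65,536 values that a
    // 16-bit index can address.
    GLenum ChooseIndexType(std::size_t vertexCount) {
        return vertexCount < 64000 ? GL_UNSIGNED_SHORT : GL_UNSIGNED_INT;
    }

    // At draw time the stored type would be passed to glDrawElements, e.g.
    //   glDrawElements(GL_TRIANGLES, indexCount, piece.indexType, nullptr);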

If any vertex data changes (for example, vertex positions change when the user sculpts), only the vertex buffer objects of the affected pieces are updated.
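
A hedged sketch of such a partial update follows: only the vertex buffer object of a piece touched by the stroke is re-uploaded with glBufferSubData. The PieceGpu struct and its field names are invented for the example.

    #include <GL/glew.h>   // assumes an extension loader such as GLEW is in use
    #include <vector>

    struct Vertex { float position[3]; float normal[3]; };

    // Hypothetical per-piece GPU state; field names are invented for the example.
    struct PieceGpu {
        GLuint vbo = 0;                  // vertex buffer object for this piece
        std::vector<Vertex> vertices;    // CPU-side copy of the piece's vertices
    };

    // Re-upload only the vertex buffer of a piece whose vertices changed;
    // untouched pieces keep their existing buffer contents.
    void UploadChangedVertices(const PieceGpu& piece) {
        glBindBuffer(GL_ARRAY_BUFFER, piece.vbo);
        glBufferSubData(GL_ARRAY_BUFFER, 0,
                        piece.vertices.size() * sizeof(Vertex),
                        piece.vertices.data());
    }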

The vertex and index data for each piece are stored in video memory, so rendering these pieces is very fast. The code that splits the surface into pieces runs only when the topology of the mesh changes, which is rare.
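
The upload that runs when topology changes could look like the following sketch, which places the vertex and index data in GPU buffers with the GL_STATIC_DRAW usage hint since they are rewritten rarely. The function and parameter names are illustrative only.

    #include <GL/glew.h>   // assumes an extension loader such as GLEW is in use
    #include <cstdint>
    #include <vector>

    // One-time upload of a piece's geometry into GPU memory. GL_STATIC_DRAW
    // hints that the data is rewritten rarely (only when topology changes).
    void CreatePieceBuffers(GLuint& vbo, GLuint& ibo,
                            const std::vector<float>& vertexData,
                            const std::vector<std::uint16_t>& indices16) {
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     vertexData.size() * sizeof(float),
                     vertexData.data(), GL_STATIC_DRAW);

        glGenBuffers(1, &ibo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                     indices16.size() * sizeof(std::uint16_t),
                     indices16.data(), GL_STATIC_DRAW);
    }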