

CGI Terms

2D Graphics
Displayed representation of a scene or an object along two axes of reference: height and width (x and y).
3D Graphics
Displayed representation of a scene or an object that appears to have three axes of reference: height, width, and depth (x, y, and z).
3D Pipeline
The process of 3D graphics can be divided into three stages: tessellation, geometry, and rendering. In the tessellation stage, a described model of an object is created, and the object is then converted to a set of polygons. The geometry stage includes transformation, lighting, and setup. The rendering stage, which is critical for 3D image quality, creates a two-dimensional display from the polygons created in the geometry stage.
Alpha Blending
The real world is composed of transparent, translucent, and opaque objects. Alpha blending is a technique for adding transparency information for translucent objects. It is implemented by rendering polygons through a stipple mask whose on-off density is proportional to the transparency of the object. The resultant colour of a pixel is a combination of the foreground and background colour. Typically, alpha has a normalized value of 0 to 1 for each colour pixel:
New pixel = (alpha)(pixel A colour) + (1 - alpha)(pixel B colour)
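The blend formula above can be sketched per colour channel; `alpha_blend` is an illustrative helper name, not any particular API:

```python
def alpha_blend(fg, bg, alpha):
    """New pixel = alpha * foreground + (1 - alpha) * background, per channel."""
    return tuple(round(alpha * f + (1.0 - alpha) * b) for f, b in zip(fg, bg))

# A half-transparent red foreground over a blue background gives purple:
print(alpha_blend((255, 0, 0), (0, 0, 255), 0.5))
```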
Alpha Buffer
An extra colour channel to hold transparency information; pixels become quad values (RGBA). In a 32-bit frame buffer there are 24 bits of colour, 8 each for red, green, and blue, along with an 8-bit alpha channel.
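The 32-bit RGBA layout described above can be sketched with bit packing; the helper names and the RGBA byte order here are illustrative assumptions:

```python
def pack_rgba(r, g, b, a):
    """Pack four 8-bit channels into one 32-bit pixel value (RGBA order assumed)."""
    return (r << 24) | (g << 16) | (b << 8) | a

def unpack_rgba(pixel):
    """Recover the four 8-bit channels from a packed 32-bit pixel."""
    return (pixel >> 24 & 0xFF, pixel >> 16 & 0xFF, pixel >> 8 & 0xFF, pixel & 0xFF)
```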
Animation
The illusion of motion created by the consecutive display of images of static elements. In film and video production, this refers to techniques by which each frame of a film or movie is produced individually. These frames may be generated by computers, or by photographing a drawn or painted image, or by repeatedly making small changes to a model, and then photographing the result with a special animation camera. When the frames are strung together and the resulting film is viewed, there is an illusion of continuous movement due to the phenomenon known as persistence of vision. Generating such a film tends to be very labour intensive and tedious, though the development of computer animation has greatly sped up the process. Graphics file formats like GIF, MNG, SVG and Flash allow animation to be viewed on a computer or over the Internet.
Anti-aliasing
Anti-aliasing is sub-pixel interpolation, a technique that makes edges appear to have better resolution.
Atmospheric Effect
Effects, such as fog and depth cueing, that improve the rendering of real-world environments.
Bitmap
A bitmap is an image defined pixel by pixel.
Bilinear Filtering
Bilinear filtering is a method of anti-aliasing texture maps. A texture-aliasing artefact occurs due to sampling on a finite pixel grid. Point-sampled texels jump from one pixel to another at random times. This aliasing is very noticeable on slowly rotating or moving polygons. The texture image jumps and shears along pixel boundaries. To eliminate this problem, bilinear filtering takes a weighted average of four adjacent texture pixels to create a single texel.
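The weighted four-texel average can be sketched as follows, assuming a small greyscale texture stored as a 2D list; `bilinear_sample` is a hypothetical helper, and (u, v) must leave room for the 2x2 neighbourhood:

```python
def bilinear_sample(texture, u, v):
    """Weighted average of the four texels surrounding continuous coords (u, v)."""
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0          # fractional position inside the texel cell
    t00 = texture[y0][x0]
    t10 = texture[y0][x0 + 1]
    t01 = texture[y0 + 1][x0]
    t11 = texture[y0 + 1][x0 + 1]
    top = t00 * (1 - fx) + t10 * fx       # blend along x on the top row
    bottom = t01 * (1 - fx) + t11 * fx    # blend along x on the bottom row
    return top * (1 - fy) + bottom * fy   # blend the two rows along y
```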
Bit depth
This refers to the grey-scale range of an individual pixel. A pixel with 8 bits per colour channel gives a 24-bit image (8 bits × 3 colours = 24 bits). CCD sensors capture colour in a pixel-by-pixel fashion. 30/32-bit colour provides over a billion colours; 24-bit colour provides 16.7 million colours; 16-bit colour provides 65,536 colours; 8-bit colour provides 256 colours; 8-bit grey scale provides 256 shades of grey; 1 bit is black or white.
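The colour counts quoted above follow directly from raising 2 to the bit depth:

```python
def distinct_values(bit_depth):
    """Number of distinct values an n-bit pixel (or channel set) can encode."""
    return 2 ** bit_depth

for bits in (1, 8, 16, 24):
    print(bits, "bits:", distinct_values(bits))
```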
Blending is the combining of two or more objects by adding them on a pixel-by-pixel basis.
Blinn shading
A method of computing the shading of three-dimensional surfaces developed by James Blinn. It uses four characteristics: diffusion, specularity, eccentricity, and refractive index. It is similar to Phong shading, except that it creates a slightly different highlight that is useful for defining rough or sharp edges. In Blinn's model for specular reflection, the specular term is calculated in an alternative way that eliminates the expensive reflection-vector calculations: Blinn introduced the half-angle vector, a vector halfway between the light vector and the view vector.
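The half-angle construction can be sketched as follows; the vector helpers are hypothetical, and `shininess` stands in for the specular exponent:

```python
def normalize(v):
    """Scale a vector to unit length."""
    m = sum(c * c for c in v) ** 0.5
    return tuple(c / m for c in v)

def blinn_specular(light, view, normal, shininess):
    """Blinn's specular term: use the half-angle vector H = normalize(L + V)
    instead of the reflection vector; spec = max(N.H, 0) ** shininess."""
    half = normalize(tuple(l + w for l, w in zip(light, view)))
    n_dot_h = sum(n * h for n, h in zip(normal, half))
    return max(n_dot_h, 0.0) ** shininess
```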
Brazil R/S
The Brazil Rendering System, known as "Brazil R/S", is a high-quality production renderer developed by Steve Blackmon and Scott Kirvan of Splutterfish, with full global illumination, advanced ray tracing, and subsurface light scattering support.
Chroma Keying
Chroma Keying or texture transparency is the ability to recognize a key colour within a texture map and make it transparent during the texture mapping process. Since not all objects are easily modelled with polygons, chroma keying is used to include complex objects in a scene as texture maps.
CGI
Computer Generated Imagery; imagery generated by various computer-based graphical tools, which is composited with real imagery shot on set.
CIE
Commission Internationale de l'Eclairage. An international committee for the establishment of colour standards. The CIE model and the CIE Chromaticity Diagram define the different variations of colour. In the study of the perception of colour, one of the first mathematically defined colour spaces was the CIE XYZ colour space (also known as CIE 1931 colour space), created by the International Commission on Illumination (CIE) in 1931.
The human eye has receptors for short (S), middle (M), and long (L) wavelengths, also known as blue, green, and red receptors. That means that one, in principle, needs three parameters to describe a colour sensation. A specific method for associating three numbers (or tristimulus values) with each colour is called a colour space, of which the CIE XYZ colour space is one of many such spaces. However, the CIE XYZ colour space is special, because it is based on direct measurements of the human eye, and serves as the basis from which many other colour spaces are defined.
The CIE XYZ colour space was derived from a series of experiments done in the late 1920s by W. David Wright (Wright 1928) and John Guild (Guild 1931). Their experimental results were combined into the specification of the CIE RGB colour space, from which the CIE XYZ colour space was derived.
In the CIE XYZ colour space, the tristimulus values are not the S, M, and L stimuli of the human eye, but rather a set of tristimulus values called X, Y, and Z, which are also roughly red, green and blue, respectively. Two light sources may be made up of different mixtures of various colours, and yet have the same colour (metamerism). If two light sources have the same apparent colour, then they will have the same tristimulus values, no matter what different mixtures of light were used to produce them.
Colour Model
An abstract mathematical model describing the way colours can be represented as tuples of numbers, typically as three or four values or colour components (e.g. RGB and CMYK are colour models). However, a colour model with no associated mapping function to an absolute colour space is a more or less arbitrary colour system with little connection to the requirements of any given application.
Adding a certain mapping function between the colour model and a certain reference colour space results in a definite "footprint" within the reference colour space. This "footprint" is known as a gamut, and, in combination with the colour model, defines a new colour space. For example, Adobe RGB and sRGB are two different absolute colour spaces, both based on the RGB model.
Compression
A process that reduces the number of bytes required to define a document in order to save disk space or transmission time. Compression is achieved by replacing commonly occurring sequences of pixels with shorter codes. Some compression methods, such as JPEG, throw away some data, seeking only to preserve the appearance of the image. Others, such as Group IV, preserve all of the original information.
Depth Cueing
Depth cueing is the lowering of intensity as objects move away from the viewpoint.
Dithering
Dithering is a technique for achieving 24-bit quality in 8- or 16-bit frame buffers. Dithering uses two colours to create the appearance of a third, giving a smooth appearance to an otherwise abrupt transition.
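One common concrete form is ordered dithering with a small Bayer matrix; this sketch reduces an 8-bit greyscale image to 1 bit, with the helper name and matrix size chosen for illustration:

```python
BAYER_2X2 = [[0, 2],
             [3, 1]]

def ordered_dither(pixels):
    """Reduce an 8-bit greyscale image to 1-bit using a 2x2 Bayer matrix.
    Neighbouring on/off pixels approximate intermediate grey levels."""
    out = []
    for y, row in enumerate(pixels):
        out_row = []
        for x, value in enumerate(row):
            # Each cell of the tiled matrix gets a different threshold.
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) / 4 * 255
            out_row.append(1 if value > threshold else 0)
        out.append(out_row)
    return out
```

Mid-grey input produces a checkerboard of on/off pixels, which reads as grey from a distance.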
Field Dominance
Field dominance defines whether a type 1 or type 2 field represents the start of a new TV frame.
Dope sheet
List of scenes from the script that have already been filmed, or a list of the contents of an exposed reel of film stock.
Double Buffering
A method of using two buffers, one for display and the other for rendering. While one of the buffers is being displayed, the other buffer is operated on by a rendering engine. When the new frame is rendered, the two buffers are switched. The viewer sees a perfect image all the time.
Field
In video, a field is one of the many still images which comprise a moving picture. They are similar to frames, but they have half the vertical resolution and are displayed twice as fast. The method of converting from frames to fields is called interlacing, and the method of converting fields to frames is called de-interlacing. A video composed of fields is interlaced, and a video composed of frames is progressive.
Flash
A vector-based animation technology. Flash animations are quick to download, are of high quality, and are browser independent (they look the same in different browsers). Flash animations also scale to fit the browser window. Flash animations are created using Macromedia Flash software.
Flat Shading
The flat shading method is also called constant shading. For rendering, it assigns a uniform colour throughout an entire polygon. This shading results in the lowest quality, an object surface with a faceted appearance and a visible underlying geometry that looks 'blocky'.
Fog
Fog is the blending of an object with a fixed colour as its pixels become farther away from the viewpoint.
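Linear fog is one simple way to realize this distance-based blend; the helper name and the start/end falloff parameters are assumptions for illustration:

```python
def apply_fog(pixel, fog_colour, distance, fog_start, fog_end):
    """Linear fog: blend the pixel toward fog_colour as distance grows."""
    f = (distance - fog_start) / (fog_end - fog_start)
    f = min(max(f, 0.0), 1.0)  # clamp the fog factor to [0, 1]
    return tuple(round((1 - f) * p + f * c) for p, c in zip(pixel, fog_colour))
```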
Forward kinematics
A type of movement that is invoked by rotating a joint in the chain, allowing for complete control of the chain's behaviour. Only the angle of the selected joint is affected; all other joint angles are preserved. With FK, positioning a skeleton's foot means rotating each joint in the leg, from the hip to the ankle. This method is more tedious to execute but the chain bends exactly as intended. Using FK allows you to create many types of movements that may not be possible to animate with inverse kinematics (IK) alone.
Frame
In film, video production, animation, and related fields, a frame is one of the many still images which compose the complete moving picture. Historically, these were recorded on a long strip of photographic film, and each image looked rather like a framed picture when examined individually, hence the name.
When the moving picture is displayed, each frame is flashed on a screen for a short time (nowadays, usually 1/24th, 1/25th or 1/30th of a second) and then immediately replaced by the next one. Persistence of vision blends the frames together, producing the illusion of a moving image.
The video frame is also sometimes used as a unit of time, being variously 1/24, 1/25 or 1/30 of a second, so that a momentary event might be said to last 6 frames.
The frame rate, the rate at which sequential frames are presented, varies according to the video standard in use. In North America and Japan, 30 frames per second is the broadcast standard, with 24 frame/s now common in production for high-definition video. In much of the rest of the world, 25 frame/s is standard.
Function curve
Or fcurve; a graphic representation of the relationship between time and property values (animation). Function curves are represented as a curve with points in an XY grid, where time is on the X, or horizontal, axis and values are on the Y, or vertical, axis. You can edit the function curves that represent your animation in the animation editor.
G-Buffer
Geometry buffer. The function of the geometry buffer is to isolate properties of your 3D objects and create channel masks that are attached to the rendered file and can be used in paint or compositing applications. Because of the added property information, G-Buffer rendering cannot be used with certain image formats, which have no place to append channel data.
Gamma
Tangent of the angle of the straight-line portion of a density curve. The characteristics of displays using phosphors (as well as some cameras) are nonlinear. A small change in voltage at a low voltage level produces a change in output display brightness, but the same small change at a high voltage level will not produce the same magnitude of change in brightness. This effect, or more precisely the difference between what you should have measured and what you actually measured, is known as gamma.
Gamma Correction
Before being displayed, linear RGB data must be processed (gamma corrected) to compensate for the gamma (nonlinear characteristics) of the display.
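A minimal sketch of the correction step, assuming a simple power-law display with a gamma of about 2.2:

```python
def gamma_correct(linear, gamma=2.2):
    """Encode a linear intensity in [0, 1] for a display with the given gamma.
    Raising to 1/gamma pre-compensates for the display's nonlinearity."""
    return linear ** (1.0 / gamma)
```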
Global Illumination
A physical simulation of all lighting in a scene. This includes both direct and indirect lighting caused by diffuse reflections. Photon rays emanate from the light source in all directions and bombard the scene. When these photons hit an object, some stick and others are reflected and refracted. These reflected and refracted rays go on to illuminate other surfaces. For example, the underside of a table is not completely dark, even if it is not illuminated directly by a light or reflections from a shiny surface.
Gouraud Shading
Gouraud shading, one of the most popular smooth shading algorithms, is named after its French originator, Henri Gouraud. Gouraud shading, or colour interpolation, is a process by which colour information is interpolated across the face of the polygon to determine the colours at each pixel. It assigns colour to every pixel within each polygon based on linear interpolation from the polygon's vertices. This method improves the 'blocky' (see Flat Shading) look and provides an appearance of plastic or metallic surfaces.
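The per-scanline colour interpolation can be sketched as follows; `gouraud_scanline` is an illustrative helper, and real implementations also interpolate the edge colours down the polygon's sides:

```python
def gouraud_scanline(c_left, c_right, width):
    """Linearly interpolate a colour across `width` pixels of one scanline,
    from the left-edge colour to the right-edge colour (width >= 2)."""
    pixels = []
    for x in range(width):
        t = x / (width - 1)  # 0 at the left edge, 1 at the right edge
        pixels.append(tuple(round((1 - t) * a + t * b)
                            for a, b in zip(c_left, c_right)))
    return pixels
```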
Hidden Surface Removal
Hidden Surface Removal or visible surface determination entails displaying only those surfaces that are visible to a viewer because objects are a collection of surfaces or solids.
High Dynamic Range Images
(HDRI) A set of techniques that allow a far greater dynamic range of exposures than normal digital imaging techniques. The intention is to accurately represent the wide range of intensity levels found in real scenes, ranging from direct sunlight to the deepest shadows.
This provides the opportunity to shoot a scene and have total control of the final imaging from the beginning to the end of the photography project. For example, it provides the possibility to re-expose: one can capture as wide a range of information as possible on location and choose what is wanted later.
Gregory Ward is widely considered to be the founder of the file format for high dynamic range imaging. The use of high dynamic range imaging in computer graphics has been pioneered by Paul Debevec. He is considered to be the first person to create computer graphic images using HDRI maps to realistically light and animate CG objects.
When preparing for display, a high dynamic range image is often tone mapped and combined with several full screen effects.
I-Frame
In inter-frame compression schemes (e.g. MPEG), the key frame or reference video frame that acts as a point of comparison to P- and B-frames, and is not rebuilt from another frame. Contrast with B-frame and P-frame.
Image Based Lighting
A technique which uses a high dynamic range image as a light source during rendering. By capturing a high dynamic range image (or series of such images to represent an environment map) of a real scene, it is possible to illuminate rendered objects using that data, so that the rendered objects will match the lighting conditions of the original scene exactly. This produces an extremely realistic looking image, as there will be no discrepancies between the lighting conditions in the real and computer-generated portions of the final image.
Inverse kinematics
The movement and rotation of a chain according to the location of the chain's effector. When using IK, translating a bone or the effector recalculates the previous joint angles in the chain. With IK, positioning a skeleton's foot is a matter of moving the foot to the right spot; how the leg should bend is calculated automatically. To animate the foot, you keyframe its translation.
Interpolation
Interpolation is a mathematical way of regenerating missing or needed information. For example, an image needs to be scaled up by a factor of two, from 100 pixels to 200 pixels. The missing pixels are generated by interpolating between the two pixels that are on either side of the pixel that needs to be generated. After all of the 'missing' pixels have been interpolated, 200 pixels exist where only 100 existed before, and the image is twice as big as it used to be.
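The scaling example in the entry can be sketched for one row of pixels; `upscale_2x` is a hypothetical helper:

```python
def upscale_2x(row):
    """Double a row of pixel values, generating each missing pixel by
    linear interpolation between its two neighbours."""
    out = []
    for i in range(len(row) - 1):
        out.append(row[i])
        out.append((row[i] + row[i + 1]) / 2)  # interpolated 'missing' pixel
    out.append(row[-1])
    out.append(row[-1])  # the final edge pixel is simply repeated
    return out
```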
JPEG
Joint Photographic Experts Group. JPEG is a standards committee that designed an image compression format. The compression format they designed is known as lossy compression, in that it deletes information from an image that it considers unnecessary. JPEG files can range from small amounts of lossless compression to large amounts of lossy compression. This is a common standard on the WWW, but the data loss generated in its compression makes it undesirable for printing purposes.
Lambert Shader
A shading model based on the application of Lambert's cosine law, which deals with the intensity of reflected light, formulated in the 18th century by Johann Heinrich Lambert. Objects are shaded to create a matte surface with no specular highlights. Two illumination areas are defined on the object's surface: ambient and diffuse. Lambert shading allows reflectivity, transparency, refraction, and texture to be applied to the object.
Lighting
There are many techniques for creating realistic graphical effects to simulate a real-life 3D object on a 2D display. One technique is lighting. Lighting creates a real-world environment by rendering the different grades of darkness and brightness of an object's appearance to make the object look solid.
Line Buffer
A line buffer is a memory buffer used to hold one line of video. If the horizontal resolution of the screen is 640 pixels and RGB is used as the colour space, the line buffer would have to be 640 locations long by 3 bytes wide. This amounts to one location for each pixel and each colour plane. Line buffers are typically used in filtering algorithms.
Lip Sync
In animation, to move the lips in synchronization with recorded speech or song.
Mental Ray
mental ray is a production quality rendering application developed by mental images (Berlin, Germany). As the name implies, it supports ray tracing to generate images.
MIP Mapping
Multum in parvo (Latin for 'much in a small space'). A method of increasing the quality of a texture map by applying different-resolution texture maps for different objects in the same image, depending on their size and depth. If a texture-mapped polygon is smaller than the texture image itself, the texture map will be undersampled during rasterisation. As a result, the texture mapping will be noisy and 'sparkly'. The purpose of MIP mapping is to remove this effect.
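Level selection can be sketched as picking the MIP level whose resolution best matches the texture's on-screen footprint; the helper name and its input measure (texels covered per screen pixel) are assumptions:

```python
import math

def mip_level(texels_per_pixel):
    """Choose a MIP level: each successive level halves the texture resolution,
    so the level is roughly log2 of the texel-to-pixel footprint."""
    if texels_per_pixel <= 1:
        return 0  # texture is magnified; use the full-resolution base level
    return int(math.log2(texels_per_pixel))
```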
MPEG
(Moving Picture Experts Group) Pronounced m-peg; a working group of ISO. The term also refers to the family of digital video compression standards and file formats developed by the group. MPEG generally produces better-quality video than competing formats, such as Video for Windows, Indeo, and QuickTime. MPEG files can be decoded by special hardware or by software.
NURBS
Non-Uniform Rational B-Spline. A type of curve or surface for which the delta (difference) between successive knots need not be expressed in uniform increments of 1. This non-uniformity distinguishes NURBS from other curve types.
Occlusion
The effect of one object in 3-D space blocking another object from view.
OpenEXR
OpenEXR is an open-source high-dynamic-range image file format that was developed by Industrial Light & Magic. Pixel data are stored as 16-bit or 32-bit floating-point numbers. With 16 bits, the representable dynamic range is significantly higher than the range of most image capture devices: about 10⁹, or 30 f-stops, without loss of precision, and an additional 10 f-stops at the low end with some loss of precision. Most 8-bit file formats have around 7 to 10 stops.
Palettised Texture
Palettised texture means compressed texture formats, such as 1-, 2-, 4-, and 8-bit instead of 24-bit; this allows more textures to be stored in less memory.
Perspective Correction
A particular way to do texture mapping; it is extremely important for creating a realistic image. It takes into account the effect of the Z value in a scene while mapping texels onto the surface of polygons. As a 3D object moves away from the viewer, the length and height of the object become compressed, making it appear shorter. Without perspective correction, objects will appear to shift and 'tear' in an unrealistic way. With true perspective correction, the rate of change per pixel of texture is proportional to depth. Since it requires a division per pixel, perspective correction is very computationally intensive.
P-Frame
In a video compression scheme, the difference between the complete subject frame and its predecessor. This is the most compact amount of data needed to generate the subject frame given the complete frame preceding it, but it does not allow backwards single-step or playback.
Phong Shading
Phong shading is a sophisticated smooth shading method, originated by Bui Tuong Phong. The Phong shading algorithm is best known for its ability to render precise, realistic specular highlights. During rendering, Phong shading achieves excellent realism by calculating the amount of light on the object at tiny points across the entire surface instead of at the vertices of the polygons. Each pixel representing the image is given its own colour based on the lighting model applied at that point. Phong shading requires much more computation for the hardware than Gouraud shading.
Projection
The process of reducing three dimensions to two dimensions for display is called projection. It is the mapping of the visible part of a three-dimensional object onto a two-dimensional screen.
Quantization
In digital signal processing, quantization is the process of approximating a continuous range of values (or a very large set of possible discrete values) by a relatively small set of discrete symbols or integer values. A signal can be multi-dimensional, and quantization need not be applied to all dimensions. Note that a discrete signal is not necessarily a quantized one, a subtle but common point of confusion.
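A minimal sketch of uniform quantization with a fixed step size; the helper name is illustrative:

```python
def quantize(value, step):
    """Map a value to the nearest multiple of `step`, collapsing a large
    range of inputs onto a small set of representative levels."""
    return round(value / step) * step
```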
Radiosity
A rendering technique where all surfaces are considered capable of emitting light, although only true light sources do this at first. This light, when it falls on a surface, affects the light that surface emits. Increasing the number of iterations improves the quality of the resulting image. The method may be used for specular as well as diffuse reflections.
Rasterisation
Translating an image into pixels.
Ray Tracing
Ray tracing is a realistic method for rendering images (or frames) constructed in 3D computer graphics environments. It works by tracing the path taken by a ray of light through the scene, and calculating reflection, refraction, or absorption of the ray whenever it intersects an object in the world.
Reflection
The process whereby a surface turns back a portion of the radiation that strikes it.
Refraction
Deflection from a straight path undergone by a light ray or energy wave in passing obliquely from one medium (such as air) into another (such as glass) in which its velocity is different.
Rendering
The process of creating life-like images on a screen using mathematical models and formulas to add shading, colour, and illumination to a 2D or 3D wireframe.
Rendering Engine
"Rendering Engine" generically applies to the part of the graphics engine that draws 3D primitives, usually triangles or other simple polygons. In most implementations, the rendering engine is responsible for interpolation of edges and "filling in" the triangle.
RenderMan is an API developed by Pixar Animation Studios for describing three-dimensional scenes and turning them into digital photorealistic images. The full name is the RenderMan Interface Specification.
RenderMan also is the part of the name of a rendering software package developed by Pixar which implements this API.
Resolution
For pictures or image data, indicates the number of pixels per unit of length, often measured in dpi (dots per inch). Resolution is said to be 'higher' as dpi increases, showing more image detail.
Rotoscoping
A technique where animators trace live-action movement, frame by frame, for use in animated cartoons. Originally, pre-recorded live-film images were projected onto a matte windowpane and redrawn by an animator. The projection equipment is called a rotoscope.
Scissors Clip
Test pixel coordinates against clip rectangles and reject them if outside.
Set-up Engine
A set-up engine allows drivers to pass polygons to the rendering engine as raw vertex information (sub-pixel polygon addresses), whereas most common designs force the host CPU to pre-process polygons for the rendering engine in terms of delta values for edges, colour, and texture. Thus, a set-up engine moves processing from the host CPU to the graphics chip, reducing bus bandwidth requirements by 30% for small, randomly placed triangles and by proportionately more for larger polygons.
Shader
A program used to determine the final surface properties of an object or image. This can include arbitrarily complex descriptions of light absorption and diffusion, texture mapping, reflection and refraction, shadowing, surface displacement, and post-processing effects.
Skinning
This is the process where a model is wrapped around a skeleton. When the skeleton moves, the model moves correspondingly. The model effectively forms a skin over the skeleton joints.
Span
In a raster graphics architecture, a primitive is formed by scan conversion, where each scan line intersects the primitive at two ends, P-left and P-right. A contiguous sequence of pixels on the scan line between P-left and P-right is called a span. Each pixel within the span contains z, R, G, and B data values.
Spline
A spline is a curve in 3D space defined by control points. Splines can be cut or joined at their edit points. Some common types of splines include Bézier, B-spline, and NURBS.
Subdivision surface
A surface that results from repeatedly refining a polygonal mesh to create a finer and finer mesh. A subdivision step refines a submesh into a supermesh by inserting more vertices. The positions of the vertices of the supermesh are computed from the positions of the vertices of the submesh, based on a certain subdivision scheme (note that there are several subdivision surface algorithms).
Subdivision surfaces can exist on an arbitrary topology, and look smooth and continuous. They enable the creation of a hierarchy of many levels of detail, allowing highly detailed modelling in isolated areas and broad adjustments at the base levels.
Sub-surface scattering
Subsurface scattering is a mechanism of light transport in which light penetrates the surface of a translucent object, is scattered by interacting with the material, and exits the surface at a different point. Subsurface scattering is important in 3D computer graphics. In particular, materials such as marble, skin, and milk are extremely difficult to simulate realistically without taking subsurface scattering into account.
Tessellation
Processing 3D graphics can be pipelined into three stages: tessellation, geometry, and rendering. Tessellation is the process of subdividing a surface into smaller shapes. To describe object surface patterns, tessellation breaks down the surface of an object into manageable polygons. Triangles and quadrilaterals are the two most commonly used polygons for drawing graphical objects, because computer hardware can easily manipulate and calculate these two simple polygons. An object is typically divided into quads and subdivided into triangles for convenient calculation.
Texture Anti-aliasing
An interpolation technique used to remove texture distortion, staircasing or jagged edges, at the edges of an object.
Texture Filtering
Removing the undesirable distortion of a raster image, also called aliasing artefacts, such as sparkles and blockiness, through interpolation of stored texture images.
Texture Mapping
Texture mapping is based on a stored bitmap consisting of texture pixels, or texels. It consists of wrapping a texture image onto an object to create a realistic representation of the object in 3D space. The object is represented by a set of polygons, usually triangles. The advantage is complexity reduction and rendering speed, because only one texel read is required for each pixel being written to the frame buffer. The disadvantage is the blocky image that results when the object moves.
Timecode
Timecode is a signal that contains a chronological record of the absolute time in a recording. It is used for synchronizing different recorders and can be used for electronic editing. Timecode was initially invented for the motion picture business as a method of synchronizing the pictures recorded in the frames of a camera with the sound recorded on a tape recorder. Common types include SMPTE timecode and MTC (MIDI timecode).
Transformation
Change of coordinates; a series of mathematical operations that act on output primitives and geometric attributes to convert them from modelling coordinates to device coordinates.
Tri-linear Filtering
Based on bilinear filtering, trilinear filtering takes the weighted average of two levels of bilinear filtering results to create a single texel. The resultant graphics image is smoother and shows fewer sparkle artefacts.
Tri-linear MIP Mapping
A method of reducing aliasing artefacts within texture maps by applying a bilinear filter to four texels from the two nearest MIP maps and then interpolating between the two.
Tristimulus theory
Television relies on the relatively well understood theories of tristimulus and opponent colour. What we perceive as a colour is actually a mixture of different wavelengths of light across the entire spectrum of visible frequencies. The mixture is known as a spectral power distribution (SPD). The theory of tristimulus colour is based on the fact that the human eye perceives colour as a degree of stimulation of three different sensors on the retina. The sensors, called cones, have coloured filters that allow only certain ranges of light frequencies to stimulate them. Theoretically, any colour can be reproduced by stimulating these cones exactly as the original SPD does. The three different cones are stimulated by light frequencies that roughly correspond to the colours we call red, green, and blue: the additive primaries.
Vector Graphics
Graphics for which the display images are generated from coordinates, as opposed to an array of pixels. A common class of graphics in which all vector output consists of lines and curves drawn point-to-point by the output unit as ordered by the computer.
Z-Buffer
A part of off-screen memory that holds the distance from the viewpoint for each pixel, the Z-value. When objects are rendered into a 2D frame buffer, the rendering engine must remove hidden surfaces.
Z-Buffering
A process of removing hidden surfaces using the depth value stored in the Z-buffer. Before bringing in a new frame, the rendering engine clears the buffer, setting all Z-values to 'infinity'. When rendering objects, the engine assigns a Z-value to each pixel: the closer the pixel to the viewer, the smaller the Z value. When a new pixel is rendered, its depth is compared with the stored depth in the Z-buffer. The new pixel is written into the frame buffer only if its depth value is less than the stored one.
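The depth test described above can be sketched as follows; the fragment representation and colour names are illustrative assumptions:

```python
INFINITY = float("inf")

def render(width, height, fragments):
    """Z-buffer hidden surface removal. Each fragment is (x, y, z, colour);
    a fragment is written only if it is nearer than the stored depth."""
    zbuf = [[INFINITY] * width for _ in range(height)]   # cleared to 'infinity'
    frame = [[None] * width for _ in range(height)]
    for x, y, z, colour in fragments:
        if z < zbuf[y][x]:        # nearer than what is stored: keep it
            zbuf[y][x] = z
            frame[y][x] = colour
    return frame
```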
Z-Sort
A process of removing hidden surfaces by sorting polygons in back-to-front order prior to rendering. Thus, when the polygons are rendered, the forward-most surfaces are rendered last. The rendering results are correct unless objects are close to or intersect each other. The advantage is not requiring memory for storing depth values. The disadvantage is the cost of more CPU cycles and limitations when objects penetrate each other.
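The back-to-front sort (often called the painter's algorithm) can be sketched by ordering polygons on average vertex depth; the polygon representation here is an assumption:

```python
def painters_order(polygons):
    """Sort polygons back-to-front by average vertex depth so that nearer
    polygons are drawn last and overwrite farther ones.
    Each polygon is a list of (x, y, z) vertices; larger z means farther."""
    def avg_depth(poly):
        return sum(z for (_, _, z) in poly) / len(poly)
    return sorted(polygons, key=avg_depth, reverse=True)
```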