

ATI SmartShaders Demos 2

#1 doublecross.dmg (245.27 MB) - For Mac OS X
#2 atisubsurface.dmg (105.17 MB) - For Mac OS X
#3 crowd.dmg (164.84 MB) - For Mac OS X
#4 carpaint.dmg (31.39 MB) - For Mac OS X
#5 animusic.dmg (82.81 MB) - For Mac OS X
#6 bear.dmg (24.14 MB) - For Mac OS X
#7 chimp.dmg (67.36 MB) - For Mac OS X
#8 debevecrnl.dmg (15.99 MB) - For Mac OS X
#9 ATI_Mobius_Screen_Saver.dmg (5.59 MB) - For Mac OS X
#10 ATI_Lava_Screen_Saver.dmg (13.80 MB) - For Mac OS X
#11 ATI_Gargoyle_Screen_Saver.dmg (9.56 MB) - For Mac OS X
#12 ATI_Dogs_Screen_Saver.dmg (8.03 MB) - For Mac OS X
#13 ATI_Bacteria_Screen_Saver.dmg (8.93 MB) - For Mac OS X

The included demo applications highlight features of ATI SmartShader™ technology. The following demos were originally written for DirectX 9 and the Windows operating system and have now been ported to the OpenGL ARB_fragment_program extension and Mac OS X.

SmartShader™ 2.1 is the second generation of cinematic shader technology from ATI, allowing users to experience complex, movie-quality effects in next-generation 3D games and applications. Key features include:
• Full support for programmable vertex and pixel shaders in hardware
• 2.0 Vertex Shaders support vertex programs up to 65,280 instructions with flow control (loops, branches & subroutines)
• 2.0 Pixel Shaders support up to 16 textures per rendering pass with gamma correction
• New F-buffer technology supports fragment shader programs of unlimited length
• High dimension floating point textures
• 128-bit, 64-bit & 32-bit per pixel floating point color formats
• Multiple render targets
• Shadow volume rendering acceleration
• Complete feature set also supported in OpenGL® via extensions

Vertex and pixel shaders are part of a paradigm shift in graphics technology that gives developers unprecedented control over how every pixel on the screen looks. Instead of being limited to the fixed functionality of the hardware, developers can now send small programs to the VPU which completely alter its behavior. With this flexibility, shader-capable hardware can provide effects which were previously either too computationally expensive or impossible to perform in real time. The Radeon® X800, 9800, 9700 and 9600 allow the developer to create dramatic lighting effects, soft shadows, realistic cloth movement, reflective and refractive water with waves, and dynamic environmental effects such as waving grass or even the movement of leaves in a tree. These are just a small subset of what is now possible using the newest ATI SmartShader technology. As developers start taking advantage of shaders, you'll see computer-generated imagery rise to the next level of realism.
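
For readers curious what these "small programs" look like in practice, here is a minimal sketch of uploading a fragment program through the OpenGL ARB_fragment_program extension that the Mac ports target. It assumes a current OpenGL context on hardware exposing the extension (the OpenGL framework on Mac OS X exports these entry points); the tiny shader string is purely illustrative and is not taken from any of the demos.

```cpp
// Minimal sketch: uploading a tiny ARB_fragment_program.
// Assumes a current OpenGL context and a card exposing GL_ARB_fragment_program;
// the shader text below is illustrative, not taken from the ATI demos.
#include <OpenGL/gl.h>
#include <OpenGL/glext.h>
#include <cstring>

static const char *kFragSrc =
    "!!ARBfp1.0\n"
    "TEMP color;\n"
    "TEX color, fragment.texcoord[0], texture[0], 2D;\n"   // sample base texture
    "MUL result.color, color, fragment.color;\n"           // modulate by vertex color
    "END\n";

GLuint loadFragmentProgram()
{
    GLuint prog = 0;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)std::strlen(kFragSrc), kFragSrc);
    // A failed compile leaves an error offset in GL_PROGRAM_ERROR_POSITION_ARB.
    return prog;
}
```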

SmartShader HD Demos (ATI X800 Demos):

The Double Cross:
This demo highlights the abilities of the Radeon X800 to render a cinematic scene with realistic characters.
Through the use of motion captured animation, depth-of-field effects, image based lighting techniques and dynamic shadows, "Ruby: The DoubleCross" borrows heavily from both gaming and movie genres to create a compelling demo that further raises the expectations for real-time graphics.

Depth-of-Field
Depth-of-field is a fundamental aspect of photorealistic rendering. Computer graphics traditionally uses the pinhole camera model, which results in perfectly sharp images. Real cameras, however, use lenses with variable aperture sizes, which causes out-of-focus objects to appear blurry.

In the Ruby demo, this effect is generated by compositing an in-focus (sharp) image with an out-of-focus (blurred) image. The normalized camera depth for each pixel is computed and used to weight the composition of the final image, as in the sketch below.
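
A minimal sketch of that compositing step, assuming a simple focus-distance/focus-range mapping for the blur weight (the demo's actual mapping is not documented here):

```cpp
// Blend a sharp and a blurred version of the frame using a per-pixel blur
// amount derived from the normalized camera depth.
#include <cmath>

struct RGB { float r, g, b; };

// Assumed mapping: blur grows linearly with distance from the focal depth.
static float blurAmount(float normalizedDepth, float focusDepth, float focusRange)
{
    float t = std::fabs(normalizedDepth - focusDepth) / focusRange;
    return t > 1.0f ? 1.0f : t;                  // clamp to [0, 1]
}

static RGB composite(const RGB &sharp, const RGB &blurred, float blur)
{
    return { sharp.r + (blurred.r - sharp.r) * blur,
             sharp.g + (blurred.g - sharp.g) * blur,
             sharp.b + (blurred.b - sharp.b) * blur };
}
```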

Hair Rendering
Realistic hair is a key part of creating believable characters. Our approach uses the Kajiya-Kay shading model and generates two highlights: a specular highlight (shifted towards the hair tip) and a colored highlight (shifted towards the hair root). A sparkle is added to the secondary highlight. Several layers of polygon patches are used to approximate the volumetric qualities of hair, and ambient occlusion is used for self-shadowing.
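
A minimal sketch of the Kajiya-Kay strand term with two shifted highlights. The shift amounts, exponents and weights below are assumptions for illustration, not the demo's values:

```cpp
// Kajiya-Kay strand lighting: specular response depends on the hair tangent
// rather than the surface normal; shifting the tangent along the normal moves
// the highlight toward the tip or the root.
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  normalize(const Vec3 &v) { float l = std::sqrt(dot(v, v)); return { v.x/l, v.y/l, v.z/l }; }

// Specular term for a strand with tangent T and half-vector H (H normalized).
static float strandSpecular(Vec3 T, const Vec3 &H, float exponent)
{
    float TdotH = dot(normalize(T), H);
    float s2 = 1.0f - TdotH * TdotH;
    if (s2 < 0.0f) s2 = 0.0f;                      // guard against rounding
    return std::pow(std::sqrt(s2), exponent);
}

// Primary highlight shifted toward the tip, secondary toward the root.
static float hairHighlights(const Vec3 &T, const Vec3 &N, const Vec3 &H)
{
    Vec3 tTip  = { T.x + 0.10f*N.x, T.y + 0.10f*N.y, T.z + 0.10f*N.z };  // assumed shift
    Vec3 tRoot = { T.x - 0.10f*N.x, T.y - 0.10f*N.y, T.z - 0.10f*N.z };
    return strandSpecular(tTip, H, 64.0f) + 0.5f * strandSpecular(tRoot, H, 16.0f);
}
```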

Skin rendering
Skin rendering is a difficult problem in computer graphics. Most lighting comes from subsurface scattering within the skin: pigment color comes mainly from the epidermis layer, and the pink/red color comes from blood in the dermis.

Texture Space Lighting is the technique used to render realistic skin in the Ruby demo. Diffuse lighting is rendered into an off-screen texture using texture coordinates as the position. A blur filter is then applied to simulate the subsurface scattering effect. A bump map is used when adding the specular lighting in a subsequent pass. For added realism, the specular highlight is darkened in shadowed areas.

Rendering a Diamond
Diamonds in the real world exhibit complex lighting, including reflection, refraction, color shifts, and bright highlights (sparkles). Calculating multiple refractions and reflections is not trivial, as it requires ray tracing, which is not feasible in real time on current hardware. To solve this problem, the Ruby demo uses a fast approximation that renders the back-face refractions first, and then additively blends on the front-face refractions and reflections from an environment cube map. Sparkles are added at key points on the gem based on the illumination factor computed at that point.
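
A rough sketch of that two-pass ordering using standard OpenGL state. drawGemMesh() is a placeholder for whatever binds the gem's shaders and textures and submits its geometry; the actual demo's pass setup is more involved:

```cpp
// Draw the refractions seen through the back faces first, then additively
// blend the front-face refraction and environment-map reflection on top.
#include <OpenGL/gl.h>

void drawGemMesh();   // placeholder: binds the gem's program/textures and draws it

void renderDiamond()
{
    glEnable(GL_CULL_FACE);

    // Pass 1: back faces only, writing the refracted environment colour.
    glCullFace(GL_FRONT);
    glDisable(GL_BLEND);
    drawGemMesh();

    // Pass 2: front faces, additively blended over the back-face result.
    glCullFace(GL_BACK);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    drawGemMesh();
}
```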

Subsurface Scattering:
This demo showcases the power of the Radeon X800 and its ability to render complex lighting interactions.
Precomputed Radiance Transfer (PRT) along with Spherical Harmonic (SH) lighting techniques are used to model complex global illumination including direct and indirect illumination (bounced lighting) as well as soft shadows and subsurface scattering.
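
As a rough idea of what the run-time part of PRT looks like, here is a minimal sketch: the transfer coefficients precomputed offline are dotted with the spherical harmonic coefficients of the incident lighting. The band count and data layout are assumptions, not taken from the demo:

```cpp
// Run-time PRT evaluation for one colour channel at one vertex: a dot product
// between the precomputed transfer vector and the light's SH projection.
const int kNumSHCoeffs = 16;   // 4 SH bands (assumed)

float shExitRadiance(const float transfer[kNumSHCoeffs], const float light[kNumSHCoeffs])
{
    float radiance = 0.0f;
    for (int i = 0; i < kNumSHCoeffs; ++i)
        radiance += transfer[i] * light[i];
    return radiance;
}
```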

Subsurface scattering is a rendering technique where incoming light is allowed to enter the surface of a material, the photons are scattered by the subsurface structure of the material and then exit at various points on the materials surface. This behavior is very recognizable in materials such as marble and jade. These materials allow some incoming light to pass through their surface and thus appear semi-translucent.

A realistic finite lens camera model is used to simulate depth of field. Depth of field is a technique that allows an artist to focus the camera on a particular point in space. Objects in front of or behind the focal point become increasingly blurry while objects at or near the focal point stay in focus. This technique provides important depth and scale cues that make the scene much more compelling.
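
The finite-lens behavior can be summarised with the standard thin-lens circle-of-confusion formula; a minimal sketch follows, assuming all distances are measured from the lens in the same units (the demo's actual camera parameters are not documented here):

```cpp
// Circle-of-confusion diameter for a thin lens: it grows as an object moves
// away from the focal plane, which is what drives the blur amount.
#include <cmath>

// aperture: lens aperture diameter, focalLength: lens focal length,
// focusDist: distance of the focal plane, objectDist: distance of the object.
float circleOfConfusion(float aperture, float focalLength,
                        float focusDist, float objectDist)
{
    return aperture
         * (std::fabs(objectDist - focusDist) / objectDist)
         * (focalLength / (focusDist - focalLength));
}
```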

The demo contains a number of educational split screen modes.

Subsurface scattering vs. Non-subsurface scattering
This mode is used to visualize the difference between materials that exhibit subsurface scattering and materials that do not. The statue is split down the middle: one side is drawn using a subsurface scattering shader, while the other is drawn using a non-subsurface-scattering shader. The statue with subsurface scattering appears more realistic, semi-translucent and very marble-like, while the statue without it appears to be made of hard plastic. Since almost all real-world materials exhibit some amount of subsurface scattering, this technique is very useful when generating photorealistic images.

Indoor/Outdoor Illumination
This split screen mode is used to visualize the complex illumination techniques being employed by the demo. This demo uses global illumination techniques that combine indirect outdoor illumination with direct and indirect indoor illumination. On the left side of the screen the set is lit using only indirect illumination from the sun. The set elements are mostly dark, but some illumination does reach various elements of the scene by way of indirect light bounces. On the right side of the screen the set is illuminated with light sources inside the room. The indoor lights appear brighter because they are more direct and don't require multiple bounces to reach the viewer's eye. In both cases soft shadows are cast by the various scene elements.

Crowd:
The rendering of large crowd sequences using Artificial Intelligence (AI) software is a technique that has been used very effectively in a number of recent movies and is starting to appear in games.
This demonstration shows the vertex shader processing power of the X800 being used to render a large crowd of soldiers (1,400 in total) running across a rocky terrain. All of the models feature weighted, skinned vertices and are independently animated. The behavior of the crowd is simulated using AI.implant.
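
A minimal sketch of the weighted vertex skinning mentioned above: each vertex is transformed by several bone matrices and the results are blended by the skinning weights. The four-bone limit and the column-major matrix layout are assumptions for illustration:

```cpp
// Matrix-palette skinning for one vertex.
struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; };                      // column-major 4x4

static Vec3 transformPoint(const Mat4 &M, const Vec3 &p)
{
    return { M.m[0]*p.x + M.m[4]*p.y + M.m[8]*p.z  + M.m[12],
             M.m[1]*p.x + M.m[5]*p.y + M.m[9]*p.z  + M.m[13],
             M.m[2]*p.x + M.m[6]*p.y + M.m[10]*p.z + M.m[14] };
}

// bindPos: vertex position in bind pose; bones/weights: the four influences.
Vec3 skinVertex(const Vec3 &bindPos, const Mat4 bones[4], const float weights[4])
{
    Vec3 out = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 4; ++i) {
        Vec3 p = transformPoint(bones[i], bindPos);
        out.x += weights[i] * p.x;
        out.y += weights[i] * p.y;
        out.z += weights[i] * p.z;
    }
    return out;
}
```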

Additional techniques used in this demo are ambient occlusion (used for shadowing) and glow effects using fragment shaders.

SmartShader 2.0 Demos (ATI 9800 Demos):

Racing Car Paint:
This demo showcases the power of the Radeon® 9800 PRO and OpenGL 2.0 to implement some surface effects not possible on previous generations of hardware.
For example, the two-tone paint shown in the demo mirrors the behavior of real two-tone paint - and is constructed in a similar fashion. A base layer is constructed, followed by a sparkle layer and then finally a gloss layer.

One of the techniques shown is using a normal map to preserve geometric detail while keeping polygon counts low. The original car model had 34,000 polygons; the model used in the demo has 2,500, with a high-resolution normal map to preserve the lighting details. Additionally, this demo showcases the high-precision normal maps possible on the Radeon 9800 PRO, which allow for smoothness across the surface without the banding artifacts caused by lower precision.

The demo contains a number of educational split screen modes.

Normal Map on vs. Normal Map off
This mode visually shows the effect of the normal map. The left-hand side shows what the scene would look like without any normal map and reveals what the underlying low-resolution model looks like. As you can see, a majority of the lighting detail on the car is stored within the normal map, especially in the hood and tire areas. This technique can give better performance by reducing the size and complexity of the models while keeping the lighting details that make them visually appealing.

8-bit Normal Map vs. 16-bit Normal Map
This shows the advantage of the higher precision available on the 9800. You can see the banding artifacts that occur in places where the normal is changing slowly (like the gentle curve of the hood), caused by quantizing these similar normals into only 8 bits. With the higher precision texture these artifacts are eliminated.
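
A small, purely illustrative example of that banding: a slowly varying normal component quantized to 8 bits collapses many nearby values onto one step, while 16 bits preserves the gradation:

```cpp
#include <cmath>
#include <cstdio>

// Encode a normal component n in [-1, 1] to the given bit depth and decode it back.
static float quantize(float n, int bits)
{
    float levels = float((1 << bits) - 1);
    float stored = std::floor((n * 0.5f + 0.5f) * levels + 0.5f);   // encode
    return (stored / levels) * 2.0f - 1.0f;                          // decode
}

int main()
{
    for (float n = 0.000f; n <= 0.010f; n += 0.002f)
        std::printf("n=%.4f  8-bit=%.6f  16-bit=%.6f\n",
                    n, quantize(n, 8), quantize(n, 16));
    // The 8-bit column repeats the same value for several inputs (a visible band);
    // the 16-bit column still changes with every step.
}
```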

Paint Buildup
This mode shows the different parts of the shader used to create the paint effect. The top-left image shows the two-tone base layer, where the paint gradually changes from orange/red to yellow. The top-right view shows the sparkle layer, which simulates the metal flakes in real paint. The bottom-left image shows just the gloss layer. Finally, the bottom-right image shows how all the pieces are combined to give the final look of two-tone car paint.

Multiple Color Schemes
The final screen demonstrates two important things. First, it demonstrates that performance isn't limited to drawing just a single car. In fact, on this screen the car is drawn eight times: twice for each view, once for the actual car and once for its reflection. Since most of the drawing complexity occurs at the pixel shader level, the performance of the shader is tied to how many pixels on the screen use that shader. Second, it shows the flexible nature of vertex and pixel shaders. Since the colors of the car are simply variables given to the shader, it is easy to customize the look without changing any shader code. This is demonstrated by the different paint jobs applied in each view window, all using the same vertex and pixel shader code.

Animusic's Pipe Dream:
Pipe Dream, created by Animusic, was first shown as a non-real-time animation in the Electronic Theatre at SIGGRAPH 2001. With this demonstration we show the raw horsepower of the Radeon 9800: last year's offline render is this year's real-time demo. The scene consists of over half a million polygons lit using per-pixel lighting techniques; press the W key to view the wireframe and see the scene complexity. Phong shading is used globally to give smooth lighting and highlights. Along with the lighting, shadows are another important feature of the demo; take particular note of the dynamic shadows cast by the balls. Also note the environment mapping used on the balls, cymbals, and other metal to give a shiny appearance. The other important feature used by this demo is full-scene anti-aliasing, which softens the "jaggies" along all the hard lines in the scene.
Other interesting shader effects used within the demo include:

Motion blur on the balls
One particularly interesting effect is the motion blur on the balls, which is achieved using a combination of vertex and pixel shaders. The first step, which occurs in the vertex shader, is to elongate the ball in the direction of motion. The second step is to blend this elongated ball with the background based on the intensity of the lighting, so that darker areas appear less solid than lighter areas. This combination of vertex and pixel shaders gives the impression of motion blurring.
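
A rough sketch of those two steps. The stretch heuristic (pushing only the trailing vertices along the velocity) and the intensity-to-alpha mapping are assumptions, not the demo's exact math:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Vertex step: push vertices that face away from the motion backwards along the
// velocity, elongating the ball in its direction of travel.
Vec3 elongate(const Vec3 &pos, const Vec3 &normal, const Vec3 &velocity, float amount)
{
    float facing = normal.x*velocity.x + normal.y*velocity.y + normal.z*velocity.z;
    float k = facing < 0.0f ? amount : 0.0f;     // only trailing vertices are stretched
    return { pos.x - k*velocity.x, pos.y - k*velocity.y, pos.z - k*velocity.z };
}

// Pixel step: darker areas blend more with the background, so the trail looks
// less solid than the brightly lit leading edge.
float motionBlurAlpha(float lightingIntensity)
{
    if (lightingIntensity < 0.0f) return 0.0f;
    if (lightingIntensity > 1.0f) return 1.0f;
    return lightingIntensity;
}
```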

Vibrating strings
A similar technique is used for the vibrating strings. The vertex shader is used to pull apart the string using animation data and a pixel shader is used to generate blending values. These two working together give the impression of string vibration.

Glow on the bars
For the glow around the bars, a vertex shader is used to generate successive layers of glow, each blended with the previous one.

Bear:
This demo was created to showcase the application of real-time fur. The original model was taken from a non-real-time short by Axis Animation and converted into a real-time demonstration. The furry parts of the bear are composed of roughly 5,500 polygons, which are rendered eight times at different heights from the original model to simulate the volumetric nature of fur. The eyes, teeth, tongue and shadow geometry consist of another 6,000 polygons.
The education modes show how the fur and lighting are constructed.

Fur Construction
This mode shows the various parts that make up the final visual for the fur rendering. The top-left view shows the base layer of the fur; this layer is drawn first and provides the color that shows through where no fur is present. The top-right image shows the effect of the "shells", which are layers of shell textures that simulate fur through a pseudo-volumetric rendering technique. Each shell is generated by feeding the base-layer geometry to a vertex shader with an offset (height from the base layer); the vertex shader pushes the geometry outward and the shell texture is applied and blended to achieve the volumetric look of fur (see the sketch below). The bottom-left view shows a technique that further enhances the silhouette of the fur by generating fins; these fins are placed at the polygon edges and carry a texture simulating the fur along the silhouette. The bottom-right image shows all of these passes together, giving the final fur look.
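
A minimal sketch of that shell extrusion: each shell pass re-submits the base geometry and the vertex stage pushes it outward along the normal by a fraction of the fur length. The shell numbering and fur-length parameter are example assumptions:

```cpp
struct Vec3 { float x, y, z; };

// shellIndex runs from 1..numShells; shell 0 is the base (skin) layer.
Vec3 extrudeShell(const Vec3 &pos, const Vec3 &normal,
                  int shellIndex, int numShells, float furLength)
{
    float h = furLength * float(shellIndex) / float(numShells);
    return { pos.x + normal.x * h, pos.y + normal.y * h, pos.z + normal.z * h };
}
```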

Lighting Construction
This split screen shows the breakdown of the fur's anisotropic lighting into its component ambient, diffuse and specular terms. The top portion of the screen shows the ambient and diffuse terms, which give the base color; the bottom left shows the specular term, which gives the highlights; and the bottom right shows all the terms combined for the final lighting.

Variations of Fur
This mode shows the power of shader technology: by exposing parameters to the shader code, different looks can be achieved simply by changing those parameters. In this case the fur length is varied, giving the look of shorter or longer fur.

Chimp:
Where the original Bear demo demonstrated real-time fur on a realistic character, the introduction of the Radeon 9800 allowed this technology to be taken further by rendering more complex furred characters and placing them in a detailed environment.
All of the objects that make up this scene are rendered using the programmable shaders of the Radeon 9800 series. Highlights include:

Realistic Fur on the chimp and butterfly
Rendering realistic fur requires the model to be rendered multiple times in layers to generate the fur effect. Each layer represents a 3D slice through the fur. In addition, fin geometry is added to provide realistic fur on the object silhouettes.

In this demo, the chimp is a 20,000-polygon model and the fur elements are rendered in 8 layers. This results in over 46,000 polygons being rendered every frame just to draw the chimp.

In addition, the lighting and rendering calculations for the fur are performed on a per pixel basis using the programmable pixel shaders.

Rippling Water - with both reflection and refraction
In order to provide realistic water, the scene is rendered twice, once for the main render and once again for the reflection. A programmable vertex shader provides the ripple effect on the water.

Additionally, the differences in how red, green, and blue light refract through water are simulated. Taking a closer look at the water reveals a very subtle rainbow effect that adds realism.
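
A minimal sketch of that per-channel refraction: each channel is refracted with a slightly different ratio of refractive index, so the three lookup directions into the environment diverge and produce the subtle fringe. The eta values are assumptions, not the demo's:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Standard refraction of incident direction I about normal N (both normalized).
static Vec3 refractDir(const Vec3 &I, const Vec3 &N, float eta)
{
    float d = dot(N, I);
    float k = 1.0f - eta * eta * (1.0f - d * d);
    if (k < 0.0f) return { 0.0f, 0.0f, 0.0f };            // total internal reflection
    float s = eta * d + std::sqrt(k);
    return { eta*I.x - s*N.x, eta*I.y - s*N.y, eta*I.z - s*N.z };
}

// One refraction direction per colour channel; each would index the environment
// map separately and contribute only its own channel.
void waterRefractionDirs(const Vec3 &I, const Vec3 &N, Vec3 out[3])
{
    out[0] = refractDir(I, N, 0.740f);   // red   (assumed ratio)
    out[1] = refractDir(I, N, 0.750f);   // green
    out[2] = refractDir(I, N, 0.760f);   // blue
}
```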

Moving shadows of the jungle foliage - projected onto the ground
Simulating the leafy canopy of the jungle requires realistic shadows to be cast by the leaves on all objects in the scene. Since the leaves will typically be blowing in the wind, it is not sufficient to bake these shadows into the scene geometry.

The appearance of realistic shadows is created by projecting a leaf shadow texture onto the scene and using a vertex shader to randomly move these shadows around, simulating the effect of leaves blowing in the wind. The vertex shader used here evaluates four sine waves per vertex to drive this movement.
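
A small sketch of the idea: four sine waves with different frequencies and phases are summed to jitter the projected shadow texture coordinates. The wave parameters are made-up examples, not the demo's values:

```cpp
#include <cmath>

// Returns a small 2D offset added to the projected shadow texture coordinates.
void leafShadowOffset(float x, float z, float time, float out[2])
{
    out[0] = 0.02f * std::sin(1.3f * x + 0.9f * time)
           + 0.01f * std::sin(2.1f * z + 1.7f * time);
    out[1] = 0.02f * std::sin(1.7f * z + 1.1f * time)
           + 0.01f * std::sin(2.5f * x + 0.6f * time);
}
```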

Iridescent lighting on the wings of the butterflies
Iridescent lighting had been implemented in subtle places in earlier product demos. The extra capabilities of the latest shader generation now allow iridescence to be combined with gloss maps, transparency maps, and bump maps.

Rendering with Natural Light:
The scene in this demo is a real-time implementation of Paul Debevec's 1998 SIGGRAPH animation "Rendering with Natural Light." The original version was rendered offline on a UNIX rendering farm, with each frame taking around 20 minutes to render.
The demo is being rendered entirely with image-based lighting - this is a technique for using light captured from the real world to illuminate virtual objects in a virtual scene. In this example, the synthetic objects are illuminated with real light captured in UC Berkeley's eucalyptus grove.

The Radeon® 9700 was the first visual processor with the high range and precision required to implement this technique.

The educational modes in this demo showcase two of the features used in the construction of the final scene: high dynamic range textures and post-process light glows.

High Dynamic Range OFF vs High Dynamic Range ON
The effect of high dynamic range can be clearly seen in the reflections. Without HDR, the reflections are washed out and lacking in detail. With HDR, the reflections are an accurate representation of the real-world behavior.

Go into interactive mode by moving the mouse and zoom in with the left button. Pick a ball, such as the yellow or orange opaque/chrome one, and move so that it goes from the bottom left of the screen to the bottom right. Note that on the left half of the screen the tree canopy reflections on the ball are completely gone; this is due to the low dynamic range math. On the right, these details are preserved. The vignetting (darkening toward the corners) also enhances this effect.

Light Blooms ON vs Light Blooms OFF
The light blooms are achieved by post processing the rendered image with a filter to simulate the effect of camera overexposure in extremely bright areas of the screen.
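
A minimal sketch of one common form of that post-process: pixels above a brightness threshold are extracted, blurred, and added back over the frame so very bright areas bleed over their neighbours. The threshold and the two-stage structure are assumptions; the demo's actual filter is not documented here:

```cpp
struct RGB { float r, g, b; };

// Bright-pass: keep only the energy above the threshold.
RGB brightPass(const RGB &c, float threshold)
{
    RGB o = { c.r - threshold, c.g - threshold, c.b - threshold };
    if (o.r < 0.0f) o.r = 0.0f;
    if (o.g < 0.0f) o.g = 0.0f;
    if (o.b < 0.0f) o.b = 0.0f;
    return o;
}

// Final composite: original frame plus the blurred bright-pass result.
RGB addBloom(const RGB &scene, const RGB &blurredBrightPass)
{
    return { scene.r + blurredBrightPass.r,
             scene.g + blurredBrightPass.g,
             scene.b + blurredBrightPass.b };
}
```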

Go into interactive mode by moving the mouse. There is no need to zoom in much. You want to position yourself so that the brightest part of the distant environment is behind the center ball. Move left and right, noting how the bright background bleeds over the foreground objects on the right half of the screen but not on the left half of the screen.

Camera Exposures
This showcases the ability with HDR images to simulate the real-world artifacts introduced by cameras and film. As developers aim to get closer in image quality to the movies, the ability to perform these effects is crucial. Think of this as simulating the camera settings that you are using to take a picture of your virtual world. Because all of the rendering is done in high dynamic range space, you can tweak the camera settings to your heart's content and the image has the precision necessary to give an accurate and beautiful rendering.
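
As a rough illustration of exposure control over an HDR frame, here is one common film-style exposure curve; it is a generic example, not necessarily the mapping the demo uses:

```cpp
#include <cmath>

// Maps an HDR value in [0, inf) into displayable [0, 1); larger exposure values
// correspond to a longer "shutter time" on the virtual camera.
float applyExposure(float hdrValue, float exposure)
{
    return 1.0f - std::exp(-hdrValue * exposure);
}
```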

SmartShader 2.0 Screensavers:

Mobius Screen Saver:
Conceptually interesting pixel shaders are shown in this screen saver, inspired by M. C. Escher's woodcut print "Moebius Strip II". A Moebius strip is a one-sided, single-surfaced object: a band twisted halfway around and attached to itself, such that a single path following the surface of the strip will cover its complete area and end back at the start. This way the ants are able to walk an endless path.

Lava Screen Saver:
This screen saver simulates an imaginary journey through underground lava caves and shows a real-time example of image post-processing. These techniques are useful for a variety of solutions, from simulations to cinematic-quality games.

Gargoyle Screen Saver:
Incorporating the Radeon 9800 logo, this saver accurately reproduces the brushed metal and other shaders used in the original. An animated clock, derived from the system time, is displayed behind the gargoyle figure. The Gargoyle Clock demonstrates interesting shading techniques for simulating several different types of metallic surfaces using the Radeon 9800 Pro. Like the other demos presented here, it is running in Apple's OpenGL.

Dogs Screen Saver:
The Dogs screen saver (featuring the Radeon 9700 mascot) is similar to the Gargoyle Clock in technological composition.

Bacteria Screen Saver:
Inspired by a recent cover of Scientific American, this screen saver simulates the appearance of bacteria viewed through a microscope. The Bacteria demo uses depth-of-field rendering to provide important depth cues and more photorealistic image synthesis. Depth-of-field techniques have been used in offline rendering for years but were previously too computationally intense to perform in real time. The raw power and advanced shading capabilities of the Radeon 9800 and OpenGL make depth-of-field rendering possible at interactive frame rates.

The ATI SmartShader 2.0 Screensavers are still missing! If you have them, please upload them to this page!

Wayback Machine - ATI Developer - Apple SmartShader 2.0 Demo & Screen Savers
Wayback Machine - ATI Radeon SmartShader HD Demos for Macintosh
Wayback Machine - ATI Radeon SmartShader 2.0 Demos for Macintosh
Wayback Machine - ATI Radeon SmartShader 2.0 Screensavers for Macintosh

See also: ATI SmartShaders Demos 1

The 1st download is ATI The Double Cross Demo.
The 2nd download is ATI Subsurface Scattering Demo.
The 3rd download is ATI Crowd Demo.
The 4th download is ATI Racing Car Paint Demo.
The 5th download is ATI Animusic's Pipe Dream Demo.
The 6th download is ATI Bear Demo.
The 7th download is ATI Chimp Demo.
The 8th download is ATI Rendering with Natural Light Demo.
The 9th download is ATI Mobius Screensaver.
The 10th download is ATI Lava Screensaver.
The 11th download is ATI Gargoyle Screensaver.
The 12th download is ATI Dogs Screensaver.
The 13th download is ATI Bacteria Screensaver.

Compatibility
Architecture: PPC

SmartShader 2.0 Screensavers:
PowerPC processor
Mac OS X 10.3.8 and later
512 MB memory
ATI video card with 128 MB VRAM or higher
(Radeon 9600 XT, Mobility Radeon 9700, Radeon 9800 Pro/XT, Radeon X800 XT supported)
screen resolution 1024x768 or higher

SmartShader 2.0 Demos (ATI 9800 Demos):
PowerPC processor
Mac OS X 10.3.8 and later
512 MB memory
ATI video card with 128 MB VRAM or higher
(Radeon 9600 XT, Mobility Radeon 9700, Radeon 9800 Pro/XT, Radeon X800 XT supported)
screen resolution 1024x768 or higher

SmartShader HD Demos (ATI X800 Demos):
PowerPC processor
Mac OS X 10.4.3 and later
512 MB memory
ATI video card with 256 MB VRAM or higher
(Radeon X800 XT Mac Edition 256, Radeon X850 XT Mac Edition supported)
screen resolution 1024x768 or higher

The demos will all run by default at a resolution of 1024x768.

Comments

by dr.zeissler - 2017, January 10 - 5:50am

Thx, so what about the X1600 in the 17" iMac? It's an Intel machine, but the GPU should handle all the demos because it has SM3.0.

by Kitchen2010 - 2016, December 4 - 9:21am

These demos might also work on newer graphics cards, if your card supports the required functions (even on Nvidia graphics boards, perhaps?) and you can get PPC binaries running on your computer.

by MacTouch - 2016, December 2 - 9:48pm

If you're talking about the Mac Mini Mid-2011 with the Radeon HD 6630M, the answer is "incompatible". That machine came with Mac OS X Lion (10.7) and cannot run PPC applications. Also, this software was made for old ATI graphics cards...

by dr.zeissler - 2016, December 1 - 6:55pm

Has anybody tested them on a later Mac Mini with an AMD/ATI 6630?
It would be interesting to know if they run on that nice machine.