Prior to the semester 1 presentation, I had begun a few tests with the MASH Audio node, linked to the scale attributes of some basic textures mapped to spheres. Here is a short animatic produced to give a very rough idea of where the project might go by displaying the graphics linked to the audio node. (Unfortunately, there is no sound with this video.)
MASH – A Few Notes on the Audio Node
- Spectrum mode
- average node (you can turn off the In Time attribute – this becomes one shape moved by the amplitude)
- frequency graph – use the graph to isolate a particular frequency
- use the threshold to set a gate for when you want the signal to trigger
- Node Editor – you can output the overall volume (Out Volume) or individual frequency bands to anything (as volumes) – for example, the Y position of an object or the emission rate of particles (see the sketch after this list)
- Advanced options – turn on output frequency attribute
- (Node editor – in/out connections)
- To smooth out the jerkiness of the audio, add a spring node and increase the damping and stiffness (0.4?)
- You can affect the ID channel – for example, changing the colour as the audio gets louder – I need to follow this up with a test!
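Since these connections are just attribute wiring, they can also be scripted. Here’s a minimal Python sketch of the volume-to-attribute idea using maya.cmds – the node and attribute names (MASH1_Audio, outVolume, pSphere1) are assumptions based on the notes above, so check them in the Node Editor first:

```python
import maya.cmds as cmds

# Assumed names: 'MASH1_Audio' is the MASH Audio node and 'outVolume' its
# overall volume output -- verify both in the Node Editor before running.
audio_node = 'MASH1_Audio'
target = 'pSphere1'

# Scale the signal with a multiplyDivide node so the motion isn't extreme.
mult = cmds.shadingNode('multiplyDivide', asUtility=True)
cmds.setAttr(mult + '.input2X', 0.5)

# Volume -> scaler -> the sphere's Y position.
cmds.connectAttr(audio_node + '.outVolume', mult + '.input1X', force=True)
cmds.connectAttr(mult + '.outputX', target + '.translateY', force=True)
```

A spring node between the two (as noted above) would then smooth out the jerkiness.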
In addition, I had begun some displacement map tests on simple spheres to experiment with the possibility of using textures as the basis for the modal cymatic shapes. The main problem with this technique is the lack of a seamless way to map a sphere, so the displacement is not truly spherical in nature. I may therefore use a combination of projected maps for areas of the structures, projecting onto faceted objects depending on the particular structure, but it seems that the central core of the shapes will have to be modelled. Here is a selection of the tests:
The above images were created by displacing projected cymatic images on a sphere, while the following ones used a series of layered simple planes with different amounts of displacement height on each layer, and two planes mirrored on the z axis to create the illusion of symmetry. I also added a black and white matte to remove the extra specular color (plugged in as the spec color, with the color gain turned down to black); both sets also used transparency mapping:
Modelled Curves Test: This test involved the rough modelling of a cymatic shape using CV curves as the basis for the structure, with an existing image as reference. A glow was added for effect. Ideally the shape would pulse in and out with the music and change depending on which frequency triggers a particular structure. The 3D nature of the shape was interpreted from the amount of light on perceived sections of the original image.
Rendering Curves
Here’s a useful reminder of how to do this:
- draw the curve
- rendering menu
- paint effects
- curve utilities
- attach brush to curves – this creates a stroke in the outliner
- attributes – creates brush
- brush profile
- brush width
- shading – color
- modify – convert paint effects to polygons (if you want to render in mental ray etc.)
- NB – watch when converting to polys, you may need to add detail depending on the number of faces created during conversion
- paint brush profile – width/flatness
- remember – you can only render brushes in Maya Software!
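The same steps can be scripted for repeat use. A rough Python sketch follows – I’m assuming the menu items map to the MEL commands below, so treat those command names as unverified:

```python
import maya.cmds as cmds
import maya.mel as mel

# A simple CV curve to stand in for the cymatic structure.
curve = cmds.curve(p=[(0, 0, 0), (2, 1, 0), (4, 0, 1), (6, 2, 0)])
cmds.select(curve)

# Attach the current brush to the selected curve (creates a stroke in the
# Outliner). 'AttachBrushToCurves' is assumed to be the runtime command
# behind Rendering > Paint Effects > Curve Utilities > Attach Brush to Curves.
mel.eval('AttachBrushToCurves;')

# With the new stroke selected, convert it to polygons for rendering
# outside Maya Software (e.g. mental ray). Assumed to match Modify >
# Convert > Paint Effects to Polygons; the arguments are defaults I've seen.
mel.eval('doPaintEffectsToPoly(1, 0, 1, 1, 100000);')
```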
Displacement Mapping – Digital Tutors
In order to understand displacement maps more fully, I completed a short Digital Tutors course; the shorthand details are as follows:
Lesson 2 – Setting Up
- Blinn shader
- textures tab in the Hypershade – MMB drag in the displacement map
- bump maps are essentially fake displacement maps, as they don’t change the geometry
- RMB drag – connect as displacement map (good for Maya materials)
- For mental ray – select the shader, follow the output arrow in the attributes – MMB drag the displacement map onto the ‘displacement mat’ name
- [NB – even though cube_mat is not connected to displacement map it still works]
- Maya Software – to change the accuracy of the disp map – cube attributes – displace_CubeShape – disp map – Feature Displacement (on!) – Initial Sample Rate 20? Better!
- The white areas of the map are pulled out!
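For reference, the whole Lesson 2 network can be built in script. A sketch using maya.cmds, assuming a hypothetical texture path and a default pCube1:

```python
import maya.cmds as cmds

# Blinn shader plus its shading group.
shader = cmds.shadingNode('blinn', asShader=True, name='cube_mat')
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
               name='cube_matSG')
cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader')

# File texture for the displacement (path is hypothetical).
disp_file = cmds.shadingNode('file', asTexture=True)
cmds.setAttr(disp_file + '.fileTextureName', 'sourceimages/disp_map.tga',
             type='string')

# Displacement node: the white areas of the map are pulled out.
disp = cmds.shadingNode('displacementShader', asShader=True)
cmds.connectAttr(disp_file + '.outAlpha', disp + '.displacement')

# The connection lives on the shading group, not the shader itself --
# which is why cube_mat 'still works' without a direct connection.
cmds.connectAttr(disp + '.displacement', sg + '.displacementShader')

# Assign to the cube and set the Maya Software accuracy controls.
cmds.sets('pCube1', edit=True, forceElement=sg)
cmds.setAttr('pCubeShape1.featureDisplacement', 1)
cmds.setAttr('pCubeShape1.initialSampleRate', 20)
```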
Lesson 3 – Controlling Height
- Hypershade – inputs – disp map – attributes – color balance – alpha gain (make sure Alpha Is Luminance is on!)
- Disp bounding box – how far out?
- select the geometry – disp.cubeShape – disp map – bounding box scale – the default is approx 1.5; up the scale if needed!
- Maya will automatically do this if you click the ‘calculate bounding box scale’ button!
- Maya 2012 or higher – disp shader – attributes – scale (mental ray)
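In script form, the height controls from this lesson look something like the following (the boundingBoxScale attribute names are my guess at what sits behind the ‘bounding box scale’ field):

```python
import maya.cmds as cmds

# Displacement height comes from the file node's Alpha Gain;
# Alpha Is Luminance must be on for it to read a colour map.
cmds.setAttr('file1.alphaIsLuminance', 1)
cmds.setAttr('file1.alphaGain', 2.0)

# Make room for the displaced geometry: the default bounding box scale is
# roughly 1.5, so raise it if the displacement gets clipped at render time.
# ('boundingBoxScale*' is an assumed attribute name -- check the shape node.)
for axis in 'XYZ':
    cmds.setAttr('pCubeShape1.boundingBoxScale' + axis, 3.0)
```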
Lesson 4 – Approximations
In mental ray, the feature displacement settings don’t work (except for the bounding box scale settings), so in MR we use approximations instead.
- window – rendering editors – mental ray – approximation editor
- looking at disp tessellation and subdivisions (Create button)
- displacement tessellation – keeps the shape of the box (or your model) and displaces the map
- subdivisions – rounds out the shape
- Don’t use both!
- If you’re using a disp map on an organic model (e.g. a head), use subdivision approximation to smooth out the sharp edges
Lesson 5 – Approximation Settings
- apply disp. approx – attribute editor
- parametric (approx method) – slow!
- there are different methods and styles available; however, you probably don’t need most of them!
- Spatial – the one and only method recommended!
- adaptive subdivs
- need to define the length (start with 0.1)
- make it lower – more detail e.g. 0.01
- however, this works in tandem with the Max subdivisions – if the max subdivs are reached before the smallest length, it won’t make any difference to the render
- so, if you want more detail, up the max subdivs!
- the ‘view dependent’ checkbox – changes the value of length to mean the size of a rendered pixel. So, if the length is 1.0, then the smallest triangle size would be no bigger than 1 pixel
- Benefits – if your object is far away or the scene is static – so don’t use it for animations!
- ‘sharp’ – this sharpens up the render!
- approx style – ‘fine’ is good for me!
- You can unassign the approx settings to try another version, then reassign to suit as the original is still there.
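A very tentative script version of the approximation setup – the node type and attribute names are assumptions based on what the Approximation Editor creates, so compare against a node made through the editor before trusting this:

```python
import maya.cmds as cmds

# Requires the mental ray (Mayatomr) plug-in to be loaded.
cmds.loadPlugin('Mayatomr', quiet=True)

# Node type and attribute names below are assumptions.
approx = cmds.createNode('mentalrayDisplaceApprox')

cmds.setAttr(approx + '.approxMethod', 2)     # assumed: 2 = spatial
cmds.setAttr(approx + '.length', 0.1)         # start at 0.1; lower = more detail
cmds.setAttr(approx + '.maxSubdivisions', 5)  # raise if 'length' stops mattering
cmds.setAttr(approx + '.viewDependent', 0)    # leave off for animations

# Assigning the node to a mesh is easiest through the Approximation Editor
# itself (window - rendering editors - mental ray - approximation editor).
```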
Lesson 6 – Subdivision Approximations
- If using EXRs, check the Plug-in Manager for TiffFloatReader and OpenEXR
- similar to disp approx
- ** you can plug a disp map into a material as a bump map for finer detail – dial it down though!
Lesson 7 – Avoid Common Issues
- use 32 bit disp maps whenever possible!
- do not rescale geometry (displaced) – you need to rescale the disp map ‘scale’ attribute
- make sure your disp maps are in RGB not grayscale
- if using a linear workflow, make sure it stays in the linear workspace!
- (disp map – linear sRGB)
Lesson 8 – Rendering with 16 bit maps
If you’re using 16 bit maps, you can up the alpha gain, but it bloats the image; fix this by changing the alpha offset to negative one half of the alpha gain. For example:
- AGain = 10, Offset = -5
- AGain = 30, Offset = -15
‘Stair step’ gradations are just a sign of not enough information in the image. So, use 32 bit images!
Lesson 9 – Rendering Vector Displacement Maps
This is specific to Mudbox – see the ‘spiky head’ example on Digital Tutors for more information!
Transparency Maps
You can use these either as transparency maps or as an alpha channel in the color channel (e.g. a TGA file).
- Blinn materials have specular and reflections
- need to use a second map – matte!
- black – no specular or reflection
- plug into specular color – but you get noise! And more spec on the main image!
- you need to tone down the white area of the matte – attributes – color balance – color gain – turn it down!
- still have noise and a faint shadow!
- bump map problems – bump 3D node
- bump filter – turn down to 0
- for the unwanted shadow – select the material – raytrace options – shadow attenuation – 0
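Here’s a Python sketch of the whole transparency/matte fix, with hypothetical file paths and assuming the default blinn1/bump3d1 node names:

```python
import maya.cmds as cmds

# Main colour map, with its alpha channel driving transparency (TGA example).
col = cmds.shadingNode('file', asTexture=True)
cmds.setAttr(col + '.fileTextureName', 'sourceimages/plane_color.tga',
             type='string')
cmds.connectAttr(col + '.outColor', 'blinn1.color', force=True)
cmds.connectAttr(col + '.outTransparency', 'blinn1.transparency', force=True)

# Second map: a black and white matte to kill specular and reflection in
# the transparent areas (black = no specular or reflection).
matte = cmds.shadingNode('file', asTexture=True)
cmds.setAttr(matte + '.fileTextureName', 'sourceimages/plane_matte.tga',
             type='string')
cmds.connectAttr(matte + '.outColor', 'blinn1.specularColor', force=True)

# Tone down the white area of the matte (color balance - color gain).
cmds.setAttr(matte + '.colorGain', 0.2, 0.2, 0.2, type='double3')

# Leftover noise and faint shadow fixes.
cmds.setAttr('bump3d1.bumpFilter', 0)        # attribute name assumed
cmds.setAttr('blinn1.shadowAttenuation', 0)  # raytrace options
```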
Introduction to 3D Camera Tracking in Nuke
Here, again in shorthand, are the notes from the Digital Tutors lessons on 3D tracking in Nuke.
Lesson 2 – Parallax in scene – create first track
The first thing was to load in the files and drop in a 3D camera tracker.
- the camera tracker has a mask and a source pipe – connect the footage to the source
- you can set analysis range if you want to set it to specific frames
- track features button
- solve camera button (this could take some time to go through the frames)
- then create scene button (see picture above) – you get 3 nodes: camera tracker point cloud (data points in 3D – can press viewer to enter 3D space); camera node (output node from tracker); scene node
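For the record, this basic graph can be reproduced in Nuke’s Python API (the CameraTracker node needs NukeX). The footage path is hypothetical, and the button-knob names in the comments are assumptions – I pressed the buttons in the UI:

```python
import nuke

# Hypothetical footage; CameraTracker is NukeX-only.
read = nuke.nodes.Read(file='footage/shot_01.####.dpx', first=1, last=100)

tracker = nuke.nodes.CameraTracker()
tracker.setInput(0, read)  # connect the footage to the Source pipe

# Track Features and Solve Camera are buttons on the node. Pressing them
# from script should work via their button knobs, but the knob names here
# are guesses -- check them with tracker.knobs():
# tracker['trackFeatures'].execute()
# tracker['solveCamera'].execute()
```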
Lesson 3 – viewing track and point quality and rendering point cloud
- you can switch between views by using the 3D and 2D drop down buttons from the top
- red track points are not used, rejected!
- green track points – when you move the mouse over one, the reprojection error should be as low as possible
- if you have the appropriate version of NukeX, change the display of the tracker to ‘point quality’ – it gives a colour-coded version – green – yellow – red (good – bad)
- drop in a scanline render node (3D menu) to render the 3D scene
- has 3 pipes – scene/camera/background
- 2D tab (with scanline selected) shows the tracks over the footage
- ‘o’ turns the overlay on or off
- some track points are not used, so turn on ‘key tracks’ in the camera tracker to see the keys used
- fades out other track points with overlay on
Lesson 4 – setting correct scan size and axis
- drop in a 3D cube to the scene, connect it to the scene
- select scanline render – hit tab key to switch between windows
- zoom out – cube is too large!
- go to camera tracker – scene tab
- uniform scale – change up to approx 10
- we need to set up the axis/axes – select camera tracker
- go into 2D visible scene
- select a frame where you can still see suitable track points
- select (shift) points for the y axis
- RMB – ground plane – set Y
- if you could ‘see the ground’, you could set ground plane
- do the same for the x axis carefully!
- this will reorient your scene
- NB – set the scale first, then your axis before adding 3D objects!
Lesson 5 – adding a nuke 3D object
- camera tracker (2D)
- select a point (green if you have access to point quality settings)
- RMB – create – cylinder
- connect it to scene node
- view in scanline render
- it’s difficult to see a black object!
- so, drop in a checkerboard from the create menu
- connect to cylinder and cube
- select to move (CTRL to rotate)
- tab to switch to 3D view (easier to rotate and move in this view!)
- scale the cylinder and place on the back of the couch
- to switch off the tracker points when viewing – CameraTrackerPointCloud – render – off!
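A scripted version of the cylinder setup, assuming the tracker’s scene node is called Scene1 and using placeholder transform values:

```python
import nuke

# A cylinder textured with a checkerboard -- much easier to see than black.
checker = nuke.nodes.CheckerBoard2()
cylinder = nuke.nodes.Cylinder()
cylinder.setInput(0, checker)

# Wire it into the scene the tracker created ('Scene1' is an assumed name).
scene = nuke.toNode('Scene1')
scene.setInput(scene.inputs(), cylinder)

# Scale it down and place it on the back of the couch (placeholder values).
cylinder['scaling'].setValue([0.3, 0.3, 0.3])
cylinder['translate'].setValue([0.2, 0.5, -1.0])
```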
Lesson 6 – copy transform to place external 3D object
- 3D geometry – readGeo node
- add in an obj file – connect to scene
- now select a point where you want it to go
- RMB – copy translate
- add in a (3D – modify –) transformGeo node between the readGeo and scene nodes
- translate attribute (squiggle button) – copy – paste – OR – paste relative if you don’t have this option!
- so, in the transformGeo node, change the uniform scale to suit the scene
- (copy translate will copy the location of original pivot of the object)
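The same readGeo/transformGeo chain as a Python sketch – the OBJ path, node names and transform values are all placeholders:

```python
import nuke

# Bring in the OBJ (hypothetical path); the class name is 'ReadGeo2'
# in recent Nuke versions.
geo = nuke.nodes.ReadGeo2(file='geo/prop.obj')

# TransformGeo between the geometry and the scene, as in the lesson.
xform = nuke.nodes.TransformGeo()
xform.setInput(0, geo)

scene = nuke.toNode('Scene1')  # assumed name
scene.setInput(scene.inputs(), xform)

# Instead of the RMB copy/paste, set the translate directly once you know
# the track point's position (placeholder values here).
xform['translate'].setValue([1.2, 0.4, -2.6])
xform['scaling'].setValue([10.0, 10.0, 10.0])  # scale to suit the scene
```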
Lesson 7 – tweaking feature and tracker settings to track other shots
In this example, we’re using the candy machine sequence.
- drop in camera tracker node – tracking tab
- turn on ‘preview features’
- you can adjust the f-stop control above the viewer to see them more clearly
- you can double the number of features to 300 to add more points (if you wanted to track the back wall etc.), but it takes twice as long
- try 200 (Nuke needs at least 100/150)
- detection threshold – reduce to 0 (more points)
- feature separation – 20 (spreads points)
- could be good and bad!
- just make sure it’s between 8 and 20
- so, camera tracker tab – track features – solve camera
- [tracking tab – track validation – if you choose ‘none’, it will keep all track info (for difficult tracks), but the default settings should be fine! Threshold – decrease the value for more tracks but poorer quality]
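The tracking-tab tweaks as a script, though I haven’t verified these knob names against the actual node – treat them all as guesses and hover over the fields in the UI to confirm:

```python
import nuke

ct = nuke.toNode('CameraTracker1')

# All knob names below are assumptions -- list the real ones with ct.knobs().
ct['preview_features'].setValue(True)
ct['numfeatures'].setValue(200)         # Nuke needs at least 100-150
ct['detection_threshold'].setValue(0)   # more points
ct['feature_separation'].setValue(20)   # spreads points; keep between 8 and 20
```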
Lesson 8 – using masks to remove non-trackable features
We’re using the red truck sequence here.
- we don’t need to track the sky, it’s wasteful!
- preview features (some on the sky)
- mask port!
- first, you need to go to project settings ‘S’ to set the correct footage size (960 x 540 here)
- drop in rotopaint node ‘P’
- connect mask from tracker to it
- use bezier to roughly draw the shape of the sky (transform tab to move it)
- move mask to fit with moving scene and set keys in the process
- camera tracker – mask – mask alpha to remove unnecessary track points
- lets the other points create a better track! This technique is good for masking out people etc.
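In script, the mask hookup would look roughly like this – the mask input index and the ‘mask’ knob value are assumptions:

```python
import nuke

# Roto shape over the sky, fed into the tracker's Mask pipe so those
# features are ignored and the remaining points make a better track.
roto = nuke.nodes.RotoPaint()
ct = nuke.toNode('CameraTracker1')
ct.setInput(1, roto)  # assumed: input 1 is the Mask pipe -- check the node

# Read the mask's alpha; knob name and value are assumed from the UI dropdown.
ct['mask'].setValue('mask.a')
```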
Lesson 9 – masks to focus on a specific area
- focus track on the truck – rotopaint the truck
- set from global to input (left of timeline) – not sure why though!
- set mask – inverted alpha
- if changing mask shape, turn on/off preview features then turn back on!
- Remember – you’ll need close and far away objects – probably won’t use this often!
Lesson 10 – Increasing DOF blur with the depth generator
- drop in depth generator node – (2 pipes – camera and source)
- you now have access to a depth map in the channel drop down (above viewer)
- can use this for various things (atmospheric effects)
- but we’ll add a z-blur node (ZDefocus)
- connect image pipe to depth generator
- output – focal plane set up
- move focus plane (red – behind camera, blue – in front of camera, green – in focus!)
- increase DOF for more green in focus!
- can then move focus plane again to decide what stays in focus
- change channel back to RGBA
- ZDefocus output – result
- change size to 100 just to see result! (it’s too much)
- swap viewer between depth generator and ZDefocus to see!
- still a few problems though!
- need to tweak the generator!
Lesson 11 – getting smoother depth (depth generator)
- check depth channel is only affecting the RGBA channel in zdefocus
- so, depth generator node:
- frame separation (bidirectional) checks the frame before and after for movement. Are the problem frames caused by too low a value?
- try a value of 5 – blurrier result!
- so, if we’re on frame 6, it samples frames 1, 6 and 11 instead of 5, 6, and 7 from before
- so, a value of between 1 and 5 is best – try 3!
- depth detail – 0.5 = every other frame
- set to 1 for better detail
- if you’re going to render out the depth sequence you can set smoothness to 0 (quicker) and blur it later. But otherwise set it to a value of above 0, e.g. 0.1 – 0.5
- haloing is a problem! – up detail to 0.8, frame separation to 5
- DOF wouldn’t be 100 however! 10 is probably more appropriate!
- occlusion – ‘normal’ is fine
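Putting Lessons 10 and 11 together as a script sketch – DepthGenerator is NukeX-only, the order of its input pipes should be checked on the node, and the tuning knob names are assumed from the UI labels:

```python
import nuke

read = nuke.toNode('Read1')
cam = nuke.toNode('Camera1')  # the solved camera from the tracker

# Footage -> DepthGenerator -> ZDefocus.
depth = nuke.nodes.DepthGenerator()
depth.setInput(0, read)  # source pipe (assumed order -- check the node)
depth.setInput(1, cam)   # camera pipe

zdef = nuke.nodes.ZDefocus2()
zdef.setInput(0, depth)

# Lesson 11 tuning (knob names assumed from the UI labels).
depth['frame_separation'].setValue(3)  # between 1 and 5 is best
depth['depth_detail'].setValue(1.0)    # 1 = sample every frame
depth['smoothness'].setValue(0.1)      # 0 only if you blur the depth later

zdef['size'].setValue(10)              # 100 is far too much
```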
Lesson 12 – converting a 3D position into a 2D position with reconcile 3D
- Using the first scene again – add a flare after the scanline
- preset – glowballs
- add reconcile 3D (transform tools)
- join camera pipe to camera
- if your project settings are correct, you don’t need to plug the img pipe into the image sequence
- camera tracker – RMB over the detector in the corner – copy – translate
- reconcile node – input 3D point ~ – paste relative (!?) – (different from copy paste!)
- create keyframes 0-25
- open flare – open reconcile 3D
- CTRL + click + drag values from reconcile xy output into flare position x and y
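Finally, a sketch of the Reconcile3D setup using an expression link instead of the CTRL-drag – the camera input index and the ‘point’/‘output’ knob names are assumptions:

```python
import nuke

cam = nuke.toNode('Camera1')
rec = nuke.nodes.Reconcile3D()
rec.setInput(1, cam)  # assumed: this is the camera pipe -- check the labels

# Paste or type the tracked point's 3D position (placeholder values);
# 'point' is assumed to be the knob behind 'input 3D point'.
rec['point'].setValue([1.2, 0.4, -2.6])

# Link the flare position to the reconciled 2D output, channel by channel.
flare = nuke.toNode('Flare1')
flare['position'].setExpression('Reconcile3D1.output.x', 0)
flare['position'].setExpression('Reconcile3D1.output.y', 1)
```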