
J.J. McGowan

~ Interactive Media, Modelling, Dynamics, Compositing, 3D Generalist


Week 14, Christmas Break – Displacement Maps, Mash Tests, 3D Tracking in Nuke

Friday 27 Dec 2013

Posted by J.J. McGowan in Uncategorized


Prior to the semester 1 presentation, I had begun a few tests with the Mash audio node, linked to the scale attributes of some basic textures mapped to spheres. Here is a short animatic produced to give a very rough idea of where the project might go, displaying the graphics driven by the audio node. (Unfortunately, there is no sound with this video.)

Mash – A Few Notes on the Audio Node

  • Spectrum mode
  • average node (you can turn off the In Time attribute – this becomes one shape moved by the amplitude)
  • frequency graph – use the graph to isolate a particular frequency
  • use the threshold to set a gate for when you want the signal to trigger
  • Node Editor – you can output the overall volume (out volume) or individual frequency bands (as volumes) to anything, for example the Y position of an object or the emission rate of particles etc. (see the sketch after this list)
  • Advanced options – turn on output frequency attribute
  • (Node editor – in/out connections)
  • To smooth out the jerkiness of the audio, add a spring node and increase the damping and stiffness (0.4?)
  • You can affect the ID channel – for example, changing the colour as the volume increases – I need to follow this up with a test!
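
To make the connection concrete, here is a minimal Maya Python sketch of the idea in the notes above. The node and attribute names (MASH1_Audio, outVolume) are assumptions taken from the notes, so verify them in the Node Editor first.

```python
# Minimal sketch: drive any attribute from the Mash audio node's volume output.
# Node/attribute names (MASH1_Audio, outVolume) are assumptions - verify them
# in the Node Editor before running.
import maya.cmds as cmds

# Overall amplitude -> Y position of a sphere
cmds.connectAttr('MASH1_Audio.outVolume', 'pSphere1.translateY', force=True)
```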

In addition, I had begun some displacement map tests on simple spheres to experiment with the possibility of using textures as the basis for the modal cymatic shapes. The main problem with this technique is the lack of a seamless way to map a sphere, so the displacement is not truly spherical in nature. I may therefore use a combination of projected maps for areas of the structures, projecting onto faceted objects depending on the particular structure, but it seems the central core of the shapes will have to be modelled. Here is a selection of the tests:

[Images: test0, test5]

The above images were created by displacing projected cymatic images on a sphere. The following ones used a series of layered simple planes with different amounts of displacement height on each layer, and two planes reflected on the z axis to create the illusion of symmetry. I also added a black and white matte, plugged in as the specular colour, to remove the extra specular highlight, and turned the colour gain down to black (both also used transparency mapping):

[Images: separated disp planes, mirrored planes displaced]

Modelled Curves Test: This test involved roughly modelling a cymatic shape using CV curves as the basis for the structure, with an existing image as reference. A glow was added for effect. Ideally the shape would pulse in and out with the music and change depending on which frequency triggers a particular structure. The 3D nature of the shape was interpreted from the amount of light on perceived sections of the original image.

Rendering Curves

Here’s a useful reminder of how to do this (a script sketch follows the list):

  • draw the curve
  • rendering menu
  • paint effects
  • curve utilities
  • attach brush to curves – this creates a stroke in the outliner
  • attributes – creates brush
  • brush profile
  • brush width
  • shading – color
  • modify – convert paint effects to polygons (if you want to render in mental ray etc.)
  • NB – watch when converting to polys, you may need to add detail depending on the number of faces created during conversion
  • paint brush profile – width/flatness
  • remember – you can only render brushes in Maya software!
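
As a script sketch of the list above, assuming a curve named curve1 (the stroke name and the conversion arguments are assumptions – the menu items call these MEL commands internally):

```python
# Minimal sketch of the curve-rendering steps above, assuming a curve 'curve1'.
import maya.cmds as cmds
import maya.mel as mel

cmds.select('curve1', replace=True)
mel.eval('AttachBrushToCurves')   # Rendering menu > Paint Effects > Curve Utilities

# Convert the new stroke to polygons for rendering outside Maya Software.
# 'strokeCurve1' and the argument list are assumptions - check the stroke's
# name in the Outliner and doPaintEffectsToPoly's signature first.
cmds.select('strokeCurve1', replace=True)
mel.eval('doPaintEffectsToPoly(1, 0, 1, 1, 100000)')
```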

Displacement Mapping – Digital Tutors

In order to understand displacement maps more fully, I completed a short Digital Tutors course; my shorthand notes are as follows:

Lesson 2 – Setting Up

  • Blinn shader
  • textures tab in the Hypershade – MMB drag in the displacement map
  • bump maps are essentially fake displacement, as they don’t change the geometry
  • RMB drag – connect as displacement map (good for Maya materials)
  • For mental ray – select the shader, follow the output arrow in the attributes – MMB drag the displacement map onto the ‘displacement mat’ name
  • [NB – even though cube_mat is not connected to the displacement map, it still works]
  • Maya Software – to change the accuracy of the disp map: shape attributes (displace_CubeShape) – disp map – Feature Displacement on! – Initial Sample Rate 20? Better!
  • The white areas of the map are pulled out! (See the connection sketch after this list.)
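
To fix the hookup in my head, here is a minimal Python sketch of the same connections; the shading group name blinn1SG is an assumption.

```python
# Minimal sketch: file texture -> displacementShader -> shading group.
import maya.cmds as cmds

disp_file = cmds.shadingNode('file', asTexture=True, name='dispMap')
disp_mat = cmds.shadingNode('displacementShader', asShader=True, name='disp_mat')

# The file's alpha drives the displacement (white areas are pulled out).
cmds.connectAttr(disp_file + '.outAlpha', disp_mat + '.displacement', force=True)

# Wire into the material's shading group ('blinn1SG' is an assumption).
cmds.connectAttr(disp_mat + '.displacement', 'blinn1SG.displacementShader',
                 force=True)
```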

Lesson 3 – Controlling Height

  • Hypershade – inputs – disp map – attributes – color balance – alpha gain (make sure alpha is luminance is on!)
  • Disp bounding box – how far out does the displacement go?
  • select the geometry – disp.cubeShape – disp map – bounding box scale – default is approx 1.5, up the scale if needed!
  • Maya will do this automatically if you click the ‘calculate bounding box scale’ button!
  • Maya 2012 or higher – displacement shader – attributes – scale (mental ray) – see the sketch after this list
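
The same height controls as a sketch (node names are assumptions):

```python
# Minimal sketch of the height controls above; node names are assumptions.
import maya.cmds as cmds

cmds.setAttr('dispMap.alphaIsLuminance', 1)   # alpha is luminance on
cmds.setAttr('dispMap.alphaGain', 2.0)        # raises the displacement height

# If the displacement pokes through the bounding box, scale it up on the shape.
cmds.setAttr('dispCubeShape.boundingBoxScale', 3.0, 3.0, 3.0, type='double3')
```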

Lesson 4 – Approximations

In mental ray, the Feature Displacement settings don’t work (except for the bounding box scale settings), so in MR we use approximations instead.

  • window – rendering editors – mental ray – approximation editor
  • looking at displacement tessellation and subdivisions (create button)
  • displacement tessellation – keeps the shape of the box (or your model) and displaces the map
  • subdivisions – rounds out the shape
  • Don’t use both!
  • If you’re using a disp map on an organic model (e.g. a head), use subdivision approximation to smooth out the sharp edges

Lesson 5 – Approximation Settings

SPATIAL APPROXIMATION SETTINGS

  • apply disp. approx – attribute editor
  • parametric (approx method) – slow!
  • there are different methods and styles available, however, you probably don’t need to use most of them!
  • Spatial – the one and only method recommended!
  • adaptive subdivs
  • need to define the length (start with 0.1)
  • make it lower – more detail e.g. 0.01
  • however, this works in tandem with the Max subdivisions – if the max subdivs are reached before the smallest length, it won’t make any difference to the render
  • so, if you want more detail, up the max subdivs!
  • the ‘view dependent’ checkbox – changes the value of length to mean the size of a rendered pixel. So, if the length is 1.0, then the smallest triangle size would be no bigger than 1 pixel
  • Benefits – useful if your object is far away or the scene is static – so don’t use it for animations!
  • ‘sharp’ – this sharpens up the render!
  • approx style – ‘fine’ is good for me!
  • You can unassign the approx settings to try another version, then reassign to suit as the original is still there.

Lesson 6 – Subdivision Approximations

  • If using EXRs, check the Plug-in Manager for TiffFloatReader and OpenEXR
  • similar to disp approx
  • ** you can plug in a disp map into a material as a bump map for finer detail (displacement as bump) – dial it down though!

Lesson 7 – Avoid Common Issues

  • use 32 bit disp maps whenever possible!
  • do not rescale geometry (displaced) – you need to rescale the disp map ‘scale’ attribute
  • make sure your disp maps are in RGB not grayscale
  • if using a linear workflow, make sure it stays in the linear workspace!
  • (disp map – linear sRGB)

Lesson 8 – Rendering with 16 bit maps

If you’re using 16-bit maps, you can up the alpha gain, but it bloats the model. Compensate by changing the alpha offset to negative one half of the alpha gain. For example (see the sketch after this list):

  • AGain = 10, Offset = -5
  • AGain = 30, Offset = -15
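
In script form the rule is simply offset = -gain / 2 (the node name is an assumption):

```python
# Minimal sketch: compensate a raised alpha gain on a 16-bit map.
import maya.cmds as cmds

gain = 10.0
cmds.setAttr('dispMap.alphaGain', gain)
cmds.setAttr('dispMap.alphaOffset', -gain / 2.0)   # gain 10 -> offset -5
```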

‘Stair step’ gradations are just a sign of not enough information in the image. So, use 32-bit images!

Lesson 9 – Rendering Vector Displacement Maps

This is specific to Mudbox – see the ‘spiky head’ example on digital tutors for more information!

Transparency Maps

You can use these either as transparency maps or as an alpha channel within the colour channel (e.g. a TGA file).

  • Blinn materials have specular and reflections
  • you need to use a second map – a matte!
  • black – no specular or reflection
  • plug it into specular color – but you get noise! And more spec on the main image!
  • you need to tone down the white area of the matte – attributes – color balance – color gain – turn it down!
  • still some noise and a faint shadow!
  • bump map problems – bump 3D node
  • bump filter – turn down to 0
  • for the unwanted shadow – select the material – raytrace options – shadow attenuation – 0 (see the sketch after this list)
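
Here are those three fixes as a sketch (node names are assumptions):

```python
# Minimal sketch of the fixes above; node names are assumptions.
import maya.cmds as cmds

cmds.setAttr('bump3d1.bumpFilter', 0)         # kills the bump noise
cmds.setAttr('blinn1.shadowAttenuation', 0)   # removes the faint shadow

# Tone down the matte's whites before they reach the specular colour.
cmds.setAttr('matteFile.colorGain', 0.2, 0.2, 0.2, type='double3')
```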

Introduction to 3D Camera Tracking in Nuke

Here, again in shorthand, are my notes from the Digital Tutors lessons on 3D tracking in Nuke.

Lesson 2 – Parallax in scene – create 1st track

The first thing was to load in the files and drop in a 3D camera tracker.

[Image: add tracker and create scene]

  • the camera tracker has a mask and a source pipe – connect the footage to the source
  • you can set analysis range if you want to set it to specific frames
  • track features button
  • solve camera button (this could take some time to go through the frames)
  • then the create scene button (see picture above) – you get 3 nodes: the camera tracker point cloud (data points in 3D – you can view them in 3D space); the camera node (output from the tracker); and the scene node (see the sketch after this list)
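
Here is the same setup as a minimal Nuke Python sketch (CameraTracker requires NukeX; the footage path is a placeholder, and the button presses are normally done from the properties panel):

```python
# Minimal sketch of the tracking setup above (NukeX only).
import nuke

read = nuke.createNode('Read', inpanel=False)
read['file'].setValue('footage.####.dpx')     # placeholder path

tracker = nuke.createNode('CameraTracker', inpanel=False)
tracker.setInput(0, read)   # connect the footage to the source pipe

# 'Track Features', 'Solve Camera' and 'Create Scene' are then run from the
# node's properties panel, producing the point cloud, Camera and Scene nodes.
```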

Lesson 3 – viewing track and point quality and rendering point cloud

  • you can switch between views by using the 3D and 2D drop down buttons from the top
  • red track points are not used, rejected!
  • green track points – when you move the mouse over one, the reprojection error should be as low as possible

[Image: point cloud in 3D]

  • if you have the appropriate version of Nuke X, change the display of tracker to ‘point quality’ – it gives a colour coded version – green – yellow – red (good – bad)
  • drop in a scanline render node (3D menu) to render the 3D scene

[Image: scanline render node]

  • has 3 pipes – scene/camera/background
  • 2D tab (with scanline selected) shows the tracks over the footage
  • ‘o’ turns the overlay on or off
  • some track points are not used, so turn on ‘key tracks’ in the camera tracker to see the keys used
  • fades out other track points with overlay on

Lesson 4 – setting correct scan size and axis

[Image: set axis]

  • drop in a 3D cube to the scene, connect it to the scene
  • select scanline render – hit tab key to switch between windows
  • zoom out – cube is too large!
  • go to camera tracker – scene tab
  • uniform scale – change up to approx 10
  • we need to set up the axis/axes – select camera tracker
  • go into 2D visible scene
  • select a frame where you can still see suitable track points
  • select (shift) points for the y axis
  • RMB – ground plane – set Y
  • if you could ‘see the ground’, you could set ground plane
  • do the same for the x axis carefully!
  • this will reorient your scene
  • NB – set the scale first, then your axis before adding 3D objects!

[Image: rescale and set axis]

Lesson 5 – adding a nuke 3D object

  • camera tracker (2D)
  • select a point (green if you have access to point quality settings)
  • RMB – create – cylinder
  • connect it to scene node
  • view in scanline render
  • it’s difficult to see a black object!
  • so, drop in a checkerboard from the create menu
  • connect to cylinder and cube
  • select to move (CTRL to rotate)
  • tab to switch to 3D view (easier to rotate and move in this view!)
  • scale the cylinder and place on the back of the couch
  • to switch off the tracker points when viewing – cameraTrackerPointCloud – render – off!

[Image: add nuke 3D object, point cloud render off]

Lesson 6 – copy transform to place external 3D object

  • 3D geometry – readGeo node
  • add in an obj file – connect to scene
  • now select a point where you want it to go
  • RMB – copy translate
  • add a transformGeo node (3D – modify) between the readGeo and scene nodes
  • translate attribute (squiggle button) – copy – paste – OR – paste relative if you don’t have this option!
  • so, transform geo change uniform scale to suit the scene
  • (copy translate will copy the location of the original pivot of the object – see the sketch after this list)
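
And as a sketch (the obj path is a placeholder; ReadGeo2 is the class name in the Nuke versions I have used):

```python
# Minimal sketch of the readGeo -> transformGeo -> scene chain above.
import nuke

geo = nuke.createNode('ReadGeo2', inpanel=False)
geo['file'].setValue('asset.obj')             # placeholder path

xform = nuke.createNode('TransformGeo', inpanel=False)
xform.setInput(0, geo)
xform['uniform_scale'].setValue(0.5)          # scale to suit the scene

# Paste the copied translate onto xform's translate knob, then wire the
# TransformGeo into the existing Scene node.
```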

[Image: transform geo]

Lesson 7 – tweaking feature and tracker settings to track other shots

In this example, we’re using the candy machine sequence.

  • drop in camera tracker node – tracking tab
  • turn on ‘preview features’
  • you can turn up the f-stop above the viewer to see them more clearly
  • you can double the number of features to 300 to add more points (if you wanted to track the back wall etc.), but it takes twice as long
  • try 200 (Nuke needs at least 100/150)
  • detection threshold – reduce to 0 (more points)
  • feature separation – 20 (spreads points)
  • could be good and bad!
  • just make sure it’s between 8 and 20
  • so, camera tracker tab – track features – solve camera
  • [tracking tab – track validation – if you choose ‘none’ it will keep all track info, but default settings should be fine! (for difficult tracks). Threshold – decrease value, more tracks but poorer quality]

Lesson 8 – using masks to remove non-trackable features

We’re using the red truck sequence here.

  • we don’t need to track the sky, it’s wasteful!
  • preview features (some on the sky)
  • mask port!
  • first, you need to go to project settings ‘S’ to set the correct footage size (960 x 540 here)
  • drop in rotopaint node ‘P’
  • connect mask from tracker to it
  • use bezier to roughly draw the shape of the sky (transform tab to move it)
  • move mask to fit with moving scene and set keys in the process
  • camera tracker – mask – mask alpha to remove unnecessary track points
  • lets the other points create a better track! This technique is good for masking out people etc.

Lesson 9 – masks to focus on a specific area

  • focus track on the truck – rotopaint the truck
  • set from global to input (left of timeline) – not sure why though!
  • set mask – inverted alpha
  • if changing mask shape, turn on/off preview features then turn back on!
  • Remember – you’ll need close and far away objects – probably won’t use this often!

Lesson 10 – Increasing DOF blur with the depth generator

[Image: DOF with depth generator]

  • drop in depth generator node – (2 pipes – camera and source)
  • you now have access to a depth map in the channel drop down (above viewer)
  • can use this for various things (atmospheric effects)
  • but we’ll add a z-blur node (ZDefocus)
  • connect image pipe to depth generator
  • output – focal plane set up
  • move focus plane (red – behind camera, blue – in front of camera, green – in focus!)
  • increase DOF for more green in focus!
  • can then move focus plane again to decide what stays in focus
  • change channel back to RGBA
  • ZDefocus output – result
  • change size to 100 just to see result! (it’s too much)
  • swap viewer between depth generator and ZDefocus to see!
  • still a few problems though!
  • need to tweak the generator!
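
As a rough Nuke Python sketch of this lesson’s setup (DepthGenerator is NukeX-only; the node class names and knob values here are assumptions read off the UI):

```python
# Minimal sketch of the depth-blur workflow above.
import nuke

depth = nuke.createNode('DepthGenerator', inpanel=False)
# depth.setInput(0, footage); depth.setInput(1, camera)  # source + camera pipes

zdef = nuke.createNode('ZDefocus2', inpanel=False)
zdef.setInput(0, depth)
zdef['output'].setValue('focal plane setup')   # red/blue/green focus preview
zdef['size'].setValue(10)                      # ~10 reads far better than 100
# zdef['output'].setValue('result')            # switch back for the final blur
```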

Lesson 11 – getting smoother depth (depth generator)

[Image: depth detail better]

  • check depth channel is only affecting the RGBA channel in zdefocus
  • so, depth generator node:
  • frame separation (bidirectional) checks the frames before and after for movement – problem frames may be caused by too low a value
  • try a value of 5 – blurrier result!
  • so, if we’re on frame 6, it samples frames 1, 6 and 11 instead of 5, 6, and 7 from before
  • so, a value of between 1 and 5 is best – try 3!
  • depth detail – 0.5 every other frame
  • set to 1 for better detail
  • if you’re going to render out the depth sequence you can set smoothness to 0 (quicker) and blur it later. But otherwise set it to a value of above 0, e.g. 0.1 – 0.5
  • haloing is a problem! – up detail to 0.8, frame separation to 5
  • DOF wouldn’t be 100 however! 10 is probably more appropriate!
  • occlusion – ‘normal’ is fine

Lesson 12 – converting a 3D position into a 2D position with reconcile 3D

[Image: reconcile 3D create keyframes]

  • Using the first scene again – add a flare after the scanline
  • preset – glowballs
  • add reconcile 3D (transform tools)
  • join camera pipe to camera
  • if your project settings are correct, you don’t need to plug in IMG to the image sequence
  • camera tracker – RMB over the detector in the corner – copy – translate
  • reconcile node – input 3D point ~ – paste relative (!?) – (different from copy paste!)
  • create keyframes 0-25
  • open flare – open reconcile 3D
  • CTRL + click + drag values from reconcile xy output into flare position x and y
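
The ctrl-drag step can also be done with expression links; a minimal sketch, where the knob names (‘output’ on the Reconcile3D, ‘position’ on the Flare) are assumptions from the UI:

```python
# Minimal sketch: drive the flare position from the reconciled 2D point.
import nuke

flare = nuke.toNode('Flare1')   # the flare added after the ScanlineRender

flare['position'].setExpression('Reconcile3D1.output.x', 0)
flare['position'].setExpression('Reconcile3D1.output.y', 1)
```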

[Image: reconcile 3D copy position data]

Week 13 – Assignment Hand-Ins, Mash Introduction, Render and Nuke Problems

Wednesday 04 Dec 2013

Posted by J.J. McGowan in Uncategorized


This week was focussed on handing in the final research poster, the modelling assignments and the finished blog to date:

The Finalised Research Poster

[Image: John McGowan Research Poster]

I made only a couple of changes to the last version, mainly aesthetic, and removed any hyphenated phrases spread over two lines as this looked unprofessional. The presentation will take place tomorrow, Wednesday the 4th of December.

Polygonal Modelling Assignment

The boot itself was finished a couple of weeks ago, but here are some of the main steps in the post-modelling stage:

UVLayout

The first step was exporting the various segments of the model into UVLayout. The image below shows the main upper of the boot with a preset checkered pattern: red sections indicate UVs that are slightly stretched and blue sections where they are compressed – at an acceptable level!

[Image: UVs]

Following that, the .obj files were imported back into Maya, where I took a snapshot of the UVs before using that template in Photoshop to arrange the textures. I took a variety of photos of the boot itself, layered the textures, and used painting and opacity techniques to hide the seams and dull down some of the highlights, as it was a particularly shiny leather/PVC texture. The stitching and the zip were the most awkward parts – I had to use a variety of options, including warping the textures and moving the UVs, to fit as accurately as I could.

[Image: outUV_Upper]

A similar approach was used for the bump and specular maps for the boot – I desaturated the images, used levels to darken and lighten the blacks and whites, and painted sections of the bump in or out as required. Here is the bump map for the sole of the boot:

[Image: sole_UV_bump]

I used a Blinn shader for the upper sections of the boot, for access to a specular map, while the sole remained a Lambert shader.

[Image: materials]

Once textured, I set up the lights and materials for a linear workflow. The image below shows the key light with the override on the color channel:

[Image: linear lights]

After copying the original boot to make a pair, adjusting the laces on the copied boot so the effect of gravity seemed more realistic, and setting up appropriate lighting and cameras, the next stage involved rendering out the essential passes for compositing in Nuke.

[Image: boots]

Introduction to Mash

In addition to the assignments due this week, I began looking at a Maya plug-in that may be useful for my visualisation research. It is a procedural animation toolkit offering a selection of effector nodes that can be daisy-chained together to generate a wide variety of customisable effects, and it is fully controllable from both Maya’s Attribute and Node Editors. Crucially, it has an audio node that allows preset models or other nodes to be controlled in sync with a soundtrack.

[Image: Mash Audio Node]

This image shows how an array of objects can be created from a single object (in this case a simple sphere) using the distribution node; then, by adding an audio node, you can assign individual EQ output nodes to any other node in the scene. So, by focussing on a certain frequency, you can affect a node entirely outside of Mash or another one within it, which opens up a number of possibilities. In the screenshot below, one of the EQ outputs is connected to the scale attributes of a simple cube, the effect being that the cube pulses in and out in all directions in perfect time with one frequency on the audio spectrum. More tests to follow.

[Image: Mash Audio Node2]

Compositing the Polygonal Renders

Problems:

I understand the idea of render layers and render passes in Maya. However, despite my repeated attempts (including a variety of Digital Tutors tutorials) to render out isolated shadows in an appropriate and usable manner for Nuke, it still eludes me. I have rendered out the shadows as a pass, as seen in the alpha channel, but the entire shoe has a variety of shadows all over it which I don’t want; I just want a shadow that will merge under the diffuse passes. I tried an alternative, whereby I added a background shader to the boots on one render layer, and it did indeed render out just the shadows under the boots. Again, however, I could not find a suitable way to merge the shadows in. I inverted the shadow and added a grade node, but a subtle white outline still remained around the shoe as a result.

[Image: shadow problem]

Perhaps using the unpremult and multiply nodes would have rectified this. Overall, though, I know that the workflow is not correct, even if it does produce an image that is reasonably good. My knowledge of both rendering and compositing needs additional input, from both personal study and from the university. I feel that the compositing lectures we had were delivered too quickly to take notes, understand what was going on, and try it myself at the same time without missing vital elements, which I feel is what has happened (for example, how to render and use matte and depth passes and so on). For a couple of the lectures we have been sent follow-up notes, but these are high level and don’t deal with the process step by step. Access to a project from beginning to end – what passes are rendered in Maya and how, through to the node tree used in Nuke (including what gets shuffled out etc.) – that we could study and understand would be appreciated. So, unfortunately, I feel frustrated by the lack of student/teacher time in this area in the first semester, especially as we will be heading into ‘going live’ with all 11 students having a different idea of how to render and composite. Having spoken to my fellow students, they feel much the same: there needs to be some consistency and a specific framework for us all to follow. Hopefully semester 2 will rectify this.
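
For the record, here is roughly what I think the unpremult/multiply approach would look like as a Nuke sketch; the node names and pass layout are assumptions, not a confirmed workflow.

```python
# Minimal sketch: multiply a shadow-only pass under the beauty render,
# avoiding the white fringe that a straight 'over' leaves behind.
import nuke

beauty = nuke.toNode('Read_beauty')   # assumed beauty/diffuse read
shadow = nuke.toNode('Read_shadow')   # assumed useBackground shadow pass

unpremult = nuke.createNode('Unpremult', inpanel=False)
unpremult.setInput(0, beauty)

merge = nuke.createNode('Merge2', inpanel=False)
merge['operation'].setValue('multiply')
merge.setInput(0, unpremult)   # B: the unpremultiplied beauty
merge.setInput(1, shadow)      # A: the shadow pass

premult = nuke.createNode('Premult', inpanel=False)
premult.setInput(0, merge)     # re-premultiply after the correction
```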

So, the issues raised above notwithstanding, here is the final workflow I used in Nuke. Despite my attempts to render a depth pass, I could not find the appropriate rendered image with which to drive a ZDefocus node, and I felt that a lens distortion node was unnecessary:

[Image: Nuke boot comp front]

Finally, here are the composited renders of the boots in PNG format, along with the original reference images and the occlusion pass:

[Images: boots_back_view, boots_front_view]

[Images: boot_inside, boot_outside copy]

[Image: JJMcG_Boots_Occlusion_Pass]

Additionally, for ease of access, here is the NURBS render again alongside the original reference image and Maya screenshot:

[Image: Comp_Render]

[Image: NURBS Room Shot small]

[Image: NURBS Room v3]

And a link to the NURBS camera movement again: [video]
