Monday 11th November

This morning saw Jin deliver a lecture on film compositing. He described VFX as a combination of science and art, with 4 main technologies:

  1. Motion graphics – design based; for example, title sequences, editing, music etc…
  2. 3D animation
  3. Special Effects – practical effects (make up, fire, smoke, rain etc)
  4. VFX – something that can’t be accomplished during shooting; more cost effective; there may be danger involved in shooting the real shot, etc…

The goal of VFX is realism – VFX are not design! Keep them as natural as possible! Recommended books include:

  • The Art and Science of Digital Compositing
  • Digital Compositing for Film and Video

Jin described how at MPC in London, the FX team focuses on 2D/3D animation, while the VFX pipeline covers the whole workflow. Some of the software and file formats they use include:

  • 64 bit Linux; 10 bit DPX; 16 or 32 bit EXR
  • Maya, Scanline Flowline (dynamics), XSI, Mari, ZBrush, Nuke and RenderMan.

When it comes to colour in compositing, don't always believe your eyes! You need to use tools to analyse the data – for example, an 8 bit monitor won't show all the colours of a 16 bit image!

  • 8 bit – RGB – 256 steps per channel
  • 16 bit – 65,536 steps per channel
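The bit-depth difference can be sketched with a quick quantisation check – this is my own illustration, not from the lecture:

```python
# Quantisation sketch: how many distinct levels survive when a 16 bit
# value is shown on an 8 bit monitor.

def quantize(value, bits):
    """Snap a normalised value (0.0-1.0) to the nearest step of a
    `bits`-deep integer encoding."""
    steps = (1 << bits) - 1          # 255 for 8 bit, 65535 for 16 bit
    return round(value * steps) / steps

# Two 16 bit values one step apart...
a = 32768 / 65535
b = 32769 / 65535

# ...are distinct at 16 bit but collapse to the same value at 8 bit -
# the monitor literally cannot show the difference.
print(quantize(a, 16) != quantize(b, 16))   # True
print(quantize(a, 8) == quantize(b, 8))     # True
```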

Importantly, values can go above 1 or below 0 – for example, in HDRIs or EXRs.

The Alpha Channel – Channel A – Alpha/Matte/Mask: it all means the same thing. A matte is an 8 bit grayscale image where white = 1 and black = 0. So when roto shapes are used to block out parts of an image, they are 8 bit grayscale images (values 0–1). The key thing with roto shapes is their consistency – I'll look at rotoscoping later!

Image aspect ratios – HD – square pixels; NTSC/PAL – not square pixels!

Colour Space – sRGB/Linear – sRGB is designed to match home/office viewing conditions, with a gamma of 2.2 applied to digital images. You need to convert DPX footage to linear to work on it, then convert it back again later.

Passes – the master beauty pass is always used as a reference pass!

The equation for the over state in compositing – Over = (A x M) + [(1 – M) x B], where A is the foreground, B the background and M the matte.
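Here's a minimal sketch of that equation in code (my own illustration, working on single-channel values):

```python
# Over = (A x M) + [(1 - M) x B]
# A = foreground value, B = background value, M = matte (0 = black, 1 = white).

def over(a, m, b):
    return a * m + (1.0 - m) * b

print(over(1.0, 1.0, 0.2))  # matte fully on  -> foreground: 1.0
print(over(1.0, 0.0, 0.2))  # matte fully off -> background: 0.2
print(over(1.0, 0.5, 0.2))  # half matte      -> blend: 0.6
```

Run per channel over every pixel, this is what a merge (over) is doing.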

Jin showed us some of his work pre and post production; sadly, due to its confidential nature, I can't post it up on the blog. Hopefully he will post some notes to the VLE soon to fill in my gaps!

Tuesday 12th November

Today saw Axis give a short talk, mainly aimed at the 4th year undergraduates, who are being encouraged to take part in a mentoring scheme lasting around 5 months, 1 day a week. We had not been informed of this at all; however, as we may be linking up with Axis during the going-live project, we may yet have some kind of opportunity to work with them in a small but not insignificant way. As good as the mentorship scheme would be, I don't have a suitable reel of shots to use anyway, as I feel the work I did previously is of a lower quality than what I'm (hopefully) producing during this course.

Elsewhere this week, I have been continuing with the lessons on Nuke.

Creating a Basic Track

To begin with, close all the current properties panels to keep things tidy. Parallax – this term refers to the fact that objects nearer to a moving camera appear to move more than objects further away, due to the nature of perspective. In this case, we want to track something close to the robot in the near view, as it's still hovering above the ground when it stops moving.
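As a rough illustration of parallax (my own sketch with made-up numbers – a simple pinhole model, nothing from the lesson):

```python
# For a sideways camera move, the apparent screen shift of a point is
# inversely proportional to its depth - near things slide further.
# `focal` and the depths below are invented for illustration.

def screen_shift(camera_move, depth, focal=50.0):
    return focal * camera_move / depth

near = screen_shift(1.0, depth=2.0)    # e.g. dirt near the robot
far = screen_shift(1.0, depth=20.0)    # e.g. distant background
print(near, far)                       # 25.0 2.5 - ten times the motion
```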

  • we’re going to use a little bit of dirt on the ground right next to the robot as our tracking point
  • Zoom in and centre it
  • Open a tracker node (only 1 pipe)
  • Drop it onto the read node with the background image
  • Add a track (in properties)
  • In the main track crosshair, pull on to the dirt piece
  • There are 2 areas here – the outer search area (don't make it too big – you can drag it in and out; make it smaller) and the inner pattern area, which can be made smaller too
  • There’s now a new series of tools above the viewer, click ‘track to end’
  • The result is a bit jittery – we want to smooth it out
  • Select the transform tab (properties) – 3 options – T (transform), R (rotate), S (scale)
  • Try 1 for the transform to smooth it out, maybe 2, but not any more than that!
  • The red line shows the original track, the grey line shows the smoothed one.
  • We haven’t added a transform node yet to move the robot along with the tracker – that’s next!
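Conceptually, that smoothing is just averaging each tracked position with its neighbours. A rough sketch of the idea (my own, not Nuke's actual filter):

```python
# Moving-average smoothing of a jittery 1D track. A width of 1 or 2
# softens the jitter; much bigger windows start to drag the track off
# its real path, which is why the notes say not to go above 2.

def smooth(track, width=1):
    """Average each sample with `width` neighbours on either side."""
    out = []
    for i in range(len(track)):
        lo = max(0, i - width)
        hi = min(len(track), i + width + 1)
        window = track[lo:hi]
        out.append(sum(window) / len(window))
    return out

jittery = [10.0, 10.4, 9.8, 10.3, 9.9, 10.2]   # made-up x positions
print(smooth(jittery, width=1))                # visibly less jumpy
```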

How do we move images in our frame?

Add a transform node by using the double headed arrow in the toolbar (top icon), or press ‘T’ on the keyboard.

  • Need to put this after merge 3 node (with all the robot imagery) and before the background.
  • To reorganise if lacking space – to select everything upstream (above the node you want to move), hold CTRL + click the appropriate node – everything upstream will be highlighted! Move it to the right.
  • Drop the transform node onto the A pipe between nodes and it will slot in
  • Transform node – zero everything out (except scale – 1)
  • To apply tracking data, right click over the curve line to the right of translate – link to – tracker 1 – translate 



In this example scene, we want to desaturate the red truck behind the robot. You may want to hide the robot node – ‘D’.

  • Zoom in to it on frame 1
  • Go to the draw tool – roto node (‘O’ on the keyboard) – don’t need to drop it in quite yet
  • To the left of the viewer you have additional tools relating to the roto node – selection/points/curve type
  • Choose bezier here, most control
  • We want a closed shape – bezier – click drag a point to change the curve tangent
  • CTRL click only moves 1 handle
  • Move to the end frame 70 (little blue markers indicate a key frame)
  • Select all – draw a box around the roto
  • On frame 70 – LMB into position
  • Zoom in on the bits that don’t match and move into position
  • Play it and check it!

Colour Correction

  • First – drop a colour correction node on the background  B pipe before the merge 1 node
  • Properties – Master (just now)
  • Saturation – desaturate, e.g. 0.15 – but, the whole scene goes black and white! We haven’t used the roto yet!
  • We need to use the Mask port (both color correct and roto have one!)
  • Grab mask port on the right from the color correction node onto the roto node, so roto is pointing to CC
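The effect of the masked colour correction can be sketched like this (my own illustration – Rec. 709 luma weights and made-up pixel values):

```python
# Desaturate a pixel towards its luminance, but gate the effect by the
# roto matte: mask = 1 inside the shape (corrected), 0 outside (untouched).

def desaturate(rgb, saturation, mask):
    r, g, b = rgb
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    corrected = tuple(luma + (c - luma) * saturation for c in rgb)
    return tuple(o * (1 - mask) + c * mask for o, c in zip(rgb, corrected))

red_truck = (0.8, 0.1, 0.1)                    # invented pixel value
print(desaturate(red_truck, 0.15, mask=1.0))   # inside the roto: nearly grey
print(desaturate(red_truck, 0.15, mask=0.0))   # outside: unchanged
```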

Now, we’re going to Color Correct the robot!

  • the black areas are too dark
  • there is a better node for this – Grade
  • If you have the robot selected, it automatically drops in place. If not, drop it after the read node, before the merge
  • we’re going to use the Lift attribute
  • select the colour picker on the right, then select a black section from the rest of the scene (not the robot), hold CTRL and drag over a section
  • the whole scene gets lighter because we haven’t masked off the robot layer yet (which has a black background)
  • UNDO!
  • use the robot alpha – properties
  • mask – rgba.alpha
  • redo sample lift
  • turn on/off grade node to see the difference
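My understanding of what the lift control is doing, as a sketch (the standard lift formula – Nuke's Grade node combines this with gain, gamma and so on):

```python
# Lift raises the value at black while leaving white pinned at 1:
#   out = lift + in * (1 - lift)
# Sampling a black patch of the background gives the lift target, so the
# robot's too-dark shadows come up to match the plate.

def apply_lift(value, lift):
    return lift + value * (1.0 - lift)

sampled_black = 0.06                   # made-up sampled value
print(apply_lift(0.0, sampled_black))  # pure black -> 0.06
print(apply_lift(1.0, sampled_black))  # white stays 1.0
```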

mask port roto to CC

Also, sample the ‘gain’ white colour in the background, as it’s a bit ‘yellower’ than the robot! A very subtle difference!


In other news, I downloaded UVLayout and installed it. Here are some useful shortcuts and notes concerning its use:

Selecting and Cutting

  • There are 3 views – UV view, Edit view and 3D view. You can use 1, 2 and 3 on the keyboard to select them.
  • T – cycle between textures
  • C – cutting tool – hold down C and move the mouse over the edges you want to cut, press enter when you selected the loop you want to separate
  • Space + MMB to separate and move the sections out of the way
  • Shift + S is the split tool – it will open up a seam. For example, you might just want to split an inside leg rather than detach it completely
  • Backspace – this will ‘rub out’/delete the selected edges (or CTRL + Z)
  • W – unselects cut edges too!


Flattening

  • D key – drop the edit version into UV space
  • F – flatten as long as you hold it
  • Run optimise – running it for 1 minute optimises the flattening of the UVs
  • Coloured textures represent distortion – blue is compressed, red for stretched

Split and Stretch

  • Shift + S (cuts open seam)
  • Shift + F (space bar to stop) stretches out and then flattens – this may have to be done a number of times depending on the complexity of the UVs

Welding Shells

  • W key – select edges, it doesn’t join them, just selects
  • Move the related edges into position and press Enter to join them
  • If anything is stretching, press F again (wait till the flatten value gets below 1)

Texture Maps

  • T with ‘-‘ or ‘=’ can scale the textures up or down to suit
  • ‘About UV Layout’ button – preferences – Map – the white arrow to the right will let you load in your own image to view on the model
  • ‘Show’ button to check it, then select
  • To reset, click the Map button
  • ‘Trace’ button – this will let you place your texture behind the UV’s in UV space
  • To make it or the UV’s more transparent, use the adjoining sliders

Tweaking UV’s (if you have overlapping UV’s etc)

  1. CTRL + MMB – move vertex
  2. CTRL + Shift + MMB – move group of vertices (4 adjacent)
  3. Shift + MMB – moves all within a radius – you can change the radius by using ‘-‘ or ‘=’
  4. Home key + LMB – when you point at an area this zooms to that area to rotate around it. Point at nothing + Home button – recentres the rotate pivot
  5. Hide – H + LMB, hides all selected
  6. H + RMB, hides all but selected
  7. H + U – reveals all again!

Pinning boundaries

  • The p key!
  • If you select 2 separate points on a boundary, press P twice (or Shift + P) and it will select the points in between. Remember to reflatten!
  • For points in the middle, Shift + p in a blank part, then drag over points you want pinned
  • Or – G key to select the faces, then hit the G key again over a blank part, Shift + P to pin
  • Shift + p and RMB drag to unpin

UV’s – The Basic Workflow

I had been going through the tutorials for UVLayout, and realised that I needed to clear up exactly how I was going to link it up with Maya to apply the flattened UVs and so on. So, here is the general workflow just to remind myself:

  1. Select the polys that you intend to flatten
  2. Export them as an obj file (nothing switched on in the options)
  3. Flatten the UVs in UVLayout
  4. Save them again as an obj
  5. Import back into a Maya scene
  6. Open up the UV editor and take a snapshot (2K minimum)
  7. Create the texture in photoshop
  8. Load into a shader and apply to the imported mesh

Nuke – RotoPaint Node to Clean Footage Errors

So, back to the Nuke lessons – say you want to blur something out, for example, the big P in the background – it could be that you have a logo in the scene which you’re not allowed to use or something…

  • Add a rotopaint node, don’t connect it yet
  • can use the blur tool (left of the screen)
  • Plug in the node before the Color Correction (doesn’t really matter) and after the background read node
  • NB – don’t paint anywhere unless you start on frame 1 (in this case anyway)
  • Paint out the ‘P’ on frame 1 and blur it out (a blue mark indicates a key frame)
  • However, need it to last until the last frame
  • Rotopaint properties – lifetime tab – all frames
  • At frame 70 select transform tab
  • select all (top left of screen tools) – click and drag over ‘P’ to set your new key frame

roto paint node

Editing a difficult track for a gun flare

  • Start on frame 20. Add tracker node
  • You want to base it on the robot and the heat pass combined – we could use the merge 3 node, but there is also a transform node moving both. So, plug the tracker into the transform node (below)
  • Add track from the properties, centre it over the heat pass, make the search area a bit bigger as it bounces about a little bit
  • Click ‘track to end’ (just above screen – looks like a play button)
  • Check to see if it has moved off centre, if so, go to the frame it goes off, click the clear fwd button which deletes the keys after
  • Move tracker back to the centre and track fwd again. Alternatively, if this doesn’t work, track fwd 1 frame at a time
  • Instead of smoothing it this time, if you want to move one point, make sure the track is selected, zoom in and move it at the appropriate frame.
  • Go back to frame 20
  • We want to track back to frame 16
  • Hit the track back button 1 frame at a time – edit as necessary

editing a track

Creating a Flare Node for the Gun

  • Add a flare node – drop it in after the transform node
  • Properties – position – curved line – link to – tracker 2 – track 1
  • Presets – click rings/lgRainbow then glowballs/bright – this will combine the 2 presets
  • You can edit the position etc by going to the multi tab
  • Change angle
  • Flare tab – radius/anamorph/chroma spread, brightness etc. Inner falloff – 0. Outer falloff – (see photo below)

first flare

  • Now we’ll add a second flare (by selecting the 1st + enter to link to it)
  • link to tracker 2 – track 1 again
  • This time just choose glowballs – bright
  • Turn the radius down so it fits in the gun
  • Next – need to hide the flare up to the frame that we need it!

second flare

Friday 15th November – Physically correct lighting and linear workflow

Kieran Baxter delivered part 2 of his lecture on rendering/lighting/compositing today. The example scene was a selection of simple tori with different shaders applied, sitting on a plane. There were just 2 point lights in the scene. So, to set the scene up correctly:

  • Turn on final gather (FG) – indirect lighting tab
  • Set secondary diffuse bounces to 2
  • We need to check the scale units in the scene for the lighting – change units to metres, where 1 unit = 1m
  • Use the measure tool to check the height of the room – use the snap tool (‘V’). It’s around 3.05m
  • Delete the locators once finished checking
  • Point 1 light – decay rate – quadratic (this uses the inverse square law)
  • Point light 2 – same as above!
  • The scene will go dark here, so we need to up the intensity of the lights
  • FG – quality – filter – 2
  • Gamma correction – sRGB 2.2 gamma to make the curve linear (corrects the original curve)
  • [Color management – common render settings]
  • Enable color management
  • Linear sRGB (input)
  • sRGB output – so we can see it on the monitor properly, even though we’re working in linear
  • The render looks bleached out! This is because all object values have already been gamma corrected themselves! We need to check all inputs!
  • Floor image – color profile – sRGB
  • Phong3 (for the blue tori) – the colour history saves the original colour in the little boxes marked with the ‘x’. This saves them for reselection at a later time – very useful!
  • Utilities – Gamma correct – reselect the old colour!
  • Type in 0.45 for all 3 values
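The 0.45 makes sense once you notice it's roughly 1/2.2 – the inverse of display gamma. A simplified sketch of the linear workflow maths (real sRGB has a small linear toe near black; this is the pure power-law approximation):

```python
# Colours picked on an sRGB display must be converted to linear light
# before rendering, then converted back for viewing.

GAMMA = 2.2                 # display gamma; 1 / 2.2 ~= 0.45 as in the notes

def srgb_to_linear(v):
    return v ** GAMMA

def linear_to_srgb(v):
    return v ** (1.0 / GAMMA)

swatch = 0.5                          # a mid-grey picked in the colour swatch
lin = srgb_to_linear(swatch)
print(round(lin, 3))                  # ~0.218 - much darker in linear light
print(round(linear_to_srgb(lin), 3))  # 0.5 - round-trips back for display
```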

Now to create new render layers for rendering! Make 3 new layers aside from the master layer: (the following notes will be updated shortly!)

  1. One with all geometry and camera but no lights – utility
  2. One with each light and all geometry
  3. Same as above
  • Remove the floor from the utility layer
  • Create shaders for the floor (black matte)
  • Create shaders for red, green and blue (our colours)
  • Assign black to the floor
  • Assign other colours to different torus colours – this will only apply to the current render layer!
  • Set up passes – camera depth, diffuse, specular, indirect, reflection
  • Turn on the master layer as well to render
  • Before rendering change output profile back to linear sRGB (we’ll correct it in compositing)
  • Final output – RMB over the file name prefix can offer different options!
  • Image format – because it’s linear – open EXR format
  • Double click on diffuse pass – 16 bit float (buffer type [frame buffer type] – fine!)
  • Depth is 32 bit float default
  • Quality tab – bottom – frame buffer – data type – RGBA (float) 4 x 32 bit

Batch render the files! Next, we’ll read the files into Nuke. The EXR pass information is stored in the file. Top left of the screen there is a drop down list – access all passes!

  • Light 1 node – shuffle node (Read 1 here)
  • We’re going to shuffle out and combine diffuse/specular etc together
  • Add a grade node – change RGB – All (fstops top left)
  • Add a backdrop node to help with organising the nodes visually!
  • ZDefocus – output – focal plane set up [change back to result]
  • Focal point – turn on/off  – ‘0’
  • Write node – output defaults to sRGB – save using the naming convention ***_V01

On Friday afternoon, Phil introduced us to InDesign, the program that has been recommended for the research poster for Jeanette’s class. What follows are a few useful tips:

  • For A1 size, use custom 841 x 594mm with 3 columns for text and images, 10mm gutter
  • layout – margins/columns for editing
  • The red edge is the bleed edge for printing purposes
  • Use the guides – these can be useful for constraining where text boxes should go etc. (Images – file – place for placing images)
  • View – extras – hide frame edges
  • At the bottom of the tools there is the preview mode
  • Borders on images can be a useful way of defining the space between the text and the images – InDesign works in a similar manner to Illustrator in that everything has a fill and a stroke
  • We need to bring in the DJCAD logo as well as a Dundee University logo – EPS file are transparent
  • View – display mode – high quality – in case the images you bring in don’t look right once scaled etc, try using the HQ
  • You should have the following as headings and sub headings:
  • Title – what the project title is
  • Authors – My name, faculty etc
  • Purpose/objective/Introduction – maybe the abstract
  • Methods – what you intend to use
  • Results and findings
  • Discussion
  • Conclusion/summary
  • Acknowledgements/references
  • Logos!

Once you have completed the poster: file – export – adobe pdf print – save! Make it a HQ print, turn on marks/bleed edges and export.

Try to keep the paragraphs a reasonable size, not too big! Keep it consistent in terms of fonts, size etc. Be wary of using background images – maybe simple is best? Keep everything in layers for easier editing. Finally, keep InDesign and PDF versions of the file.