Assignment 3: PathTracer

Due Date

Mon March 28th, 11:59pm

Overview

You will implement the core routines of a physically-based renderer using a pathtracing algorithm. This assignment reinforces many of the key ideas covered in class recently, including ray-scene intersection, acceleration structures, and physically based lighting and materials. By the time you are done, you'll be able to generate some stunning pictures (given enough patience and CPU time). You will also have the chance to extend the assignment in a number of technically challenging and intellectually stimulating directions.

Project Parts

This time, we've split off each part into its own article:

All parts are equally weighted. You'll also need to read these articles:

Using the program

Download the zipped assignment or clone it from GitHub:

git clone https://github.com/CS184-sp16/asst3_pathtracer.git

As before, use cmake and make inside a build/ directory to create the executable (help article).

Command line options

Use these flags between the executable name and the dae file when you invoke the program. For example,

./pathtracer -t 8 -s 64 -l 16 -m 6 -r 480 360 -f spheres.png ../dae/sky/CBspheres.dae

For this assignment, we've provided a windowless run mode, which is triggered by providing a filename with the -f flag. You'll want to use this mode when you are ssh-ed into the instructional machines, since no window can be opened there.

This means that when generating high quality results for your final writeup, you can use the windowless mode to farm out multi-hour render jobs to the s349 machines! You'll probably want to use screen to keep your jobs running after you log out of ssh. After the jobs complete, you can view the images using the display command, assuming you've ssh-ed in with graphics forwarding enabled (by using the -X flag).

Also, please take note of the -t flag! We recommend almost always running with 4-8 threads; the exception is that you should use -t 1 when debugging with print statements, since printf and cout are not thread safe.

Flag and parameters    Description
-s <INT> Number of camera rays per pixel (default=1, should be a power of 2)
-l <INT> Number of samples per area light (default=1)
-t <INT> Number of render threads (default=1)
-m <INT> Maximum ray depth (default=1)
-e <PATH> Path to environment map
-f <FILENAME> Image (.png) file to save output to in windowless mode
-r <INT> <INT> Width and height of output image (if windowless)
-h Print command line help message

Moving the camera (in edit and BVH mode)

Command Action
Rotate Left-click and drag
Translate Right-click and drag
Zoom in and out Scroll
Reset view Spacebar

Keyboard commands

Command Keys
Mesh-edit mode (default) E
BVH visualizer mode V
Descend to left/right child (BVH viz) LEFT/RIGHT
Move up to parent node (BVH viz) UP
Start rendering R
Save a screenshot S
Decrease/increase area light samples - +
Decrease/increase camera rays per pixel [ ]
Decrease/increase maximum ray depth < >

Basic code pipeline

What happens when you invoke pathtracer in the starter code? Here are the logistical details of setup and parallelization:

  1. The main() function inside main.cpp parses the scene file using a ColladaParser from collada/collada.h.
  2. A new Viewer and Application are created. Viewer manages the low-level OpenGL details of opening the window, and it passes most user input into Application. Application owns and sets up its own pathtracer with a camera and scene.
  3. An infinite loop is started with viewer.start(). The GUI waits for various inputs, the most important of which launch calls to set_up_pathtracer() and PathTracer::start_raytracing().
  4. set_up_pathtracer() sets up the camera and the scene, notably resulting in a call to PathTracer::build_accel() to set up the BVH.
  5. Inside start_raytracing() (implemented in pathtracer.cpp), some machinery runs to divide up the scene into "tiles," which are put into a work queue that is processed by numWorkerThreads threads.
  6. Until the queue is empty, each thread pulls tiles off the queue and runs raytrace_tile() to render them. raytrace_tile() calls raytrace_pixel() for each pixel inside its extent. The results are dumped into the pathtracer's sampleBuffer, an instance of an HDRImageBuffer (defined in image.h).

Most of the core rendering loop is left for you to implement.

  1. Inside raytrace_pixel(), you will write a loop that calls camera->generate_ray(...) to get camera rays and trace_ray(...) to get the radiance along those rays.
  2. Inside trace_ray, you will check for a scene intersection using bvh->intersect(...). If there is an intersection, you will accumulate the return value in Spectrum L_out,
    • adding the BSDF's emission with bsdf->get_emission() if appropriate,
    • adding direct lighting with estimate_direct_lighting(...), and
    • adding indirect lighting with estimate_indirect_lighting(...), which will recurse to call trace_ray once more.

You will also be implementing the functions to intersect with triangles, spheres, and bounding boxes, the functions to construct and traverse the BVH, and the functions to sample from various BSDFs.

Approximately in order, you will edit (at least) the files

  • pathtracer.cpp (part 1)
  • camera.cpp (part 1)
  • static_scene/triangle.cpp (part 1)
  • static_scene/sphere.cpp (part 1)
  • bvh.cpp (part 2)
  • bbox.cpp (part 2)
  • pathtracer.cpp (parts 3-4)
  • bsdf.cpp (part 5)

You will want to skim over the files

  • ray.h
  • intersection.h
  • sampler.h/cpp
  • random_util.h
  • static_scene/light.h/cpp

since you will be using the classes and functions defined therein.

Rendering Competition

For this assignment's rendering competition, we will require you to submit both a competition.png image and a short 5-10 sentence description of what you did to create your entry. You can choose to either emphasize the technical or artistic/modeling merits of your image. If you go the artistic route, you should generate the model yourself using Blender or procedural code that you write. If you go the technical route, you may use models downloaded from the internet, but you should emphasize the extra algorithms you implemented in your short writeup.

Submission Instructions

Log in to an instructional machine, navigate to the root directory asst3_pathtracer of your project, and run

zip -r hw3.zip src
zip -r hw3.zip website

Make sure that all images referenced by your website are inside the website/images/ directory! Please also include any new or modified .dae files you create in a dae/ directory. Also, don't forget your competition.png image! You can inspect what's inside your zip file by running

unzip -l hw3.zip

When your zip file is ready, run

submit hw3

Output like the following indicates you have succeeded:

Looking for files to turn in....
Submitting hw3.zip.
The files you have submitted are:
    ./hw3.zip 
Is this correct? [yes/no] yes
Copying submission of assignment hw3....
Submission complete.

There are more detailed instructions (including how you can use scp to copy your files onto the s349 machines) here.