Deadlines
Project 3-1 is due Sunday 07/19/2020, 11:59PM. Both your code and write-up need to be turned in for your submission to be complete; assignments which are turned in after 11:59pm will use one of your slip days -- there are no late minutes or late hours.
Each student has five slip days total for the semester. Due to the increased pace of the course in Summer, you are only allowed to use up to two slip days on a single assignment.
Your final assignment submission must include both your final code and final write-up. Please consult this article on how to submit assignments for CS184.
Overview
You will implement the core routines of a physically-based renderer using a pathtracing algorithm. This assignment reinforces many of the ideas covered in class recently, including ray-scene intersection, acceleration structures, and physically based lighting and materials. By the time you are done, you'll be able to generate some realistic pictures (given enough patience and CPU time). You will also have the chance to extend the assignment in many technically challenging and intellectually stimulating directions.
This project is much longer than the other projects. Rendering images for your writeup will take several hours at a minimum, so start early! Make sure you read the Experiments, Report and Deliverables article ahead of time.
In addition, you may want to render using the instructional machines. Instructions for how to do this are in the how to build article.
Assignment Structure
This assignment has 5 parts. All parts are equally weighted at 20 points each, for a total of 100 points.
- Part 1: Ray Generation and Scene Intersection
- Part 2: Bounding Volume Hierarchy
- Part 3: Direct Illumination
- Part 4: Global Illumination
- Part 5: Adaptive Sampling
You will also need to read these articles:
It will also be very helpful to read this document on the CGL vector library:
In particular, please read the Experiments, Report, and Deliverables page before beginning the project. There are many deliverables for this project, so please plan accordingly. Several parts of this assignment ask you to compare various methods/implementations, and we don't want you to be caught off guard!
Getting Started
As in Assignment 1, you should accept this assignment on your CS184 website profile, follow the instructions on GitHub Classroom, and clone the generated repo (not the class skeleton). Make sure you enable GitHub Pages for your assignment.
$ git clone <YOUR_PRIVATE_REPO>
Please consult this article on how to build assignments for CS184.
We recommend that you accumulate deliverables into your write-up as you work through each part of this assignment. Write-up instructions are included at the end of each part.
Running the Executable
The executable, `pathtracer`, must be run with a COLLADA file (.dae). COLLADA files use an XML-based schema to describe a scene graph (much like SVG). They are a hierarchical representation of all objects in the scene (meshes, cameras, lights, etc.), as well as their coordinate transformations.
There are many command line options for this project, which you can read about below. Use these between the executable name and the dae file.
For example, to simply run the regular GUI with the CBspheres.dae file and 8 threads, you could type:
./pathtracer -t 8 ../dae/sky/CBspheres_lambertian.dae
Unlike previous assignments, we've provided a windowless run mode, which is triggered by providing a filename with the `-f` flag. This is useful for rendering if you're ssh-ed into an instructional machine, for example, or if you want to render on your own machine without opening the application window.
If you wanted to save directly to the spheres_64_16_6.png file with 64 samples per pixel, 16 samples per light, 6 bounce ray depth, and 480x360 resolution, you might rather use something like this:
./pathtracer -t 8 -s 64 -l 16 -m 6 -r 480 360 -f spheres_64_16_6.png ../dae/sky/CBspheres_lambertian.dae
Rendering with Instructional Machines
When trying to generate high quality results for your final writeup, you can use the windowless mode to farm out long render jobs to the s349 machines! You'll probably want to use `screen` to keep your jobs running after you log out of ssh. After the jobs complete, you can view them using the `display` command, assuming you've ssh-ed in with graphics forwarding enabled, or `scp` them back to your local machine.
A Note about Multi-Threading
We recommend running with 4-8 threads almost always -- the exception is that you should use `-t 1` when debugging with print statements, since `printf` is not guaranteed to be thread safe. (However, `cout`/`cin` are thread safe.) Furthermore, if you are using a virtual machine, make sure that you allocate multiple CPU cores to it; this is important for your `-t` flag to work. In practice, it's good to assign *n* cores to the virtual machine, where *n* is the number of your physical machine's CPU cores or hyperthreaded cores.
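If you're not sure how many hardware threads your machine has, standard C++ can report it. This is just a convenience sketch (`recommended_threads` is a hypothetical helper, not part of the starter code):

```cpp
#include <algorithm>
#include <thread>

// A reasonable default for the -t flag: one render thread per hardware
// thread. hardware_concurrency() may return 0 when the count is unknown,
// so clamp the result to at least 1.
unsigned recommended_threads() {
    return std::max(1u, std::thread::hardware_concurrency());
}
```
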
Speeding up Rendering Tests
High quality renders take a long time to complete! During development and testing, don't spend a long time waiting for full-sized, high quality images. Make sure you use the parameters specified in the writeup for your website, though!
Here are some ways to get quick test images:
- Render with fewer samples!
- Use the cell rendering feature: start a render with `R`, then hit `C` and click-drag on the viewer. The pathtracer will now only render pixels inside the rectangular region you selected.
- Set a smaller window size using the `-r` flag (example: `./pathtracer -t 8 -s 64 -r 120 90 ../dae/sky/CBspheres_lambertian.dae`). Zoom out until the entire scene is visible in the window, then start a render with `R`.
Using the Executable and Graphical User Interface (GUI)
Command line options
| Flag and parameters | Description |
|---|---|
| `-s <INT>` | Number of camera rays per pixel (default=1, should be a power of 2) |
| `-l <INT>` | Number of samples per area light (default=1) |
| `-t <INT>` | Number of render threads (default=1) |
| `-m <INT>` | Maximum ray depth (default=1) |
| `-f <FILENAME>` | Image (.png) file to save output to in windowless mode |
| `-r <INT> <INT>` | Width and height in pixels of output image (if windowless) or of GUI window |
| `-p <x> <y> <dx> <dy>` | Used with the `-f` flag (windowless mode) to render a cell with its upper left corner at [x, y] and spanning [dx, dy] pixels |
| `-c <FILENAME>` | Load camera settings file (mainly to set camera position when windowless) |
| `-a <INT> <FLOAT>` | Samples per batch and tolerance for adaptive sampling |
| `-H` | Enable hemisphere sampling for direct lighting |
| `-h` | Print command line help message |
Moving the Camera (in Edit and BVH mode)
| Mouse | Action |
|---|---|
| Left-click and drag | Rotate |
| Right-click and drag | Translate |
| Scroll | Zoom in and out |
| Spacebar | Reset view |
Keyboard Commands
| Key | Action |
|---|---|
| E | Mesh-edit mode (default) |
| V | BVH visualizer mode |
| ← / → | Descend to left/right child (BVH viz) |
| ↑ | Move up to parent node (BVH viz) |
| R | Start rendering |
| S | Save a screenshot |
| - / + | Decrease/increase area light samples |
| [ / ] | Decrease/increase camera rays per pixel |
| < / > | Decrease/increase maximum ray depth |
| C | Toggle cell render mode |
| H | Toggle uniform hemisphere sampling |
| D | Save camera settings to file |
Cell render mode lets you use your mouse to highlight a region of interest so that you can see quick results in that area when fiddling with per pixel ray count, per light ray count, or ray depth.
Basic Code Pipeline
What happens when you invoke `pathtracer` in the starter code? Here are the logistical details of setup and parallelization:
- The `main()` function inside main.cpp parses the scene file using a `ColladaParser` from collada/collada.h.
- A new `Viewer` and `Application` are created. `Viewer` manages the low-level OpenGL details of opening the window, and it passes most user input into `Application`. `Application` owns and sets up its own `pathtracer` with a camera and scene.
- An infinite loop is started with `viewer.start()`. The GUI waits for various inputs, the most important of which launch calls to `set_up_pathtracer()` and `PathTracer::start_raytracing()`.
- `set_up_pathtracer()` sets up the camera and the scene, notably resulting in a call to `PathTracer::build_accel()` to set up the BVH.
- Inside `start_raytracing()` (implemented in pathtracer.cpp), some machinery runs to divide up the scene into "tiles," which are put into a work queue that is processed by `numWorkerThreads` threads.
- Until the queue is empty, each thread pulls tiles off the queue and runs `raytrace_tile()` to render them. `raytrace_tile()` calls `raytrace_pixel()` for each pixel inside its extent. The results are dumped into the pathtracer's `sampleBuffer`, an instance of an `HDRImageBuffer` (defined in image.h).
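The tile/work-queue scheme above can be sketched as follows. This is a simplified stand-in, not the starter code: the `Tile` struct and `render_tiles` function are hypothetical, and "rendering" a tile here just counts its pixels.

```cpp
#include <algorithm>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Hypothetical tile: top-left corner plus extent, in pixels.
struct Tile { int x0, y0, w, h; };

// Split the image into tiles, then let numWorkerThreads threads drain the
// queue, like start_raytracing() does. Returns the total pixels processed.
long render_tiles(int imageW, int imageH, int tileSize, int numWorkerThreads) {
    std::queue<Tile> workQueue;
    for (int y = 0; y < imageH; y += tileSize)
        for (int x = 0; x < imageW; x += tileSize)
            workQueue.push({x, y, std::min(tileSize, imageW - x),
                                  std::min(tileSize, imageH - y)});

    std::mutex queueLock, bufferLock;
    long pixelsRendered = 0;  // stand-in for writing into sampleBuffer

    auto worker = [&]() {
        for (;;) {
            Tile t;
            {
                std::lock_guard<std::mutex> g(queueLock);
                if (workQueue.empty()) return;  // queue drained: thread exits
                t = workQueue.front();
                workQueue.pop();
            }
            long n = static_cast<long>(t.w) * t.h;  // "raytrace" the tile
            std::lock_guard<std::mutex> g(bufferLock);
            pixelsRendered += n;
        }
    };

    std::vector<std::thread> threads;
    for (int i = 0; i < numWorkerThreads; ++i) threads.emplace_back(worker);
    for (auto& th : threads) th.join();
    return pixelsRendered;
}
```

Every pixel belongs to exactly one tile, so the count should match the image area regardless of thread count or whether the tile size divides the image dimensions evenly.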
Most of the core rendering loop is left for you to implement.
- Inside `raytrace_pixel()`, you will write a loop that calls `camera->generate_ray(...)` to get camera rays and `est_radiance_global_illumination(...)` to get the radiance along those rays.
- Inside `est_radiance_global_illumination`, you will check for a scene intersection using `bvh->intersect(...)`. If there is an intersection, you will accumulate the return value in `Spectrum L_out`:
  - adding the BSDF's emission with `zero_bounce_radiance`, which uses `bsdf->get_emission()`,
  - adding global illumination with `at_least_one_bounce_radiance`, which calls `one_bounce_radiance` (which calls a direct illumination function) and recursively calls itself as necessary.
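The shape of that accumulation can be sketched with toy scalar stand-ins. The constants below are made up for illustration (emission 1.0, direct lighting 0.5, indirect attenuation 0.5); the real functions return `Spectrum`s computed from the scene.

```cpp
// Toy stand-ins for the real radiance terms:
double zero_bounce_radiance() { return 1.0; }  // plays the role of bsdf->get_emission()
double one_bounce_radiance()  { return 0.5; }  // plays the role of direct illumination

// Direct lighting at this bounce, plus attenuated recursion for deeper bounces.
double at_least_one_bounce_radiance(int depth) {
    double L = one_bounce_radiance();
    if (depth > 1)
        L += 0.5 * at_least_one_bounce_radiance(depth - 1);
    return L;
}

// Emission plus (when max_ray_depth >= 1) all the bounced light.
double est_radiance_global_illumination(int max_ray_depth) {
    double L_out = zero_bounce_radiance();
    if (max_ray_depth >= 1)
        L_out += at_least_one_bounce_radiance(max_ray_depth);
    return L_out;
}
```

With these toy values, depth 1 yields 1.0 + 0.5 = 1.5, and each extra bounce adds half of the previous bounce's contribution.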
You will also be implementing the functions to intersect with triangles, spheres, and bounding boxes, the functions to construct and traverse the BVH, and the functions to sample from various BSDFs.
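As a taste of what the intersection routines involve, here is a sketch of the ray-sphere test via the quadratic |o + t·d − c|² = r². The `Vec3` type and the free-function signature are simplified stand-ins for the starter code's actual vector class and `Sphere` interface.

```cpp
#include <cmath>

// Minimal 3D vector, a stand-in for the starter code's vector class.
struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Ray o + t*d against a sphere centered at c with radius r. Returns true and
// sets t_hit to the nearest root inside [t_min, t_max], if any.
bool intersect_sphere(const Vec3& o, const Vec3& d, const Vec3& c, double r,
                      double t_min, double t_max, double& t_hit) {
    Vec3 oc = o - c;
    double a = d.dot(d);
    double b = 2.0 * oc.dot(d);
    double cc = oc.dot(oc) - r * r;
    double disc = b * b - 4.0 * a * cc;
    if (disc < 0.0) return false;          // ray misses the sphere entirely
    double s = std::sqrt(disc);
    double t1 = (-b - s) / (2.0 * a);      // nearer root first
    double t2 = (-b + s) / (2.0 * a);
    if (t1 >= t_min && t1 <= t_max) { t_hit = t1; return true; }
    if (t2 >= t_min && t2 <= t_max) { t_hit = t2; return true; }
    return false;
}
```

For example, a ray from the origin along +z hits a unit sphere centered at (0, 0, 5) at t = 4.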
Approximately in order, you will edit (at least) the following files:
- pathtracer/pathtracer.cpp (part 1, 3, 4, 5)
- pathtracer/camera.cpp (part 1)
- scene/triangle.cpp (part 1)
- scene/sphere.cpp (part 1)
- scene/bvh.cpp (part 2)
- scene/bbox.cpp (part 2)
- pathtracer/bsdf.cpp (part 3)
You will want to skim over the files:
- pathtracer/ray.h
- pathtracer/intersection.h
- pathtracer/sampler.h
- pathtracer/sampler.cpp
- util/random_util.h
- scene/light.h
- scene/light.cpp
since you will be using the classes and functions defined therein.
In addition, ray.h contains a defined variable `PART`, which is currently set to 1. You may use this variable if you like to test separate parts independently. For example, you can write things like `if (PART != 4) { return false; }` to easily "revert" parts of your code. In particular, you will need to set `PART` to 5 to get your sampling rate images in Part 5.
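A minimal sketch of how such a guard behaves (the stub function below is hypothetical; in the starter code `PART` is defined in pathtracer/ray.h):

```cpp
#ifndef PART
#define PART 1  // in the starter code this lives in pathtracer/ray.h
#endif

// Pretend intersection routine: while PART is below 4, it simply reports
// no hit, "reverting" later-part behavior so earlier parts can be tested.
bool intersects_scene_stub() {
    if (PART != 4) { return false; }
    return true;  // the real intersection logic would run here
}
```
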