It's incredible how acute our visual processing systems are. Even with how advanced rendering and simulation have become, it's still fairly easy at this point to pick out a generated object versus a real photographed object.
I'm curious how many fibers, samples per pixel, and minutes per frame of render time you'd need before you couldn't tell the difference.
For fur models, aren't the physical calculations GPU-intensive even at modest samples per pixel per fiber? I remember AI-rendered scenes were becoming more common --
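Just to put a rough number on why per-fiber shading gets expensive so fast, here's a back-of-envelope sketch. Everything in it is an illustrative assumption (the resolution, sample count, fur coverage, and the hypothetical shading_samples_per_frame helper), not numbers from any particular renderer:

```python
# Back-of-envelope estimate of shading work for a fur-covered subject.
# All numbers are illustrative assumptions, not measurements.

def shading_samples_per_frame(width, height, samples_per_pixel,
                              fur_coverage, avg_fiber_hits):
    """Rough count of fiber-shading evaluations in one rendered frame.

    width, height     -- image resolution in pixels
    samples_per_pixel -- camera samples traced per pixel
    fur_coverage      -- fraction of the frame covered by fur (0..1)
    avg_fiber_hits    -- average overlapping fibers shaded per sample
                         (fur is thin and semi-transparent, so often > 1)
    """
    pixels = width * height
    fur_samples = pixels * samples_per_pixel * fur_coverage
    return fur_samples * avg_fiber_hits

# Hypothetical film-ish settings: 4K frame, 256 spp, half the frame is fur,
# each camera sample shades ~4 overlapping fibers on average.
work = shading_samples_per_frame(3840, 2160, 256, 0.5, 4)
print(f"~{work:.2e} fiber-shading evaluations per frame")
# ~4.25e+09 evaluations -- and that's before indirect bounces,
# which multiply the total again.
```

Billions of shading evaluations per frame, before any light bouncing, is roughly why brute-forcing fur to "indistinguishable" is still so costly.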