How does one programmatically determine whether an image is "underexposed" or "overexposed"? My natural intuition is some sort of ML model, but it feels like there should be a very effective non-ML approach for HDR.
StephenYangjz
To answer @smsunarto's question, I think it's not that complicated. Most cameras do it by computing the exposure (luminance) histogram and checking how much of the image is crowded into the lower or upper end of the histogram.
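Here's a rough sketch of that histogram idea in Python; the bin cutoffs and the 40% fraction are arbitrary values I picked just for illustration, not anything a real camera uses:

```python
import numpy as np
from PIL import Image

def exposure_check(path, low=16, high=240, frac=0.4):
    """Flag an image as under/overexposed if a large fraction of its
    luminance values sit in the darkest or brightest histogram bins."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.uint8)
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    dark_frac = hist[:low].sum() / total     # pixels near black
    bright_frac = hist[high:].sum() / total  # pixels near white
    if dark_frac > frac:
        return "underexposed"
    if bright_frac > frac:
        return "overexposed"
    return "ok"
```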
weiweimonster1130
I think HDR is a term that everyone has seen before but never really understood. It shows up on your phone, in games, and on monitors, and these slides really helped me understand what HDR actually means.
chethus
How easily are humans able to tell when HDR is used? These images feel a bit more intricate and detailed to me than the way humans usually view the world.
aramk-hub
Adding on to chethus' question: I've played video games and watched videos about their graphics with HDR on and off. While I know what HDR does, I agree that I probably couldn't tell whether it's on or off unless I'm shown both versions side by side (or am very used to it). Is there a telltale sign to look for?
adityaramkumar
What's the difference between HDR and just increasing the exposure? Could you create a super HDR image by increasing the exposure a lot, since it seems like that's what's being done here?
Also, it seems like this is being done in software - would it make more sense to do this in hardware?
seenumadhavan
@adityaramkumar You probably would not want to do that: if you increase exposure across the whole image, the houses on the left will become sufficiently bright, but the houses on the right will become overexposed (too bright). Instead, multiple images are taken at different exposures and merged so each region comes from a shot that exposed it well, and that merging has to be done in software.
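As a rough illustration of that multi-exposure merge, here is a minimal sketch using OpenCV's Mertens exposure fusion; the file names are placeholders, and a full HDR pipeline might instead recover a radiance map (e.g. Debevec's method) and then tone-map it:

```python
import cv2

# Same scene shot at three exposures: dark, normal, bright (placeholder paths).
paths = ["under.jpg", "normal.jpg", "over.jpg"]
images = [cv2.imread(p) for p in paths]

# Mertens fusion weights each pixel by contrast, saturation, and
# well-exposedness, so each region is taken mostly from the exposure
# that captured it best.
fused = cv2.createMergeMertens().process(images)

# The result is float in roughly [0, 1]; scale back to 8-bit to save it.
cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```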