Apologies if this was mentioned in lecture and I forgot, but it looks like the pixel sensors are staggered/slightly offset from row to row. Is this something that needs to be compensated for in software, does the process of mapping rays already account for it, or does it not need to be accounted for at all?
ncastaneda02
I believe this layout is an optimization that lets the sensor capture several slightly offset (sub-pixel shifted) versions of the scene at once, which can then be combined into a single super-resolution image. From what I can tell, these versions are overlaid in hardware using some fancy signal processing algorithms, but I could be misunderstanding that. If you want to read more about this, here are a couple of articles I read while looking into this:
https://www.techbriefs.com/component/content/article/tb/supplements/pit/features/applications/37185
https://www.researchgate.net/publication/220050550_Superresolution_reconstruction_of_a_video_captured_by_a_vibrated_time_delay_and_integration_camera
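For intuition, here's a minimal shift-and-add sketch of the basic idea behind that kind of reconstruction: captures with known sub-pixel offsets get accumulated onto a finer grid and averaged. To be clear, this is just my own toy NumPy illustration (the function name, offsets, and scale factor are all made up for the example), not the actual in-hardware algorithm from those articles:

```python
import numpy as np

def shift_and_add(frames, offsets, scale):
    """Combine low-res frames with known sub-pixel offsets onto a finer grid.

    frames  -- list of 2-D arrays, all shaped (h, w)
    offsets -- list of (dy, dx) offsets in low-res pixel units, e.g. (0.5, 0)
    scale   -- integer upsampling factor of the output grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # Map each low-res sample center to the nearest fine-grid cell.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
        # np.add.at accumulates correctly even if indices collide.
        np.add.at(acc, np.ix_(ys, xs), frame)
        np.add.at(cnt, np.ix_(ys, xs), 1.0)
    # Average where samples landed; fine-grid cells nothing hit stay zero
    # (a real pipeline would interpolate or deconvolve these instead).
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

# Four frames offset by half a pixel in each direction fill a 2x grid exactly.
frames = [np.random.rand(4, 4) for _ in range(4)]
offsets = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
high_res = shift_and_add(frames, offsets, scale=2)
```

The row staggering in the sensor would play the role of the known offsets here: because each row is shifted by a fixed sub-pixel amount, the samples interleave on the finer grid instead of landing on top of each other.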