Processing HDR data

HDR sensing

For the basic part of the lab, you will use the provided video data, or you may collect your own.

Task:

Use multiple differently gained measurements of the same physical phenomenon to estimate an underlying truth.

In general, you will have differently gained measurements of the same scene, for example 3 different gain settings: low (k1), medium (k2), and high (k3). Whether you use your own data or the provided data, what you observe as a voltage, v, is some function, f, of the underlying physical quantity, q:
v1(x) = f(k1 q(x));
v2(x) = f(k2 q(x));
v3(x) = f(k3 q(x)),
where ki is the gain setting of the i-th measurement.

Each of these 3 measurements provides different information about the true reality in the universe. Specifically, each provides a different estimate of q:
q1 = (1/k1) f^(-1)(v1);
q2 = (1/k2) f^(-1)(v2);
q3 = (1/k3) f^(-1)(v3).
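
If the response f were known, each of these estimates could be computed directly. Below is a minimal numerical sketch in Python, assuming (purely for illustration) a gamma-law response with gamma = 2.2; the actual response of your camera must be determined from data, as in the grading items below.

```python
import numpy as np

GAMMA = 2.2  # assumed stand-in response, not the real camera response

def f(q):
    """Assumed camera response: compresses the linear quantity q into a voltage in [0, 1]."""
    return np.clip(q, 0.0, 1.0) ** (1.0 / GAMMA)

def f_inverse(v):
    """Inverse of the assumed response."""
    return np.clip(v, 0.0, 1.0) ** GAMMA

def estimate_q(v, k):
    """Per-measurement estimate q_i = (1/k_i) f^(-1)(v_i)."""
    return f_inverse(v) / k

# Three differently gained measurements of the same (made-up) scene:
q_true = np.array([0.05, 0.20, 0.60])
gains = [1.0, 2.0, 4.0]
voltages = [f(k * q_true) for k in gains]
estimates = [estimate_q(v, k) for v, k in zip(voltages, gains)]
# Away from clipping each estimate agrees with q_true; where k * q > 1 the
# voltage saturates at 1 and the corresponding estimate is biased low.
```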

Your job is to combine these multiple measurements into a single estimate of the true quantity or meta quantity, q.

You could simply average them, but the average would be unduly influenced by clipping (saturation) or cutoff, i.e. when the output voltage is too high or too low.

What we would like to do is put greater emphasis on moderate (mid-range) voltages, and less emphasis on extreme ones.

Devise a way to combine these values to put greater emphasis on moderate voltages and lesser emphasis on extreme voltages.
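
One possibility (a sketch of one approach, not the only valid answer) is a certainty-weighted average: weight each per-exposure estimate of q by a certainty function c(v) that peaks at mid-range voltage and falls toward zero at the clipped extremes. The gamma-law response below is the same assumed stand-in as in the earlier sketch.

```python
import numpy as np

GAMMA = 2.2  # same assumed stand-in response as in the previous sketch

def f_inverse(v):
    return np.clip(v, 0.0, 1.0) ** GAMMA

def certainty(v, sigma=0.2):
    """Gaussian-shaped certainty: largest at mid-range v = 0.5, tiny near 0 and 1."""
    return np.exp(-((v - 0.5) ** 2) / (2.0 * sigma ** 2))

def combine(voltages, gains):
    """Certainty-weighted average of the per-measurement estimates of q."""
    num = np.zeros_like(np.asarray(voltages[0], dtype=np.float64))
    den = np.zeros_like(num)
    for v, k in zip(voltages, gains):
        v = np.asarray(v, dtype=np.float64)
        c = certainty(v)
        num += c * f_inverse(v) / k  # certainty-weighted estimate (1/k_i) f^(-1)(v_i)
        den += c
    return num / np.maximum(den, 1e-12)  # guard: pixels clipped in every exposure
```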

Video data

Take a look at the video data in:
http://wearcam.org/ece516/ECE516lab05videodata/
and observe that there are 12 exposures, v1 through v12. These videos were all taken at 60 frames per second, with shutter speeds (exposure durations) as follows:
1/4 second;
1/8 second;
1/15 second;
1/30 second;
1/60 second;
1/125 second;
1/250 second;
1/500 second;
1/1000 second;
1/2000 second;
1/4000 second;
1/8000 second.

Note that in the shorter exposure durations, such as 1/8000 second, we can see the light bulb filament very clearly.

In the longer exposure durations, such as 1/4 second, we can see the darker background, e.g. the "SECRET NUMBER" near the upper right-hand corner of the image.

Since the subject matter is stationary across all frames, average together all the valid video frames taken at 1/4 second to generate an image (picture) of dimensions 1080 by 1920, and call that v1. (If there is an abruptness at the beginning or end of the video, ignore the first and last frames, i.e., don't include them in the sum.)

Average together all the frames that have a duration of 1/8 second, and call that v2.

Continue in this way to generate 12 still pictures, v1 through v12.
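
Below is a sketch of this averaging step using OpenCV; the filenames v1.mp4 through v12.mp4 are hypothetical, so substitute the actual names from the data directory.

```python
import cv2
import numpy as np

def average_video(path):
    """Average the frames of one video into a single full-precision still,
    discarding the first and last frames in case they are abrupt."""
    cap = cv2.VideoCapture(path)
    total, first, last, n = None, None, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = frame.astype(np.float64)
        if first is None:
            first = frame
        last = frame
        total = frame.copy() if total is None else total + frame
        n += 1
    cap.release()
    assert n > 2, "need more than two frames to drop the first and last"
    return (total - first - last) / (n - 2)  # 1080 x 1920 x 3, float64

# Hypothetical filenames -- check the actual names in the data directory:
stills = [average_video("v%d.mp4" % i) for i in range(1, 13)]
```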

What is the relationship between these images v(x,y) and the truth, q(x,y)?
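
As a hint for thinking about this, recall the measurement model from the first part of the lab: each still satisfies

vi(x,y) = f(ki q(x,y)), i = 1, ..., 12,

where the gain ki is proportional to the exposure duration, so successive exposures differ by a roughly constant ratio ki/k(i+1) ≈ 2 (the nominal shutter speeds are standard one-stop steps).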

Clearly the shorter exposures provide information not present in the longer exposures, because when the pixel values are uniformly 255 (the maximum value), information has been lost. Do the longer exposures provide any information that cannot be discerned from the shorter exposures? For example, does v2 have anything in it that can't be determined from v1?

What is the best way to estimate q(x,y) from these images?

Grading

Signal averaging: for each video, average together all the frames in the video (except first and last if questionable) to get a single still picture from that video. You should end up with 12 still pictures each having pixel dimensions 1080 high by 1920 wide. 3/10

Compute the 11 pairwise comparagrams between successive exposures, for which the exposure ratio is fixed at k = 2. Are they the same? What is their relationship? 2/10
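
For reference, a comparagram is the joint histogram of corresponding pixel values in two images of the same scene taken at different exposures. A minimal sketch, assuming the 12 averaged stills from the previous step are held in a list called stills (a hypothetical name) and comparing greyscale versions of successive exposures:

```python
import numpy as np

def comparagram(img_a, img_b, bins=256):
    """Joint histogram of corresponding pixel values in two images of the same scene."""
    cg, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                              bins=bins, range=[[0, 256], [0, 256]])
    return cg

# `stills` is the hypothetical list of 12 averaged pictures from the previous step;
# convert to greyscale and quantize back to 8 bits only for histogramming.
grey = [np.clip(s.mean(axis=2), 0, 255).astype(np.uint8) for s in stills]
comparagrams = [comparagram(grey[i], grey[i + 1]) for i in range(11)]
```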

Determine or find the camera response for the Sony A7R II. 1/10
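
One possible approach (an assumption, not the official method): if the response is modeled as f(q) = q^(1/γ), the comparametric relation between exposures differing by a factor of 2 is a straight line through the origin, since f(2 f^(-1)(v)) = 2^(1/γ) v, so γ can be estimated from the slope of a comparagram's ridge:

```python
import numpy as np

def fit_gamma(cg):
    """Estimate gamma assuming f(q) = q**(1/gamma). Under that model the
    comparametric curve is a line through the origin with slope 2**(1/gamma)
    (or 2**(-1/gamma), depending on which axis holds the longer exposure)."""
    ridge_a, ridge_b = [], []
    for a in range(1, cg.shape[0] - 1):          # skip the clipped end bins
        if cg[a].sum() > 0:
            b = int(np.argmax(cg[a]))            # modal value paired with value a
            if 0 < b < cg.shape[1] - 1:
                ridge_a.append(a)
                ridge_b.append(b)
    slope = np.polyfit(ridge_a, ridge_b, 1)[0]   # least-squares line through the ridge
    return np.log(2.0) / abs(np.log(slope))

# e.g. gamma_est = fit_gamma(comparagrams[5])  # any of the k = 2 comparagrams
```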

Anti-homomorphic signal averaging: For each of the 12 videos, average the frames together anti-homomorphically, i.e., in the linear photoquantity domain. Retain full precision. Compare, for example, v11 with f(2 f^(-1)(v12)). Do the slower exposures provide any new information not available in the faster exposures, up to 1/60 second? (Obviously they do beyond 1/60 second.) Explain. 4/10
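
A sketch of the anti-homomorphic average for one video, again using the assumed gamma-law stand-in where your measured response should go; the filename is hypothetical.

```python
import cv2
import numpy as np

GAMMA = 2.2  # substitute the response you determined for the actual camera

def f(q):
    return np.clip(q, 0.0, None) ** (1.0 / GAMMA)

def f_inverse(v):
    return np.clip(v, 0.0, 1.0) ** GAMMA

def antihomomorphic_average(path):
    """Average frames in the linear photoquantity domain instead of the voltage domain."""
    cap = cv2.VideoCapture(path)
    total, n = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        q = f_inverse(frame.astype(np.float64) / 255.0)  # undo the response first
        total = q if total is None else total + q
        n += 1
    cap.release()
    return total / n  # keep full precision; apply f() only for display

# e.g., does the slower exposure v11 add anything beyond what v12 predicts?
# q12 = antihomomorphic_average("v12.mp4")  # hypothetical filename
# predicted_v11 = f(2.0 * q12)              # compare against the v11 still
```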


Post your results to the Instructable, https://www.instructables.com/id/Quantimetric-Image-Processing/