Computational Photography, Spring 2016

Final Project: Eulerian Video Magnification

Date Submitted: 26 April 2016

445167 (Carol (Xu) Han)
446216 (Ying Wang)

Project Description

The goal of video magnification is to reveal temporal variations in videos that are difficult to see with the naked eye and to visualize them in an indicative manner. Wu et al. 2012 proposed an "Eulerian" magnification method, in contrast to previous "Lagrangian" approaches, which we have sought to implement. (paper chosen: Video Magnification)
-- Courtesy of Course Website

In this assignment we use MATLAB to implement the required algorithm: Eulerian video magnification.

Video Magnification

Concept

The intuitive approach to motion magnification is to identify points of interest between frames, track their movements to generate an optical flow field, and then amplify that field to produce the magnification effect. This is the core of the "Lagrangian" method used in Liu et al. 2005. However, deriving and amplifying such fields accurately is computationally intensive, which led to the development of the "Eulerian" approach of Wu et al. 2012. The difference between the two approaches is analogous to the contrasting Lagrangian and Eulerian perspectives in fluid mechanics: while the Lagrangian approach tracks the movement of individual fluid particles, the Eulerian approach captures the same properties by analyzing the flow through fixed voxels of fluid instead. Similarly, Eulerian video magnification does not identify explicit particles and track their movements; instead, it analyzes the periodic intensity change of each pixel over time and amplifies it.
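To make the idea concrete: for a single pixel, the whole operation boils down to band-passing its intensity signal over time, scaling the result, and adding it back. The snippet below is a minimal illustration under our assumptions, not the authors' code; pixelSeries is a 1-D intensity time series, fps is the frame rate, and idealBandpass is the temporal filter sketched in the Implementation section.

    % Isolate the temporal band of interest in this pixel's intensity
    % signal, scale it by alpha, and add it back to the original.
    filtered  = idealBandpass(pixelSeries, fps, freqL, freqH);
    magnified = pixelSeries + alpha * filtered;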

To summarize the paper, here are the main contributions of their approach:

Eulerian Amplification Pipeline

  • Spatially Decompose: Build Laplacian pyramids for the image frames in the video.
  • Temporally Filter: Filter each pyramid coefficient's time series in the frequency domain (for example, keeping frequencies close to that of a heartbeat).
  • Amplify: Multiply the filtered result by some constant.
  • Reconstruct: Add it back to the Laplacian pyramid and recompose the frames (see the sketch below).
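Putting the four steps together, the pipeline can be sketched in MATLAB roughly as follows. This is our own minimal simplification, not the reference implementation: it assumes matlabPyrTools is on the path, that frames is an h x w x nFrames grayscale stack, and that idealBandpass is the helper sketched in the Bandpass Filter section below.

    % 1. Spatially decompose: one Laplacian pyramid per frame, stored as
    %    column vectors (the matlabPyrTools convention).
    nFrames = size(frames, 3);
    [pyr, pind] = buildLpyr(frames(:,:,1), 'auto');
    pyrStack = zeros(numel(pyr), nFrames);
    pyrStack(:, 1) = pyr;
    for t = 2:nFrames
        pyrStack(:, t) = buildLpyr(frames(:,:,t), 'auto');
    end

    % 2. Temporally filter every pyramid coefficient over time.
    filtered = idealBandpass(pyrStack, fps, freqL, freqH);

    % 3. Amplify the filtered signal and add it back.
    pyrStack = pyrStack + alpha * filtered;

    % 4. Reconstruct: collapse each pyramid back into a frame.
    out = zeros(size(frames));
    for t = 1:nFrames
        out(:,:,t) = reconLpyr(pyrStack(:, t), pind);
    end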

Example Input / Output

The video below shows how the algorithm analyzes and reveals small, imperceptible color and motion signals in videos.

Implementation

Parameters

  • alpha: Amplification factor
  • freqL: Low end of the temporal band-pass filter (Hz)
  • freqH: High end of the temporal band-pass filter (Hz)
  • chromAtten: Scales the amplification applied to the chrominance channels, to suppress unwanted color artifacts

Build Laplacian Pyramid

When building Laplacian pyramids, we chose the matlabPyrTools library to simplify the process: it stores each pyramid as a single vector rather than a set of per-level images, which reduces the overall running time. However, this makes it difficult to visualize individual levels of the Laplacian pyramid until we collapse them together at the end.
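For example, inspecting a single level means pulling it back out of the vector first. A small sketch, assuming matlabPyrTools is on the path and im is a grayscale image:

    [pyr, pind] = buildLpyr(im, 'auto');   % pyr: one long vector, pind: per-level sizes
    level3 = pyrBand(pyr, pind, 3);        % reshape level 3 back into an image
    imagesc(level3); colormap gray;        % only now can we visualize that level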

Decompose

[Figure: the decomposition step of the pipeline]

Reconstruct

[Figure: the reconstruction step of the pipeline]

Bandpass Filter

To speed up the process, we convert each time series to the Fourier domain, apply a band-pass mask to it, and convert it back to the time domain. We use the frame rate to work out which frequencies we want to keep (based on the high and low cutoffs) and zero out the frequencies outside that range; in other words, the high and low values define a "band" of frequencies that passes through the filter, and all other frequencies are set to zero. Depending on the chosen band, the filter can be tuned to focus on color changes or on movement.
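A minimal sketch of this ideal band-pass step, written from the description above (our own helper, not the reference code; stack holds one time series per row and fps is the frame rate):

    function filtered = idealBandpass(stack, fps, freqL, freqH)
        % Frequency (in Hz) associated with each FFT bin along time.
        n = size(stack, 2);
        freqs = (0:n-1) * fps / n;
        % Keep the band [freqL, freqH] plus its mirrored (conjugate)
        % bins so the inverse transform stays real.
        mask = (freqs >= freqL & freqs <= freqH) | ...
               (freqs >= fps - freqH & freqs <= fps - freqL);
        F = fft(stack, [], 2);           % to the Fourier domain, per row
        F(:, ~mask) = 0;                 % zero everything outside the band
        filtered = real(ifft(F, [], 2)); % back to the time domain
    end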

Change Color Space

For motion magnification, the paper suggests converting from the RGB color space to another color space (for example, NTSC/YIQ or Lab) to reduce the color changes we do not want in the motion-magnified output.
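A small sketch of this conversion using MATLAB's built-in rgb2ntsc/ntsc2rgb. Here frame is an RGB frame with values in [0, 1] and delta is the band-passed signal for that frame, already in YIQ; scaling the chrominance channels by chromAtten is how the parameter above suppresses color artifacts.

    yiq = rgb2ntsc(frame);                                        % RGB -> YIQ (NTSC)
    yiq(:,:,1) = yiq(:,:,1) + alpha * delta(:,:,1);               % Y: full amplification
    yiq(:,:,2) = yiq(:,:,2) + alpha * chromAtten * delta(:,:,2);  % I: attenuated
    yiq(:,:,3) = yiq(:,:,3) + alpha * chromAtten * delta(:,:,3);  % Q: attenuated
    frameOut = ntsc2rgb(yiq);                                     % back to RGB for output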

Choose Parameters Properly

Different videos call for different parameters, so we focus on choosing them carefully to get better results. In particular, we need to pay attention to the range of frequencies we want to magnify, as shown below.
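For instance, to target a resting heartbeat (roughly 50-60 beats per minute, i.e. 0.83-1.0 Hz), one could use the values from the face example in Wu et al. 2012. The function name evmMagnify is a hypothetical wrapper around our pipeline, not a real API:

    % Hypothetical wrapper call; parameter values follow the paper's
    % face example.
    evmMagnify('face.mp4', 'face_out.avi', ...
        50, ...      % alpha: amplification factor
        50/60, ...   % freqL: 50 bpm ~= 0.83 Hz
        60/60, ...   % freqH: 60 bpm = 1.0 Hz
        1);          % chromAtten: chrominance scaling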

Result

The videos below show human movements; in them you can see that all movements are magnified, including muscle movements and changes in position. In the first video, the baby's motion becomes more clearly visible after magnification; in the second video, of an arm, the shadows grow larger and darker. In the third video, the muscles on the man's forehead twitch more obviously, and in the fourth video the movement of the stick is visually stronger.
In fact, rather than magnifying a "movement", this algorithm magnifies the "difference" between one frame and the frames after it. The failed results below demonstrate this as well.

[Videos: original vs. magnified]

The videos below show some pets. We can see that the algorithm works well on the animals' breathing. At the same time, since animals make many small movements, a sudden move by a pet may cause strong blur or ghosting shadows in the output video.

[Videos: original vs. magnified]
Analysis

  • Large artifacts when amplifying motion: if the input video contains large motions, the magnified video suffers from artifacts known as ghosting. Foreground segmentation methods might eliminate these ghosting effects.
  • Similarly, swaying of the camera may also cause ghosting.
  • The algorithm is sensitive to noise.

Considerations

  • Minimize camera and subject motion.
  • Higher resolutions may give better signal detection.
  • Good lighting gives better results (a higher ISO means more noise).

Failed results & Analysis

Most of our failed results were caused by camera motion. Sometimes this produces a weird texture in the video, for example the man's skin texture in the following video. Camera motion strengthens the "edges" of each frame, so even though the magnification itself still works, the whole video is a little messy and becomes less enjoyable.


Another cause of failed results is inappropriate parameters. Here are some failed but funny videos:

If you happen to choose the wrong band-pass parameters, you may get results like the ones above.

Conclusions

Bells & Whistles (Extra Points)

Parameter Tests

We tried different sets of parameters to test the effect of each one. Here are some results.

In this test, we use freqL = 40/50 and freqH = 50/50 (Hz), a band close to the frequency of blood flow in the face. The result clearly shows what we expected.

Then we use another set of freqL and freqH values (2, 3), which is far from the frequency of facial blood flow. In the resulting video, only some irrelevant pixels jump around randomly; it no longer reflects the blood flow.

Thoughts

  • Add region-of-interest selection for the video.
  • Try a phase-based approach for movement magnification.
  • In the future, we would like to find a way to detect heartbeats without running the full magnification process on each input video.

IT'S THE END :))

We've learned so much and made some cool (or medium-cool) stuff! Thanks to Professor Pless and TA Ian Schillebeeckx for teaching such a fun class!

Reference

  • [1] Wu, H.-Y., Rubinstein, M., Shih, E., Guttag, J., Durand, F., and Freeman, W. 2012. Eulerian video magnification for revealing subtle changes in the world. ACM Transactions on Graphics (TOG) 31, 4 (2012), 65.
  • [2] Liu, C., Torralba, A., Freeman, W.T., Durand, F., and Adelson, E.H. 2005. Motion magnification. ACM Transactions on Graphics (TOG) 24, 3 (2005), 519-526.