The "Black Art" of Deinterlacing
Interlacing introduces visible problems in vertical detail that are
hard, or even impossible, to fix. If no motion occurs in the original
frames, then weaving adjacent fields back together produces a
perfect-looking frame. However, if there is horizontal motion in the
scene, combining two fields produces a visually objectionable "combing"
effect in the moving areas.
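As a minimal sketch of weaving (illustrative Python/NumPy only; the
function name and array shapes are assumptions, not any particular
product's code):

    import numpy as np

    def weave(top_field: np.ndarray, bottom_field: np.ndarray) -> np.ndarray:
        """Interleave two fields, each of shape (H/2, W), into one (H, W) frame.

        Perfect when the scene is static between the two field capture
        times; any motion shows up as a comb pattern on moving edges,
        because alternate lines come from different instants.
        """
        h, w = top_field.shape
        frame = np.empty((2 * h, w), dtype=top_field.dtype)
        frame[0::2] = top_field     # top field supplies the even lines
        frame[1::2] = bottom_field  # bottom field supplies the odd lines
        return frame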
The process of interlacing frames into fields discards lines, which is
a form of vertical sub-sampling, so an effect called spectral aliasing
occurs. As a consequence, it is sometimes theoretically impossible to
reconstruct the original frames. For example, if a vertical camera pan
of exactly an odd number of lines per field occurs, then each field has
essentially the same information as its neighboring fields (other than
new image content appearing, and old content disappearing, at the top
and bottom lines). Therefore, weaving fields to obtain double the
vertical resolution does not work.
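A tiny demonstration of that situation, again as illustrative
Python/NumPy (the one-line-per-field pan and the array sizes are
assumptions chosen to make the point):

    import numpy as np

    rng = np.random.default_rng(0)
    scene = rng.random((482, 640))   # a "world" image the camera pans over

    # Top field at time t: the even lines of a 480-line frame showing
    # scene rows 0..479.
    top_field = scene[0:480:2]       # scene rows 0, 2, ..., 478

    # One field period later the camera has panned down by exactly one
    # line, so the bottom field (the odd lines of the new frame) samples
    # scene rows 2, 4, ..., 480: the same parity of scene rows that the
    # top field already captured, just shifted by one field line.
    bottom_field = scene[2:482:2]

    # Away from the frame edges the two fields carry identical
    # information, so weaving them cannot restore full vertical
    # resolution here.
    print(np.array_equal(top_field[1:], bottom_field[:-1]))   # True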
An alternative approach tries to convert each field into a frame
independently using a so-called "bob" interpolator. However, imperfect
bob reconstruction can lead to discrepancies in the value of a pixel
between adjacent reconstructed frames, giving a flickering effect at
half the field rate. YOU CANNOT JUDGE THE QUALITY OF A DEINTERLACER
FROM VIEWING A SINGLE OUTPUT FRAME!
Advanced techniques such as inpainting can reduce the effect in some
situations, but they are computationally intensive and do not eliminate
the problem. In the example of odd-line vertical motion above, each
field carries essentially the same information, so bob flicker is
eliminated in that case, but the scene may still appear blurred.
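A linear-interpolation bob, as an illustrative Python/NumPy sketch (the
interpolation filter and the border handling are assumptions; real
deinterlacers use better filters):

    import numpy as np

    def bob(field: np.ndarray, top: bool) -> np.ndarray:
        """Upsample one field of shape (H/2, W) to a full (H, W) frame by
        linear interpolation of the missing lines.

        The interpolated lines differ slightly from the real lines the
        other field would have supplied, so consecutive bobbed frames
        disagree on fine vertical detail; that disagreement is the
        source of the flicker at half the field rate.
        """
        field = np.asarray(field, dtype=np.float64)
        h, w = field.shape
        frame = np.empty((2 * h, w), dtype=np.float64)
        if top:
            frame[0::2] = field                             # known (even) lines
            frame[1:-1:2] = 0.5 * (field[:-1] + field[1:])  # average neighbouring lines
            frame[-1] = field[-1]                           # replicate at the border
        else:
            frame[1::2] = field                             # known (odd) lines
            frame[2::2] = 0.5 * (field[:-1] + field[1:])    # average neighbouring lines
            frame[0] = field[0]                             # replicate at the border
        return frame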
So-called motion-adaptive deinterlacers detect whether there is motion,
and bob to create a frame where there is, weaving to retain detail
where there is none. However, detecting motion reliably is an ill-posed
problem, so incorrect switching between the two methods can occur,
causing visual problems. Furthermore, if motion is fairly slow, bob
flickering can still be visually objectionable.
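A per-pixel sketch of the motion-adaptive idea in illustrative
Python/NumPy (the motion measure, the threshold, and the field ordering
are assumptions; production detectors are far more elaborate):

    import numpy as np

    def deinterlace_motion_adaptive(prev_top, top, bottom, threshold=10.0):
        """Weave where the picture appears static, bob where it appears
        to move, deciding per pixel from a crude motion measure: the
        difference between the current top field and the top field one
        frame earlier."""
        top_f = top.astype(np.float64)
        h, w = top_f.shape
        frame = np.empty((2 * h, w), dtype=np.float64)
        frame[0::2] = top_f                              # known (even) lines

        # Candidate values for the missing (odd) lines.
        weave_lines = bottom.astype(np.float64)          # lines from the other field
        bob_lines = np.empty_like(weave_lines)
        bob_lines[:-1] = 0.5 * (top_f[:-1] + top_f[1:])  # interpolate within this field
        bob_lines[-1] = top_f[-1]                        # replicate the last line

        # Where the motion measure exceeds the threshold, prefer bob;
        # otherwise weave to keep full vertical detail.
        moving = np.abs(top_f - prev_top.astype(np.float64)) > threshold
        frame[1::2] = np.where(moving, bob_lines, weave_lines)
        return frame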
A more sophisticated approach called motion-compensated deinterlacing
uses motion estimation to obtain a dense motion field (similar to that
used in frame rate conversion) in order to weave much more of the time,
which gives much better vertical detail. The trade-off is that
considerable computation is needed to do a really good job.
Furthermore, motion estimation will sometimes fail locally (possibly
due to occlusion or aliasing), leading to combing artifacts, so combing
has to be detected and removed. Unfortunately, combing corresponds to
the highest possible vertical frequency, so some slight loss of genuine
vertical detail at the very highest frequency is to be expected;
fortunately, such content is rare in normal video.
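One simple way to flag combing, shown as an illustrative Python/NumPy
sketch (the metric and the threshold are assumptions): a combed pixel
differs from both the line above and the line below in the same
direction, i.e. the signal alternates at the highest vertical
frequency, which is also why genuine detail at that frequency can be
mistaken for combing.

    import numpy as np

    def comb_mask(frame: np.ndarray, threshold: float = 100.0) -> np.ndarray:
        """Return a boolean mask of pixels that look combed: the lines
        above and below both differ from the current line in the same
        direction, i.e. the signal alternates line by line."""
        f = frame.astype(np.float64)
        above, here, below = f[:-2], f[1:-1], f[2:]
        # On a comb pattern (above - here) and (below - here) share a
        # sign, so their product is large and positive.
        score = (above - here) * (below - here)
        mask = np.zeros(frame.shape, dtype=bool)
        mask[1:-1] = score > threshold
        return mask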
Another problem is bright, thin diagonal lines. If their gradient is
shallow and the camera is moving, it is very difficult to reconstruct
the missing line information, causing a "roping" effect. The two main
approaches are: (i) using motion estimation and information from nearby
fields, or (ii) attempting to detect diagonals within each field and
filling in the missing information using diagonal inpainting. Either
way, obtaining better results is computationally more intensive, as the
algorithms need to search further out to match shallower angles; the
sketch below illustrates the idea.
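A toy version of approach (ii), as an illustrative Python/NumPy sketch
of edge-directed (ELA-style) interpolation of one missing line (the
cost measure, the wrap-around border handling, and max_shift are
assumptions). Widening max_shift matches shallower diagonals, but the
search cost grows with it, which is the trade-off described above.

    import numpy as np

    def edge_directed_line(above: np.ndarray, below: np.ndarray,
                           max_shift: int = 3) -> np.ndarray:
        """Interpolate a missing line from the lines above and below it,
        choosing per pixel the diagonal direction along which the two
        lines match best (borders wrap for brevity)."""
        above = np.asarray(above, dtype=np.float64)
        below = np.asarray(below, dtype=np.float64)
        best = 0.5 * (above + below)              # plain vertical average
        best_cost = np.abs(above - below)
        for s in range(1, max_shift + 1):
            for shift in (s, -s):
                a = np.roll(above, -shift)        # line above, shifted one way
                b = np.roll(below, shift)         # line below, shifted the other
                cost = np.abs(a - b)              # how well this diagonal matches
                better = cost < best_cost
                best = np.where(better, 0.5 * (a + b), best)
                best_cost = np.where(better, cost, best_cost)
        return best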