Object Removal from a Crowded Image Background

Occasionally, while taking a photo, unwanted objects enter the frame.

This happens, for example, when taking pictures with a smartphone or when capturing footage with surveillance cameras.

The project's goal is to allow a user to interactively remove objects from an image background in order to get a clean shot.

A future stage of this project would be the development of an Android application that implements it.

The removal process starts with taking a short video whose last frame is the user's desired photo. The next step is foreground/background segmentation, which detects the unwanted objects using several different algorithms. The final step is object removal, in which each object is replaced by its background taken from another frame, followed by an optional image matting step to improve the final result.

During the project, 12 videos of varying difficulty were filmed in order to test the results under different conditions.

The results are good for easy and medium videos (static camera, not too crowded a scene) and require improvement under hard conditions (moving trees or flags in the background, an unstable camera, etc.).
Introduction
When using a smartphone camera to take a photo, unwanted objects often enter the frame, such as passing cars on the road, people walking in the background, etc.

Another example is surveillance cameras, where an image free of unwanted objects is sometimes desired.

The project's goal is to handle these situations by allowing the user to select the object to remove from a list of detected unwanted objects, and to remove it so that the object's background is completed from another frame.

For example (taken from the Scalado application, which claims to have a similar capability):


First, the user selects an object to remove:

Original Image

Then, the object is removed:

final result

The solution (or the basic approach)
Our block diagram:

Our block diagram

The solution consists of 4 major parts:

First is the foreground/background segmentation.

In order to mark the unwanted objects, we used three different methods for the segmentation:
The first method is the median method. We first create the median image, which contains the median value of each pixel over time.
Then, we subtract the current frame from the median image and binarize the difference to create the final segmentation result:

original number 2

BG subtract 2
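
As an illustration of this step, here is a minimal Python/NumPy sketch (the project itself was implemented in MATLAB); the variable names and the threshold value of 30 are assumptions for the example, not the project's actual parameters.

import numpy as np
import cv2

def median_segmentation(frames, threshold=30):
    # frames: list of grayscale frames (H x W, uint8) from the short video.
    stack = np.stack(frames, axis=0)
    # Background estimate: the median value of each pixel over time.
    median_bg = np.median(stack, axis=0).astype(np.uint8)
    masks = []
    for frame in stack:
        # Subtract the current frame from the median image ...
        diff = cv2.absdiff(frame, median_bg)
        # ... and binarize the difference to obtain the segmentation mask.
        _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
        masks.append(mask)
    return median_bg, masks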

The second method is the Mixture of Gaussians (MoG) method [1], in which the intensity of each pixel is modeled by a mixture of Gaussian distributions whose parameters are updated over time. The Gaussians are sorted by their weight divided by their variance; the first B Gaussians, those whose cumulative weight exceeds a given threshold, are taken to represent the background distribution.

The segmentation result:

segmented 2
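
This step can be roughly sketched with OpenCV's MOG2 background subtractor, a variant of the adaptive mixture model of [1]; the history and variance-threshold values below are illustrative assumptions, not the parameters used in the project.

import cv2

def mog_segmentation(video_path):
    # MOG2 maintains a per-pixel mixture of Gaussians whose parameters
    # are updated over time, following the model of [1].
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=200, varThreshold=16, detectShadows=False)
    cap = cv2.VideoCapture(video_path)
    masks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # apply() updates the mixture model and returns the foreground mask.
        masks.append(subtractor.apply(frame))
    cap.release()
    return masks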

The third segmentation method is PBAS (the Pixel-Based Adaptive Segmenter) [2]. Like MoG, this method uses parameters that are updated over time. Each pixel keeps an array B of its past N background values, which is updated randomly. A pixel is classified as background or foreground according to how many values in this array are close enough to the current pixel value. The decision threshold R and the learning parameter T, which controls the learning rate, are also updated over time.
The segmentation result:

Segmented 3
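
The per-pixel decision rule of PBAS can be sketched as follows; this simplified illustration covers only the classification step, without the random sample replacement or the feedback that adapts R and T over time, and the names and minimum-match count are assumptions.

import numpy as np

def pbas_classify(frame, samples, R, min_matches=2):
    # frame:   H x W grayscale frame.
    # samples: N x H x W array holding each pixel's past N background values.
    # R:       H x W per-pixel decision threshold.
    # Count how many stored background values are close enough to the
    # current pixel value.
    close = np.abs(samples.astype(np.float32) - frame.astype(np.float32)) < R
    matches = close.sum(axis=0)
    # A pixel is background if enough of its history matches it;
    # otherwise it is classified as foreground.
    return matches < min_matches  # True where the pixel is foreground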

The second part of our solution is applying morphological operations in order to remove noise and obtain a cleaner segmentation of the unwanted objects.

We used erosion to remove the noise and dilation to complete the objects.
After Erosion:

with erosion

After Dilation:

after dilation
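
A sketch of this cleanup step in Python/OpenCV; the structuring-element sizes and iteration counts are illustrative assumptions and would be tuned per video.

import cv2
import numpy as np

def clean_mask(mask):
    erode_kernel = np.ones((3, 3), np.uint8)
    dilate_kernel = np.ones((7, 7), np.uint8)
    # Erosion removes small isolated noise pixels from the binary mask.
    eroded = cv2.erode(mask, erode_kernel, iterations=1)
    # Dilation grows what remains so each object becomes a solid blob.
    return cv2.dilate(eroded, dilate_kernel, iterations=2)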

The third part of our solution is the object removal part.

The frame selected to replace the background of the chosen object is the frame closest to the median image. The object's block is then replaced with the same area from the selected frame.

original number 2
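
A minimal sketch of this replacement step, assuming the object's block is taken as the bounding box of its mask; the function and variable names are illustrative, not the project's MATLAB code.

import numpy as np

def remove_object(frames, median_bg, last_frame, object_mask):
    # Pick the frame whose overall difference from the median image is smallest.
    stack = np.stack(frames, axis=0).astype(np.float32)
    dists = np.abs(stack - median_bg.astype(np.float32)).reshape(len(frames), -1).sum(axis=1)
    best = frames[int(np.argmin(dists))]
    # Replace the object's block in the desired (last) frame with the
    # same area taken from the selected frame.
    ys, xs = np.nonzero(object_mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    result = last_frame.copy()
    result[y0:y1, x0:x1] = best[y0:y1, x0:x1]
    return result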

The fourth part of our solution is the Image matting part.

In order to create a smooth transition between the replaced area and the original image around the object's edges, we use an image matting algorithm [3] that combines the color and texture features of the image.
The result:

final result
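
The weighted color-and-texture matting of [3] is too involved to reproduce here; as a simple stand-in that only illustrates the idea of softening the seam, the sketch below feathers the mask and alpha-blends the replaced area with the original frame. This is explicitly not the matting algorithm used in the project.

import cv2
import numpy as np

def feather_blend(replaced, original, object_mask, blur_size=21):
    # Feather the binary mask into a soft alpha map around its edges.
    alpha = cv2.GaussianBlur((object_mask > 0).astype(np.float32),
                             (blur_size, blur_size), 0)
    if replaced.ndim == 3:
        alpha = alpha[..., None]
    # Inside the mask the replaced background dominates; near the edges
    # the original frame is blended back in, giving a gradual transition.
    blended = alpha * replaced.astype(np.float32) + (1 - alpha) * original.astype(np.float32)
    return blended.astype(np.uint8)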

We created a GUI in MATLAB which allows the user to interactively select a video, select an object to be removed, remove it with or without the matting process, and view and save the resulting image:

final result with GUI

 

Conclusions

The main assumptions of the project were that the camera is static and that the background of the image is visible during most of the video.

In videos in which those assumptions hold, the results are generally good.

When the assumptions are broken, the segmentation result is sometimes inaccurate in a way that cannot be fixed by morphological operations.

In general, the results are good, and in the future, this project can be developed into a smartphone application.

References

[1] Chris Stauffer & W.E.L. Grimson, “Adaptive background mixture models for real-time tracking”, IEEE, 1999

[2] Martin Hofmann, Philipp Tiefenbacher, Gerhard Rigoll, “Background Segmentation with Feedback: The Pixel-Based Adaptive Segmenter”, 2012

[3] Ehsan Shahrian and Deepu Rajan, “Weighted color and texture sample selection for image matting”, IEEE, 2012