Image Segmentation¶
The main objectives of this module are:
- Use & compare different methods of segmentation:
  - Histogram-based
  - Texture-based
  - Region growing
- Detect objects and extract object features.
- Understand corner detection and basic object recognition.
1. Histogram segmentation¶
In histogram segmentation, we assume that the histogram is composed of distinct, separable distributions, and we look for the threshold that best separates them.
The code below uses a default threshold of 127 to segment the image. Modify it to:
- Create a function to compute the optimal threshold for an 8-bit image, and apply it to the cameraman image.
- Compute the Otsu threshold for an 8-bit image by optimizing the within-class variance or the inter-class variance over every possible threshold t. See here how to compute the Otsu threshold. (A possible implementation sketch is given after the code cell below.)
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from skimage.io import imread, imshow
def manual_threshold(im, T):
    return im > T

def optimal_threshold(im, T0):
    pass # TODO

def otsu_threshold(im):
    pass # TODO

def plot_histogram_with_threshold(im, T):
    plt.figure(figsize=(15,5))
    h = plt.hist(im.flatten(), bins=range(256))
    plt.plot([T,T],[0,h[0].max()], 'r-')
    plt.show()
im = imread('camera.jpg')
T = 127
im_segmented = manual_threshold(im, T)
plot_histogram_with_threshold(im, T)
# Show original image & segmented binary image
plt.figure(figsize=(15,8))
plt.subplot(1,2,1)
imshow(im)
plt.subplot(1,2,2)
imshow(im_segmented)
plt.show()
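A possible implementation sketch for the two TODO functions above (one way to do it, not the only one): the optimal threshold is found with the iterative scheme that repeatedly replaces the threshold by the mean of the two class averages, and the Otsu threshold maximizes the inter-class variance over all candidate thresholds.
def optimal_threshold(im, T0):
    # Iterative scheme: replace the threshold by the mean of the two
    # class averages until it stabilizes.
    T = T0
    while True:
        mu1 = im[im <= T].mean()   # mean of the "background" class
        mu2 = im[im > T].mean()    # mean of the "foreground" class
        new_T = (mu1 + mu2) / 2
        if abs(new_T - T) < 0.5:   # converged to within half a grey level
            return new_T
        T = new_T

def otsu_threshold(im):
    # Exhaustive search: keep the threshold t that maximizes the
    # inter-class variance w1*w2*(mu1-mu2)**2, which is equivalent to
    # minimizing the within-class variance.
    hist, _ = np.histogram(im, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_T, best_var = 0, 0.0
    for t in range(1, 256):
        w1, w2 = p[:t].sum(), p[t:].sum()
        if w1 == 0 or w2 == 0:
            continue
        mu1 = (np.arange(t) * p[:t]).sum() / w1
        mu2 = (np.arange(t, 256) * p[t:]).sum() / w2
        var_between = w1 * w2 * (mu1 - mu2) ** 2
        if var_between > best_var:
            best_T, best_var = t, var_between
    return best_T

T = otsu_threshold(im)
plot_histogram_with_threshold(im, T)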
Need more help? You can check the following videos:
2. Texture segmentation¶
Texture segmentation uses regional descriptors to segment the image based on the local texture. A simple algorithm is provided below, which:
- Extracts neighborhoods with the sliding-window method
- Computes the local maximum of each neighborhood and stores it in a "descriptor" image
- Applies Otsu thresholding to the descriptor image to segment it
- Displays the results
from skimage.filters.rank import entropy
from skimage.filters import threshold_otsu
from skimage import img_as_ubyte
def texture_descriptor(N):
    e = N.max() # Replace with your descriptor
    return e

def sliding_window(im, PATCH_SIZE):
    output = np.zeros((im.shape[0], im.shape[1]))
    for i in range(0, im.shape[0]-PATCH_SIZE[0]+1, PATCH_SIZE[0]):
        for j in range(0, im.shape[1]-PATCH_SIZE[1]+1, PATCH_SIZE[1]):
            patch = im[i:i+PATCH_SIZE[0], j:j+PATCH_SIZE[1]]
            output[i:i+PATCH_SIZE[0], j:j+PATCH_SIZE[1]] = texture_descriptor(patch)
    return output
# Open zebra image as an 8-bit integer grayscale
im = img_as_ubyte(imread("zebra.jpg", as_gray=True))
im_descr = sliding_window(im,(120,160))
T = threshold_otsu(im_descr)
mask = im_descr>T
plt.figure(figsize=(15,8))
plt.subplot(1,3,1)
imshow(im)
plt.title('Original')
plt.subplot(1,3,2)
imshow(im_descr)
plt.title('Descriptor image')
plt.subplot(1,3,3)
imshow(im*mask)
plt.title('Result')
plt.show()
Using the above example as a starting point, replace the "maximum" texture descriptor with properties derived from the co-occurrence matrix:
- Compute the co-occurrence matrix on the neighborhood (see greycomatrix). Test different angles & displacements.
- Test different properties (see greycoprops).
Try to segment the zebra image as well as you can using those descriptors. A possible GLCM-based descriptor is sketched below.
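One possible GLCM-based descriptor (a sketch: the 'contrast' property, the 1-pixel displacement and the two angles below are arbitrary starting values to experiment with). It reuses the sliding_window function defined above.
from skimage.feature import greycomatrix, greycoprops

def texture_descriptor(N):
    # Co-occurrence matrix for a 1-pixel displacement at 0° and 90°
    glcm = greycomatrix(N, distances=[1], angles=[0, np.pi/2],
                        levels=256, symmetric=True, normed=True)
    # Average the chosen property over the tested angles/displacements
    return greycoprops(glcm, 'contrast').mean()

im_descr = sliding_window(im, (120, 160))
T = threshold_otsu(im_descr)
mask = im_descr > T
imshow(im * mask)
plt.show()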
Need more help? You can check the following videos:
3. Region growing¶
In region-growing algorithms, we start from "markers", which act as seed points, and grow the segmented regions from those markers. A well-known region-growing algorithm uses the watershed transform. The example below applies the watershed transform to the cameraman image, with hand-picked markers:
from skimage.morphology import disk
import skimage.filters.rank as skr
from skimage.segmentation import mark_boundaries, watershed
from skimage.io import imread
im = imread('camera.jpg')
smoothing_factor = 4
# Compute the gradients of the image:
gradient = skr.gradient(skr.mean(im, disk(smoothing_factor)), disk(1))
# Hand-picked markers for the cameraman image
markers_coordinates = [
    [10, 256],   # sky
    [200, 150],  # cameraman
    [400, 20],   # grass (left)
    [400, 450]   # grass (right)
]
markers = np.zeros_like(im)
for i, (row, col) in enumerate(markers_coordinates):
    markers[row, col] = i + 1
ws = watershed(gradient, markers)
plt.figure(figsize=[8,8])
plt.subplot(2,2,1)
plt.imshow(im,cmap=plt.cm.gray);
plt.subplot(2,2,2)
plt.imshow(gradient,cmap=plt.cm.gray);
plt.subplot(2,2,3)
plt.imshow(ws);
plt.subplot(2,2,4)
plt.imshow(mark_boundaries(im,ws));
Adapt this method to work on the road image.
Can you find a way to automatically determine the markers? One possible strategy is sketched after the cell below.
im = imread('road.jpg')
imshow(im)
# Your code here
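One possible way to pick markers automatically (a sketch, assuming road.jpg can be loaded as a greyscale image; the gradient threshold of 10 is a hypothetical value to tune): flat regions have a low local gradient, so label them and use the labels as watershed seeds.
from skimage import img_as_ubyte
from skimage.measure import label

im_road = img_as_ubyte(imread('road.jpg', as_gray=True))
gradient = skr.gradient(skr.mean(im_road, disk(4)), disk(1))

# Pixels with a very low gradient are assumed to lie inside a region
markers = label(gradient < 10)   # hypothetical threshold, tune it

ws = watershed(gradient, markers)
plt.figure(figsize=(8, 8))
plt.imshow(mark_boundaries(im_road, ws))
plt.show()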
Another use of the watershed transform is to separate overlapping objects, as in the image below:
- Compute the distance transform of the image.
- Use the result to automatically find good markers.
- Use the watershed transform to separate the three objects.
A possible approach is sketched after the cell below.
from scipy.ndimage import distance_transform_edt
from skimage.color import rgb2gray
im = rgb2gray(imread('separ.png'))==0
imshow(im)
# Your code here
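A possible approach (sketch): the peaks of the distance transform sit near the centre of each object, so they make good markers; min_distance below is a hypothetical value to tune so that each object keeps a single peak.
from skimage.feature import peak_local_max
from skimage.measure import label

distance = distance_transform_edt(im)

# One marker per local maximum of the distance map
peaks = peak_local_max(distance, min_distance=20, labels=label(im))
markers = np.zeros(im.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

# Flood the negated distance map, restricted to the foreground mask
ws = watershed(-distance, markers, mask=im)
plt.imshow(ws)
plt.show()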
Need more help? You can check the following videos:
4. Object features¶
The next step after segmentation is often to extract object features in order to recognize, classify, or measure information about the objects.
Starting from the example below:
- Extract the connected components of the shapes image (see label()) and display the centroid of each object (see regionprops()).
- For each label, extract the coordinates of the contour (see find_contours()) and find the corners of each object.
- Suggest a method to classify the objects into different categories.
A possible starting point is sketched after the cell below.
from skimage.measure import label, regionprops, find_contours
im = (imread('shapes.png')[:,:,0]>0).astype(int)  # binarize & cast to integer to make it easier to process later
plt.figure(figsize=(15,15))
plt.imshow(im)
plt.show()
# Your code here
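A possible starting point (sketch): label the connected components, plot each centroid, and estimate the corners by approximating each object's contour with a polygon (the tolerance of 5 is a hypothetical value to tune). Counting the resulting corners is one simple way to tell triangles, quadrilaterals and more complex shapes apart.
from skimage.measure import approximate_polygon

labels = label(im)

plt.figure(figsize=(15, 15))
plt.imshow(im)
for region in regionprops(labels):
    r, c = region.centroid
    plt.plot(c, r, 'r+', markersize=15)   # centroid of each object

    # Contour of this object only, then a coarse polygon as corner estimate
    contour = find_contours((labels == region.label).astype(float), 0.5)[0]
    corners = approximate_polygon(contour, tolerance=5)   # hypothetical tolerance
    plt.plot(corners[:, 1], corners[:, 0], 'g.-')
plt.show()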
Need more help? You can check the following videos:
Coding project - Tumour segmentation¶
The image below is a slice of a brain MRI with a large tumour in it. The goal of this project is to create an algorithm to automatically segment the tumour.
Given that the resolution of the image is 0.115 cm/px along both axes, estimate the area of the tumour (in cm²). One possible approach is sketched after the cell below.
from skimage.io import imread,imshow
%matplotlib inline
im = imread('mri_brain.jpg')
imshow(im)
# Your code here
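One possible approach (a sketch with hypothetical parameter values, not a validated segmentation): the tumour shows up as a large bright region, so threshold the image, clean the mask morphologically, keep the largest connected component and convert its pixel count to cm² using the 0.115 cm/px resolution.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, disk
from skimage.measure import label, regionprops
from skimage.color import rgb2gray
from skimage import img_as_ubyte

gray = img_as_ubyte(rgb2gray(im)) if im.ndim == 3 else im

mask = gray > threshold_otsu(gray)     # keep the brightest structures
mask = binary_opening(mask, disk(5))   # remove thin bright tissue

# Assume the largest remaining blob is the tumour
regions = regionprops(label(mask))
tumour = max(regions, key=lambda r: r.area)

pixel_area_cm2 = 0.115 ** 2            # 0.115 cm/px along both axes
print('Estimated tumour area: %.2f cm²' % (tumour.area * pixel_area_cm2))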