1 21-BioImage



21-BioImage/XKCD_artifacts.png

1.1 Biological imaging

Biological imaging may refer to any imaging technique used in biology.

https://en.wikipedia.org/wiki/Biological_imaging
https://en.wikipedia.org/wiki/Bioimage_informatics
https://en.wikipedia.org/wiki/Automated_tissue_image_analysis
https://en.wikipedia.org/wiki/Digital_pathology
https://en.wikipedia.org/wiki/Image_segmentation
https://en.wikipedia.org/wiki/Medical_imaging
https://en.wikipedia.org/wiki/Computer-aided_diagnosis

1.1.1 Medical imaging

https://en.wikipedia.org/wiki/Medical_imaging

1.1.2 Bioimage informatics

https://en.wikipedia.org/wiki/Bioimage_informatics

21-BioImage/pasted_image.png
Fluorescent image of a cell in telophase.
Multiple dyes were imaged and are shown in different colours.

21-BioImage/pasted_image006.png
Microscopic view of a histologic specimen of human lung tissue,
stained with hematoxylin and eosin.

Overview of a typical process:
21-BioImage/process.png

1.2 Types of problem

1.2.1 Subcellular Location Analysis

1.2.2 Automated tissue image analysis

1.2.3 High-Content Screening

1.2.4 Segmentation (or labeling)

21-BioImage/pasted_image008.png
Volume segmentation of a 3D-rendered CT scan of the thorax:
The anterior thoracic wall, the airways and the pulmonary vessels anterior to the root of the lung have been digitally removed in order to visualize thoracic contents:
* blue: pulmonary arteries
* red: pulmonary veins (and also the abdominal wall)
* yellow: the mediastinum
* violet: the diaphragm

1.2.5 Tracking

21-BioImage/tracking.jpg

1.2.6 Registration

Segment, then register:
21-BioImage/pasted_image013.png

21-BioImage/pasted_image014.png
21-BioImage/pasted_image015.png

1.2.7 Classification/diagnosis

Imaging methods:
21-BioImage/pasted_image011.png

Breast cancer image (e.g., a classification target)
21-BioImage/pasted_image012.png

1.2.8 Reconstruction

Taking sequential microscope image slices and reconstructing a 3D view of the tissue.
21-BioImage/pasted_image010.png
21-BioImage/pasted_image009.png

+++++++++++ Cahoot-20-2
https://umsystem.instructure.com/courses/97521/quizzes/298142

1.3 Common bioimage informatics methods and their applications

Many of these methods are general,
but are particularly common in bioimage analysis.

1.3.1 Overview

21-BioImage/bio0.png
A conceptual pipeline.
The specimen is imaged using any of today’s microscopes,
modeled by the input image f(v) passing through the blocks of the
PSF (properties of the microscope, described by convolution with h(v)) and
A/D conversion (analog-to-digital conversion: the effects of sampling and digitization, together with uncertainty introduced by various sources of noise),
producing a digital image g_n.
That digital image is then restored, either via
de-noising followed by de-convolution, or via
joint de-noising/de-convolution,
producing a digital image f_n.
Several further options are possible: the image may go through
registration/mosaicing, producing r_n;
segmentation/tracing/tracking, producing s_n; and
a data analysis/modeling/simulation block, with output y_n.
At the input/output of each block,
one can join the pathway to either skip a block
or send feedback to previous block(s) in the system.

21-BioImage/b01.png
Stages in image analysis illustrated using algal image:
* (a) detail from image;
* (b) same detail after application of 5 × 5 moving median filter;
* (c) histogram, on a square root scale, of pixel values after filtering, with an arrow to indicate the threshold;
* (d) result of thresholding image at pixel value 120, to produce a binary image;
* (e) result of applying morphological opening to (d);
* (f) separated objects in (e) counted.

1.3.2 Bioimage formats

21-BioImage/bio1.png
* Images viewed as matrices.
* The overview is not meant to be exhaustive, but reflects some of the more frequently used modes of image acquisition in biological and medical imaging, where the number of dimensions is typically one to five, with each dimension corresponding to an independent physical parameter:
* three to space,
* one (usually denoted t) to time, and
* one to wavelength, or color, or more generally to any spectral parameter (we denote this dimension s here).
* In other words, images are discrete functions, I (x, y, z, t, s),
* with each set of coordinates yielding the value of a unique sample
* (indicated by the small squares, the number of which is obviously arbitrary here).
* Note that the dimensionality of an image (indicated in the top row) is given by the number of coordinates that are varied during acquisition.
* To avoid confusion in characterizing an image, it is advisable to add adjectives indicating which dimensions were scanned, rather than mentioning just dimensionality.
* For example, a 4D image may either be a spatially 2D multi-spectral time-lapse image, or a spatially 3D time-lapse image.
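As a rough sketch, such a multi-dimensional image can be held as a NumPy array with one axis per acquisition dimension; the shape and axis order below are arbitrary choices for illustration.

import numpy as np

# Hypothetical spatially 3D, time-lapse, 2-channel image:
# axes ordered (t, z, y, x, s); the sizes are arbitrary.
image = np.zeros((10, 30, 512, 512, 2), dtype=np.uint16)

# One sample I(x, y, z, t, s) is a single array element
# (note the reversed index order used here: [t, z, y, x, s]).
sample = image[3, 15, 100, 200, 1]

# A spatially 2D multi-spectral time-lapse image (4D) would drop the z axis:
image_4d = image[:, 15, :, :, :]   # shape (10, 512, 512, 2)
print(image_4d.shape)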

1.3.3 Major terms

21-BioImage/bio2.png
Image processing
* takes an image as input and produces a modified version of it
* (in the case shown, the object contours are enhanced using an operation known as edge detection, described in more detail elsewhere in this booklet).

Image analysis
* concerns the extraction of object features from an image.

Computer graphics
* is the inverse of image analysis:
* it produces an image from given primitives, which could be numbers (the case shown), or parameterized shapes, or mathematical functions.

Computer vision
* aims at producing a high-level interpretation of what is contained in an image; this is also known as image understanding.

Visualization
* transforms higher-dimensional image data into a more primitive representation to facilitate exploring the data.

1.3.4 Intensity transformation

21-BioImage/bio3.png
* Examples of intensity transformations based on a global mapping function:
* contrast stretching, intensity inversion, intensity thresholding, and histogram equalization.
* The top row shows the images used as input for each of the transformations.
* The second row shows for each image the mapping function used (denoted M), with the histogram of the input image shown in the background.
* The bottom row shows for each input image the corresponding output image resulting from applying the mapping function:
* O(x, y) = M(I(x, y)).
* It is clear from the mapping functions that contrast stretching and histogram equalization both distribute the most frequently occurring intensities over a wider range of values, thereby increasing image contrast.
* The former transformation is suitable in the case of uni-modal histograms, whereas the latter is particularly suited for images having multi-modal histograms.

Check out in class:
https://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#sphx-glr-auto-examples-color-exposure-plot-equalize-py
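A minimal sketch of these global intensity transformations using scikit-image's exposure module; the example image, percentile range, and threshold value are arbitrary choices.

import numpy as np
from skimage import data, exposure

image = data.moon()   # any grayscale example image

# Contrast stretching: map the 2nd-98th percentile range to the full range.
p2, p98 = np.percentile(image, (2, 98))
stretched = exposure.rescale_intensity(image, in_range=(p2, p98))

# Intensity inversion (for 8-bit data).
inverted = 255 - image

# Intensity thresholding: a global mapping to a binary image.
binary = image > 128

# Histogram equalization: spread frequent intensities over a wider range.
equalized = exposure.equalize_hist(image)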

1.3.5 Local image filtering

  1. Linear filtering operations
  2. Nonlinear filtering operations

1.3.5.1 Linear

https://en.wikipedia.org/wiki/Kernel_(image_processing)
21-BioImage/bio4.png
* Principles and examples of convolution filtering.
* The value of an output pixel is computed as a linear combination (weighing and summation) of the value of the corresponding input pixel and of its neighbors.
* The weight factor assigned to each input pixel is given by the convolution kernel (denoted K).
* In principle, kernels can be of any size.
* Examples of commonly used kernels of size 3 × 3 pixels include
* the averaging filter,
* the sharpening filter, and
* the Sobel x- or y-derivative (approximation) filters.
* The Gaussian filter is often used as a smoothing filter.
* It has a free parameter (standard deviation σ) which determines the size of the kernel (usually cut off at m = 3σ) and therefore the degree of smoothing.
* The derivatives of this kernel are often used to compute image derivatives at different scales, as for example in Canny edge detection.
* The scale parameter, σ, should be chosen such that the resulting kernel matches the structures to be filtered.

Local filters
http://scipy-lectures.org/advanced/image_processing/index.html#image-filtering
http://scipy-lectures.org/packages/scikit-image/#local-filters
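A minimal sketch of convolution filtering with scipy.ndimage, using the 3 x 3 kernels mentioned above; the example image is arbitrary.

import numpy as np
from scipy import ndimage
from skimage import data

image = data.camera().astype(float)   # any grayscale example image

# 3 x 3 averaging (smoothing) kernel.
averaging = np.ones((3, 3)) / 9.0
smoothed = ndimage.convolve(image, averaging)

# 3 x 3 sharpening kernel.
sharpening = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=float)
sharpened = ndimage.convolve(image, sharpening)

# Sobel x- and y-derivative approximations.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
dx = ndimage.convolve(image, sobel_x)
dy = ndimage.convolve(image, sobel_x.T)

# Gaussian smoothing; sigma controls the kernel size and degree of smoothing.
blurred = ndimage.gaussian_filter(image, sigma=2)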

1.3.5.2 Non-linear

Code to show:
https://gitlab.com/bio-data/computer-vision/bio_images/morphology.py

The basic morphological operators are erosion, dilation, opening and closing.
https://en.wikipedia.org/wiki/Mathematical_morphology

https://en.wikipedia.org/wiki/Erosion_(morphology)
For a binary image: center the structuring element (kernel) on each pixel;
if any position of the kernel falls outside the foreground,
the center pixel is removed (set to background).
The erosion of a point is the minimum of the points in its neighborhood,
with that neighborhood defined by the structuring element.
Erosion = minimum filter.
Replace the value of a pixel by the minimal value covered by the structuring element.

Suppose:
A is a 13 x 13 matrix, and
B is a 3 x 3 matrix of ones.

A =
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 0 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1

B =
1 1 1
1 1 1
1 1 1

Assuming that the origin of B is at its center:
for each pixel in A, superimpose the origin of B;
if B is completely contained by A,
then the pixel is retained,
otherwise it is deleted.

Therefore the Erosion of A by B is given by this 13 x 13 matrix:
0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 1 1 1 0 0 0 1 1 1 1 0
0 1 1 1 1 0 0 0 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 1 1 0
0 0 0 0 0 0 0 0 0 0 0 0 0

This means that a pixel's value is retained only when B,
centered on that pixel, is completely contained inside A;
otherwise it is deleted, or eroded.
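A minimal sketch reproducing this erosion example with scipy.ndimage; A is built programmatically rather than typed out, and pixels outside A are treated as background.

import numpy as np
from scipy import ndimage

# The 13 x 13 matrix A from the example: all ones, except a single 0
# at row 2, column 7 (1-based), i.e. index [1, 6].
A = np.ones((13, 13), dtype=int)
A[1, 6] = 0

# The 3 x 3 structuring element B of all ones.
B = np.ones((3, 3), dtype=int)

# border_value=0: pixels outside A are background, so the outer border
# of A is eroded away as well.
eroded = ndimage.binary_erosion(A, structure=B, border_value=0).astype(int)
print(eroded)   # matches the 13 x 13 result above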

https://en.wikipedia.org/wiki/Dilation_(morphology)
For a binary image: center the structuring element (kernel) on each foreground pixel
and set every pixel covered by the kernel to foreground.
The dilation of a point is the maximum of the points in its neighborhood,
with that neighborhood defined by the structuring element.
Dilation = maximum filter.
Replace the value of a pixel by the maximal value covered by the structuring element.

Suppose:
A is the following 11 x 11 matrix, and
B is the following 3 x 3 matrix of ones.

A =
0 0 0 0 0 0 0 0 0 0 0
0 1 1 1 1 0 0 1 1 1 0
0 1 1 1 1 0 0 1 1 1 0
0 1 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 1 0
0 1 1 0 0 0 1 1 1 1 0
0 1 1 0 0 0 1 1 1 1 0
0 1 1 0 0 0 1 1 1 1 0
0 1 1 1 1 1 1 1 0 0 0
0 1 1 1 1 1 1 1 0 0 0
0 0 0 0 0 0 0 0 0 0 0

B =
1 1 1
1 1 1
1 1 1

For each pixel in A that has a value of 1, superimpose B,
with the center of B aligned with the corresponding pixel in A.
Each pixel of every superimposed B is included in the dilation of A by B.
The dilation of A by B is given by this 11 x 11 matrix:
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 0 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 0 0
1 1 1 1 1 1 1 1 1 0 0

https://en.wikipedia.org/wiki/Opening_(morphology)
Opening - the dilation of the erosion

https://en.wikipedia.org/wiki/Closing_(morphology)
Closing - the erosion of the dilation
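A minimal sketch checking these definitions with scipy.ndimage on an arbitrary binary test image: opening as the dilation of the erosion, closing as the erosion of the dilation.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.random((64, 64)) > 0.7        # arbitrary binary test image
selem = np.ones((3, 3), dtype=bool)       # structuring element

# Opening = erosion followed by dilation; removes small bright specks.
opened = ndimage.binary_dilation(ndimage.binary_erosion(image, selem), selem)
print(np.array_equal(opened, ndimage.binary_opening(image, selem)))   # True

# Closing = dilation followed by erosion; fills small dark holes.
closed = ndimage.binary_erosion(ndimage.binary_dilation(image, selem), selem)
print(np.array_equal(closed, ndimage.binary_closing(image, selem)))   # True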

See ndimage api for these methods:
http://scipy-lectures.org/intro/scipy.html#mathematical-morphology
http://scipy-lectures.org/advanced/image_processing/index.html#mathematical-morphology
http://scipy-lectures.org/packages/scikit-image/#mathematical-morphology

21-BioImage/bio5.png
* Principles and examples of binary morphological filtering.
* An object in the image is described as the set (denoted X) of all coordinates of pixels belonging to that object.
* Morphological filters process this set using a second set, known as the structuring element (denoted S).
* Here the discussion is limited to structuring elements that are symmetrical with respect to their center element, s = (0, 0), indicated by the dot.
* The dilation of X is defined as
* the set of all coordinates x for which the intersection of S placed at x (denoted Sx) with X is not empty.
* The erosion of X is defined as
* the set of all x for which Sx is a subset of X.
* An erosion followed by a dilation is called an opening; a dilation followed by an erosion is called a closing.
* These operations are named after the effects they produce, as illustrated.
* Many interesting morphological filters can be constructed by taking differences of two or more operations, such as in morphological edge detection.
* Other applications include
* skeletonization, which consists in a sequence of thinning operations producing the basic shape of objects, and
* granulometry, which uses a family of opening operations with increasingly larger structuring elements to compute the size distribution of objects in an image.

21-BioImage/b00.png
Turbinate image:
(a) as printed,
(b) after enhancement.

+++++++++++ Cahoot-20-3
https://umsystem.instructure.com/courses/97521/quizzes/298143

1.3.6 Geometric transformations

  1. coordinate transformation, and
  2. image re-sampling.

21-BioImage/bio6.png
Geometrical image transformation consists of two steps:

coordinate transformation
* concerns how input pixel positions are mapped to output pixel positions.
* Many types of transformations (denoted T) exist.
* The most frequently used types are (in increasing order of complexity)
* rigid transformations (translations and rotations),
* affine transformations (rigid transformations plus scalings and skewings), and
* curved transformations (affine transformations plus certain nonlinear or elastic deformations). These are defined (or can be approximated) by polynomial functions (with degree n depending on the complexity of the transformation).

image re-sampling
* concerns the computation of the pixel values of the output image (denoted O) from the pixel values of the input image (denoted I).
* This is done by using the inverse transformation (denoted T−1) to map output grid positions (x0, y0) to input positions (x, y).
* The value at this point is then computed by interpolation from the values at neighboring grid positions, using a weighing function, also known as the interpolation kernel (denoted K).
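A minimal sketch of a rigid transformation with scipy.ndimage.affine_transform, which takes the inverse mapping (output coordinates to input coordinates) and re-samples the input by interpolation; the rotation angle and example image are arbitrary.

import numpy as np
from scipy import ndimage
from skimage import data

image = data.camera().astype(float)   # arbitrary example image

# Rigid transformation: rotation about the image center.
theta = np.deg2rad(15)
# affine_transform expects the matrix mapping OUTPUT coordinates to
# INPUT coordinates (i.e. the inverse transformation T^-1).
T_inv = np.array([[ np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
center = np.array(image.shape) / 2.0
offset = center - T_inv @ center      # keep the rotation centered

# order=1: bilinear interpolation kernel for the re-sampling step.
rotated = ndimage.affine_transform(image, T_inv, offset=offset, order=1)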

1.3.7 Image restoration

21-BioImage/bio7.png
* Examples of the effects of image restoration operations:
* background subtraction,
* noise reduction, and
* deconvolution.
* Intensity gradients may be removed by subtracting a background image.
* In some cases, this image may be obtained from the raw image itself by mathematically fitting a polynomial surface function through the intensities at selected points (indicated by the squares) corresponding to the background.
* Several filtering methods exist to reduce noise:
* Gaussian filtering blurs not only noise but all image structures.
* Median filtering is somewhat better at retaining object edges but has the tendency to eliminate very small objects (compare the circles in each image).
* The magnitude of these effects depends on the filter size.
* Non-linear diffusion filtering was designed specifically to preserve object edges while reducing noise.
* De-convolution methods aim to undo the blurring effects of the microscope optics and to restore small details.
* More sophisticated methods can also appear to reduce noise.
* Whether noise itself is truly reduced is another issue.
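A minimal sketch of two of these restoration steps with scipy.ndimage; as a simplification, the background is estimated with a very wide Gaussian rather than a fitted polynomial surface, and the example image and filter sizes are arbitrary.

import numpy as np
from scipy import ndimage
from skimage import data

image = data.camera().astype(float)          # arbitrary example image

# Background subtraction: estimate a slowly varying background with a
# very wide Gaussian and subtract it (a simple stand-in for fitting a
# polynomial surface through selected background points).
background = ndimage.gaussian_filter(image, sigma=50)
flattened = image - background

# Noise reduction: Gaussian filtering blurs noise and structures alike;
# median filtering preserves edges better but removes very small objects.
gaussian_denoised = ndimage.gaussian_filter(image, sigma=1.5)
median_denoised = ndimage.median_filter(image, size=3)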

1.3.8 Co-localization analysis

An interesting question in many biological studies is:
to what degree are two or more molecular objects (typically proteins) present at the same locations in a specimen?
21-BioImage/bio8.png
* Commonly used measures for quantitative co-localization analysis
* The aim of all these measures is to express in numbers, the degree of overlap between two fluorophores (captured in well separated channels), indicating the presence of the corresponding labeled molecules in the same or proximate physical locations (up to the optical resolution of the microscope).
* A visual impression of the co-occurrence of fluorophore intensities (I1 and I2) is given by the joint histogram
* (also referred to as the scatter plot or fluorogram).
* Some co-localization measures are computed over the entire images,
* while others are restricted to certain intensity ranges (indicated by the squares in the joint histograms).
* Among the first are
* Pearson’s correlation coefficient (denoted rP) and
* the so-called overlap coefficient (denoted r and computed from the sub-coefficients k1 and k2).
* Both coefficients are insensitive to intensity scalings (due to photo-bleaching or a difference in signal amplification), while the former is also insensitive to intensity offsets (different background levels).
* The value of rP may range from −1 to 1, and is therefore at odds with intuition.
* Its squared value is perhaps more valuable as it expresses the quality of a least-squares fitting of a line through the points in the scatter plot.
* The other measures range from 0 to 1.
* The value of r is meaningful only when the amount of fluorescence is approximately equal in both channels, that is when k1 and k2 have similar values.
* Manders’ co-localization coefficients (denoted m1 and m2) are intuitively most clear, but require careful separation of signal and background in both channels:
* the denominators are computed over the entire images, but the numerators sum only those intensities in one channel for which the corresponding intensity in the other channel is within a predefined range
* (the left and right and the top and bottom lines of the square region indicated in the joint histogram, for I1 and I2 respectively).
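A minimal sketch computing these measures for two channel images; the background thresholds used for the Manders coefficients are placeholders that would need careful selection in practice.

import numpy as np

def colocalization(I1, I2, thr1=0, thr2=0):
    """Simple co-localization measures for two channel images."""
    I1 = I1.astype(float).ravel()
    I2 = I2.astype(float).ravel()

    # Pearson's correlation coefficient r_P
    # (insensitive to intensity offsets and scalings).
    rP = np.corrcoef(I1, I2)[0, 1]

    # Overlap coefficient r and its sub-coefficients k1, k2.
    r = np.sum(I1 * I2) / np.sqrt(np.sum(I1 ** 2) * np.sum(I2 ** 2))
    k1 = np.sum(I1 * I2) / np.sum(I1 ** 2)
    k2 = np.sum(I1 * I2) / np.sum(I2 ** 2)

    # Manders' coefficients m1, m2: fraction of each channel's intensity
    # found where the other channel is above its background threshold.
    m1 = np.sum(I1[I2 > thr2]) / np.sum(I1)
    m2 = np.sum(I2[I1 > thr1]) / np.sum(I2)
    return rP, r, k1, k2, m1, m2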

1.3.9 Neuron tracing and quantification

21-BioImage/bio9.png
* Tracing of neurite outgrowth using interactive segmentation methods.
* To reduce background intensity gradients (shading effects) or discontinuities (due to the stitching of scans with different background levels), the image features exploited here are the second-order derivatives, obtained by convolution with the second-order Gaussian derivative kernels at a proper scale (to suppress noise).
* These constitute a so-called Hessian matrix at every pixel in the image.
* Its eigenvalues and eigenvectors are used to construct an ellipse (as indicated), whose size is representative of the local neurite contrast and whose orientation corresponds to the local neurite orientation.
* In turn, these properties are used to compute a cost image (with dark values indicating a lower cost and bright values a higher cost) and vector field (not shown), which together guide a search algorithm that finds the paths of minimum cumulative cost between a start point and all other points in the image.
* By using graphics routines, the path to the current cursor position (indicated by the cross) is shown at interactive speed while the user selects the optimal path based on visual judgment.
* Once tracing is finished, neurite lengths and statistics can be computed automatically.
* This is the underlying principle of the NeuronJ tracing tool, freely available as a plugin to the ImageJ program.
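A minimal sketch of the feature-computation step only (second-order Gaussian derivatives and per-pixel Hessian eigenvalues), using scipy.ndimage; the scale sigma is an arbitrary choice, and this is not the NeuronJ implementation.

import numpy as np
from scipy import ndimage

def hessian_eigenvalues(image, sigma=2.0):
    """Per-pixel eigenvalues of the Gaussian second-derivative (Hessian) matrix."""
    image = image.astype(float)
    # Second-order Gaussian derivative filters (order = derivative per axis).
    Hrr = ndimage.gaussian_filter(image, sigma, order=(2, 0))
    Hcc = ndimage.gaussian_filter(image, sigma, order=(0, 2))
    Hrc = ndimage.gaussian_filter(image, sigma, order=(1, 1))
    # Closed-form eigenvalues of the symmetric 2x2 matrix [[Hrr, Hrc], [Hrc, Hcc]].
    trace_half = (Hrr + Hcc) / 2.0
    disc = np.sqrt(((Hrr - Hcc) / 2.0) ** 2 + Hrc ** 2)
    return trace_half + disc, trace_half - disc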

1.3.10 Particle detection and tracking, and Cell tracking

Particle tracking methods consist of two stages:
1. the detection of individual particles per time frame, and
2. the linking of particles detected in successive frames.

21-BioImage/bio10.png
Challenges in particle and cell tracking.
* Regarding particle tracking, currently one of the best approaches to detection of fluorescent tags is by least-squares fitting of a model of the intensity distribution to the image data.
* Because the tags are sub-resolution particles, they appear as diffraction-limited spots in the images and therefore can be modeled well by a mixture of Gaussian functions, each with its own amplitude scaling factor, standard deviation, and center position.
* Usually the detection is done separately for each time step, resulting in a list of potential particle positions and corresponding features, to be linked between time steps.
* The linking is hampered by the fact that the number of detected particles may be different for each time step.
* In cell tracking, a contour model (surface model in the case of 3D time-lapse experiments) is often used for segmentation.
* Commonly used models consist of control points, which are interpolated using smooth basis functions (typically B-splines) to form continuous, closed curves.
* The model must be flexible enough to handle geometrical as well as topological shape changes (cell division).
* The fitting is done by (constrained) movement of the control points to minimize some predefined energy functional computed from image-dependent information (intensity distributions inside and outside the curve) as well as image-independent information (a-priori knowledge about cell shape and dynamics).
* Finally, trajectories can be visualized by representing them as tubes (segments) and spheres (time points) and using surface rendering.
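A minimal sketch of the detection stage for a single spot: least-squares fitting of an isotropic 2D Gaussian with scipy.optimize.curve_fit on a synthetic patch; all values here are illustrative.

import numpy as np
from scipy.optimize import curve_fit

def gaussian_spot(coords, amplitude, row0, col0, sigma, offset):
    """Isotropic 2D Gaussian: the model for a diffraction-limited spot."""
    r, c = coords
    return (amplitude * np.exp(-((r - row0) ** 2 + (c - col0) ** 2)
                               / (2 * sigma ** 2)) + offset).ravel()

# Synthetic 15 x 15 image patch containing one noisy spot.
rng = np.random.default_rng(0)
rows, cols = np.mgrid[0:15, 0:15]
truth = (100.0, 7.2, 6.8, 1.5, 10.0)
patch = gaussian_spot((rows, cols), *truth).reshape(15, 15)
patch = patch + rng.normal(scale=2.0, size=patch.shape)

# Least-squares fit; p0 is a rough initial guess (center of the patch).
p0 = (patch.max() - patch.min(), 7, 7, 2.0, patch.min())
params, _ = curve_fit(gaussian_spot, (rows, cols), patch.ravel(), p0=p0)
print("estimated (amplitude, row, col, sigma, offset):", params)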

Reminder:
21-BioImage/bio2.png

1.3.11 Visualization

Several visualization methods exist, which vary in efficiency:

1.3.11.1 Volume rendering (ray casting)

1.3.11.2 Surface rendering

21-BioImage/bio11.png
* Visualization of volumetric image data using volume rendering and surface rendering methods.
* Volume rendering methods do not require an explicit geometrical representation of the objects of interest present in the data.
* A commonly used volume rendering method is ray casting: for each pixel in the view image, a ray is cast into the data, and the intensity profile along the ray is fed to a ray function, which determines the output value, such as the maximum, average, or minimum intensity, or accumulated “opacity” (derived from intensity or gradient magnitude information).
* By contrast, surface rendering methods require a segmentation of the objects (usually obtained by thresholding), from which a surface representation (triangulation) is derived, allowing very fast rendering by graphics hardware.
* To reduce the effects of noise, Gaussian smoothing is often applied as a pre-processing step prior to segmentation.
* As shown, both operations have a substantial influence on the final result: by slightly changing the degree of smoothing or the threshold level, objects may appear (dis)connected while in fact they are not.
* Therefore it is recommended to establish optimal parameter values for both steps while inspecting the effects on the original image data rather than looking directly at the renderings.
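A minimal sketch of both ideas on a synthetic volume: a maximum intensity projection as the simplest ray function, and Gaussian smoothing plus thresholded surface extraction (marching cubes) for surface rendering; the volume, sigma, and iso-level are arbitrary.

import numpy as np
from scipy import ndimage
from skimage import measure

# Synthetic 3D volume: a bright ball in a noisy background.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = 100.0 * (x**2 + y**2 + z**2 < 20**2)
volume = volume + np.random.default_rng(0).normal(scale=5.0, size=volume.shape)

# Volume rendering, simplest ray function: the maximum intensity projection
# (each output pixel is the maximum along a ray cast through the volume).
mip = volume.max(axis=0)

# Surface rendering: smooth, threshold (iso-level), and triangulate the surface.
smoothed = ndimage.gaussian_filter(volume, sigma=1.5)
verts, faces, normals, values = measure.marching_cubes(smoothed, level=50.0)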

1.4 General vision methods: edges and gradients

1.4.1 Low-level feature detection algorithms

https://en.wikipedia.org/wiki/Feature_detection_(computer_vision)
21-BioImage/pasted_image021.png

1.4.1.1 Edges

https://en.wikipedia.org/wiki/Edge_detection
21-BioImage/pasted_image029.png

Canny:
https://en.wikipedia.org/wiki/Canny_edge_detection

Example problem:
https://en.wikipedia.org/wiki/Sobel_operator
21-BioImage/pasted_image018.png

How do we detect edges?
Edge detection: zoom in on one row
21-BioImage/pasted_image019.png
Of one row:
intensity,
derivative, and
smoothed derivative

Do this with rows, then columns, and you get edges

Sobel applies an x-derivative approximation and a y-derivative approximation
(the convolution kernels shown above)

+++++++++++ Cahoot-20-4
https://umsystem.instructure.com/courses/97521/quizzes/298144

http://scipy-lectures.org/advanced/image_processing/index.html#edge-detection
(Sobel)
21-BioImage/pasted_image020.png
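A minimal sketch of the same idea with scipy.ndimage.sobel; the example image is arbitrary.

import numpy as np
from scipy import ndimage
from skimage import data

image = data.camera().astype(float)   # arbitrary example image

# Derivative approximations along rows (axis=0) and columns (axis=1).
drow = ndimage.sobel(image, axis=0)
dcol = ndimage.sobel(image, axis=1)

# Gradient magnitude: strong responses mark edges in either direction.
edges = np.hypot(drow, dcol)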

1.4.1.2 Gradients

21-BioImage/pasted_image022.png

1.4.1.3 Histogram of oriented gradients (HOG)

21-BioImage/pasted_image024.png
21-BioImage/pasted_image025.png

Feature histogram
21-BioImage/pasted_image026.png

21-BioImage/pasted_image028.png

HOGgles
21-BioImage/pasted_image027.png
* Middle row displays roughly what the computer with HOG might see.
* Top row is HOG.
* Doing HOG first helps to scan/detect objects (a form of feature extraction)
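A minimal sketch computing HOG features with scikit-image; the cell and block sizes are arbitrary choices.

from skimage import data, color
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())   # arbitrary example image

# fd: the flattened feature histogram; hog_image: a visualization of the
# dominant gradient orientations per cell (similar to the figures above).
fd, hog_image = hog(image,
                    orientations=8,
                    pixels_per_cell=(16, 16),
                    cells_per_block=(1, 1),
                    visualize=True)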

1.5 Labeling and segmentation

What is segmentation on biological images?
21-BioImage/segmentation_fault.jpg

How do you label all of the cell or nucleus data in these images with the same algorithm?
Below are images from your next assignment:
../../../../index/Classes/Bioinformatics/Content.html
(watch video in class)

21-BioImage/0a849e0eb15faa8a6d7329c3dd66aabe9a294cccb52ed30a90c8ca99092ae732.png 21-BioImage/0e132f71c8b4875c3c2dd7a22997468a3e842b46aa9bd47cf7b0e8b7d63f0925.png
21-BioImage/0ed3555a4bd48046d3b63d8baf03a5aa97e523aa483aaa07459e7afa39fb96c6.png 21-BioImage/1ef68e93964c2d9230100c1347c328f6385a7bc027879dc3d4c055e6fe80cb3c.png
21-BioImage/0f1f896d9ae5a04752d3239c690402c022db4d72c0d2c087d73380896f72c466.png 21-BioImage/1cdbfee1951356e7b0a215073828695fe1ead5f8b1add119b6645d2fdc8d844e.png
21-BioImage/3c4c675825f7509877bc10497f498c9a2e3433bf922bd870914a2eb21a54fd26.png 21-BioImage/1d9eacb3161f1e2b45550389ecf7c535c7199c6b44b1c6a46303f7b965e508f1.png

1.5.1 Segmentation theory

1.5.1.1 Segmentation by histogram thresholding

https://en.wikipedia.org/wiki/Otsu%27s_method
In Otsu’s method, we exhaustively search for the threshold that minimizes the intra-class variance (the variance within each class), defined as a weighted sum of the variances of the two classes.
The algorithm assumes that the image contains two classes of pixels following a bi-modal histogram (foreground pixels and background pixels); it then calculates the optimum threshold separating the two classes so that their combined spread (intra-class variance) is minimal, or equivalently (because the sum of pairwise squared distances is constant), so that their inter-class variance is maximal.
21-BioImage/otsu_lung.jpg
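A minimal sketch of Otsu thresholding with scikit-image; the example image is arbitrary.

from skimage import data, filters

image = data.camera()                 # arbitrary grayscale example image

# Exhaustively search for the threshold minimizing intra-class variance.
t = filters.threshold_otsu(image)
binary = image > t                    # foreground / background mask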

https://en.wikipedia.org/wiki/Balanced_histogram_thresholding
This method weighs the histogram, checks which of the two sides is heavier, and removes weight from the heavier side until it becomes the lighter one.
It repeats the same operation until the edges of the weighing scale meet.
21-BioImage/spyder.jpeg 21-BioImage/spyder_thresh.jpeg
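A minimal from-scratch sketch of the balanced histogram idea described above; the bin count is arbitrary and this is not an optimized implementation.

import numpy as np

def balanced_histogram_threshold(image, nbins=256):
    """Trim weight from the heavier side of the histogram until the two
    ends of the 'weighing scale' meet; return the resulting threshold."""
    hist, bin_edges = np.histogram(image.ravel(), bins=nbins)
    lo, hi = 0, nbins - 1
    mid = (lo + hi) // 2
    w_left = hist[lo:mid + 1].sum()        # weight on the left side
    w_right = hist[mid + 1:hi + 1].sum()   # weight on the right side
    while lo <= hi:
        if w_right > w_left:               # right side is heavier
            w_right -= hist[hi]
            hi -= 1
        else:                              # left side is heavier (or equal)
            w_left -= hist[lo]
            lo += 1
        new_mid = (lo + hi) // 2
        if new_mid < mid:                  # fulcrum moved left: bin crosses sides
            w_left -= hist[mid]
            w_right += hist[mid]
        elif new_mid > mid:                # fulcrum moved right
            w_left += hist[new_mid]
            w_right -= hist[new_mid]
        mid = new_mid
    return bin_edges[mid]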

1.5.1.2 Segmentation with clustering algorithms

Recall k-means
1. Pick K cluster centers, either randomly or based on some heuristic method, for example K-means++
2. Assign each pixel in the image to the cluster that minimizes the distance between the pixel and the cluster center
3. Re-compute the cluster centers by averaging all of the pixels in the cluster
4. Repeat steps 2 and 3 until convergence is attained (i.e. no pixels change clusters)
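A minimal sketch of k-means segmentation on pixel values (the X-features), using scikit-learn; the distance here is Euclidean distance in RGB space, and K = 3 and the example image are arbitrary choices.

import numpy as np
from sklearn.cluster import KMeans
from skimage import data

image = data.astronaut()                      # arbitrary RGB example image
pixels = image.reshape(-1, 3).astype(float)   # one RGB value per pixel

# Steps 1-4: fit K cluster centers and assign every pixel to the nearest one.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)

# Recolor each pixel with its cluster center to visualize the segmentation.
segmented = kmeans.cluster_centers_[kmeans.labels_].reshape(image.shape)
segmented = segmented.astype(np.uint8)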

Original image
21-BioImage/polar.jpg

k-means on that image’s X-features (pixel values)
21-BioImage/polar_kmeans.png
* Recall/consider?
* What is distance?
* How were colors chosen?

1.5.1.3 Segmentation via edge detection

https://en.wikipedia.org/wiki/Edge_detection
* Edge detection is a well-developed field on its own within image processing.
* Region boundaries and edges are closely related, since there is often a sharp adjustment in intensity at the region boundaries.
* Edge detection techniques have therefore been used as the base of another segmentation technique.
* Segmentation methods can also be applied to edges obtained from edge detectors.
21-BioImage/edges.png

1.5.1.4 Many, many more segmentation methods

https://en.wikipedia.org/wiki/Image_segmentation

1.5.2 Python demos to go over in class

1.5.2.1 scipy from scipy-lectures

May be more out-of-date on some functions
* http://scipy-lectures.org/intro/scipy.html#connected-components-and-measurements-on-images
* http://scipy-lectures.org/advanced/image_processing/index.html#segmentation
* (based on threshold and more)
* http://scipy-lectures.org/advanced/image_processing/index.html#measuring-objects-properties-ndimage-measurements

1.5.2.2 scipy docs

1.5.2.3 scikit-image from scipy-lectures

May be more out-of-date on some functions
* http://scipy-lectures.org/packages/scikit-image/#image-segmentation (based on threshold and more)
* For example: http://scipy-lectures.org/packages/scikit-image/auto_examples/plot_labels.html#sphx-glr-packages-scikit-image-auto-examples-plot-labels-py (in class review)
* http://scipy-lectures.org/packages/scikit-image/#measuring-regions-properties (for labeling)
* http://scipy-lectures.org/packages/scikit-image/#data-visualization-and-interaction

1.5.2.4 scikit-image docs

1.5.3 How to judge accuracy of a labeling

https://en.wikipedia.org/wiki/Jaccard_index
Recall the Rand index we covered previously!

The Jaccard index, also known as Intersection over Union (IoU)
or the Jaccard similarity coefficient
(originally given the French name coefficient de communauté by Paul Jaccard),
is a statistic used for gauging the similarity and diversity of sample sets.
The Jaccard coefficient measures similarity between finite sample sets,
and is defined as the size of the intersection
divided by the size of the union of the sample sets:

$J(A,B) = \frac{|A \cap B|}{|A \cup B|} = \frac{|A \cap B|}{|A| + |B| - |A \cap B|}$

If A and B are both empty,
we define J(A,B) = 1

$0 \leq J(A,B) \leq 1$

Union (left) and intersection (right):
21-BioImage/Union_of_sets_A_and_B.png 21-BioImage/Intersection_of_sets_A_and_B.png
21-BioImage/Intersection_over_Union_object_detection_bounding_boxes.jpg
21-BioImage/Intersection_over_Union_poor_good_and_excellent_score.png
21-BioImage/Intersection_over_Union_visual_equation.png
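A minimal sketch computing the Jaccard index (IoU) for two binary masks; the example masks are arbitrary.

import numpy as np

def jaccard_index(mask_a, mask_b):
    """Intersection over union of two boolean masks; 1.0 if both are empty."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 1.0 if union == 0 else intersection / union

# Example: two overlapping 5 x 5 masks (9 pixels each, 4 shared).
a = np.zeros((5, 5), dtype=bool); a[1:4, 1:4] = True
b = np.zeros((5, 5), dtype=bool); b[2:5, 2:5] = True
print(jaccard_index(a, b))   # 4 / 14, approximately 0.286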

1.6 Intro on nuclei from data science bowl

https://kaggle.com/competitions/data-science-bowl-2018/stage1/teaching-notebook-for-total-imaging-newbies.py
https://kaggle.com/competitions/data-science-bowl-2018/stage1/basic-skimage-nuclei.py