As a natural extension of the existing biomedical image analysis field, an emerging engineering area develops and uses image data analysis and informatics techniques to extract, compare, search and manage the biological knowledge contained in the respective images.

Image analysis is a detailed examination of the class characteristics and individualizing characteristics observed in a questioned (also called unknown) image (photograph or video), along with an assessment of any relevant limitations of the imaging technology used to reproduce the image in question.

The image under observation will be analyzed using the image processing software ImageJ.

The image provided is a large-scale histology image of cells which demonstrates the following class characteristics:

  • Proliferating cell nuclei stained differently (in brown)
  • Non-proliferating cell nuclei
  • Cellular debris
  • Cells displaying histological variations, represented by heterogeneously stained cell nuclei caused by biological staining

 

The histological variations form a core part of this study: in early cancer, the detection of which is vital to the patient, the cells will not have changed radically, whereas in more developed cancer the cells will have lost their original form.

 

The field of automatic detection and classification of cells continues to be an area of active research, although classification methods have not been investigated rigorously for segmentation tasks.

 

A variety of algorithms have been developed for automated segmentation of cells and cell nuclei.  All of them appear to have limitations, and their success is restricted by:

  • Illumination correction: illumination often varies by more than 1.5-fold across the field of view; this adds an unacceptable level of noise, obscures real quantitative differences, and prevents many types of biological experiments that rely on accurate fluorescence intensity measurements (for example, the DNA content of a nucleus, which varies only two-fold during the cell cycle). A simple flat-field correction is sketched after this list.
  • The nature of the images they are applied to.
  • In most biological images, cells touch each other, causing the simple, fast algorithms used in some commercial software packages to fail. The first objects identified in an image (called primary objects) are often nuclei identified from DNA-stained images, although primary objects can also be whole cells, beads, speckles, tumors, and so on.
  • The limited number of cellular structures analyzed per image
  • The homogeneous staining of the tissue
  • The lack of formal validation studies and comparison with other methods or with assessments made by experienced observers.
  • The absence of tissue artifacts
  • The rigid and uniform shape of the structures being detected, something that does not apply to tissue sectioning problems.
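
As a rough illustration of the illumination problem listed above, the sketch below estimates the slowly varying illumination profile with a large Gaussian blur and divides it out (a retrospective flat-field correction). The file name and the sigma value are assumptions for illustration only, not part of the assignment workflow.

```python
# Minimal flat-field correction sketch (assumed file name and blur width).
import numpy as np
from skimage import io, img_as_float
from skimage.filters import gaussian

img = img_as_float(io.imread("field_of_view.tif", as_gray=True))  # hypothetical image

# A very wide Gaussian approximates the smooth illumination profile.
illumination = gaussian(img, sigma=100)

# Dividing out the profile makes intensities comparable across the field of view.
corrected = img / np.maximum(illumination, 1e-6)
corrected /= corrected.max()
```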

The process being considered here will involve the first five stages of the following pipeline:

  • bioimage feature identification,
  • segmentation,
  • registration,
  • annotation,
  • mining,
  • indexing.

Image features are the fundamental description of pixels/voxels and of all higher-level objects. Useful image features can correspond to statistical, geometrical and morphological properties and the frequency content of image pixels and regions, as well as the topological relationships of multiple image objects. Almost all bioimage-related studies rely on recognising certain image features. For instance, points, edges, curves, corners, ridges and textures have been considered in analyzing (e.g. tracking) dynamic fluorescence images (Dorn et al., 2008).
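
As a small illustration of such low-level features, the sketch below extracts an edge map and corner points with scikit-image; the file name is a placeholder, not the assignment image, and the parameter values are arbitrary.

```python
# Point/edge/corner feature extraction sketch (placeholder file name).
from skimage import io
from skimage.feature import canny, corner_harris, corner_peaks

gray = io.imread("cells.tif", as_gray=True)                   # hypothetical image
edges = canny(gray, sigma=2.0)                                # boolean edge map
corners = corner_peaks(corner_harris(gray), min_distance=5)   # (row, col) coordinates
print(f"{edges.sum()} edge pixels, {len(corners)} corner points")
```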

The process

Thresholding/Segmentation

 

Convert the RGB image into a gray-scale image: Image -> Type -> 8-bit
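
For readers working outside the ImageJ menus, a rough scikit-image equivalent of this conversion is sketched below; note that ImageJ's 8-bit conversion and rgb2gray may weight the colour channels slightly differently, and the file names are placeholders.

```python
# RGB -> 8-bit grayscale conversion sketch (placeholder file names).
from skimage import io, color
from skimage.util import img_as_ubyte

rgb = io.imread("histology.png")             # hypothetical input image
gray8 = img_as_ubyte(color.rgb2gray(rgb))    # luminance conversion scaled to 0-255
io.imsave("histology_8bit.png", gray8)
```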

 

To add to the information available for the manual thresholding, a Fast Fourier Transform is applied to the 8-bit image, and a histogram of the result provides information on the occurrence of different pixel values. Histograms obtained in this way from, firstly, the original colour image and, secondly, the image converted to 8-bit are presented in Figure 1. The two curves have a different shape and a different distribution of values, as evidenced by the mean values: 66.64 for the original colour image and 105.9 for the converted image. Suffice to say at this point that this difference may or may not represent a loss of information which could influence the results obtained.
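
The mean values quoted here were read from ImageJ; the sketch below shows the same kind of comparison (pooled-channel histogram of the colour image against the histogram of the 8-bit conversion) without the FFT step, with a placeholder file name.

```python
# Histogram comparison sketch: colour image (pooled channels) vs 8-bit grayscale.
import numpy as np
from skimage import io, color
from skimage.util import img_as_ubyte

rgb = io.imread("histology.png")             # hypothetical input image
gray8 = img_as_ubyte(color.rgb2gray(rgb))

hist_rgb, _ = np.histogram(rgb.ravel(), bins=256, range=(0, 256))
hist_gray, _ = np.histogram(gray8.ravel(), bins=256, range=(0, 256))

print("mean (colour, pooled channels):", rgb.ravel().mean())
print("mean (8-bit grayscale):        ", gray8.mean())
```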

For the purpose of this assignment I will proceed with the 8-bit grayscale image.

Figure 1: Comparison of histogram information from the FFT of the colour image and of the 8-bit grayscale conversion.

image-1

 

To facilitate counting of particles at a later stage, the image needs to be divided into two parts:

  a) pixels that belong to objects, and

  b) pixels that do not belong to objects, i.e. the background.

This is done by “thresholding” the image by setting all pixels above a certain intensity value (objects) to black and leaving everything else white. This image segmentation can be accomplished either manually or automatically using the ImageJ programme.
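
A minimal sketch of both options, using Otsu's method for the automatic case and an arbitrary example value for the manual case, assuming the 8-bit grayscale image saved earlier:

```python
# Automatic (Otsu) vs manual thresholding sketch.
from skimage import io
from skimage.filters import threshold_otsu

gray8 = io.imread("histology_8bit.png")   # hypothetical 8-bit grayscale image

auto_t = threshold_otsu(gray8)            # automatic threshold value
manual_t = 120                            # example manual cut-off chosen by eye

# Which side counts as "object" depends on whether the stained nuclei are
# darker or brighter than the background in the grayscale image.
mask_auto = gray8 < auto_t
mask_manual = gray8 < manual_t
```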

 

 

Figure 2: Automatic threshold and manual threshold

 

image-2

The manual thresholding allows for more control over the amount of information included. Comparing the results of the manual and the automatic threshold, I have decided to proceed with the results from the automatic threshold (Figure 3), as there is more clarity in the image, which will better facilitate the counting and analysis of particles, and there is no loss of critical information.

Figure 3: Comparison of manual and automatic thresholding with original image

 

image-3

The background is subtracted, leaving a clean image, which is then despeckled twice to remove smaller items before proceeding to the counting of the particles.
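
A rough scikit-image counterpart of these two steps (Process -> Subtract Background and Process -> Noise -> Despeckle, the latter being a 3x3 median filter in ImageJ) is sketched below; the rolling-ball radius and the dark-background assumption are illustrative only.

```python
# Background subtraction and despeckle (applied twice) sketch.
import numpy as np
from skimage import io
from skimage.filters import median
from skimage.morphology import square
from skimage.restoration import rolling_ball

gray8 = io.imread("histology_8bit.png")       # hypothetical 8-bit grayscale image

# rolling_ball assumes bright objects on a dark background; for a light
# background (as in brightfield histology) invert the image first.
background = rolling_ball(gray8, radius=50)   # assumed radius
clean = np.clip(gray8.astype(int) - background, 0, 255).astype(np.uint8)

# ImageJ's Despeckle is a 3x3 median filter; apply it twice as described above.
clean = median(clean, square(3))
clean = median(clean, square(3))
```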

Figure 4: Before and after subtracting the background and performing the watershed once more

image-4

 

 

Figure 5: Particle analysis with minimum area set to 116 pixels.

The counting of the particles: automatic counting is based on several parameters; in this case the size was set to 116–infinity, with a circularity of 0–1. A total of 119 particles were extracted by the process, with an average size of 317.87; in total they take up 18.5% of the area of the sample.
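
A rough Python analogue of Analyze Particles with these settings is sketched below: the binary mask is labelled and regions are filtered by area and circularity (circularity = 4π·area/perimeter²). The mask file name is a placeholder.

```python
# "Analyze Particles" sketch: area >= 116 px, circularity between 0 and 1.
import numpy as np
from skimage import io, measure

mask = io.imread("threshold_mask.png") > 0    # hypothetical binary mask

labels = measure.label(mask)
regions = [
    r for r in measure.regionprops(labels)
    if r.area >= 116
    and 0.0 <= 4 * np.pi * r.area / max(r.perimeter, 1) ** 2 <= 1.0
]

areas = [r.area for r in regions]
print(len(regions), "particles, mean area:", np.mean(areas),
      "% of image area:", 100 * sum(areas) / mask.size)
```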

Figure 5 shows the number of nuclei in the image with their outlines displayed. On comparison with the original image it is apparent that the size threshold is set too high and not all particles of interest are included in the sampling.

 

Figure 6: Comparison of the analysis with size set to 116 against the original image

image6

 

The Analyze Particles procedure was activated again, this time with the minimum size set to 20; the results this time were far more representative of a solution to the problem, as seen in Figure 7.

Figure 7: Particle analysis with minimum area set to 20 pixels, alongside the original image

image7

This analysis produced 243 particles of interest, with an average size of 195.387, taking up 23% of the total area.

 

Figure 8: Graph of perimeters and areas of particles of interest

image8

 Figure 9: Graph of location spread of particles of interest

image9


The plot of the positions of the nuclei centres indicates that the particles of interest are spread over a wide area of the image.
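
A small sketch of how such a spread plot can be produced (a scatter of region centroids in image coordinates); the mask file name is again a placeholder.

```python
# Centroid spread plot sketch (Figure 9 style).
import matplotlib.pyplot as plt
from skimage import io, measure

mask = io.imread("threshold_mask.png") > 0    # hypothetical binary mask
regions = measure.regionprops(measure.label(mask))

ys, xs = zip(*[r.centroid for r in regions])  # regionprops centroids are (row, col)
plt.scatter(xs, ys, s=10)
plt.gca().invert_yaxis()                      # match image coordinates (origin top-left)
plt.xlabel("x (pixels)")
plt.ylabel("y (pixels)")
plt.show()
```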

Conclusion

I would have some concerns that the conversion of the image to grayscale as the first stage in the process is not making optimum use of the colour differences introduced by biological staining. I alluded to this earlier in the essay, as the histogram for the 8-bit grayscale image was remarkably different to the one for the original image, which indicates that there is possibly some loss of information at this early stage. Other approaches would involve applying image processing techniques before the staining is added, as the biological staining also adds colour noise to the image and, to my understanding, is there to assist the human visual system. It would also be possible to continue with the biological staining and explore distinguishing class characteristics through the use of colour filters.

 

Image segmentation is one of the most basic processing steps in many bioimage informatics applications. While the goal is simply to segment out the meaningful objects of interest in the respective image, this task is non-trivial in many cases. Very complicated cases also exist due to problems such as a low signal-to-noise ratio and high variability of the image objects. Notably, bioimage segmentation strongly depends on the features used: for example, texture features can be used for chromatin composition, whereas concavity features may be considered for nuclear morphology (Peng, 2008).

Practically speaking, it seems intuitive to categorize image segmentation methods for molecular and cellular images based on the overall shape of an image object. One class of segmentation problems is to segment globular objects such as nuclei/cells in 2D or 3D images of cell-based assays, where the nuclear compartment may be fluorescently labelled for localization of molecules. Several widely used methods exist, e.g. globular-template-based segmentation, watershed segmentation, Gaussian mixture model estimation and active contour/snake methods, which can be further improved by considering different shape or intensity cues of the objects.
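
As an illustration of one of the globular-object methods mentioned above, the sketch below separates touching nuclei with a distance-transform watershed; the mask file name and the peak footprint size are assumptions.

```python
# Distance-transform watershed sketch for separating touching nuclei.
import numpy as np
from scipy import ndimage as ndi
from skimage import feature, io, measure, segmentation

mask = io.imread("threshold_mask.png") > 0    # hypothetical binary mask

distance = ndi.distance_transform_edt(mask)   # distance to the nearest background pixel
peaks = feature.peak_local_max(distance, labels=measure.label(mask),
                               footprint=np.ones((7, 7)))  # one marker per local maximum

markers = np.zeros(mask.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

# Flooding the negative distance map from the markers splits touching objects.
labels = segmentation.watershed(-distance, markers, mask=mask)
```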
