A publication describing AlGrow is now available!
For best results, view this page on GitHub Pages to see the embedded demonstration video, or directly on GitHub for simple Markdown display.
AlGrow is a software tool for automated image annotation, segmentation and analysis. It was developed by Dr Marcus McHale to support macro-algal disc and plant growth phenotyping in the Plant Systems Biology Laboratory of Dr Ronan Sulpice at the University of Galway.
The easiest way to use the software may be to download a compiled binary from the latest release.
However, if you are familiar with the Python environment, you can also install the latest release from PyPI:
pip install algrow
Then you should be able to launch from the console:
algrow
For advice on more complex installations, including building binaries, please see the build notes.
The video below (only visible on GitHub Pages) demonstrates the use of AlGrow; for more detail, you may also benefit from reading the guide.
For the command-line interface, AlGrow must be launched with a complete configuration, supplied either as arguments or in a configuration file. A complete configuration requires both a configured hull and one of the following: a complete layout specification, a fixed layout file, or `False` passed to the `--detect_layout` argument, e.g. `algrow -i images -o output -l False`.
In our experiments, lamina discs of macroalgae are placed in an array under nylon mesh and arranged into cultivation tanks where seawater is circulated and images are captured by Raspberry Pi computers. Growth rates are determined from the change in visible surface area of each lamina disc, which is measured by image analysis.
The previous quantification method relied on thresholds in each dimension of a chosen colourspace for image segmentation. Although this strategy is widely used in plant phenotyping, it suffers in less controlled imaging environments where the subject may not always be readily distinguished from the background. For example, in our apparatus for Ulva phenotyping, microalgal growth occupies a similar colourspace to the Ulva subject. Similarly, in our apparatus for Palmaria phenotyping, leaching pigments can accumulate on the surface of the nylon mesh, making these colours more difficult to distinguish. These colour gradients also result in poor performance for existing solutions like k-means clustering (e.g. KmSeg), due to poorly defined decision boundaries.
To allow user-supervised definition of the target colour decision boundary, we developed AlGrow with an interactive graphical user interface (GUI). In this interface, pixel colours are presented in a three-dimensional (3D) plot in Lab colourspace. Colours can be selected by shift-clicking, either in the 3D plot or on the source image. When sufficient colours are selected (at least four), a 3D hull can be generated: either the convex hull or an alpha hull, which permits concave surfaces and disjoint regions.
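A minimal sketch of the convex-hull case is given below; it is not AlGrow's own code, the Lab values and image are placeholders, and the alpha-hull and distance-to-surface options are omitted. A pixel is kept when its Lab colour falls inside the hull defined by the selected colours.

```python
import numpy as np
from scipy.spatial import Delaunay
from skimage import color

# Hypothetical colours picked by shift-clicking in the GUI (Lab values)
selected_lab = np.array([
    [45.0, -40.0, 35.0],
    [55.0, -35.0, 45.0],
    [60.0, -45.0, 30.0],
    [50.0, -30.0, 50.0],
])
hull = Delaunay(selected_lab)  # convex hull of the selected colours

rgb = np.random.rand(480, 640, 3)      # stand-in for a source image
lab = color.rgb2lab(rgb)               # same Lab colourspace as the 3D plot
inside = hull.find_simplex(lab.reshape(-1, 3)) >= 0
mask = inside.reshape(lab.shape[:2])   # True where the pixel colour is inside the hull
```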
To automate annotation, we implemented a strategy to detect circular regions of high contrast. This readily detects the subjects in our apparatus, thanks to the high-contrast holding rings (now applied with paint marker in a different apparatus, yet to be reported), and also the circular rim of typical plant pots against a contrasting background. We then cluster these circles into plates/trays and assign indices based on relative positions. Importantly, this relative indexation supports movement of plates and trays across an image series, replacing a previously time-consuming process.
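The idea can be sketched with standard scikit-image and SciPy calls; the parameter values and function names here are placeholders rather than AlGrow defaults. A distance image from a reference ring colour provides contrast, Canny edges feed a Hough circle transform, and single-linkage clustering groups the detected centres into plates.

```python
import numpy as np
from scipy.cluster import hierarchy
from skimage import color, feature, transform

def detect_circles(rgb, ring_lab, radii, n_circles):
    """Find circles of high contrast against a reference ring colour."""
    lab = color.rgb2lab(rgb)
    delta_e = np.linalg.norm(lab - np.asarray(ring_lab), axis=-1)  # CIE76 distance
    edges = feature.canny(delta_e / delta_e.max(), sigma=2)
    hough = transform.hough_circle(edges, radii)
    _, cx, cy, rad = transform.hough_circle_peaks(hough, radii, total_num_peaks=n_circles)
    return np.column_stack([cx, cy, rad])

def group_into_plates(circles, cut_distance):
    """Single-linkage clustering of circle centres; nearby circles share a plate."""
    link = hierarchy.linkage(circles[:, :2], method="single")
    return hierarchy.fcluster(link, t=cut_distance, criterion="distance")
```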
Manual adjustment of fixed thresholds for segmentation in ImageJ is time-consuming and can fail to identify subjects, or to segment them accurately, when their colours vary. Manual curation can also introduce operator error and bias to area quantification.
Another key issue with the manual curation pipeline in ImageJ is the requirement to load all images in a stack into memory concurrently. A number of pre-processing steps were previously required to handle the volume of data from a single tank over a typical experiment (one week) in a single processing step.
The AlGrow application was developed to:
1. Optionally identify the target layout
1.1. A grayscale image is constructed reflecting the distance (delta-E) from the defined circle colour (skimage)
1.2. Canny edge detection and the Hough circle transform are applied to identify target circles
1.3. Circles are clustered to identify groups of defined size (plates) and to remove artifact circles, e.g. reflections (scipy.cluster.hierarchy)
1.4. Orientation and arrangement of circles and plates are considered to assign indexed identities to each circle (scipy.cluster.hierarchy)
1.5. A layout mask is constructed to restrict further analysis to target areas
2. Determine target subject area
2.1 A boolean mask is determined by pixel colour being within the hull or within delta of its surface
2.2 Small objects are removed (`--remove`) and small holes are filled (`--fill`) (skimage.morphology)
2.3 The area of the mask within each circle is determined and output to a CSV file
3. Analysis
3.1 Relative growth rate (RGR) is calculated as the slope of a linear fit to log-transformed area values over a defined period (see the sketch after this list)
3.2 Figures and reports are prepared
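The post-segmentation steps (2.2, 2.3, and 3.1) can be sketched as follows; the `remove`/`fill` values, the `pixel_area` scaling, and the function names are illustrative, not AlGrow's defaults or implementation.

```python
import numpy as np
from skimage import morphology

def clean_mask(in_hull, remove=100, fill=100):
    """Drop objects smaller than `remove` pixels, then fill holes smaller than `fill`."""
    mask = morphology.remove_small_objects(in_hull, min_size=remove)
    return morphology.remove_small_holes(mask, area_threshold=fill)

def disc_area(mask, circle_mask, pixel_area=1.0):
    """Target area (in pixel_area units) within a single circle of the layout."""
    return np.count_nonzero(mask & circle_mask) * pixel_area

def relative_growth_rate(areas, days):
    """RGR as the slope of a straight-line fit to log-transformed areas."""
    slope, _intercept = np.polyfit(days, np.log(areas), deg=1)
    return slope  # per day
```

For example, areas of 100, 120, and 150 mm² on days 0, 1, and 2 give an RGR of roughly 0.20 per day.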
To cite AlGrow, please refer to the associated publication, detailed in the following BibTeX entry:
@article{10.1093/plphys/kiae577,
author = {McHale, Marcus and Sulpice, Ronan},
title = {AlGrow: A graphical interface for easy, fast, and accurate area and growth analysis of heterogeneously colored targets},
journal = {Plant Physiology},
volume = {197},
number = {1},
pages = {kiae577},
year = {2024},
month = {11},
issn = {0032-0889},
doi = {10.1093/plphys/kiae577},
url = {https://doi.org/10.1093/plphys/kiae577},
eprint = {https://academic.oup.com/plphys/article-pdf/197/1/kiae577/60773351/kiae577.pdf}
}