Download tool to explore, filter, and download field and lab data.
Windows: Download Zip. All systems: visit our
GitHub page
for installation instructions.
Video tutorial: YouTube
A short written manual explaining the query parameters can be found
here
Lab to Field Translator
A generative adversarial network (GAN) that translates plant images taken in the lab so
that they appear to have been taken in the field. Publication pending. Preview:
Equipment
The following equipment is available to TerraByte.
EAGL-I
Gantry-style imaging robot
Phenospex
Multispectral 3D Scanner
Specim FX10
Multispectral Camera
Farmbot Genesis
CNC Farming
DJI Mini SE
Imaging Drones
Field Rover
Semi-autonomous field-data collector
Growing Chamber
Controlled growing environment
Photogrammetry
3D reconstruction
Contact Us
For inquiries please contact m.ajmani@uwinnipeg.ca
Convolutional neural networks generally perform better on data that is similar to
the data they were trained on. To ensure that the models we train at TerraByte
perform well under field conditions, we have several strategies at our disposal. A very
common one is transfer learning: a model trained in one environment is retrained on data
from the new environment, carrying over the filters it has already learned for low- to
high-level features.
Another approach is to modify our data so that it resembles field data, even though the
images were taken in the greenhouse. For transfer learning we collect a large amount of
(initially unlabeled) field data. For the second approach we use image segmentation and
background removal to digitally "plant" our lab data in field-like environments.
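The "digital planting" step can be sketched as a simple alpha composite, assuming a binary plant mask has already been produced by segmentation (array names and sizes are illustrative):

```python
import numpy as np

def composite(lab_image, plant_mask, field_background):
    """Paste a segmented lab plant onto a field background.

    lab_image:        (H, W, 3) lab photo of the plant
    plant_mask:       (H, W) binary mask, 1 where the plant is
    field_background: (H, W, 3) field scene of the same size
    """
    mask = plant_mask[..., None].astype(lab_image.dtype)
    return lab_image * mask + field_background * (1 - mask)

# Toy example: a 2x2 "image" where the top row is plant.
lab = np.full((2, 2, 3), 200, dtype=np.uint8)
field = np.full((2, 2, 3), 50, dtype=np.uint8)
mask = np.array([[1, 1], [0, 0]])

out = composite(lab, mask, field)
```

Plant pixels keep their lab values while background pixels come from the field scene; a soft (fractional) mask would blend the edges instead of cutting them hard.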
Date: January 2020
Multispectral 3D Scanning
Applying 3D CNNs to multispectral 3D point models
The multispectral 3D scanning project establishes a framework for generating labeled 3D
imaging datasets of real plants and for developing 3D convolutional neural networks that
solve detection, classification, and regression problems in agriculture. The imaging
scanner used in the project is the Phenospex PlantEye, a plant sensor that produces
real-time multispectral 3D point models of plants and crops. The scanner's output
contains detailed information on more than 14 parameters describing the morphological and
spectral properties of plants, such as plant height, 3D leaf area, leaf inclination,
digital biomass, and light penetration depth. PlantEye also allows the 3D point models to
be visualized with different spectral indices such as NDVI or PSRI, where each index
enables the calculation of multiple measures related to plant health, performance, and
photosynthesis. We create fully automated systems for data preprocessing and analysis by
developing 3D convolutional neural networks that learn, interpret, and analyze 3D point
models of plants in order to monitor plant growth and to improve plant health, quality,
and yield.
Date: November 2020 - Today
Hyperspectral Imaging
Getting plants' spectra from IR to UV
Insects see the world differently from humans. While our eyes have only three types of
colour sensitive cells, butterflies for example have four types of colour receptors,
allowing them to see more colours than we do. These insects have evolved this trait to
better find food, mate, and to avoid predators. A device that has a similar superhuman
ability is the hyperspectral camera. It scans the scene at hundreds of different
wavelengths
in the visible and infrared spectrum. In other words, a hyperspectral camera can
distinguish
hundreds of colours of light, way more than the three colours – red, green, and blue –
our
cell phone camera captures. Hyperspectral imaging of plants enables scientists to
discover
plant properties that are invisible to humans. At the University of Winnipeg, we use a
hyperspectral camera to find new ways to visually characterise plants.
Date: November 2020 - Today
The following video shows the reflectance of a plant at different wavelengths. The video
is colourized such that the colour of each frame corresponds roughly to that wavelength's
colour in the visible part of the spectrum. Note that wavelengths beyond 800 nm (infrared)
are normally not visible to the human eye; we still gave them a red hue in this video.
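A hyperspectral scan is typically stored as a data cube with two spatial axes and one spectral axis. A minimal sketch of pulling out the band closest to a given wavelength — the cube shape and wavelength grid here are illustrative, not our camera's actual specification:

```python
import numpy as np

# Hypothetical cube: 4 x 4 pixels, 224 spectral bands from 400 to 1000 nm.
wavelengths = np.linspace(400, 1000, 224)
cube = np.random.rand(4, 4, 224)

def band_at(cube, wavelengths, target_nm):
    """Return the 2D image of the band closest to target_nm."""
    idx = int(np.argmin(np.abs(wavelengths - target_nm)))
    return cube[:, :, idx]

red_band = band_at(cube, wavelengths, 680)  # chlorophyll absorption region
nir_band = band_at(cube, wavelengths, 800)  # strong leaf reflectance
```

Each such band is an ordinary grayscale image, which is how a video like the one above can be assembled frame by frame across the spectrum.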
Bayesian Neural Networks
Stochastic training algorithms
A stochastic neural network (SNN) is an artificial neural network with probabilistic
network weights, usually trained under the Bayesian framework. Compared with a regular
neural network, an SNN provides a set of probability distributions for the network
weights instead of point estimates. This allows an SNN to reduce the risk of overfitting
and of overconfident predictions. Although developing an efficient training algorithm is
not easy, Bayesian statistics provides a good way to analyze and train SNNs. My research
explores stochastic training algorithms for SNNs using Bayesian methods and solves them
with numerical methods such as Markov chain Monte Carlo and sequential Monte Carlo.
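A minimal sketch of the Markov chain Monte Carlo idea on a toy one-weight "network" y = w·x with Gaussian noise, sampled with random-walk Metropolis; all data, priors, and step sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data generated from y = 2x + noise.
x = np.linspace(-1, 1, 50)
y = 2.0 * x + rng.normal(0, 0.1, size=x.size)

def log_posterior(w):
    # Gaussian likelihood (sigma = 0.1) plus a standard-normal prior on w.
    log_lik = -0.5 * np.sum((y - w * x) ** 2) / 0.1**2
    log_prior = -0.5 * w**2
    return log_lik + log_prior

# Random-walk Metropolis: propose a perturbed weight, accept or reject.
samples, w = [], 0.0
for _ in range(5000):
    proposal = w + rng.normal(0, 0.1)
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(w):
        w = proposal
    samples.append(w)

posterior = np.array(samples[1000:])  # drop burn-in
```

Instead of a single trained weight, the result is a cloud of samples whose spread quantifies the uncertainty in the weight, which is exactly what distinguishes an SNN from its deterministic counterpart.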
Date: September 2020 - Today
Field Data Collection
Automated data collection with drones and a rover
We investigate several methods of collecting field data with different camera systems,
from simply mounting cameras on a tractor to flying drones. We are developing a field
rover with the goal of fully automating field-data collection, and we are starting first
tests with it in the growing season of 2021. The imaging equipment ranges from standard
RGB cameras to multi- and hyperspectral cameras.