CV and Image Processing for the PFC


#1

Hi guys,

This is Eddie, one of the most recent members of the OpenAG community.

I was wondering what would happen if we included some image processing capabilities in the PFC. What kinds of benefits and limitations would we get from the synergy of both concepts?

Computer vision and image processing can give us some interesting features in terms of plant optimization and machine learning. For instance, Kinect sensors could help us determine interesting quality metrics such as plant biomass. However, calibration issues, computationally intensive algorithms, and turning the PFC into a more complex device could all be drawbacks for future OpenAG farmers. In case you want to take a look at some literature: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3649362/pdf/sensors-13-02384.pdf

What do you think guys? All opinions are more than welcome …


#2

It makes some sense.
For me the recipe system is a bit weak, and there are not enough control loops.
The setpoints for the different loops are time-based, not growth-based.
Integrating a vision system would bring some optimisation to the system and help ensure optimal growth.


#3

Do you think a simple webcam feeding a computer vision (CV) algorithm would be enough? Or do you think that a PCD (point cloud device) such as the Kinect might provide more accuracy and more future options?


#4

Hi guys,

I just wanted to give you a quick update about the CV and image processing research we are currently conducting at OpenAg. During the first months of the project, we developed several CV algorithms to measure plant width and height. Even though these algorithms are still under heavy development (e.g., to accommodate different types of plant canopies and structures), you can find early implementations of them in the openag_cv repo. These CV algorithms are intended to become an advanced sensing mechanism for the PFC control loop.

CV algorithms like these are of special interest to us because they are non-invasive (i.e., we don’t need to harvest the plant to obtain useful metrics). We decided to base all our developments on OpenCV and ROS: first, because of the large communities behind these projects; second, because of the easy integration with the PFC control software (which already uses ROS).

In the last few weeks, we decided to take a step forward in this research and tackle the problem of leaf detection and segmentation. For this purpose, we integrated several simple algorithms, such as blob detectors and adaptive thresholds (all available in OpenCV).
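
To give you a flavour of these building blocks, here is a toy sketch; it is not the openag_cv implementation, and the file name, block size and area threshold are made up:

```python
# Toy illustration of "adaptive threshold + region counting" -- not openag_cv code.
import cv2

img = cv2.imread("canopy.jpg")                     # hypothetical top-down shot
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Adaptive thresholding tolerates the uneven lighting inside a grow chamber.
mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY_INV, 51, 5)

# Remove speckle, then treat each remaining connected region as a
# candidate leaf segment.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

# Label 0 is the background; drop tiny regions as noise.
leaves = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 200]
print("candidate leaf segments:", len(leaves))
```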

However, we are currently exploring new approaches to this problem. For instance, we have a strong interest in the potential of CNNs (convolutional neural networks) to outperform our initial results, and in integrating dedicated APIs such as PlantCV with our system. This research is still at a very early stage, which is why we are more than happy to hear your ideas, thoughts and suggestions.

Regards,
Eddie


#5

@Eddie Could you say more about how ROS contributes to your ability to use OpenCV?

I’ve been skeptical about the value ROS provides, but maybe I’m overlooking benefits for computer vision. Suppose that instead of using ROS, you had a cron job to put timestamped jpegs in a folder. Would the absence of ROS negatively affect your ability to use OpenCV?
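
For concreteness, the sort of cron job I have in mind is a single line like this (path and interval invented for illustration):

```
# Hypothetical crontab entry: save a timestamped JPEG every 5 minutes.
*/5 * * * * raspistill -o /home/pi/images/$(date +\%Y\%m\%d-\%H\%M\%S).jpg
```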


#6

Hi @wsnook,

Let me answer your questions in the following lines:

Would the absence of ROS negatively affect your ability to use OpenCV?

No. However, it would be harder to interface the information we obtain from the camera (i.e., from OpenCV) with the control mechanisms of the PFC. In other words, we envision a system that can recognise events in the camera feed and adapt the climate accordingly, and that would be much more complicated without the synergy of ROS + OpenCV.
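
To make the synergy concrete, here is a hypothetical minimal node; the topic names and the "green fraction" metric are illustrative assumptions, not PFC code. Any other node (e.g., the climate controller) could subscribe to the published value:

```python
#!/usr/bin/env python
# Hypothetical minimal ROS node (not PFC code): subscribe to the camera,
# compute a trivial OpenCV metric, publish it for the control loop.
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import Image
from std_msgs.msg import Float32
from cv_bridge import CvBridge

def main():
    rospy.init_node("cv_monitor")
    bridge = CvBridge()
    pub = rospy.Publisher("plant_green_fraction", Float32, queue_size=1)

    def on_image(msg):
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        green = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
        # Fraction of the frame that is "green": a crude growth signal.
        pub.publish(float(np.count_nonzero(green)) / green.size)

    rospy.Subscriber("camera/image_raw", Image, on_image)
    rospy.spin()

if __name__ == "__main__":
    main()
```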

I am wondering why you are skeptical about ROS …

Regards,
Eddie


#7

From what I read, people spend a lot of time struggling to get it installed, configured, and doing basic tasks like turning lights on and off or displaying sensor values. I tried working with ROS, but it was a hassle compared to other simpler tools, so I stopped.

It seems like ROS makes the most sense for robotic systems with a high level of inherent complexity (many sensors, many actuators, and lots of rapid activity) that need powerful real-time controllers. My sense is that OpenAg has been incurring a large opportunity cost by trying to run ROS on lower-end hardware than it's designed for, and in a role that simpler tools could fill. Plants don't move very fast.

For context and comparison, consider the many plant-watering and greenhouse-automation projects using the Arduino, ESP8266, Photon, etc. that people post about on Adafruit, Hackaday, Instructables, etc.

[edit: I’m curious about the potential for putting all the low-level control loop logic on a more modern MCU like the Photon (120 MHz ARM Cortex-M3 instead of the Arduino’s 16 MHz ATmega2560) and adjusting the control parameters over USB serial from a Python script or something.]
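
To sketch the second half of that idea, here are a hypothetical few lines of pyserial; the SET command format is invented, and the MCU firmware would have to parse it:

```python
# Hypothetical: nudge a control setpoint on the MCU over USB serial.
import serial

with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
    port.write(b"SET air_temp 24.0\n")        # made-up wire format
    print(port.readline().decode().strip())   # e.g. an "OK" acknowledgement
```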


#8

@Eddie I think part of the reason for our curiosity about integrating your OpenCV algorithms is the possibility of using them on the MVP. I would love to utilise the camera to trigger events based on the plant being observed. For example, when the first sign of a seedling is spotted, a user receives a notification explaining the process of germination to them. This would also give us a way to judge a user’s success with a recipe without them having to meticulously record any results (weigh the plants, measure, provide comments).

I’m very curious if and how these decisions we make regarding the MVP will have an impact on whether or not the algorithms work.

  • Distance - Can the same software be used with one camera placed 6 inches above 1 plant, and another 24 inches above a dozen plants?
  • Hardware - Minimum megapixels, wide-angle, does it have to be a USB camera?

How feasible is it to start using CV to tell us other things about the plants? I know these are all very far-out dreams, but I’m curious whether you think they have potential down the road.

  • Lifecycle - Alert when seed germinates, flower formation.
  • Disease Detection - Notice changing colors on leaves, mold formation, perhaps even pests
  • Sensor Replacement - I spoke with you at the Media Lab about this concept quite some time ago for measuring pH. I’m curious whether we could ever use a camera to read a manual pH test strip, like in this study (http://www.iaeng.org/IJCS/issues_v38/issue_3/IJCS_38_3_11.pdf). I think this would be a really cool way for a school, which often prefers to have some sort of manual interaction, to also have a simple yet accurate method of data entry (leave the strip in a specific spot on the reservoir for the camera to read later). I can elaborate more if this is confusing. I’d love to hear more about the direction you guys are going in and what you see the potential and quick wins to be, especially for the MVP, since it will have minimal sensing abilities to begin with.

#9

Hi Peter,

Sorry for the late reply. Let me answer your questions in the following lines:

  • Distance - Can the same software be used with one camera placed 6 inches above 1 plant, and another 24 inches above a dozen plants?

Yes! That is possible. However, in terms of computational cost, detecting a dozen plants and following their growth would require more computing power, memory, etc. At the moment, this is still within the capabilities of the Raspberry Pi. Nevertheless, it is something to think about when you design a system with low-cost specifications.

  • Hardware - Minimum megapixels, wide-angle, does it have to be a USB camera?

I have the impression that cameras with 1-2 megapixels would work OK (I didn’t test it, though). However, cameras with 3-4 megapixels would capture more information, which is interesting when you are trying to analyse plant health, topology, etc. Wide-angle cameras are good when you have a big rectangular tray of plants; for the PFC, a normal camera works fine. In our case, we just wanted consistency in the hardware we used across all our devices (PFC, Food Server, etc.). USB cameras are cheap and convenient, and in addition they are very easy to integrate with ROS. I am wondering what other types of cheap cameras you are considering?

How feasible is it to start using CV to tell us other things about the plants? I know these are all very far-out dreams, but I’m curious whether you think they have potential down the road.

  • Lifecycle - Alert when seed germinates, flower formation.

There is already code in the openag_cv repo that measures plant width and counts leaves using an OpenCV blob-detector function. It is a piece of code you can customise for your own plants. However, it is still experimental code; for instance, we still need to integrate it into the “official” branch of the PFC code.
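
As a rough, hypothetical stand-in for that routine (not the openag_cv code itself), a blob-detector-based count might look like this:

```python
# Hypothetical sketch: count leaf-like blobs in a green mask and take a
# crude plant-width estimate from the spread of the blob centres.
import cv2
import numpy as np

img = cv2.imread("plant.jpg")                          # hypothetical image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))  # rough "green" range

params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255            # look for white (plant) regions in the mask
params.filterByArea = True
params.minArea = 150              # tune for camera height
params.filterByInertia = False    # leaves are irregular shapes
params.filterByConvexity = False
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(mask)

print("approximate leaf count:", len(keypoints))
if keypoints:
    xs = np.array([kp.pt[0] for kp in keypoints])
    print("plant width in pixels (blob spread):", xs.max() - xs.min())
```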

  • Disease Detection - Notice changing colors on leaves, mold formation, perhaps even pests

Yes! The possibility exists. However, we are exploring novel approaches to solving this problem. You should take a look at this master’s thesis: http://scholar.colorado.edu/csci_gradetds/124/

  • Sensor Replacement - I spoke with you at the Media Lab about this concept quite some time ago for measuring pH. I’m curious whether we could ever use a camera to read a manual pH test strip, like in this study: http://www.iaeng.org/IJCS/issues_v38/issue_3/IJCS_38_3_11.pdf. I think this would be a really cool way for a school, which often prefers to have some sort of manual interaction, to also have a simple yet accurate method of data entry (leave the strip in a specific spot on the reservoir for the camera to read later). I can elaborate more if this is confusing. I’d love to hear more about the direction you guys are going in and what you see the potential and quick wins to be, especially for the MVP, since it will have minimal sensing abilities to begin with.

We started a project on measuring pH with computer vision techniques. Right now, we are still evaluating the conclusions of this study. We took a different approach from the paper you mention. I can give you more details in a couple of weeks.

Regards,
Eddie.


#10

Thanks for the thoughtful reply. I don’t mind waiting so long as it comes eventually :slight_smile:.

I’d be interested in giving this a try. Is there any documentation available on how to integrate this section of code? Or perhaps that’s in the pipeline and I just need to be patient. I would, however, like to start to understand a bit of what is actually happening, and I have two operational V2 PFCs I could try to get the code working on. If not, I was thinking about exploring “Easy-Leaf-Area”, an open-source application for Android. Really, my goal is to find a reliable method of collecting data about the results of an experiment, not just recording the variables.

Mostly the Raspberry Pi Camera (8 MP, ~$30).

People have also brought up the idea of just using a phone to take pictures and then send/save them to a repository. To me, that seems much more difficult, especially given what you said about computing power, since it sounds like all the CV is done locally on the Pi. I do know we’ve talked about using the Pi Zero for the MVP as well, which, from what you’re saying, may not have enough power.

Very interested in hearing more about the outcomes of this. I also have ideas for measuring water level & temperature based upon the camera.

I am located in St. Louis, and the Danforth Plant Science Center has built PlantCV, which utilizes OpenCV as well (I encourage you to check out their GitHub). I am planning to try to meet with some of their researchers to discuss how they’re using the platform and what phenotyping metrics/methods they have found best suited to CV. Quite frankly, they’ve had people working on their platform for years (they have a few thousand Pis with cameras in action). By approaching CV with more plant-science experience, they may have very valuable insights and be able to save us quite a lot of time.


#11

@Webb.Peter It would be really interesting to hear more about the outcome of this meeting with the PlantCV team. We would like to start working with this once we have built our FC v2 in Stockholm.

Thanks


#12

@Webb.Peter

Hello, Peter. I like your ideas about disease detection and measuring pH using CV.

Can you share more about your ideas?

The image above is from our recent computer vision work. I want to detect diseases using OpenCV in our images.
Would you start by collecting a number of plant images and labelling them (probably dividing them into healthy and diseased status for a machine learning process)?

I am curious about your ideas : )


#13

I too am very interested in quantifying disease symptoms (particularly chlorosis) with OpenCV, via PlantCV.
There is a fair amount of academic literature on plant color analysis.
What kind of disease(s) are you interested in? Biotic or abiotic stress?


#14

@al_payi @house @jshoyer

@webbhm has been working on the CV side of the MVP. He is working with VIS and NIR cameras; I’ll let him elaborate on this further.

With regards to PlantCV - they have excellent documentation and are definitely worth investigating. Here is their primary documentation: http://plantcv.readthedocs.io/en/latest/

They are using multiple forms of cameras, and most of their work is intended for “High-Throughput Phenotyping.”


As it relates to pH, I’m really interested in talking to someone at one of these companies about making a hydroponics test. It would be really nice to just get a whole “strip” wet, then take a picture (or leave it on top of the reservoir for the camera to image) and have it record things like NPK as well as pH: https://www.amazon.com/dp/B00R5S9EQ6 @jimbell


#15

CV is one of those things that I consider ‘not ready for prime time’. It works, but either not consistently or only with a lot of configuration for a particular environment; it is not something that anyone can just ‘turn on’ and have work. The latter is the goal for what I feel comfortable releasing as a standard part of the MVP. I am convinced we will get it to work; the question is in what time frame.
The majority of people working with CV are using the Python OpenCV library. It contains a lot of the ‘building blocks’ of CV and greatly simplifies the work. PlantCV (from the Danforth Plant Science Center) is probably the best code out there, but they have wrapped OpenCV for high throughput, which adds overhead and complexity that I don’t think the MVP is ready for; that is why I use them for guidance but work directly with the OpenCV library for my experimenting. A key point of PlantCV is controlling the context: not trying to use images of plants in a garden, but creating a controlled photo booth where background, lighting and positioning are controlled constants. I am trying to move the MVP in that direction, but at this time we are not there. Our lights were chosen for plant growth, not photography; and the size of the box is not the best for camera placement. Then there is the mylar!!! Try identifying a plant when it has multiple reflections all over the place!!!
I have spent a fair bit of time trying ‘stereo pair’ photography, using two cameras to create 3D images (often point clouds) that could determine the size of a plant. Our goal needs to be data extraction (height, width, volume). I have some nice pictures, but not something that gives accurate measurements (especially when there are six plants in the MVP in different positions). I think it would be simpler to get a top picture and a side picture (using a mirror?) with rulers on the wall. I know of one lab where all the plants are on a conveyor belt, and every day they are moved into, and rotated in, a custom photo booth; but that is at a budget and size a bit beyond the MVP. You will also notice that most examples are with simple rosette-leaved plants, not lettuce, where the leaves are tightly bunched together; there is a reason for that.
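For anyone who wants to try, the disparity step itself is only a few lines in OpenCV; the catch is that it assumes calibrated, rectified left/right images, and that calibration is exactly where I got bogged down:

```python
# Minimal stereo-disparity sketch; assumes already-rectified image pairs.
import cv2

left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# Block matcher: larger disparity = closer to the cameras = taller plant.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Scale to 0-255 just to save a viewable image.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```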
Disease recognition is gaining traction, but it often requires thousands of images and AI (a bit more horsepower than the Raspberry Pi has). If/when we can move data to the cloud, we may be able to take advantage of some cloud computing.

Measuring pH is working (General Hydroponics pH Test Indicator), and I think it is something we want to pursue. There are two issues that need to be worked out before it is ready. 1) Identifying the strip/sample: right now I do a mouse click on its location in the image; there needs to be some sort of ‘target’ that can automatically identify where the strip is located. 2) Color calibration needs to be automated: different cameras and different lighting will shift the color reading of the sample and the control card. The card needs to be found and its colors calibrated; then the strip can be reliably compared to the calibrated readings. Comparing colors is the easy part. I am confident that once reliable code is developed for pH, it can be applied to any colorimetric test (nitrogen, phosphorus, etc.).
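
To show why I call the comparison the easy part, here is a minimal sketch under big assumptions: the sample location still comes from a mouse click, and the reference colors below are placeholders that would have to be calibrated from the control card under your own camera and lights.

```python
# Sketch of the "easy part" only: match a sampled color to reference swatches.
import cv2
import numpy as np

img = cv2.imread("reservoir.jpg")       # hypothetical image of the test sample

# Placeholder BGR colors for the indicator chart, one per pH step. These
# MUST be calibrated from the control card under your camera and lighting.
references = {
    4.0: (40, 60, 200),    # reddish
    5.0: (40, 140, 230),   # orange
    6.0: (60, 200, 220),   # yellow
    7.0: (80, 180, 80),    # green
    8.0: (160, 120, 40),   # blue-ish
}

# Mean color of a small patch at the (for now) hand-clicked location.
x, y = 420, 310
patch = img[y - 5:y + 5, x - 5:x + 5].reshape(-1, 3).mean(axis=0)

# Nearest reference swatch wins (distance in Lab space would be better
# than raw BGR, but this shows the idea).
ph = min(references, key=lambda k: np.linalg.norm(patch - np.array(references[k])))
print("estimated pH:", ph)
```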


#16

This might be helpful to anyone interested in working on CV with plants. I would be very interested in continuing the conversation with anyone, as would @webbhm.


#17

Reading the PlantCV source code to find OpenCV functions that can be used directly is a great use; suggestions for how to facilitate that would be welcome. One person suggested listing the “workhorse” OpenCV functions directly on the relevant documentation pages.

The paper describing PlantCV v2 was recently accepted and should appear online soon, so now is a particularly good time for feature requests. PlantCV does not currently have code for automatically identifying and segmenting plants in arbitrary arrangements, but the paper notes that such tools could be added if there is sufficient interest.

A few of us are currently investing quite a bit of effort in color-related analysis, both in optimizing photo-capture settings and in post-processing correction with X-Rite ColorChecker cards. If someone provides example images with pH strips or the like, we can explore detecting and localizing those automatically.

Stereo and orthorectification/stitching are definitely things I want to explore, because my RasPiCam fields of view overlap by ~25%. There is a nice (paywalled) pair of papers similar to my interests, but unfortunately I do not think the code underlying them is public.


#18

@jshoyer @Webb.Peter
It is interesting to see us tackling almost the same problem spaces. I am working with the “$300 Food Computer” and seeing what I can do with OpenCV and a Raspberry Pi camera (though I also have USB and NoIR cameras).
Initially I was looking at stereo (with point clouds for dimensional analysis), but I have set that aside as needing more work on calibration and on controlling the initial set-up of the cameras. I have some crude pH analysis working, but have recently focused on getting some crude metrics of area, width and length.
I really like the PlantCV focus on controlling the context to avoid computational problems. I switched to a white surface for growing the plants, and have found myself covering stray wires with white tape to avoid artifacts. My current analysis has the following workflow (a rough sketch of the per-plant steps follows the list):

  1. Initial identification of plant locations (look for the black plant wells), using this information for perspective correction.
  2. For the actual plant images, I first prepare them by:
    a) Resize
    b) Warp (perspective correction) (cv2.getPerspectiveTransform / cv2.warpPerspective)
    c) White balance
    d) Cut each plant into a separate sub-image
  3. For each plant:
    a) Create green mask (cv2.inRange)
    b) Edge detection (cv2.Canny)
    c) Get contours (cv2.findContours)
    d) Analyse contours (aggregate and sum)
    It is a bit erratic (a good bit of ‘noise’), but when the data is plotted there is a fairly consistent growth curve.
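
Here is the rough sketch of steps 3a-3d promised above, for one plant sub-image; the thresholds are guesses that need tuning for your lighting, and thin edge-map contours are part of why the raw numbers are noisy:

```python
# Rough sketch of steps 3a-3d for a single plant sub-image.
import cv2
import numpy as np

plant = cv2.imread("plant_03.jpg")           # one cut-out sub-image (step 2d)

# a) Create green mask -- HSV bounds are guesses to tune per lighting.
hsv = cv2.cvtColor(plant, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))

# b) Edge detection.
edges = cv2.Canny(mask, 50, 150)

# c) Get contours ([-2] keeps this working on both OpenCV 3 and 4).
contours = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]

# d) Aggregate and sum: total contour area plus a width/length proxy
#    from the overall bounding box of all contour points.
area = sum(cv2.contourArea(c) for c in contours)
if contours:
    pts = np.vstack([c.reshape(-1, 2) for c in contours])
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    print("area(px^2):", area, "width(px):", x1 - x0, "length(px):", y1 - y0)
```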

The pH work is on hold; the real issue is detecting the color card and strip (currently I click on their location in the image). I need to work on the auto-detection, and secondarily on the color correction (probably combining a pH scale card with a gray scale). The actual comparison of the sample value to the reference chart is trivial. I am using the General Hydroponics pH Tester (liquid).

[attached images: Mask_4 (plant mask output) and gh_ph_chart (General Hydroponics pH color chart)]


#19

Thought I would share this: “Google touts this as a cheap and simple computer vision system that doesn’t require access to cloud processing, because of the extra processing unit. It suggests several simple uses, including setting up cameras to detect your dog or car. But it also offers more interesting possibilities, like identifying plants and animals with the kit.”

@TechBrainstorm @wsnook @jshoyer @al_payi @house @goruck @gordonb @Eddie @thiemehennis @devtar


#20


This chart was produced from camera images in my MVP. While interesting, it is a bit of a lie; the growth does not actually level off, but the plants became so large and dense that the entire camera frame was green. Due to image correction, the plants are not all the same size either (hence the spread).
The point here is not that CV does not work, but that we need to develop the controls and conditions that make it a useful tool. It also shows that while individual data points may not be accurate, the aggregate data shows consistent patterns.