More data: 32 Days of Lettuce: time lapse images & sensor log


#1

@Webb.Peter, @webbhm, @rbaynes, @_ic, @gordonb, @spaghet, @Caleb, and anybody else who cares about figuring out what the MVP food computer requirements should be: Here’s another data dump that might help assess the usefulness of the Raspberry Pi v2 camera module.

My CC BY-SA GitHub repo has 32 days of sensor data, time lapse images from a Pi camera, and a short video compiling all the images together. I’m hoping somebody can speak to whether these images might be useful. The Pi camera’s focus and field of view aren’t the greatest for close-up detail work, but you can still see a lot.

My assessment is that the images pass the social media “pictures or it didn’t happen” test, but I’m not confident the quality is good enough to be of any scientific value. Does that even matter in the context of an MVP food computer? I don’t know. Thoughts?
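For anyone curious about the mechanics, here’s a minimal sketch of the kind of fixed-interval capture schedule involved. The interval, start date, and filename pattern are illustrative assumptions, not necessarily what my script uses, and the actual capture call (`raspistill` or `picamera`) is hardware-dependent, so it’s left out:

```python
from datetime import datetime, timedelta

def capture_schedule(start, days, interval_minutes):
    """Yield (timestamp, filename) pairs for a fixed-interval time lapse."""
    t, end = start, start + timedelta(days=days)
    while t < end:
        yield t, t.strftime("lettuce_%Y%m%d_%H%M.jpg")
        t += timedelta(minutes=interval_minutes)

# 32 days at one frame per hour -> 768 frames
frames = list(capture_schedule(datetime(2017, 1, 1), days=32, interval_minutes=60))
print(len(frames))    # 768
print(frames[0][1])   # lettuce_20170101_0000.jpg
```

On a Pi you’d call the camera at each yielded timestamp and save to the yielded filename; timestamped names keep the frames sortable for compiling the video afterwards.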

Here’s an example image:


#2

Very curious about this as well. Perhaps @Eddie can help; I believe he’s working on the computer vision software. I know the V2 has two cameras (aerial and side). Perhaps a single camera angle proved insufficient in their previous experience? In the fall update video they show calculations based on cameras that were on the V1 Food Server (even if the V2 software isn’t flexible enough to adapt between systems, this one might prove useful for the MVP?).


#3

Hi Peter,

The idea behind the two-camera setup is to enable 3D sensing and plant canopy reconstruction. Right now, we are conducting research on these topics. However, I believe that a single camera (aerial) would suffice to provide the standard CV functions that we already achieved in the V2 (plant measuring, leaf detection, etc.).
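To give a rough intuition for the simplest aerial-view measurement, here is a crude green-pixel canopy-fraction sketch. This is an illustrative toy, not the actual V2 pipeline; the function name and threshold are made up for the example:

```python
def canopy_fraction(pixels, min_green=40):
    """Fraction of pixels where green dominates: a crude proxy for plant
    canopy cover in an aerial image. `pixels` is a flat list of (r, g, b)
    tuples with 0-255 channel values."""
    green = sum(1 for r, g, b in pixels if g > r and g > b and g > min_green)
    return green / len(pixels)

# Synthetic 2x2 "image": two bright-green pixels, two dark/gray ones.
pixels = [(10, 200, 10), (10, 180, 30), (0, 0, 0), (20, 20, 20)]
print(canopy_fraction(pixels))  # 0.5
```

Real plant measuring would work on camera frames with color-space conversion and leaf segmentation (e.g. via OpenCV), but tracking even a simple fraction like this over a time lapse already shows growth trends.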

Regards,
Eddie


#4

Hello Eddie,

I really appreciate you clearing this up for us so we aren’t just guessing. I’m extremely interested in how you hope to utilize the two cameras for 3D sensing. Plant measuring and leaf detection are, I think, the two primary things we’re concerned with for the MVP at this point.

The Danforth Plant Science Center here in St. Louis has developed a platform called “PlantCV” and has 10,000+ Raspberry Pis hooked up to cameras throughout their greenhouses. I would be more than happy to reach out to the people running PlantCV to see if there are ways we could use it for OpenAg, if there’s interest. I’m pretty sure OpenAg is using OpenCV now, which is the base platform for PlantCV.

Here’s a link to the PlantCV GitHub