Growing food: I just ordered a MicroGrow Kit from Hamama


#21

Technical discussions (especially with me talking) get boring very quickly. I like to
keep this public so anyone can join and give input, but it should
definitely be a topic under ‘Data’ so anyone not interested in data can
ignore us. I would suggest a new category for the MVP, as I can envision
many different topic threads under it.
Any conclusions (definitions, formats, …) should be put on the wiki.
I am not sure if the MVP should end up in the wiki. That depends on how
much it is a sponsored project of OpenAg versus a community activity that
operates on the fringes of the community; either way, no decision needs to
be made on that at the moment.

HW


#22

Wise words. That sort of discussion would be better off in its own thread.

[edit: @rbaynes, I saw your DM and sent you a reply. I’m definitely interested in finding ways for people with an official PFC build to meaningfully exchange data with folks using other equipment. My bias is in favor of iterating on working prototypes as the main way of moving forward–I think that’s what Caleb was suggesting.]


#23

I agree @webbhm and @wsnook. We can keep data topics in that category and iterate details of design there.

My current focus is on the definition of all I/O of the system. I want all the various forms of “food computers” (MVP, PFC, server, factory) to use the same data definitions. That way they can all share and benefit from results / research.

I also agree on an iterative approach to defining the data. I think a high-level design is in order, with a short list of first parts to tackle.


#24

# MVP Hardware Needs

Using my three-tier concept of an MVP, I want to speculate about level 2, and particularly the sensors. I am assuming either a generic bus tub or a custom reservoir, but I don’t think the growing environment needs to be standardized at this point; neither does the enclosure, though we should offer one or more suggestions. While we may have several lighting options, as long as we know their PAR equivalency we don’t need a single specification.

Without CO2 regulation, a CO2 sensor does not seem to add much value (assume a normal atmosphere of 399 ppm), though an enclosure will have some variation (negligible? A good trial experiment!). If you have an air pump running in the water, dissolved O2 is probably not a concern either (assume 100% saturation). Follow a nutrient change-out schedule and you can avoid the dosers, and I think we can also avoid the water chiller. That basically gets rid of the expensive parts, and still leaves a lot that can be experimented with.

The critical standardization seems to be the sensors; the questions are what the minimal sensor set is and what the priorities are (now we can get a real debate raging!!!). I am assuming a Raspberry Pi (Zero?) and an Arduino, with all UI done through a web interface on another computer; no screen for the MVP.

  • I think the minimum is a temperature/humidity sensor (Si7021?); see the sketch after this list for what reading one involves.
  • With an enclosure, there will be a need for air circulation and ventilation. PC fans can probably do the job, running off the Arduino’s 5V. This will require some programming for fan control (though the circulation fan could always be on). I don’t think there is a need to monitor air circulation, and the cfm (cubic feet per minute) rating of the fan can get us in the ballpark.
  • I am tempted to say a lux meter should be the next addition, but other than being simple and cheap, I am not sure it adds much if we know the light output (it should always give the same reading).
  • @Webb.Peter has argued that a camera should be a high priority. I am beginning to be convinced that he is right. 1) It has high social value: the ability to post pictures on Snapchat, Twitter, Facebook, etc., and the ability to ‘check up’ on the plants remotely from a phone (with the right UI). There is a lot of value in keeping the excitement level up. 2) Images have the potential to provide a lot of data (with the right software) and avoid the need for so much manual phenotype data entry. With a timestamp, you can tell when the plants were started (or at least transplanted) and harvested (no more plants), along with other metrics like size and possibly health. There is a lot that can be done here, and the possibility of going back later and extracting new information with new logic. 3) While not the cheapest sensor, the Pi camera is a great camera for the cost ($30). This is an area where I don’t trust my judgment and would like others’ opinions. At my age I am not as social-media aware as the younger generation, and I don’t know what @Caleb has up his sleeve for AI work with images.
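To give a sense of how little code the minimum sensor takes, here is a rough Python sketch of reading an Si7021 over I2C on a Raspberry Pi. It assumes the smbus2 library is installed and the sensor is on bus 1; the address, commands, and conversion formulas are from the Si7021 datasheet.

```python
import time
from smbus2 import SMBus, i2c_msg

SI7021_ADDR = 0x40        # default I2C address (Si7021 datasheet)
CMD_MEASURE_RH = 0xF5     # measure relative humidity, no-hold master mode
CMD_MEASURE_TEMP = 0xF3   # measure temperature, no-hold master mode

def _read_raw(bus, command):
    """Send a measurement command, wait for the conversion, read the 16-bit result."""
    bus.i2c_rdwr(i2c_msg.write(SI7021_ADDR, [command]))
    time.sleep(0.03)                      # conversions take roughly 10-25 ms
    result = i2c_msg.read(SI7021_ADDR, 2)
    bus.i2c_rdwr(result)
    msb, lsb = list(result)
    return (msb << 8) | lsb

def read_si7021(bus_number=1):
    """Return (temperature_C, relative_humidity_percent)."""
    with SMBus(bus_number) as bus:
        raw_rh = _read_raw(bus, CMD_MEASURE_RH)
        raw_temp = _read_raw(bus, CMD_MEASURE_TEMP)
    humidity = (125.0 * raw_rh / 65536.0) - 6.0          # conversion formulas from the datasheet
    temperature = (175.72 * raw_temp / 65536.0) - 46.85
    return temperature, humidity

if __name__ == "__main__":
    temp_c, rh = read_si7021()
    print("{:.1f} C  {:.1f} %RH".format(temp_c, rh))
```

On the Arduino side the equivalent is a few lines with an off-the-shelf sensor library, so either board can own this sensor.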

How smart could we get about using images for data entry? For pH, I am thinking of a standard color card (with a QR code to indicate this is a pH reading?): use General Hydroponics’ pH kit, mix up a sample, and image it in front of the card. Hit a button to take the pH image, and the logic could identify it as a pH reading. Are there other colorimetric readings that would be useful?
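To make the idea concrete, here is a rough sketch (using Pillow) of how a color-card pH reading might work. The reference colors and crop box below are made-up placeholders; they would need to be calibrated against the actual card, indicator, and grow light.

```python
from PIL import Image

# Placeholder reference colors: each pH value maps to the RGB of its swatch on the
# printed color card, sampled under the same light used for the photo.
PH_REFERENCE_COLORS = {
    4.0: (235, 170, 60),
    5.0: (225, 130, 45),
    6.0: (190, 150, 55),
    7.0: (110, 160, 85),
    8.0: (60, 120, 110),
}

def average_color(image_path, box):
    """Average RGB over the crop box (left, top, right, bottom) covering the sample vial."""
    region = Image.open(image_path).convert("RGB").crop(box)
    pixels = list(region.getdata())
    count = len(pixels)
    return tuple(sum(channel) / count for channel in zip(*pixels))

def estimate_ph(image_path, box):
    """Return the reference pH whose swatch color is closest (in RGB distance) to the sample."""
    r, g, b = average_color(image_path, box)
    def dist(swatch):
        sr, sg, sb = swatch
        return (r - sr) ** 2 + (g - sg) ** 2 + (b - sb) ** 2
    return min(PH_REFERENCE_COLORS, key=lambda ph: dist(PH_REFERENCE_COLORS[ph]))

# Example: a photo named ph_sample.jpg with the vial roughly inside this box.
print(estimate_ph("ph_sample.jpg", box=(500, 300, 700, 500)))
```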

From a hardware perspective, I think this basically covers it - with one more piece. I think it would be highly valuable for OpenAg to have an Arduino (or Pi) shield and cables to simplify hooking up sensors. It wouldn’t need to be much more than some I2C plugs, and plugs for the fans. There should be the potential for a lot of expansion (other sensors), in which case this may be the same board as the PFC (if the cost is right). I definitely don’t think we should expect people to wire up and maintain breadboards (though that is what I am currently using), or learn to do much soldering. If OpenAg offered a board for sale (and possibly the sensors), it would go a long way toward the MVP.

Software is also an issue. Does this need the full PFC stack, or could something more stripped down work (cloud data storage?)? How frequently do sensor readings need to be taken (CPU demand)? Once an hour? Maybe a bit more frequently, and the camera could detect when the lights go on and off (no need for a lux sensor). Plants are constantly changing, but not by that much. This may be a three-‘node’ software architecture: something light on the MVP end (just collect sensor data), a lightweight UI for checking on the MVP, and robust data storage, analytics, charting, and display on a web server/UI. We probably need to think the software through from an MVP perspective: what do we need to get started, and what can be added over time?
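As an illustration of how light the MVP end could be, the ‘just collect sensor data’ node might amount to little more than a loop like this (the log path, interval, and read_sensors() stub are placeholders):

```python
import time
from datetime import datetime

LOG_PATH = "/home/pi/data/sensors.tsv"   # placeholder path
READ_INTERVAL_SECONDS = 3600             # once an hour, per the discussion above

def read_sensors():
    """Stub: replace with real reads (e.g. the Si7021 sketch earlier in this post)."""
    return {"temp_c": 21.5, "rh_pct": 55.0}

while True:
    readings = read_sensors()
    row = [datetime.now().isoformat(timespec="seconds")]
    row += ["{:.2f}".format(value) for value in readings.values()]
    with open(LOG_PATH, "a") as log:
        log.write("\t".join(row) + "\n")   # append one tab-separated line per cycle
    time.sleep(READ_INTERVAL_SECONDS)
```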


#25

I agree. For the microgreen trial run that I finally started a few days ago, I’ve been using a Raspberry Pi camera with rsync and iMovie to make time lapse videos and supplementing with a few iPhone pictures.

Here’s a screenshot of my time lapse tweet showing what my radish seeds did today:


The video runs at 10 frames per second with the frames taken approximately 13 minutes apart. I squashed 14 hours of clock time down into 7 seconds.

The Raspberry Pi v2 camera is really good. What I like most about it is that the raspistill command that comes with Raspbian just works without any fuss. It does help to use a neutral density filter if you’re using fluorescent lights, but that’s normal. My previous experiments trying to get a cheap USB webcam working with fswebcam and the Linux UVC driver were painful.

For my temperature logging, I’ve been using a hysteresis algorithm to filter out small fluctuations of less than 3/32 of a degree Celsius. I append a line to a TSV text file when one of the measurements changes by an interestingly large amount. The TSV file looks like this:

timestamp	041682C7F5FF	041682C8E3FF
2017/04/04 04:26:30	21.375	22.437
2017/04/04 04:36:09	21.375	22.312
2017/04/04 04:38:21	21.375	22.500
2017/04/04 04:38:34	21.375	22.625
2017/04/04 04:39:37	21.375	22.750
2017/04/04 04:44:14	21.375	22.625
2017/04/04 04:47:23	21.375	22.500
2017/04/04 04:50:45	21.500	22.500
2017/04/04 04:52:44	21.625	22.500
2017/04/04 04:55:41	21.750	22.500
2017/04/04 04:57:53	21.625	22.500
2017/04/04 05:00:11	21.500	22.500

2017/04/06 21:47:54	23.187	22.937
2017/04/06 21:49:47	23.062	22.937
2017/04/06 21:51:41	22.937	22.937
2017/04/06 21:54:43	22.812	22.937
2017/04/06 21:57:15	22.812	22.812
2017/04/06 22:00:30	22.937	22.812
2017/04/06 22:09:19	22.812	22.812
2017/04/06 22:12:15	22.687	22.812
2017/04/06 22:15:49	22.812	22.812
2017/04/06 22:33:47	22.812	22.687
2017/04/06 22:37:27	22.687	22.687
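The hysteresis logic itself is tiny. Here is a simplified Python sketch of the idea (not my actual logger, and it writes one line per sensor rather than one column per sensor):

```python
from datetime import datetime

THRESHOLD_C = 3.0 / 32.0   # ignore swings smaller than 3/32 of a degree, as described above
LOG_PATH = "temperature.tsv"

last_logged = {}           # sensor id -> last value that actually made it into the log

def log_if_changed(sensor_id, value):
    """Append a line only when a sensor moves more than THRESHOLD_C from its last logged value."""
    previous = last_logged.get(sensor_id)
    if previous is not None and abs(value - previous) < THRESHOLD_C:
        return                                   # still inside the hysteresis band: drop it
    last_logged[sensor_id] = value
    timestamp = datetime.now().strftime("%Y/%m/%d %H:%M:%S")
    with open(LOG_PATH, "a") as log:
        log.write("{}\t{}\t{:.3f}\n".format(timestamp, sensor_id, value))
```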

Since I’m just saving images and the temperature log as regular files into a data directory on the Raspberry Pi, I can easily rsync it all to my Mac workstation whenever I want. The TSV format (like CSV, but with tabs) can be copied and pasted directly into Google Sheets, so I’ve been charting the temperature manually for the time being. Here’s what it looks like so far:


The blue line is from a DS18B20 that I taped to the bottom of my water tray, and the red line is from a second sensor that I’ve suspended in the air a little behind and above the tray.


#26

What I’m doing so far is serving a dashboard page that lets me check on the latest camera image along with system health information (how full the disk is, etc.). I also set up a file server for my whole data directory, so I can manually check the sensor log or click back through old images.

To avoid having to teach people about lots of risky silliness with trusting self-signed TLS certificates, I’m keeping the web UI to GET requests only, with no interactive controls for now. For changing settings, I’ve built command-line tools that I use over ssh. It works well, even on my phone. What I need for now is something that’s easy to work with on a home LAN, and this works.
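For anyone who wants to try something similar, a bare-bones GET-only dashboard can be built with nothing but Python’s standard library. This is a simplified sketch of the idea rather than my actual code; the data directory and port are placeholders.

```python
import functools
import http.server
import shutil
from pathlib import Path

DATA_DIR = Path("/home/pi/data")   # placeholder location of images and logs

class DashboardHandler(http.server.SimpleHTTPRequestHandler):
    """Read-only dashboard: everything is a GET, no interactive controls."""

    def do_GET(self):
        if self.path != "/":
            return super().do_GET()        # serve images and logs straight from DATA_DIR
        disk = shutil.disk_usage("/")
        images = sorted(DATA_DIR.glob("*.jpg"), key=lambda p: p.stat().st_mtime)
        latest = images[-1].name if images else ""
        body = (
            "<html><body>"
            "<p>Disk free: {} MiB / {} MiB</p>".format(disk.free // 2**20, disk.total // 2**20)
            + '<img src="/{}" width="640">'.format(latest)
            + "</body></html>"
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    handler = functools.partial(DashboardHandler, directory=str(DATA_DIR))
    http.server.ThreadingHTTPServer(("0.0.0.0", 8000), handler).serve_forever()
```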

The big advantage of a cloud dashboard on the public Internet is that it works with the PKI and trusted CA certificates that are already installed in everybody’s browsers. Having the devices phone home to cloud servers gets around the whole self-signed certificate problem, and it makes it easy to check on things from outside your home LAN, both very desirable advantages.

Setting up infrastructure for a public service is serious business though. Were we to go down that road, it might make more sense to use something like Particle.io rather than rolling our own; they’ve already built a solid ecosystem that’s more or less free for hobbyists to use on a small scale. You could also make a valid argument that, for getting started, we don’t need to worry about anything outside of a home LAN, as long as some provision is made for people to upload data. Making GitHub repos might be a very effective way to handle that (e.g. rsync data from Pi to laptop, pick the stuff to share, put it in a git repo, commit it).

[edit: I almost forgot the screenshots. The first one is my web UI’s dashboard page, and the second is from me looking at the temperature log file in the data directory]




#27

I like the direction of this.
I am, however, a fan of keeping all data and filtering as part of analysis; i.e., capturing data on a time basis rather than an event/delta basis. At this point I am not sure it makes much difference, but I have seen experiments run into problems when one data set is time-driven (check the temperature every 5 minutes) and another is event-driven (record the temperature when there is a delta greater than 0.5 degrees). Delta processing is standard for GPS devices, which record once every second but only if they have moved farther than a certain distance from the last position. My preference is to have the initial/primary recording of temperature, images, and most sensors be time-driven.

Having said that, there are true event-driven observations. I was thinking of using the camera to record when the lights come on or go off. Here you would have the camera running and check every minute whether the overall brightness dropped below (or rose above) a specific threshold. This would be a monitoring loop separate from the image capturing (which might be once an hour), though it could control the image capturing so that you don’t take images when all the lights are off.
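That brightness check could be a few lines with Pillow; this is just a sketch, and the threshold would need to be tuned for the actual enclosure and camera:

```python
from PIL import Image, ImageStat

BRIGHTNESS_THRESHOLD = 20   # 0-255 grayscale mean; placeholder value to tune per enclosure

def lights_are_on(image_path):
    """True when the mean grayscale brightness of the frame is above the threshold."""
    gray = Image.open(image_path).convert("L")
    return ImageStat.Stat(gray).mean[0] > BRIGHTNESS_THRESHOLD
```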

The key thing is that you have a working system, and that is always the first goal. However, for the general public, the less manual processing (moving files) the better.


#28

I’ve been thinking along those lines.

So far I’m up to 1.5 GB of images from the last 4 days. I’m taking a frame about every 2 minutes, but with a 14/10 light cycle, lots of those are solid black or close to it. Because I’m using a two-step image pipeline (save to a ramdisk with raspistill, then resize into the data directory with ImageMagick’s convert), it seems plausible that I could easily detect light levels, log them, and skip saving the all-black images.

One approach would be to try making a histogram of the image, but that doesn’t take the camera’s auto-exposure into account. Using ISO speed alone is good enough to easily distinguish lights on from lights off. Here’s what my EXIF metadata has for the ISO speeds 10 minutes before and after my grow light turned off last night:

$ exiv2 2017-04-06_{19.5,20.0}*.jpg | sed -nr 's/ ISO speed +:/, ISO/p'
2017-04-06_19.51.19.jpg, ISO 50
2017-04-06_19.53.27.jpg, ISO 50
2017-04-06_19.55.36.jpg, ISO 50
2017-04-06_19.57.45.jpg, ISO 50
2017-04-06_19.59.54.jpg, ISO 250
2017-04-06_20.02.03.jpg, ISO 250
2017-04-06_20.04.12.jpg, ISO 320
2017-04-06_20.06.21.jpg, ISO 320
2017-04-06_20.08.30.jpg, ISO 320

ISO 50 is with the grow light on, ISO 250 has a tiny bit of glare from the tail end of sunset, and ISO 320 is all black:


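If I fold this into the image pipeline, the same check can be done in Python with a recent Pillow. This is a sketch rather than my actual code, with the cutoff picked from the readings above:

```python
from PIL import Image

EXIF_IFD_POINTER = 0x8769   # tag that points at the Exif sub-IFD
ISO_TAG = 0x8827            # ISOSpeedRatings
ISO_LIGHTS_OFF = 300        # lights-on frames above were ISO 50; all-black frames were ISO 320

def frame_is_dark(image_path):
    """Use the camera's auto-exposure as a light sensor: a high ISO means the grow light was off."""
    exif = Image.open(image_path).getexif()
    iso = exif.get_ifd(EXIF_IFD_POINTER).get(ISO_TAG)
    return iso is not None and iso >= ISO_LIGHTS_OFF
```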
MVP - Product Design
#29

@webbhm Continuing on the camera theme, I learned a couple of new things that have me thinking we need to pay close attention to camera optical specs.

My microgreens got big enough today to take the cover off of the seeds, so my Pi camera has a lot more to look at. I’m having a hard time with depth of field and adjusting the focus. It’s easy enough to tweak the focus with a toothpick when I take the neutral density filter off, but I need the filter to control banding under the fluorescent grow light.

If I were using a camera with maybe a couple of stops narrower aperture, I could ditch the neutral density filter and manual focus adjustments would be easier. I’m not sure how much that would help the depth of field; it might take several stops to improve much. Using a longer focal length lens with more distance from the camera to the plants might help, but then the relative arrangements of the grow lights (need to be close) and camera (would need to be far) could become problematic.

The Raspberry Pi v2 camera module that I’m using has a wide-open f/2.0 aperture. I’m curious how much difference it would make in depth of field if I switched to the f/2.9 v1 module. A bit over one f-stop isn’t huge, but it’s a change in the right direction. The v1 module is also cheaper.
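For reference, the gap between two f-numbers expressed in stops works out to:

$$\text{stops} = 2\log_2\frac{N_2}{N_1} = 2\log_2\frac{2.9}{2.0} \approx 1.07$$

so the v1 module would be slightly more than one stop slower than the v2, with correspondingly more depth of field.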

Here are a couple images I took today that show the depth of field problem (before & after I moved the camera):


#30

This thread has quite a lot of useful info. I agree with @webbhm on a lot of his points and I’d like to continue the discussion on the MVP PFC.

I’ve added this to the wiki and I hope it can act as a badly needed summary for our discussions. I’ve populated most of the sections with my own plan, officially documented here

@Caleb I’ve chosen to use a cloud hosted platform for my food computer, allowing me to use a very cheap brains box. My build is still in its early stages but I hope to be able to connect it to a central recipe database.


#32

I like this project for its simplicity. I would like to see more projects like this. I am saddened by some of the initial comments that lacked vision or could not see the merit in it.

Sometimes the best teacher is just doing it and jumping in feet first. No need for a PFC for that. You can always build on simple principles later and improve or scale up from there.

I like the original poster’s intent for this thread. And though I agree standards need to be discussed, and an interesting discussion followed, it seems off topic for this thread.