Data Model Exercise: Cornell Hydroponic Lettuce


This is a placeholder for discussion about data modeling. We will take the Cornell Hydroponic Lettuce handbook and the context of the Food Computer v 2.0, and work through some exercises to see what sort of data examples and structures we can come up with. This is a spin-off from the discussion here.
Below is a data/process model that can be used as a starting point for the discussion.

Getting Recipe Data - the Old Fashioned Way
Sensor Data Modeling
Open source project proposal - call for contributors
MVP - Product Design
How to move forward on sharing data? - Plant Recipes

Not sure if this is quite within the scope of what you have in mind for this topic, but it seems like you’re going to need a recipe for lighting. The handbook recommends daily light integrals (DLI) of PAR for different stages of growth:

  1. “The first 11 days of lettuce production takes place in the seedling production area”
  2. “no less than 50 μmol/m2/s of PAR (Photosynthetically Active Radiation) during the first 24 hours the seeds are kept in the germination area”
  3. “For the remaining 10 days, the light intensity is maintained at 250 μmol/m2/s. The photoperiod (or day length) is 24 hours.”
  4. “Uniform light distribution is required in the Pond Growing Area. A supplemental light intensity within the range of 100-200 μmol/m2/s (for a total of 17 mol/m2/d of both natural and supplemental lighting) at the plant level is recommended. It should be noted that 17 mol/m2/d is the light integral that worked best for the particular cultivar of boston bibb lettuce that we used. For some cultivars, 15 or mol/m2/d is the maximum amount of light that can be used before the physiological condition called tipburn occurs.”
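Those handbook numbers can be sanity-checked by converting an instantaneous PPFD into a daily light integral. A minimal sketch of the arithmetic (the function name is mine, not from any OpenAg code):

```python
def dli(ppfd_umol_m2_s: float, photoperiod_hours: float) -> float:
    """Daily light integral (mol/m2/d) from PPFD (umol/m2/s) and photoperiod (h)."""
    return ppfd_umol_m2_s * photoperiod_hours * 3600 / 1_000_000

# Seedling stage from the handbook: 250 umol/m2/s with a 24-hour photoperiod.
print(dli(250, 24))  # 21.6 mol/m2/d
```

Note this comes out above the 17 mol/m2/d recommended for the pond growing area, which is why the light intensity drops for the later stage.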

Here are a few sources I found that might help with calculating PAR photon flux from lux using conversion factors for the type of light source.

It might take some work to figure out how to apply those sources to the PFC2’s custom LED array, but knowing how much PAR it can crank out seems important.
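As a rough sketch of how such a conversion might be applied: the factors below are ballpark figures commonly cited for generic light sources, NOT measurements of the PFC2 LED array, which would need its own calibration.

```python
# Ballpark lux -> PPFD (umol/m2/s) conversion factors by light source.
# These are rough generic figures, NOT measurements of the PFC2 LED array.
LUX_TO_PPFD = {
    "sunlight": 0.0185,
    "cool_white_fluorescent": 0.0135,
    "hps": 0.0122,  # high-pressure sodium
}

def lux_to_ppfd(lux: float, source: str) -> float:
    """Approximate PPFD from a lux reading, given the light source type."""
    return lux * LUX_TO_PPFD[source]

print(round(lux_to_ppfd(25000, "sunlight"), 1))  # ~462.5 umol/m2/s in morning sun
```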


There are a couple of issues here, but I think you are on track.

  1. Recipes need to be sub-divided into cycles or periods (first 11 days, remaining 10 days)
  2. Duration needs to be defined, and this may require several methods:
     • Time-defined duration (minutes, days, …)
     • Event-defined duration (do this until germination, flowering, etc.)
  3. I would classify light as a treatment

This could be defined in JSON or some other language. I am not attempting formal formatting below, but will CAPITALIZE user terms and lower-case standard vocabulary:

recipe {
  cycle {
    duration{11: day},
    sub-cycle {
      duration{1: day},
      treatment: light{ quantity{50: μmol/m2/s}, duration{24: hour} }
    },
    sub-cycle {
      duration{10: day},
      treatment: light{ quantity{250: μmol/m2/s}, duration{24: hour} }
    }
  },
  cycle {
    duration{10: day},
    treatment: light{ quantity{17: mol/m2/d} }
  }
}

I am a bit more sketchy on this second part: defining the context, assumptions, and environment of a recipe. It is a bit like a chocolate chip cookie recipe that says “bake at 350 for 12 minutes”. There is an assumption here that you have an oven. And there are assumptions about what a ‘normal’ oven is like and how it works (temperature control). There are deeper assumptions that the oven probably has a manual describing how to use it (turn it on, set the temperature, …), and even deeper assumptions that you know how to find the manual and are able to read it. We will have to struggle with how much to specify in the recipe, and how much to assume as context, with the context documented elsewhere.
For now, I will assume the context is a Food Computer ver 2.0, and that it has a light which is capable of being adjusted to provide the required PAR. I am assuming that OpenAg will provide the specs on the light (PAR output) and information on how the PAR may drop off over time as the light ages. These details should not be a part of the recipe, but probably should be linked via a URL from the recipe.

OpenAg Food Computer {version: 2.0},
repository {
  type: git,
  url: ""
}

PAR meters are nice, and we should calibrate things by PAR, but the meters are quite expensive. I borrowed one this past weekend and roughly calibrated my lights, and hope I can use a cheap lux sensor to determine a rough equivalency (which varies by product). Hopefully this calibration and cross-mapping will be provided with the Food Computer.


“Toto, I’ve a feeling we’re not in Kansas (cornfields) anymore.”

I thought that chemicals (nutrients) would be fairly simple, as I was used to them in field agriculture. Apart from heavy-duty research with plant (or plant part) bio-assays, chemistry comes down to treatments (applying fertilizer) and soil sample assays. Soil samples are done by farmers about every three years, with one sample taken about every three acres. The chemical results are fairly standardized for reporting, though with a few variations in protocols (Bray vs. Olsen for phosphorus in alkaline soils). Main issues are N-P-K, and possibly some micro-nutrients, along with pH (and buffered pH). There is also likely to be percent of organic matter and soil texture (they are growing in dirt, after all).
Fertilizer is also fairly standard, defined as how many pounds were applied per acre (or smaller units for micro-nutrients, when applied). With newer variable rate fertilizing, the field is broken up into grids, and the amount of fertilizer applied is tracked by grid rather than by the whole field - the same information, just a finer level of tracking. Normally there is one treatment/application per year, though occasionally there may be a ‘side dressing’.
When we get into hydroponics (or aeroponics), we are in a whole new world. There is no ‘application’ of fertilizer, and likely no chemical analysis (other than watching pH). This issue will likely require some discussion, but I am going to throw out some thoughts. Critical to research will be having computable access to fertilizer information. It will not be acceptable to have a reference to a text document (ie “refer to the Cornell Hydroponic Lettuce Handbook”), or to simply say you used Jacks 20-20-20 (or is that “Jacks Classic All-Purpose 20-20-20 with micronutrients”?). My inclination is that the recipe may have a two- or three-part reference.
One part would be a traditional chemical breakdown of N-P-K, listing the chemical and percent; this would not be the percent on the label, but the percent applied (ie you might dilute the fertilizer below the standard amount). At issue here is whether it should be simple N-P-K, or if nitrogen should be listed by source (ammonia, nitrate, nitrite, …). I am inclined to keep it at N-P-K, and move the recipe details to a standardized product table.
The second part would be a reference to a product table, that would have the standard brands, and allow for custom recipes (mix your own chemicals). This table would have common product names, a url reference, and the detailed analysis (20% nitrogen; 2.1% nitrate nitrogen, 17.9% urea nitrogen).
The third part would be a protocol of how this was mixed. With Cornell, it is not an exact weight, but the final adjustment is based on reaching a specific conductivity. There is also the case where a recipe is composed of two other recipes; ie, the Cornell formula, or Jack’s Hydroponic 5-12-26, which is to be put in solution and mixed with Calcium Nitrate to achieve the final solution.
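For illustration, the three-part reference could be sketched as nested data like this. All field names and values are hypothetical, not a settled schema:

```python
import json

# Hypothetical sketch of the three-part fertilizer reference:
# (1) N-P-K analysis as applied, (2) product-table reference, (3) mixing protocol.
fertilizer = {
    "analysis": {  # percent as applied, not the label percent
        "nitrogen": {"percent": 5},
        "phosphorus": {"percent": 12},
        "potassium": {"percent": 26},
    },
    "product": {   # reference into a shared, standardized product table
        "name": "Jack's Hydroponic 5-12-26",
        "product_id": "jacks-5-12-26",  # hypothetical identifier
        "url": "",                      # link to the detailed label analysis
    },
    "protocol": {  # how it was mixed; final adjustment is by conductivity
        "components": ["Jack's Hydroponic 5-12-26", "Calcium Nitrate"],
        "target_EC": {"ppm": 850},
    },
}

print(json.dumps(fertilizer, indent=2))
```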

Below is the recipe with the additional data, in JSON format:

{"recipe": {
  "context": {
    "OpenAg Food Computer": {"version": "2.0"},
    "repository": {"type": "git", "url": ""}
  },
  "agriculture-type": {"id": "hydroponic"},
  "fertilizer": {
    "nutrient": [{"nitrogen": {"percent": 20.5}, "phosphorus": {"percent": 12}, "potassium": {"percent": 26}}],
    "EC": {"ppm": 850},
    "formula": {"name": "Jack's Hydroponic 5-12-26", "protocol": {"url": ""}}
  },
  "life_cycle": [
    {"SEED PRODUCTION": {
      "duration": {"days": 11},
      "sub-cycle": [
        {"DAY ONE": {
          "duration": {"days": 1},
          "treatment": [
            {"light": {"quantity": {"umol/m2/s": 120}, "duration": {"hours": 24}}},
            {"temperature": {"min": 20, "max": 28, "target": 24}}
          ]
        }},
        {"REMAINDER": {
          "duration": {"days": 10},
          "treatment": {"light": {"quantity": {"umol/m2/s": 250}, "duration": {"hours": 24}}}
        }}
      ]
    }},
    {"GROWTH": {
      "duration": {"days": 10},
      "treatment": {"light": {"quantity": {"mol/m2/d": 17}, "duration": {"hours": 18}}}
    }}
  ]
}}

Data modeling: What does a recipe need to include?

FWIW, you’re framing things in a way that seems out of alignment with my reading of what hydroponic nutrient suppliers say is important. As a couple of examples, I’m thinking of the JR Peters “Hydroponics: Growing hydroponically with Jack’s” page and the General Hydroponics feed charts.

Here are a few of the major things I’ve seen getting emphasized:

  1. What is the pH, EC, and nutrient balance of your water before adding nutrients? This can vary a lot based on the water source. Calcium and Magnesium are important in relation to this.
  2. How do you prepare the nutrient solutions? It’s apparently easy to precipitate out important stuff if you mess up the mixing process.
  3. There’s a progression of nutrition that should be followed from seedlings to vegetative growth to fruiting. The relative proportions of blue and red spectrum light also affect these stages of development in important ways.

[edit: The essence of what I’m getting at is, if the goal of data modeling is to capture the important variables of a process, then what does the process look like? What are the main factors that it always needs to control for to avoid predictable failure? What are the variables that can be adjusted, and within what ranges, to explore interesting qualitative differences in successful crops?]

[edit # 2: A potential lens for viewing this might be, “How can we achieve repeatability of a desired initial nutrient solution composition?” Maybe there needs to be a process to compensate for differences in municipal water supplies. Maybe everybody could start by buffering distilled water with a Cal-Mag solution.]

[edit # 3: To relate all of what I’ve said so far to what you were saying earlier, I’m wondering if maybe your model ought to specify invariants in addition to variables. Like, “As long as you follow this process… to ensure your initial solution meets these criteria…, then these are the important measurements you need to record about the additional nutrients you added: …”]

Can I help OpenAg without building a PFC?

#Recording water EC, nutrient preparation (a short digression from the topic)
I totally agree with you that this information needs to be captured. What I am struggling with is where and when it is captured.

  1. This is not recipe information (the Plan). The recipe is static information recorded before any activity is performed; the repeatable prescription of what should be done, not a description of actual completed activities. At the moment I am limiting myself to recipes. It is here (or in a reference) that the instructions on how to mix the fertilizer should be recorded (ie what is on the JR Peters label or website).
  2. Preparing fertilizer is an action, but not of the same category as operational observations done in the normal process of growing (like temperature sensor readings, current pH, planting or harvesting).
  3. This is a separate, supportive activity; more like a farmer mixing up a tank mix, or the repairs done to the tractor. It is necessary for the farming operation and needs to be recorded, but not as a direct activity on the field or crop. I agree with you that it is critical to capture this information, but it will likely go in a separate table (or group of tables). I doubt it will be all the details (the processes of measuring, mixing, …), but likely a summary of the date, initial measurements (water EC and pH), weights of ingredients, and final EC and pH. This could very well be two records: the creation of a standardized batch (ie on Monday I make up 10 gallons of stock fertilizer solution - batch #123), and a later usage of the solution (ie on Friday I drained the system and replaced it with 3 gallons from batch #123).
    Again, we need to define workflows to identify the activities we collect data for; but I am not yet dealing with activities, I am not that far along. At a high level I am following the pattern of: Plan, Act, Analyze and Decide. This is only the first part - the Plan.
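As a sketch of the two-record batch idea described above (all field names and values are hypothetical):

```python
# Hypothetical two-record pattern: one record when a standardized batch is
# mixed, and one each time solution is drawn from it. All values illustrative.
batch_created = {
    "record_type": "batch_created",
    "batch_id": 123,
    "date": "2017-03-13",                 # Monday: mix 10 gallons of stock
    "volume_gal": 10,
    "water": {"EC_ppm": 120, "pH": 7.4},  # initial measurements
    "ingredients": [{"name": "Jack's Hydroponic 5-12-26", "grams": 150}],
    "final": {"EC_ppm": 850, "pH": 6.2},
}
batch_used = {
    "record_type": "batch_used",
    "batch_id": 123,                      # links back to the batch record
    "date": "2017-03-17",                 # Friday: drain and replace
    "volume_gal": 3,
}
```

The usage record carries only the batch reference and amount; anyone analyzing it later can join back to the creation record for the chemistry.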


You might be taking the steep way up the mountain here. It isn’t obvious to me that working stage by stage through the plan, act, analyze, decide framework you’re talking about is going to be a practical or efficient way of answering the questions you’re trying to answer.

The task might be a lot easier if you framed it differently. In terms of end results and overall time investment, perhaps it would be more efficient to iterate toward a process that gets repeatable results, then describe the important aspects of that process once you understand what they are.


#The devil is in the details (or some gardeners would say in the parsley)
I think we all agree that a recipe for a particular plant needs to specify:

  • Light (either the fixtures to use, or PAR or LUX)
  • Nutrients (what to buy, or what to mix)
  • Water (how to apply: spray, mist, float)

Getting more detailed, we could add:

  • Growth medium (rock wool, rock, soil)
  • Humidity (enclosures, augmentation, …)
  • CO2

Everything else is details: how often to do it (lights on 18 hr, off 6 hr), changes over time (24 hr light for the first 10 days, then drop to 18 hr). What we don’t agree upon is the level of detail - how significant a particular detail is to my goals. How much does it matter that my plants get hit by morning sun (boosting LUX to 25,000), or that the plants are not enclosed and the humidity is 40% (but may drop to 30% on cold days), or that I have a great furnace that keeps the house at a steady 70F?
The strength of OpenAg is that it is setting a high standard (CO2 monitors, refrigeration, …) and hence has a lot of details (and cost). For those of us who don’t have the same research goals, the problem is knowing what details we can skip and still get good results (this being undefined).

The real question is whether OpenAg is wanting/willing to support this group (possibly its own forum Category?) or if this needs to be moved to a different forum. @Caleb? It would be nice to have a place to share information like: recycled strip LEDs don’t work (92 PAR at 6 inches) unless they are packed tight, while GE 100w-equivalent LED bulbs (in a reflector) do nicely for small spaces (182 PAR at 6 inches), and a Cree COB attached to a CPU cooling fan will give you ridiculous amounts of light (1,200 PAR at 6 inches).

This is not a matter of one group being right and the other wrong, but back to my initial post: we need to know our goals. I personally find myself with a foot in each camp; I want to support professional-level research, but I don’t want to leave the casual user behind. I will keep this topic for the technical details, and hope a category can be set up for the less technical.


#Act: Collecting observations
Observations are recordings of particular events in time and space, or as the OBO puts it, a measurement of an attribute of a substance (e.g. the centigrade temperature of the air at the top of the food computer at 10:48 on 3/14/2017).
There are roughly four categories of observations:

  • Environmental observations
    • Temperature, Humidity, PAR, …
  • Phenotypic observations
    • Plant size, color, development
  • Primary (agronomic) activities
    • Planting, Treatments, Harvest
  • Support activities
    • Mixing of fertilizer
    • Maintenance of structure (replace sensors, pumps, …)

The food computer has the environmental observations fairly well covered. There are still details to be defined (is temperature in Centigrade or Fahrenheit? how many decimal places?), but it is mostly in place already.

Support activities are often needed more for administration than research (assuming the fertilizer was mixed according to recipe instructions and does not need to be verifiable). This will be more difficult to collect with the food computer, as it usually needs an application with human data input (a task everyone wants to avoid).

@Caleb : do we really need phenotype data and agronomic activities recorded by people? I have thought of starting into some long posts on the details of what this takes, as it is a difficult subject; but I am beginning to wonder if OpenAg is about to throw a ‘hail Mary pass’ and avoid this altogether. If OpenAg is going to have 3D imagery (and/or point cloud data), how much does that replace human observations? Especially with a plant like lettuce, all growth of concern is in the vegetative stage. Unlike corn, there is no tracking of reproduction and seed development: when it tassels, sheds pollen, or when the kernels dent and reach the black layer stage. Even with something like tomatoes, the camera can derive much of this phenotype and development data.

If this is true, there is little more to be done with observations, as the heavy lifting goes into the analysis phase.



tl;dr: What follows is a long-winded way of saying, “I’ve ordered the stuff so I can start playing with a Raspberry Pi Zero W camera pointed at a tray of microgreens. I want to explore ways of generating image datasets that might be useful for computer-vision assessment of phenotype expression. By focusing on little plants, I figure I can cut to the chase without spending a lot of time and money on all the stuff it takes to keep bigger plants healthy. Is that at all interesting to you?”

What do you think about taking some initial steps at data modeling–particularly phenotypic observations–in an even simpler context than Cornell’s lettuce handbook? There hasn’t been much talk here about starting seedlings or growing microgreens, but those might be areas worthy of more attention.

I’m thinking about this in terms of decomposing data modeling into tasks and stages that can be tackled separately. In particular, I’m most interested in focusing on tools, techniques, and procedures for observing, recording, and sharing data about phenotype expression. One way to get at that more directly would be to spend less initial attention on systems to manage the climate control, nutrition, and irrigation needs of larger plants.

My apologies if this next part seems off the topic you had in mind, but by nature I take a holistic view of things–to me it all seems closely related…

Anyhow, I think your question to Caleb highlights an important opportunity. Sophisticated analysis based on computer vision techniques and AI could take the place of a lot of other equipment and tedious note-keeping. For example, based on what I read in Resh’s book about visually diagnosing nutrient solution imbalances, cameras might have the potential to do better than EC sensors.

Assuming people are interested in heading down the AI and computer vision trail–I for one am–then the big challenge becomes generating large, meaningfully labeled datasets. As I understand it, all the fancy new algorithms don’t do anything useful unless they have huge datasets to chew on. In that case, the big data modeling question becomes, “How do we start taking pictures, labeling them, storing them, and sharing them in a way that’s useful?”

Is that at all interesting to you? Does it give you any ideas?

Growing food: I just ordered a MicroGrow Kit from Hamama

@wsnook, @rbaynes, @hildreth
#Into the Swamp: a first crack at defining data and data organization
The goal of the PFC (Personal Food Computer) or MVP (Minimum Viable Product) is to produce research data. In my mind there should be no difference between the two as far as data goes - both the data describing the recipe and the data collected.

Experimental data describes the context (equipment & protocols) and the variables. A variable is anything that can be controlled and varied, whether it is actually varied in a particular experiment or not. For a MVP, the light spectrum cannot be varied, but it has a (hopefully) known spectrum. This allows comparisons with a PFC where there could be experiments varying the spectrum. A key task of this data exercise is to identify all the significant variables.
Observations are whatever is measured. All variables should have a measurement associated with them. For ‘fixed’ variables of an experiment, the measurement may not be an observation but is specified in the recipe or reference data (“The GE Light Stik spectrum is …”); for others, it is what is recorded by the sensors or observer. Observations are also made on things that are not variables; these are usually phenotypic traits of the plant: height, weight, color, taste, nutrient content, … Most of the variables will be relatively easy to define; the phenotypic traits can be a challenge to define and standardize.

A research experiment can be of two main types. There are those that are planned out before doing anything: “I want to perform two trials, comparing 6.5 pH with 7.0 pH”. In this case there would be two trials, one where the water is at 6.5 pH, and another where it is at 7.0 pH. The assumption is that all other variables are the same (type of plant, lights, nutrients, temperature, …). The second type of experiment is what I call a data experiment; this is searching a database of already completed trials, looking for data where the variables meet the conditions you want to research. The former is what a classroom is likely to conduct; the latter is the ‘big data’ research that Caleb talks about.
In both cases, there is a research plan. A plan says what is to be done, and how it is to be done. A plan consists of several sub-plans:

  • Experiment Plan: what is being investigated, the types of trials to be used, how the data will be evaluated.
  • Trial Plan:
    • Environment or context: the equipment to be used (ie V 2.0 PFC)
    • Recipe of what and how to grow for each growth stage (germination, vegetative growth):
      • Genotype (type of plant)
      • Light: intensity, spectrum, duration
      • Temperature (possibly air and water)
      • Humidity
      • Air circulation
      • Nutrients (amount of each component, frequency)
      • Conductivity
      • pH
      • O2 (possibly air and water)
      • CO2 (air)
      • Agronomic practices (pruning, set-up, cleaning)
  • Observation Plan: what measurements will be collected, when and how (automated sensors, manual data collection and entry)
  • Evaluation Plan: how I will determine if this was a success or failure, the type of statistical models that will be used, or subjective criteria (“All of the students thought it tasted great”)
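For illustration, the plan/sub-plan nesting might be sketched as data like this. Field names and values are hypothetical, not an OpenAg schema:

```python
# Illustrative nesting of the plan/sub-plan structure described above.
research_plan = {
    "experiment_plan": {
        "question": "compare 6.5 pH with 7.0 pH",
        "trials": ["trial-A", "trial-B"],
    },
    "trial_plans": {
        "trial-A": {
            "context": {"equipment": "OpenAg Food Computer", "version": "2.0"},
            "recipe": {
                "genotype": "boston bibb lettuce",
                "stages": [
                    {"name": "germination", "pH": 6.5},
                    {"name": "vegetative growth", "pH": 6.5},
                ],
            },
        },
    },
    "observation_plan": {
        "sensors": ["temperature", "humidity", "pH", "EC"],
        "manual": ["plant height", "leaf color"],
    },
    "evaluation_plan": {"method": "compare harvest weight between trials"},
}

# The recipe stands alone: someone who just wants to grow a salad can take
# research_plan["trial_plans"]["trial-A"]["recipe"] and ignore the rest.
```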

A big problem is knowing what needs to be explicitly specified every time, what can be assumed and what data can be derived. If I am not manipulating CO2, can it be assumed to be normal atmospheric content? If I am using Jacks 20-20-20 and following their formula, do I have to list all the ingredients in the recipe or can I just say “Jacks 20-20-20” (and specify the details in the reference data, or reference their website)?

In the plan, I deliberately want to separate the trial recipe from the experiment. Anyone should be able to use the recipe whether they want to run an experiment or just grow a salad (and not bother with all the observations and evaluation). A recipe should also be independent of the environment and automation; it should be the same whether I am using an MVP, a PFC or an industrial vertical farm.


  • Are these a good set of variables (we can quibble over labels and definitions later)?
  • Does this data structuring make sense?
  • A PFC will be intimate with the recipe and sensor observations. These definitely need to be in the database for operational purposes. I think the context and phenotypic data should also be in the database, not because the PFC needs the data, but to keep it all organized in one place, both for collection and exchange of data; this will become important when trying to do data mining. Thoughts?


Do you have a sense of how widely that view is shared?


At the risk of getting off into semantic craziness… Ontologically, the purpose of a hammer is to drive nails; if someone uses a hammer to crack walnuts, or to smash windows, that does not change its intended purpose (following Barry Smith’s thinking on ontology).
In a similar vein, OpenAg intends the purpose of the PFC and MVP to be data collection; if others choose to use it for other purposes, they are welcome to do so, but that does not change its purpose. For the sake of data discussions I am going to follow the OpenAg intentions.


That does depend on the type of hammer you’re talking about - masonry hammers are not effective at driving nails :)

I may be wrong, but I was under the impression that PFC is intended to grow food in a controlled environment.
If, as you claim, the purpose of the PFC is to gather data, then it should simply be called a data acquisition system.

In fact the PFC should be more about control of the environment, and the data gathered is in my opinion needed mostly for this purpose. Data acquisition in this situation is only a ‘nice to have’ feature.


+1 for Peter pioneering the data collection and standardization. The potential for experimentation is what initially attracted me to the project.

The PFC is clearly not going to be used to feed anyone, but could be thought of as a trial prototype for a production scale design (food server / data center); to me, this is the other intention besides data gathering. But that trial does not necessarily require a network of PFC’s, and can be done alone. I think that at its current size, the value of having a network of PFC’s is the number of connected experiments that can be run.

If I want to maximize lettuce growth and have several ideas about how to do it, it could take me a year to get all the growing cycles needed to make comparisons. But if others in the OpenAg community have the same goal as me, we can each run a trial simultaneously and have an answer within a month or two. The trick is that these comparisons will be worthless if we are each recording our data differently.


Hi Will and Howard,

I share this view, and it is my focus for the lab. But there are other views of what the PFC is for. However none of them will be useful without good/well-defined input data to start with.



Thank you for moving the data design forward and putting so much effort into it.

re: Questions

  • I think we need to add the time series to the variables. e.g. the light is on for how long, off for how long. The total recipe expected run time.
  • For env/context it should refer to a specific PFC (its UUID) when executed, since some of the sensors (pH) frequently need re-calibration.
  • I completely agree with keeping all the data together, including notes and user observations.

I’m sorry we here at the lab have not been that responsive this week, it is “member’s week” where all the contributing companies visit the labs and we all give demos.


For recipe or prescriptive data, duration (time series) is good; you want to know the expected time to maturity.

For descriptive data (sensor logging), I don’t like it. I prefer to track when something starts (is turned on) and when it stops (is turned off) as separate events; this avoids the problems of updating a database record and calculating duration. Duration is often needed, but it should be derived data - a query or map-reduce that calculates the time between two events (start and stop).
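A sketch of deriving duration from separate start/stop events (the event format is hypothetical):

```python
from datetime import datetime

# Separate start/stop event records (format hypothetical); duration is derived.
events = [
    {"actuator": "light", "event": "on",  "time": "2017-03-14T06:00:00"},
    {"actuator": "light", "event": "off", "time": "2017-03-15T00:00:00"},
]

def duration_hours(events, actuator):
    """Hours between the 'on' and 'off' events for an actuator."""
    times = {e["event"]: datetime.fromisoformat(e["time"])
             for e in events if e["actuator"] == actuator}
    return (times["off"] - times["on"]).total_seconds() / 3600

print(duration_hours(events, "light"))  # 18.0
```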

When data is aggregated, we will need the PFC identifier (UUID), as well as the sensor identifier. The question is whether this is put on each record when the PFC creates the record (which will take up more space), or is something appended to the records when they are uploaded to a central repository (which takes more processing). I can make the argument both ways, and really don’t care which we choose, just as long as we choose one.

A side digression on duration. Working with corn, I found that duration needed to be tracked in three different formats:

  • Calendar day. This is needed for coordination with weather data and external data sources. This is often the least useful piece of information, as a crop can be planted on different days in different locations, so the actual planting date is not useful for comparisons between trials.
  • Duration as growing degree days. Corn grows at a determined rate based on temperature. To account for weather differences, comparisons of growth need to be based on accumulated growing degree days.
  • Time since a growth stage: days from planting, days from germination, pollination date, … Since growing degree days often average out, growth stages are the most common reference, particularly in informal conversations.

It was a pain when people recorded things in the latter two duration formats, but if they tracked the date and event, it was fairly easy to calculate growing degree days and duration since a growth stage.
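For illustration, accumulated growing degree days can be derived from daily min/max temperatures using the common 86/50 °F method for corn (the exact method varies by region; this is one widely used convention, not the only one):

```python
# Accumulated growing degree days (GDD) using the common 86/50 F method for
# corn: cap the daily max at 86, floor the min at 50, take the mean minus 50.
def gdd_day(t_max_f: float, t_min_f: float,
            base: float = 50.0, cap: float = 86.0) -> float:
    t_max = min(max(t_max_f, base), cap)
    t_min = min(max(t_min_f, base), cap)
    return (t_max + t_min) / 2 - base

daily = [(85, 60), (90, 70), (48, 40)]  # (max, min) in degrees F
accumulated = sum(gdd_day(hi, lo) for hi, lo in daily)
print(accumulated)  # 22.5 + 28.0 + 0.0 = 50.5
```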


#Pornography and Data
Supreme Court Justice Potter Stewart, in the 1964 case of Jacobellis v. Ohio, is remembered for saying “I know it when I see it”. He candidly recognized the difficulty of giving clear, categorical definitions to things that may be subjective and lack clear definition. Phenotype data has a lot in common with pornography when it comes to defining categories.

The plant on the right is clearly “better” than the one on the left, but how do we describe this “betterness”? I might say that it is “more leggy”, but what does that mean? I would like to say something like, “the plant on the left has an inter-nodal stem length of 6 inches between nodes 3 and 8”, “the plant on the right has an inter-nodal stem length of 1.5 inches between nodes 3 and 8”. Now I can make a comparison (between 1.5 and 6 inches), but this is not easy to put in a data structure, and makes little sense to most people.
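As a sketch, the inter-nodal measurement could be recorded as structured data and compared numerically (field names are mine, not a standard):

```python
# Hypothetical structured record for the inter-nodal observation, instead of
# a subjective label like "leggy" or "not compact".
left  = {"plant": "left",  "trait": "internodal_stem_length",
         "from_node": 3, "to_node": 8, "inches": 6.0}
right = {"plant": "right", "trait": "internodal_stem_length",
         "from_node": 3, "to_node": 8, "inches": 1.5}

def per_internode(obs):
    """Average stem length per internode over the measured span."""
    return obs["inches"] / (obs["to_node"] - obs["from_node"])

print(per_internode(left), per_internode(right))  # 1.2 0.3
```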
I definitely want to avoid negative logic: “Not compact”, for while we as human beings easily understand context and relationship, computers have a very difficult time of it; I might as well tell the computer “Not a five dollar bill” (an equally true statement!).
To push this a bit further, how is this for a scientific experiment? This clearly has significant results: the plant on the right is near the supplemental light, while the one on the left has only the LED strip lights (and I could do a LUX or PAR measurement of the difference). However, most of the other data is a little ‘wonk’ (technical term). I have not changed nutrients for about three weeks, in part because I don’t want to move and break the plants (setting them back in their growth), and putting a drain plug on the tub would only encourage leak problems. I have temperature, humidity and LUX readings (except for the time the Pi was down). This clearly points toward what could be a significant experiment, on how slight variations of light have definite results (and need to be determined for optimum growth), but I seriously doubt that this is the quality of data we want for ‘big data’ analytics.
My point is not to say we should not allow such casual growing; after all, a few leaves were nice with my lunch. My point is that defining plants and the measurements we make, along with standard protocols (how often to change nutrients), is not trivial (negative logic here); if anything, it will be much harder than building the PFC and writing code.


I think this is a valuable resource that may help begin to answer some of the questions regarding what/where to measure. This is approaching things from the plant side, as opposed to the tech side.

Plant Growth Chamber Handbook from Iowa State University: Chapter 15 – Guidelines for Measurement and Reporting of Environmental Conditions

Climate Recipes format thoughts and node.js library