We plan to set up and monitor MVP units at several schools. The issue for us is that we lack access to each school’s router to set static IP addresses or open specific ports.
My thought, then, is to set up one server in a location where I have the ability to set the IP address and ports. The school MVPs would periodically send data to the server and check for any new instructions. The communication interval could be part of the setup, based on requirements.
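This outbound-polling pattern could be sketched roughly as below. Everything here is hypothetical: the server URL, the `/report` and `/instructions` endpoints, and the field names are illustrative placeholders, not an agreed-on API. The key point is that the units only make outbound HTTP requests, so nothing needs to be opened on the school routers.

```python
import json
import time
import urllib.request

# Hypothetical values: the real server address and check-in interval
# would be set during unit setup.
SERVER_URL = "http://example.org:8080"
CHECK_INTERVAL = 300  # seconds between check-ins

def build_report(device_id, readings):
    """Package sensor readings with a device identifier for the server."""
    return json.dumps({"device": device_id,
                       "timestamp": time.time(),
                       "readings": readings}).encode("utf-8")

def check_in(device_id, readings):
    """Send one report, then ask the server for any pending instructions."""
    req = urllib.request.Request(SERVER_URL + "/report",
                                 data=build_report(device_id, readings),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # outbound only; works behind school NAT
    with urllib.request.urlopen(SERVER_URL + "/instructions?device=" + device_id) as resp:
        return json.loads(resp.read())

# A unit would then loop something like:
#   while True:
#       check_in(my_id, read_sensors())
#       time.sleep(CHECK_INTERVAL)
```

Because the units initiate every connection, only the one server needs a fixed IP and open port.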
I imagine that the server could be a Raspberry Pi, a Windows PC or a cloud service.
One challenge would be ensuring a way to differentiate the data coming from each MVP unit.
I would be interested in any and all feedback, ideas and support for this project.
The MVP data will definitely need identifiers to differentiate each unit. I have been thinking of the MAC address of the Raspberry Pi plus a separate ‘experiment’ identifier (which experiment is running on that Raspberry Pi). The options are to make this a standard part of the data logging (with a one-time update of existing data), or to make it part of an export view (added dynamically when the data is exported).
Long term, the experiment identifier would reference the MVP hardware configuration record, the recipe and other metadata.
We are currently exploring the possibility of doing this via MQTT. I know @webbhm already has his data replicating to an offsite DB.
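If MQTT does end up being the transport, the per-unit identifiers could be baked into the topic path so the server (or any subscriber) can filter by unit. This is only a sketch of one possible topic layout; the topic structure and field names are assumptions, not the project’s actual schema.

```python
import json

def make_topic(mac, experiment_id):
    # Hypothetical topic layout: one topic per device/experiment makes
    # server-side filtering and routing straightforward.
    return "mvp/{}/{}/sensors".format(mac, experiment_id)

def make_payload(name, value, units):
    """Encode one sensor reading as a JSON payload."""
    return json.dumps({"name": name, "value": value, "units": units})

# Publishing would then look like this (requires the paho-mqtt package
# and a reachable broker, so it is shown here only as a comment):
#   import paho.mqtt.client as mqtt
#   client = mqtt.Client()
#   client.connect("broker.example.org", 1883)
#   client.publish(make_topic("b8:27:eb:11:22:33", "trial1"),
#                  make_payload("temperature", 21.5, "C"))
```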
I’d like to start this conversation again to see where we’re at with getting all of our devices to remotely log and access a cloud hosted web app. The conversation around the MVP/$300 PFC lately has me thinking we want to really streamline the setup process.
A similar “teacher setup” conversation is taking place here:
@pspeth @webbhm @ferguman What are your thoughts about the install process outlined by Will? I know you’ve also been working with images and have run into some issues. While I really like the idea of having an install script, I’m nervous about moving away from cron; while clunky, it hasn’t failed me yet. Also, I love that when I boot up it just works every time (@wsnook, perhaps a service can do this as well; I just know the V2 does not at this point, so school me if I’m being naive): $300 Food Computer
Simpler is better, so using cron is probably best for now.
You might want to provide a way for teachers and other MVP owners to actually order a physical SD card that is all ready to go. The problems with burning an image or running a script are that 1) it takes time, 2) so many things can go wrong, and 3) it’s impossible to document the process in a way that encompasses all the possible OS and machine hardware (i.e. laptops, iPads, Chromebooks, desktops, etc.) that people use to “get their work done” and on which they want to build and interface to the MVP.
There isn’t any inherent conflict between using an install script and using cron; they do different things. For example, you could have a script that used apt-get, git, pip, etc. to install the right versions of your code and all its dependency packages, and then you could run that code with cron.
On the other hand, there would be a conflict between using cron for scheduling and using a service (like ROS) for scheduling.
It sounds like your basic problem is to get a unique identifier for each Raspberry Pi so that you can plug it into a device identifier slot in the reports back to the central server. An ethernet or wifi MAC address would probably be a good identifier that you could check with a shell script or Python code. Try checking the contents of /sys/class/net/*/address. For example:
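A minimal Python sketch of that idea, reading the sysfs `address` files mentioned above (on a real Pi you might prefer to name `eth0` or `wlan0` explicitly rather than take the first match, since interface ordering can vary):

```python
import glob

def get_mac():
    """Return the first non-loopback MAC address found under /sys/class/net."""
    for path in sorted(glob.glob("/sys/class/net/*/address")):
        with open(path) as f:
            mac = f.read().strip()
        # Skip the loopback interface (all zeros) and empty entries.
        if mac and mac != "00:00:00:00:00:00":
            return mac
    return None
```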
Alternately, if you use DHCP to set unique hostnames on the Raspberry Pis, you could include the hostname as the device identifier.
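The hostname route is a one-liner in Python; this assumes each Pi really has been given a unique name via DHCP or /etc/hostname:

```python
import socket

# Works only if each Pi's hostname is actually unique on your network.
device_id = socket.gethostname()
```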
Is that code up on github somewhere? I just looked through the code and commit history at https://github.com/webbhm/OpenAg-MVP, but I didn’t see anything like what you’re describing. Was I in the wrong place?
My test MVP is already using the MAC address in the sensor log structure. This would allow everyone to save their data to the same cloud database, yet still filter down to their individual MVP (or a subset of MVPs).
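To illustrate the shared-database idea: with a MAC address on every record, pulling one unit’s data back out is a simple filter. The record shape below is hypothetical; the actual MVP log structure may use different field names.

```python
# Hypothetical record shape -- the real MVP sensor log may differ.
records = [
    {"mac": "b8:27:eb:11:22:33", "name": "temperature", "value": 21.5},
    {"mac": "b8:27:eb:44:55:66", "name": "temperature", "value": 23.0},
    {"mac": "b8:27:eb:11:22:33", "name": "humidity", "value": 55.0},
]

def for_device(records, mac):
    """Pull one unit's data back out of the shared database."""
    return [r for r in records if r["mac"] == mac]

mine = for_device(records, "b8:27:eb:11:22:33")  # this unit's two records
```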
I am looking at an ‘env.py’ file to hold several variables needed for logging: unique identifier of the MVP, and an experiment identifier. I am still thinking through the structure (separate variables, or one structure containing a dictionary of variables).
The options are: 1) separate variables, or 2) one structure containing a dictionary of variables. Currently I have the first, but I am using the second structure for some persisted variables. I think my preference is the latter, as it is easy to update one variable and save the whole structure, without needing to know what other variables are in it.
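The dictionary option might look something like this. The file name and keys are hypothetical, and this sketch persists the structure as JSON rather than a .py file (a deliberate swap, since a data file can be rewritten safely at runtime in a way that editing Python source cannot):

```python
import json
import os

# Hypothetical file name and keys, not the project's actual env.py layout.
ENV_FILE = "env.json"
DEFAULTS = {"device_id": None, "experiment_id": None}

def load_env():
    """Load the persisted variables, falling back to defaults."""
    if os.path.exists(ENV_FILE):
        with open(ENV_FILE) as f:
            return json.load(f)
    return dict(DEFAULTS)

def save_env(env):
    """Write the whole structure back out."""
    with open(ENV_FILE, "w") as f:
        json.dump(env, f, indent=2)

# Update one variable and save the whole structure, without needing
# to know what other variables it contains:
env = load_env()
env["experiment_id"] = "lettuce_trial_3"
save_env(env)
```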
Any thoughts would be appreciated. I can share some test code if you want to try it out.
I’m confused about whether the code you’re talking about now is the same as what you have at https://github.com/webbhm/OpenAg-MVP. I’ve been talking with @Webb.Peter about ways I might lend a hand with documentation and install scripts. But, it’s sounding like maybe the code that I’ve been looking at on github is out of date relative to your current progress.
You are correct. I am not good with git, so my most recent code is not even there. It is definitely not best practice, but I tend to play around with code until I get it basically the way I want, and only put it out on git when it is ready to share.
If you have some suggestions on how to organize things, keeping experimental code separate from production, I am open to changes.
@webbhm Ah… yeah, git is interesting; it takes lots of practice to get comfortable with it. This thread probably isn’t the place for a github workflow discussion, but I’ll check in with Peter about ways I might be able to help things along.