BTW, you probably know this already, but in my travels on this project I discovered that DateTime is _super_ slow at constructing instances if you pass in the string "local" for the timezone (which I do to convert from UTC time to local). My script significantly sped up (by an order of magnitude or more!) once I cached the DateTime::TimeZone object corresponding to "local", and re-used that cached value. Granted, I'm running on weak hardware (an rPi), but even on a studlier system, I've measured the cost of that code (without caching), and it is significant. Since we're doing lots of these conversions (or at least I am), the cost can quickly dwarf some of the other processing. Once I started using the cached timezone instance, I'm back to a very quick turn-around for my polling code.
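In case it helps anyone following along, here is a minimal sketch of that caching trick (the function names are mine, not from the actual polling script):
Code:
use strict;
use warnings;
use DateTime;
use DateTime::TimeZone;

# Resolve the "local" timezone exactly once; this is the expensive step,
# since DateTime::TimeZone has to probe the system configuration.
my $local_tz = DateTime::TimeZone->new( name => 'local' );

# Slow: the string 'local' forces a fresh timezone lookup on every call.
sub utc_epoch_to_local_slow {
    my ($epoch) = @_;
    return DateTime->from_epoch( epoch => $epoch, time_zone => 'local' );
}

# Fast: reuse the cached DateTime::TimeZone object instead.
sub utc_epoch_to_local_fast {
    my ($epoch) = @_;
    return DateTime->from_epoch( epoch => $epoch, time_zone => $local_tz );
}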
Also, I'm curious: are you seeing any kernel buffer overflows when using Pcap in Perl? What happens if the packet callback that Pcap calls doesn't return in a timely manner? Are packets just dropped once the buffer overflows?
Mirroring/intercepting SunPower Monitoring Traffic?
-
Ah, sorry, I thought that was referring to me. My mistake!
-
Last edited by astroboy; 04-20-2016, 09:14 PM.
-
I filed https://github.com/jbuehl/solaredge/issues/10 to relay your generous offer. Thanks!
I've never seen the script you mentioned. I don't like Python as a scripting language and avoid it. I wrote my PVOutput cross-posting code from scratch, based on the documentation of the RESTful API from PVOutput. And I wrote it in Perl.
Furthermore, as the GitHub page owner states, my work was all about reverse-engineering the SunPower proprietary application-level protocol, and (as I've said repeatedly) is not easily integrated into the PVOutput posting process.
And my offer to help anyone who wants to write a script was just that: an offer to provide advice based on my findings, not an offer to become a GitHub contributor.
I'll repeat my offer: if someone needs help writing a script to parse SunPower supervisor traffic for collection of monitoring data, I'm more than happy to provide some insight from my experience (and the humongous help provided by astroboy, without which I would have still been stuck).
But the whole point of my documenting this so well in this thread was to make it fairly easy for others to replicate what I (we) did, so I'm thinking that reading this thread will suffice.
-
Actually, I take it back; my script is written in Perl, so I never looked at that project, which seems to be written in Python. It was enphase-output.pl from which I took the ~5 lines of code that post to PVOutput, which I think is linked from pvoutput.org somewhere.
Anyway, my script is not really based on anyone else's script; it's cobbled together from example code showing how to write a Net::PcapUtils filter and packet-processing function. The core of it, though, is simply a regular expression similar to the one posted above: pick apart message 130 (only, since I do not have power monitoring in my system), convert the UTC time in the message to local time, set the cumulative flag, and post the lifetime energy reported in message 130.
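For anyone starting from scratch, a skeleton of that approach might look roughly like the following. This is only a sketch, not the actual script: the device, capture filter, credentials, value scaling, and the stubbed-out date/time conversion are all assumptions you would need to adapt; the endpoint and the c1=1 cumulative flag are per the PVOutput addstatus documentation.
Code:
use strict;
use warnings;
use Net::PcapUtils;
use NetPacket::Ethernet qw(:strip);
use NetPacket::IP;
use NetPacket::TCP;
use LWP::UserAgent;

my $api_key = 'YOUR_PVOUTPUT_API_KEY';      # placeholder
my $sys_id  = 'YOUR_PVOUTPUT_SYSTEM_ID';    # placeholder
my $ua      = LWP::UserAgent->new;

# Callback invoked by Net::PcapUtils::loop for every captured packet.
# Real code would also need to cope with records split across packets.
sub process_packet {
    my ( $user, $hdr, $pkt ) = @_;
    my $ip  = NetPacket::IP->decode( eth_strip($pkt) );
    my $tcp = NetPacket::TCP->decode( $ip->{data} );
    for my $line ( split /\n/, $tcp->{data} // '' ) {
        # Message 130 carries the UTC timestamp and the lifetime production.
        next unless $line =~ /^130\t(20[0-9]{12})\t[0-9]+\t[^\s]+\t[^\s]*\t(-?[0-9]*\.[0-9]*)\t/;
        post_lifetime_energy( $1, $2 );
    }
}

sub post_lifetime_energy {
    my ( $utc_stamp, $lifetime ) = @_;
    # Convert $utc_stamp to a local date (yyyymmdd) and time (hh:mm) here,
    # e.g. with DateTime and a cached DateTime::TimeZone (see above).
    my ( $date, $time ) = ( '20160417', '12:00' );    # placeholders only
    $ua->post(
        'https://pvoutput.org/service/r2/addstatus.jsp',
        [ d => $date, t => $time, v1 => $lifetime, c1 => 1 ],    # scale v1 to Wh as needed
        'X-Pvoutput-Apikey'   => $api_key,
        'X-Pvoutput-SystemId' => $sys_id,
    );
}

# Sniff on the mirrored/intercepted interface; narrow the filter to the
# supervisor's traffic in practice.
Net::PcapUtils::loop( \&process_packet, DEV => 'eth0', FILTER => 'tcp', PROMISC => 1 );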
-
a) spawn the sniffer, and read and store the sniffed data
b) convert sniffed data to intermediate format for local storage (for use by web server)
c) download SunPower data from the site and convert to intermediate format for local storage (for use by web server; an alternate view for comparison purposes)
d) upload to PVOutput.org from the locally-stored intermediate format
Yes, this is way more decomposed than it needs to be, but decoupling these functions allowed me to mix and match over time, and it allows a certain amount of redundancy (for example, when the SunPower site went down, or when PVOutput went down, it did not perturb the other functions). A rough sketch of how the stages might hand data off to each other appears at the end of this post.
So again, it's not really in a shareable form, and I don't have the time to put it into such a form. I truly am sorry, and I would be more than happy to help someone else with their own script.
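For anyone replicating this, here is a hypothetical sketch of how two of those stages might hand data off through a flat file on disk, which is all the decoupling really requires; none of the paths, names, or formats below come from the actual scripts:
Code:
use strict;
use warnings;

my $spool = '/var/lib/pv/sniffed.tsv';    # hypothetical path

# Stage (a)/(b): the sniffer process just appends normalized records.
sub store_record {
    my ( $whichmsg, $utc, $value ) = @_;
    open my $fh, '>>', $spool or die "open $spool: $!";
    print {$fh} join( "\t", $whichmsg, $utc, $value ), "\n";
    close $fh;
}

# Stage (d): a separate, periodically run process reads whatever has
# accumulated and uploads it. If PVOutput (or the SunPower site) is down,
# the spool simply keeps growing and the other stages are unaffected.
sub pending_records {
    open my $fh, '<', $spool or return ();
    my @records;
    while ( my $line = <$fh> ) {
        chomp $line;
        push @records, [ split /\t/, $line ];
    }
    close $fh;
    return @records;
}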
-
Thanks for explaining. I'd be happy to sanitize the script for you, if you like (assuming you're willing to trust a stranger). I'd keep it confidential, and just send you back the sanitized script. My site's http://kegel.com if you want to check me out.
-
Once you've got the stream of sniffed network data, picking out the 140 and 130 entries is really easy; it's just a regex away:
Code:
/^(1[34]0)\t(20[0-9]{12})\t[0-9]+\t[^\s]+\t[^\s]*\t(-?[0-9]*\.[0-9]*)\t/

whichmsg    = $1   # either 130 (for production) or 140 (for net metering)
utcdatetime = $2   # the date in UTC, use DateTime to convert to local time
currvalue   = $3   # the data value (lifetime production if whichmsg == 130, lifetime net if whichmsg == 140)

production  = (current production value) - (previous production value)
net         = (current net value) - (previous net value)
consumption = production + net
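A minimal Perl rendering of that pseudocode, purely as illustration: the state hash and function name are mine, and the assumption that the 14-digit stamp is YYYYMMDDHHMMSS in UTC is mine as well, not confirmed above.
Code:
use strict;
use warnings;
use DateTime;
use DateTime::TimeZone;

my $local_tz = DateTime::TimeZone->new( name => 'local' );    # cached, per the tip earlier in the thread

my %prev;    # previous lifetime reading per message type (130 / 140)

sub handle_record {
    my ($line) = @_;
    return unless $line =~ /^(1[34]0)\t(20[0-9]{12})\t[0-9]+\t[^\s]+\t[^\s]*\t(-?[0-9]*\.[0-9]*)\t/;
    my ( $whichmsg, $utcdatetime, $currvalue ) = ( $1, $2, $3 );

    # Assumed layout of the stamp: YYYYMMDDHHMMSS, in UTC.
    my ( $y, $mo, $d, $h, $mi, $s ) = unpack 'A4 A2 A2 A2 A2 A2', $utcdatetime;
    my $dt = DateTime->new(
        year => $y, month => $mo, day => $d,
        hour => $h, minute => $mi, second => $s,
        time_zone => 'UTC',
    );
    $dt->set_time_zone($local_tz);

    # Interval delta from the lifetime counter.
    my $delta = defined $prev{$whichmsg} ? $currvalue - $prev{$whichmsg} : 0;
    $prev{$whichmsg} = $currvalue;

    return ( $whichmsg, $dt, $delta );
}

# Once both deltas for an interval are in hand:
#   my $consumption = $production_delta + $net_delta;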
If I do not receive a 130 (production) message before the next set of 130/140 messages and the datetime indicates we're outside daylight hours (by a conservative margin), my script assumes zero production. Otherwise, I wait a couple of cycles to see if one comes in, and if it still hasn't, I assume zero production anyway (which does mean that my latency varies over time, based on the data that gets produced). A missing 140 (net) message is, for me, currently a "die" condition, because if I get a 130 but no 140, something is wrong...
That's pretty much all there is to it.
Last edited by robillard; 04-17-2016, 12:04 AM.
-
In this case, I was making a different point: sharing code is a good idea. Even if the code isn't going to be useful as-is, it can still be illuminating and save other folks time. Yes, people could reinvent it, but being able to look at your script while they do might save them some effort.
-
Originally posted by J.P.M.
I wouldn't get upset about Dan missing the point. As I recall, he doesn't claim to know much, but he does claim to like science, if I correctly interpret the sense of what he once wrote on his handle; take him at his word, and, IMO only, he seems to have demonstrated that many times.
-
That's ok. You can start off with a hardware-specific script. Someone else can generalize it. The point is just to get a proof-of-concept out there.
-
I think you're missing the point... The script will be naturally tailored to the workflow required by the hardware; mine certainly is...
-
Also, this thread serves as a pretty decent write-up, both of the approach to take for scripting and of at least one approach to the hardware setup.