How can I calculate accurate energy if I have power, current, and voltage values?
This is my energy-calculation code; the result is wrong, so how can I fix it?
I want to measure apparent energy. The V, I, and P values themselves are not the problem.
if (millis() >= energyLastSample + 1)
{
  energySampleCount = energySampleCount + 1;
  energyLastSample = millis();
}
if (energySampleCount >= 1000)
{
  apparent_energy_l1 = apparent_power_l1 / 3600.0;
  finalEnergyValue_l1 = finalEnergyValue_l1 + apparent_energy_l1;
  apparent_energy_l2 = apparent_power_l2 / 3600.0;
  finalEnergyValue_l2 = finalEnergyValue_l2 + apparent_energy_l2;
  apparent_energy_l3 = apparent_power_l3 / 3600.0;
  finalEnergyValue_l3 = finalEnergyValue_l3 + apparent_energy_l3;
  // Serial.print(finalEnergyValue, 2);
  // Serial.println("test");
  energySampleCount = 0;
}
energy_total = finalEnergyValue_l1 + finalEnergyValue_l2 + finalEnergyValue_l3;
}
I would appreciate some tips about power calculation using an Arduino or any other microcontroller, pointers to open-source code or projects, and guidelines to solve my problem.
Note that energy (W x t) is a measurement of power over time, while power is a measurement of the rate at which work is done. This means you cannot simply divide a single power reading by 3600 (the factor to convert from seconds to hours) to get an energy value. Power (W) tells you how much work a device is doing right now. If you want to calculate the energy consumed by a device, you have to measure the power continuously, for example at 1 s intervals, and add each reading to a counter. That counter then holds a value in Ws (watt-seconds), and you can calculate the Wh consumed from it.
Example:
You have a device which consumes 300 W of power and you keep it running for exactly 3 hours. If you measure the power consumption every second as described, you will have accumulated 3,240,000 Ws. 3,240,000 Ws / 3600 = 900 Wh, and 900 Wh / 1000 = 0.9 kWh. You can of course change your measurement interval to fit your needs in regard to accuracy.
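As a quick check of that arithmetic, here is the same calculation in R:

power_w      <- 300                 # device power in watts
seconds      <- 3 * 3600            # 3 hours, sampled once per second
watt_seconds <- power_w * seconds   # 3240000 Ws
watt_seconds / 3600                 # 900 Wh
watt_seconds / 3600 / 1000          # 0.9 kWh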
Pseudocode:
if ( millis() >= lastmillis + 1000 )   // one second has passed since the last sample
{
  lastmillis = millis();
  wattseconds = wattseconds + power;        // increment the energy counter by the current power reading
  kilowatthours = wattseconds / 3600000.0;  // 1 kWh = 3,600,000 Ws; use a float so the division doesn't truncate
  print(kilowatthours);
}
You could of course use a one-second interrupt with an external RTC to get more accurate timing.
Let N be the number of stations in the ring, THT the token holding time, Tt the transmission time of a packet, and Tp the propagation time of a packet on the channel/link.
Then Cycle Time = N * THT + Tp (this is cycle time for token)
and efficiency = (useful time)/(Cycle Time)
Here the useful time is stated as N * Tt (justified as the transmission time at each station in a single cycle of token passing).
And thus the efficiency = (N * Tt) / (N * THT + Tp).
Now THT depends on what strategy we are using. If I am using a delayed token ring, then only one station transmits the data at a time and the other stations don't transmit, yet everywhere the useful time is shown as Tt multiplied by N. In this case, THT = Tt + Tp,
so cycle time = Tp + N * (Tt + Tp)
and efficiency, e = (N * Tt) / (Tp + N * (Tt + Tp)).
My question is: why is Tt multiplied by N in spite of only one station transmitting the data?
Here, in a delayed token ring, N = 1 since only one station is transmitting, so efficiency e = Tt / (Tp + Tt + Tp) = Tt / (2 * Tp + Tt).
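To make the cycle-time and efficiency formulas concrete, here is a small numeric example in R (the values of N, Tt and Tp are arbitrary, chosen only for illustration):

N  <- 10     # number of stations (illustrative value)
Tt <- 1.0    # transmission time of a packet, in ms (illustrative value)
Tp <- 0.2    # propagation time on the link, in ms (illustrative value)

THT        <- Tt + Tp                 # token holding time for a delayed token ring
cycle_time <- Tp + N * THT            # = Tp + N * (Tt + Tp) = 12.2 ms
efficiency <- (N * Tt) / cycle_time   # useful time / cycle time, ~0.82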
I am using a sliding window to extract information from my EEG data with an FFT. Now I want to predict the signal from my current window into the next one, so I extract the phase from a 0.25-second time window to predict the next 0.25-second window.
I am new to signal processing/prediction, so my knowledge here is a little rusty.
I am not able to generate a sine wave from my extracted phase and frequency, and I am just not finding a solution. I might just need a push in the right direction, who knows.
Is there a function in R to help me generate a suitable sine wave?
So I have the dominant (maximum) frequency together with its extracted phase, and I need to generate a wave from this information.
Here is pseudocode (written in Go) to synthesize a sine curve of a chosen frequency. It currently assumes an initial seed phase shift of zero, so just alter the starting theta value if you need a different initial phase shift.
// requires "fmt" and "math" from the standard library
func pop_audio_buffer(number_of_samples float64, given_freq float64,
	samples_per_second float64) ([]float64, error) {
	// the output sinusoidal curve is assured to both start and stop at the zero crossover threshold,
	// independent of the supplied input parms which control samples per cycle and buffer size.
	// This avoids that "pop" which otherwise happens when rendering audio curves
	// which begin at, say, 0.5 of a possible range -1 to 0 to +1
	int_number_of_samples := int(number_of_samples)
	if int_number_of_samples == 0 {
		return nil, fmt.Errorf("seeing 0 number_of_samples in pop_audio_buffer ... float number_of_samples %f"+
			" - is your desired_num_seconds too small, or the sample rate too low?", number_of_samples)
	}
	source_buffer := make([]float64, int_number_of_samples)
	incr_theta := (2.0 * math.Pi * given_freq) / samples_per_second // phase increment per sample
	theta := 0.0 // initial (seed) phase shift; set to a nonzero value for a different starting phase
	for curr_sample := 0; curr_sample < int_number_of_samples; curr_sample++ {
		source_buffer[curr_sample] = math.Sin(theta)
		theta += incr_theta
	}
	return source_buffer, nil
} // pop_audio_buffer
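Since the question asks about R specifically, here is a minimal R sketch of the same idea (the sampling rate, frequency and phase values are just placeholders; phi would come from your FFT bin):

fs  <- 256        # sampling rate in Hz (placeholder; use your EEG sampling rate)
f   <- 10         # dominant frequency extracted from the FFT, in Hz (placeholder)
phi <- pi / 4     # phase extracted from that FFT bin, in radians (placeholder)
dur <- 0.25       # length of the window to predict, in seconds

t <- seq(0, dur - 1 / fs, by = 1 / fs)    # time axis of the next window
predicted <- sin(2 * pi * f * t + phi)    # sine wave with the extracted frequency and phase
plot(t, predicted, type = "l", xlab = "time (s)", ylab = "amplitude")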
Suppose the measured RSS is -70 dBm, the predicted RSS is -68 dBm, and the transmission power of the antenna is -12 dBm.
Is the following equation right? If not, how do I calculate it?
Error = |10 * log10 (70/12) - 10 * log10 (68/12)| = 10 * log10 (70/68)
My measurement is the RSS in dBm; how do I convert it into dB?
This often confuses folks in my experience, and as such warrants a thorough explanation.
The "m" in "dBm" means relative to 1 milliwatt. It is typically used for absolute measurements, whereas "regular" dBs are typically used for power gains/losses/diffs.
Example:
       in / tx    out / rx
       1 W        0.5 W
(1 milliwatt = 0.001 W)
10log(1 / 0.001)   = 30 dBm
10log(0.5 / 0.001) = 27 dBm
loss = 3 dB
10log(1)   =  0 dB
10log(0.5) = -3 dB
loss is still 3 dB
(note there is an implied /1 W here, since the argument to log must be unit-less, e.g. 0.5 W / 1 W = 0.5 "flat" (aka no units))
So in the context of power differences, the m does not matter.
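Reproducing that example in R, for instance:

10 * log10(1   / 0.001)   # 30 dBm  (1 W relative to 1 mW)
10 * log10(0.5 / 0.001)   # ~27 dBm (0.5 W relative to 1 mW)  -> 3 dB loss
10 * log10(1)             #  0 dB
10 * log10(0.5)           # ~-3 dB                            -> still a 3 dB loss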
Things to note:
1/2 of power lost == -3dB gain (or +3dB loss)
power gains/losses in series are added/subtracted when in dBs -vs- multiplied/divided when in watts
0 watts == -infinity dB
0 dBm == 1 milliwatt
log here is base 10 (not 2 nor e)
relativeGainOrLoss = 10^(valueOfGainOrLossInDb/10)
valueOfPowerInMilliwatts = 10^(valueOfPowerInDbm/10)
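In R, for example, those two relations applied to the values from the question look like this (the helper names are just for illustration):

db_to_ratio <- function(db)  10^(db / 10)    # dB gain/loss -> unit-less power ratio
dbm_to_mw   <- function(dbm) 10^(dbm / 10)   # absolute dBm -> power in milliwatts

dbm_to_mw(-12)    # transmit power:  ~0.0631 mW
dbm_to_mw(-70)    # measured RSS:     1e-07 mW
dbm_to_mw(-68)    # predicted RSS:   ~1.585e-07 mW
db_to_ratio(-3)   # a 3 dB loss is a factor of ~0.5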
In your example, I'll presume by error you mean the error of the predicted loss relative to the measured loss:
predicted loss =
known transmission power - predicted RSS =
-12dBm - -68dBm =
56dB
measured loss =
known transmission power - measured RSS =
-12dBm - -70dBm =
58dB
error of predicted loss relative to measured loss =
|predicted loss - measured loss| =
|56dB - 58dB| =
2 dB (== 2 "dBm", but for diffs we should drop the m)
Or more directly: 70 - 68 = 2 (so easy with dBs!)
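The same calculation in a few lines of R:

tx_power      <- -12   # dBm
predicted_rss <- -68   # dBm
measured_rss  <- -70   # dBm

predicted_loss <- tx_power - predicted_rss   # 56 dB
measured_loss  <- tx_power - measured_rss    # 58 dB
abs(predicted_loss - measured_loss)          # 2 dB error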
This equates to a 63% error (or 58%, depending on how it is done):
10^(-2/10) = 0.63
10^(+2/10) = 1.584893192461113
In milliwatts:
10^(valueInDbm/10) =
10^(-70/10) = 0.0000001 milliwatts
10^(-68/10) = 0.000000158489319 milliwatts
10^(-12/10) = 0.063095734448019 milliwatts
As a sanity check:
(0.0000001/0.063095734448019 - 0.000000158489319/0.063095734448019) / (0.0000001/0.063095734448019) =
(0.000001584893192 - 0.000002511886428) / 0.000001584893192 =
-0.000000926993236 / 0.000001584893192 =
-0.584893190707832
(note that doing it in watts is much more laborious! (not to even mention float errors))
To answer your other question regarding:
Error = |10 * log10 (70/12) - 10 * log10 (68/12)| = 10 * log10 (70/68)
The first equation is nonsensical; as discussed above, with dBs we add/subtract rather than multiply/divide. The second equality is however true, based on one of the rules of logs:
log a - log b = log (a/b)
When the RSS is in dBm, the path loss is simply the difference between the transmission power and the received RSS; the unit of the path loss in this case is dB.
I am more of a novice in R and have been trying to build a formula to price American-type options (call or put) using a simple Monte Carlo simulation (no regressions etc.). While the code works well for European-type options, it appears to overvalue American-type options (in comparison to binomial/trinomial trees and other pricing models).
I would greatly appreciate your input!
The steps I take are outlined below.
1.) Simulate n stock price paths with m+1 steps (Geometric Brownian Motion):
n = 10000; m = 100; T = 5; S = 100; X = 100; r = 0.1; v = 0.1; d = 0
pat = matrix(NA,n,m+1)
pat[,1] = S
dt = T/m
for (i in 1:n)
{
  for (j in seq(2, m + 1))
  {
    pat[i, j] = pat[i, j - 1] + pat[i, j - 1] * ((r - d) * dt + v * sqrt(dt) * rnorm(1))
  }
}
2.) I calculate the payoff matrix for call options and put options and discount both via backwards induction:
# Put option
payP = matrix(NA,n,m+1)
payP[,m+1] = pmax(X-pat[,m+1],0)
for (j in seq(m, 1)){
  payP[, j] = pmax(X - pat[, j], payP[, j + 1] * exp(-r * dt))
}
# Call option
payC = matrix(NA,n,m+1)
payC[,m+1] = pmax(pat[,m+1]-X,0)
for (j in seq(m, 1)){
  payC[, j] = pmax(pat[, j] - X, payC[, j + 1] * exp(-r * dt))
}
3.) I calculate the Option Price as the average (mean) payoff at time 0:
mean(payC[,1])
mean(payP[,1])
In the example above, a call price of approximately 44.83 and an approximate put price of 3.49 is found. However, following a trinomial tree approach (n = 250 steps), the prices should be closer to 39.42 (call) and 1.75 (put).
Black Scholes Call Price (since no dividend yield) is 39.42.
As I said, any input is highly appreciated. Thank you very much in advance!
All the best!
I think your problem is rather a conceptual one than an actual coding problem.
What your code currently does is pick, with hindsight, the best point in time to exercise the American option over the whole simulated stock price path. It does not take into account that once the intrinsic value of an American option is higher than its calculated option price, you exercise it, which means you forego the chance to exercise it in the future, where the difference between the intrinsic value and the option price might be even larger (depending on the realized stock price movements).
Hence, you overestimate the option prices.
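A quick way to see this (a sketch that reuses r, T, m, X and the path matrix pat from your code): if you discount only the terminal payoff, i.e. value the options as European, the same simulated paths give a call price close to the Black-Scholes value of about 39.42, so it is the backward "take the best of intrinsic value vs. discounted future value over the whole known path" step that inflates the American prices.

# European-style valuation on the same paths: discount the terminal payoff only
call_euro <- exp(-r * T) * mean(pmax(pat[, m + 1] - X, 0))
put_euro  <- exp(-r * T) * mean(pmax(X - pat[, m + 1], 0))
call_euro   # roughly 39.4 for these parameters, close to the Black-Scholes call price
put_euro    # the European put (a lower bound for the American put)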
I have this problem where I need to compute a continuous exponential moving average of a value in a discrete data stream. It's impossible to predict when I will receive the next sample, but EMA formulas expect the amount of time between each sample of data to be equal.
I found this article with a demonstration of how to work around this:
double exponentialMovingAverageIrregular( double alpha,
double sample,
double prevSample,
double deltaTime,
double emaPrev
)
{
double a = deltaTime / ( 1 - alpha );
double u = exp( a * -1 ); // e^(-a)
double v = ( 1 - u ) / a;
double emaNext = ( emaPrev * u )
+ ( prevSample * ( v - u ) )
+ ( sample * ( 1 - v ) );
return emaNext;
}
I compute alpha by using the following formula: 2 / (period + 1) where period is the number of milliseconds I want my EMA to pay attention to.
When I use this, the EMA moves way too quickly. I could have a 30 minute window that takes only two or three samples for the EMA to equal the input.
Here are some things I could be doing wrong:
I use milliseconds for computing alpha because that's the resolution of the timestamps on my input
I use milliseconds for deltaTime because that's what everything else is using
Per the suggestion of commenters on the article, I use a = deltaTime / (1 - alpha) instead of a = deltaTime / alpha. Neither fixes the problem, but the latter causes more problems.
Here is a contrived example in which all the samples are exactly one minute apart. When computing alpha, I used 11 * 60 * 1000, or 11 minutes, leaving me with alpha = 0.0000030302984389417593. Notice how each ema has followed the sample almost exactly. This is not supposed to happen with an 11 minute window.
sample 10766.26, ema 10766.260001166664, time 1518991800000
sample 10750.75, ema 10750.750258499216, time 1518991860000
sample 10750.76, ema 10750.759999833333, time 1518991920000
sample 10750.75, ema 10750.750000166665, time 1518991980000
sample 10750.76, ema 10750.759999833333, time 1518992040000
sample 10750.76, ema 10750.759999999998, time 1518992100000
sample 10750.76, ema 10750.759999999998, time 1518992160000
sample 10750, ema 10750.000012666627, time 1518992220000
sample 10719.99, ema 10719.990500165151, time 1518992280000
sample 10720, ema 10719.999999833333, time 1518992340000
sample 10719.99, ema 10719.990000166667, time 1518992400000
sample 10719.99, ema 10719.99, time 1518992460000
sample 10709.27, ema 10709.270178666126, time 1518992520000
sample 10690.26, ema 10690.260316832373, time 1518992580000
sample 10690.27, ema 10690.269999833334, time 1518992640000
sample 10690.27, ema 10690.27, time 1518992700000
sample 10695, ema 10694.999921166906, time 1518992760000
sample 10699.98, ema 10699.979917000252, time 1518992820000
sample 10702.05, ema 10702.049965500104, time 1518992880000
sample 10744.99, ema 10744.989284335501, time 1518992940000
sample 10744.12, ema 10744.120014499955, time 1518993000000
The way the function was derived was not explained, and I didn't pay attention in math class. Any pointers would be greatly appreciated.
You Get Exactly What You've Defined:
Given the way you defined alpha, the rest is a causal chain:
|>>> a = 60000 / 0.999997
|>>> u = exp( -a )
|>>> v = ( 1 - u ) / a
|>>> u, ( v - u ), ( 1 - v )
( 0.0, 1.6666616666666667e-05, 0.99998333338333334 )
thus the return expression

return ( ( emaPrev    * u )           // ->  0.0      * emaPrev
       + ( prevSample * ( v - u ) )   // ->  0.000016 * prevSample
       + ( sample     * ( 1 - v ) )   // ->  0.999983 * sample
       );                             //  ~= sample

returns nothing much different from the sample (all of the smoothing effect has been effectively short-cut out of the would-be smoothing filter).
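You can reproduce that causal chain in a few lines of R, using the 11-minute period and the one-minute deltaTime from your example:

alpha     <- 2 / (11 * 60 * 1000 + 1)   # ~3.03e-06, as in the question
deltaTime <- 60 * 1000                  # samples one minute apart, in milliseconds

a <- deltaTime / (1 - alpha)            # ~60000, a huge exponent
u <- exp(-a)                            # underflows to exactly 0
v <- (1 - u) / a                        # ~1.6667e-05

c(emaPrev = u, prevSample = v - u, sample = 1 - v)
# ~ 0.0000000  0.0000167  0.9999833   -> the output is essentially just the latest sample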
Different fields have different motivations for signal filtering/smoothing. Strategies that work fine for mass-bound models of noisy sensor readouts need not meet your expectations in other domains, such as quant modelling in trading, where the processes carry no inertia and can exhibit abrupt discontinuities.
Without question, it is worth spending some time on both the math and the quant side of the subject; both will help you a lot in future work.