I am in the Go environment.
I am looking for a cross-platform library to generate my two formulas in Python, F#, MATLAB, ...
I need to generate a mathematical formula based on two references
The manufacturer indicates that the value of the sensor is encoded as a byte and is between 0 and 255.
The minimum = 0 and has the representation value -60 dB.
The maximum = 255 and has the representation value +20 dB.
I must now generate two formulas:
RX: a mathematical formula allowing me to interpret the value coming from the sensor as a representation value in dB.
TX: the inverse of RX, i.e. a mathematical formula allowing me to convert a representation value in dB back into the sensor value.
If you have an idea, it is welcome.
Youssef
I am assuming you need a linear relationship, so you can use the following code:
INPUT_MIN = 0
INPUT_MAX = 255
OUTPUT_MIN = -60
OUTPUT_MAX = 20
SLOPE = (OUTPUT_MAX - OUTPUT_MIN) / (INPUT_MAX - INPUT_MIN)

def rx(sensor_input):
    return SLOPE * (sensor_input - INPUT_MIN) + OUTPUT_MIN

def tx(dbs):
    return (dbs - OUTPUT_MIN) / SLOPE + INPUT_MIN
What you have to do is to find the equation of the line given those two points. There are many tutorials online about it like this one.
Once you have found the equation in which y is the variable that represents your output and x represents your input, you need to find x in terms of y. Finally, you just implement both functions.
Note that I haven't limited the input, so in case you want restricted input values, I encourage you to add some conditionals in the functions.
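If you do want to restrict the range, one possible (hypothetical) way is to clamp the inputs to the limits defined above before converting:

def rx_clamped(sensor_input):
    # keep the raw value inside the byte range before converting to dB
    return rx(max(INPUT_MIN, min(INPUT_MAX, sensor_input)))

def tx_clamped(dbs):
    # keep the dB value inside the representable range before converting back
    return tx(max(OUTPUT_MIN, min(OUTPUT_MAX, dbs)))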
In Python, using NumPy:

import numpy as np

def RX(input_val):
    # use linspace to create a 256-point lookup table (one entry per byte value 0..255)
    lookup_array = np.linspace(-60, 20, 256)
    return lookup_array[int(input_val)]

def TX(decibel_value):
    # use linspace to create the same lookup table
    lookup_array = np.linspace(-60, 20, 256)
    # find the index closest to the decibel value by searching for the smallest difference
    index = (np.abs(lookup_array - decibel_value)).argmin()
    return index
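A quick sanity check of the two functions (the values follow from the 256-point table above):

RX(0)     # -60.0
RX(255)   # 20.0
TX(-60)   # 0
TX(20)    # 255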
Suppose I have an arbitrary probability distribution function (PDF) defined as a function f, for example:
using Random, Distributions
# PDF with parameter θ ∈ (0,1), uniform over
# segments [-1,0) and [0,1], zero elsewhere
f = θ -> x ->
    (-1 <= x < 0) ? θ^2 :
    (0 <= x <= 1) ? 1 - θ^2 :
    0
How can I sample values from a random variable with this PDF in Julia? (or alternatively, how can I at least simulate sampling from such a random variable?)
i.e. I want the equivalent of rand(Normal(), 10) for 10 values from a (standard) normal distribution, but I want to use the function f to define the distribution used (something like rand(f(0.4), 10) - but this doesn't work).
(There is already an answer for discrete distributions at How can I write an arbitrary discrete distribution in Julia?; however I'm wanting to use a continuous distribution. There are some details of creating a sampler at https://juliastats.org/Distributions.jl/v0.14/extends.html which I think might be useful, but I don't understand how to apply them. Also, in R I've used the inverse CDF technique as described at https://blogs.sas.com/content/iml/2013/07/22/the-inverse-cdf-method.html for simulating such random variables, but am unsure how it might best be implemented in Julia.)
First problem is that what you've provided is not a complete specification of a probability distribution since it doesn't say anything about the distribution within the interval [-1, 0) or within the interval [0, 1]. So for the purposes of this answer I'm going to assume your probability distribution function is uniform on each of these intervals. Given this, then I would argue the most Julian way to implement your own distribution would be to create a new subtype, in this case, of ContinuousUnivariateDistribution. Example code follows:
using Distributions

struct MyDistribution <: ContinuousUnivariateDistribution
    theta::Float64

    function MyDistribution(theta::Float64)
        !(0 <= theta <= 1) && error("Invalid theta: $(theta)")
        new(theta)
    end
end

function Distributions.rand(d::MyDistribution)::Float64
    # the segment [-1, 0) carries total probability theta^2, the segment [0, 1] the rest
    if rand() < d.theta^2
        x = rand() - 1
    else
        x = rand()
    end
    return x
end

function Distributions.quantile(d::MyDistribution, p::Real)::Float64
    # inverse CDF: the CDF is piecewise linear with a kink at x = 0
    !(0 <= p <= 1) && error("Invalid probability input: $(p)")
    if p < d.theta^2
        x = -1.0 + (p / d.theta^2)
    else
        x = (p - d.theta^2) / (1 - d.theta^2)
    end
    return x
end
In the above code I have implemented a rand and quantile method for the new distribution which is the minimum to be able to make function calls like rand(MyDistribution(0.4), 20) to sample 20 random numbers from your new distribution. See here for a list of other methods you may want to add to your new distribution type (depending on your use-case perhaps you won't bother).
Note, if efficiency is an issue, you may look into some of the methods that will allow you to minimise the number of d.theta^2 operations, e.g. Distributions.sampler. Alternatively, you could just store theta^2 internally in MyDistribution but always display the underlying theta. Up to you really.
Finally, you don't really need type annotations on function outputs. I've just included them for clarity.
I am trying to compound 2 CRC polynomials. I have a message for which I produce a CRC using one polynomial. I then CRC the result of the first CRC in order to obtain a second result. Is there any possible way to do this in one go?
Example:
Given the message 0xC and the polynomial 0x17 I compute the CRC which is 0xA. I then take this result and another polynomial 0x13 and compute the CRC again which produces the result 0xD. I am trying to derive a new polynomial which given the message 0xC will produce the result 0xD.
I have only tried to work this on paper so I do not have any code but some code should look like this:
def CRC(message, poly):
    # CRC implementation
    ...

a = CRC(0xC, 0x17)
# The value of a right now would be 0xA
b = CRC(a, 0x13)
# The value of b is 0xD right now
I am trying to obtain the same result using my initial message and one single function call
b = CRC(0xC, ???)
#I would want the value of b after this call to be 0xD
It seems like a dumb request but I find it helpful.
I have tried applying simple math, specifically the quotient-remainder theorem, but I find multiplying in finite fields to be overly complex.
I misunderstood the question in my original answer. I'm assuming this is a single nibble message, since the second CRC only has a single nibble input, the CRC from 0x17. This could be implemented using a table with 16 entries. Using n to represent the nibble, using carryless and borrowless binary math, and hex numbers:
crc = (((n*0x10) % 0x17) * 0x10) % 0x13 = (((n*0x07) % 0x17) * 0x03) % 0x13
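For illustration, here is a small Python sketch (my own helper names, not from the question) that builds the 16-entry table with carry-less multiplication and reduction and reproduces the 0xC -> 0xD example:

def clmul(a, b):
    # carry-less (GF(2)) multiplication of two non-negative integers
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def clmod(a, m):
    # carry-less (GF(2)) remainder of a divided by m
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

def crc4(n, poly):
    # 4-bit CRC of a single nibble: (n * 0x10) mod poly, all carry-less
    return clmod(clmul(n, 0x10), poly)

# 16-entry lookup table for the compound CRC: first 0x17, then 0x13
table = [crc4(crc4(n, 0x17), 0x13) for n in range(16)]
assert table[0xC] == 0xD   # matches the example in the question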
I'm wondering if instead, the goal here is to work with a message of more than one nibble. For example, say the message is {x y z}, then the encoded message would be
{x y z crc17{x y z} crc13{x y z}}
or
{x y z crc17{x y z} crc13{x y z crc17{x y z}}}
I want to calculate sqrt and arctangent in Java Card. I don't have any math library to do this for me, and I don't have a float type to calculate it manually. I have some questions in my mind:
1- Can I use a float number in byte-array form and work on it? How?
2- How are these operations usually calculated in Java Card?
I found some links but they couldn't help me:
http://stackoverflow.com/questions/15363244/math-library-for-javacard
http://javacardos.com/javacardforum/viewtopic.php?t=437
I should mention that I have to calculate these operations on the card. Thank you very much if anyone can help me.
The integer square root can be computed by the Babylonian method, if integer division is available.
Just iterate
R' = (R + S / R) / 2
with a suitable initial R.
Such a value can be found with
R = 1
while S > 2:
    R *= 2
    S //= 4
(preferably implemented with shifts, if available).
You can stop the iterations when the value of R stabilizes (you can also determine a priori a constant number of iterations that yields sufficient accuracy).
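As an illustration, here is a compact Python sketch of the whole procedure using only integer operations (the name isqrt and the bit-length based initial guess are my own choices; on a card this maps to short/int arithmetic and shifts):

def isqrt(s):
    # integer square root by the Babylonian method, using only integer division
    if s < 2:
        return s
    x = 1 << ((s.bit_length() + 1) // 2)   # initial guess, always >= sqrt(s)
    y = (x + s // x) // 2
    while y < x:                           # stop when the value stabilizes
        x = y
        y = (x + s // x) // 2
    return x

isqrt(1000000)   # 1000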
The idea for CORDIC in the computation of atan is to have a table of values
angle[i] = atan(pow(2,-i));
It does not matter for the angle table whether the angles are precomputed in radians or degrees, although the final small-angle step in the routine below (return phi + x) assumes radians. Then use the tangent addition theorem
tan(a+b) = (tan(a) + tan(b)) / (1 - tan(a)*tan(b))
to successively reduce the given tangent value
atan(x) {
    if (x < 0) return -atan(-x);
    if (x > 1) return 2*angle[0] - atan(1/x);   // angle[0] = atan(1), so 2*angle[0] = pi/2
    pow2 = 1.0;
    phi = 0;
    for (i = 0; i < 10; i++) {
        if (x > pow2) {                         // pow2 = tan(angle[i]) = 2^-i
            phi += angle[i];
            x = (x - pow2) / (1 + pow2*x);      // tangent subtraction: tan of the remaining angle
        }
        pow2 /= 2;
    }
    return phi + x;                             // for small x, atan(x) is approximately x
}
Now one needs to translate these operations and constants into using some kind of fixed point format.
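To make that last point concrete, here is a hedged Python sketch of the same routine in Q14 fixed point (the names FRAC_BITS, ONE, ANGLE and atan_fixed are my own; on the card the angle table would be stored as precomputed constants, and intermediate products must fit the available word size):

import math

FRAC_BITS = 14                 # Q14 fixed point: value = integer / 2**14
ONE = 1 << FRAC_BITS

# ANGLE[i] = atan(2**-i) in radians, converted to Q14 (precomputed once, off-card)
ANGLE = [int(round(math.atan(2.0 ** -i) * ONE)) for i in range(10)]

def atan_fixed(x):
    # x is a Q14 tangent value in [0, ONE]; the result is a Q14 angle in radians
    phi = 0
    pow2 = ONE                 # Q14 representation of 2**-i
    for i in range(10):
        if x > pow2:
            phi += ANGLE[i]
            # x = (x - pow2) / (1 + pow2*x), all in Q14
            num = x - pow2
            den = ONE + ((pow2 * x) >> FRAC_BITS)
            x = (num << FRAC_BITS) // den
        pow2 >>= 1
    return phi + x             # for small x, atan(x) is approximately x

atan_fixed(ONE // 2) / ONE     # about 0.463, i.e. atan(0.5)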
I have the following data: a vector B and a vector R. The vector B is the "independent" variable. For this pair, I have two data sets: One is an experimental measurement of Bex, Rex and the other is a simulation produced by me Bsim, Rsim. The simulation does not have any "scale" for the x-axis (the B vector). Therefore when I am trying to fit my curve to the experiment, I have to find out a scaling parameter B0 "by eye", and with this number B0 I multiply the entire Bsim vector and simply plot(Bsim, Rsim, Bex, Rex).
I wanted to use the package LsqFit to make the procedure automatic and more accurate. However I am having trouble in understanding how I could use it to find the scaling on the independent variable.
My first thought was to just "invert" the roles of B and R. However, there are two issues that I think make matters worse: 1) the R curve/data is not monotonic, 2) the experimental data are much more "dense" (they have more data points: my simulation has 120 points in total, the experiments have some thousands).
Below I give an example of what I am trying to accomplish (of course, the answer need not use LsqFit). I also attach two figures that demonstrate everything very clearly.
#= stuff happened before this point =#
Bsim, Rsim = load(simulation)
Bex, Rex = load(experiment)
#this is what I want to do:
some_model(x, p) = ???
fit = curve_fit(some_model, Bex, Rex, [3.5])
B0 = fit.param[1]
#this is what I currently do by trial and error:
B0 = 3.85
plot(B0*Bsim, Rsim, Bex, Rex)
P.S.: The R curves (dependent variables) are both normalized by their maximum value because their scaling is not important.
A simple approach, if you can always expect both your experiment and simulation to feature one high peak, and you're sure that there's only a scaling factor rather than also an offset, is to simply multiply your Bsim vector by mode_rex / mode_rsim (e.g. in your example, mode_rsim = 1 and mode_rex = 4, so multiply Bsim by 4). But I'm sure you've thought of this already.
For a more general approach, one way is as follows:
add and load Interpolations package
Create a grid to interpolate over, e.g. Grid = 0:0.01:Bex[end]
interpolate Rex over that grid, e.g.
RexInterp = interpolate( (Bex,), Rex, Gridded(Linear()));
RexGridVec = RexInterp[Grid];
interpolate Rsim over the same grid, but introduce your multiplier on the Bsim "knots", e.g.
Multiplier = 0.1;
RsimInterp = interpolate( (Multiplier * Bsim,), Rsim, Gridded(Linear()));
RsimGridVec = RsimInterp[Grid]
Now you can calculate a square error value between RsimGridVec and RexGridVec, e.g.
SqErr = sum((RsimGridVec - RexGridVec).^2)
If you follow this technique, then if you create a loop for a multiplier range (say 0:0.01:10), and get the square error associated with each multiplier, you can find out the multiplier for which the square error is the minimum.
In theory, if you wanted to find the optimum for a particular offset too, you could make it the outer loop over a range of offsets. Mind you this is a brute-force approach, but it should be reasonably efficient judging by the vectors in your graph.
I am testing a temperature sensor for a project. I found that there is a discrepancy between the expected and measured values. As the difference is non-linear over the temperature range, I can't simply add an offset. Is there a way I can apply a kind of offset to the acquired data?
UPDATE
I have a commercial heater element which heats up to a set temperature (I named this temperature "expected"). On the other side I have a temperature sensor (my project) which measures the temperature of the heater (here I named it "measured").
I noticed a difference between the measured and expected values which I would like to compensate for, so that the measured value will be close to the expected value.
Example
If my sensor measures 73.3, it should be processed by some means (mathematically or otherwise) so that it shows a value close to 70.25.
Hope this clears things up a little.
Measured Expected
30.5 30.15
41.4 40.29
52.2 50.31
62.8 60.79
73.3 70.28
83 79.7
94 90.39
104.3 99.97
114.8 109.81
Thank you for your time.
You are interested in describing the deviation of one variable from the other. What you are looking for is the function
g(x) = f(x) - x
which returns an approximation, a prediction, of what number to add to x to get the y data, based on the real x input. You first need the prediction of y based on observed x values, the f(x). This is what you can get from doing a regression:
x = MeasuredExpected   (what you have estimated, and I assume you will know this value)
y = MeasuredReal       (what has actually been observed instead of x)
f(x) = MeasuredReal(estimated) = alfa*x + beta + e
In the simplest case of just one variable you don't even need special tools for this. The coefficients of the equation are:
alfa = covariance(MeasuredExpected, MeasuredReal) / variance(MeasuredExpected)
beta = average(MeasuredReal) - alfa * average(MeasuredExpected)
so for each expected value x you can now state that the most probable real measured value is:
f(x) = MeasuredReal(expected) = alfa*x + beta
(under the assumption that the error e is normally distributed, iid).
So you have to add
g(x) = f(x) - x = (alfa - 1)*x + beta
to account for the difference that you have observed between your usual Expected and Measured values.
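As an illustration, here is a short Python sketch of this calculation applied to the table from the question (x = Expected, y = Measured, following the convention above; statistics is from the standard library):

from statistics import mean

expected = [30.15, 40.29, 50.31, 60.79, 70.28, 79.70, 90.39, 99.97, 109.81]
measured = [30.5, 41.4, 52.2, 62.8, 73.3, 83.0, 94.0, 104.3, 114.8]

mx, my = mean(expected), mean(measured)
alfa = sum((x - mx) * (y - my) for x, y in zip(expected, measured)) \
       / sum((x - mx) ** 2 for x in expected)
beta = my - alfa * mx

def g(x):
    # predicted difference between the real (measured) value and the expected value x
    return (alfa - 1.0) * x + beta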
Maybe you could use a data sample in order to do a regression analysis on the variation and use the regression function as an offset function.
http://en.wikipedia.org/wiki/Regression_analysis
You can create a calibration lookup table (LUT).
The error in the sensor reading is not linear over the entire range of the sensor, but you can divide the range up into a number of sub-ranges for which the error within the sub-range is nearly linear. Then you calibrate the sensor by taking a reading in each sub-range and calculating the offset error for each sub-range. Store the offset for each sub-range in an array to create a calibration lookup table.
Once the calibration table is known, you can correct a measurement by performing a table lookup for the proper offset. Use the actual measured value to determine the index into the array from which to get the proper offset.
The sub-ranges don't need to be the same size, although equal sizes make it easy to calculate the proper table index for any measurement. (If the sub-ranges are not the same size, then you could use a multidimensional array (matrix) and store not only the offset but also the beginning or end point of each sub-range. Then you would scan through the begin-points to determine the proper table index for any measurement.)
You can make the correction more accurate by dividing into smaller sub-ranges and creating a larger calibration lookup table. Or you may be able to interpolate between two table entries to get a more accurate offset.
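As an illustration, here is a minimal Python sketch of such a table using nine equally sized sub-ranges. The offsets are simply the Expected - Measured differences from the question's data; the range bounds, bin count and names are my own assumptions:

RANGE_MIN = 30.0     # lowest measured value covered by the table
RANGE_MAX = 115.0    # highest measured value covered by the table
NUM_BINS = 9
BIN_WIDTH = (RANGE_MAX - RANGE_MIN) / NUM_BINS

# offsets[i] = expected - measured, determined during calibration for sub-range i
offsets = [-0.35, -1.11, -1.89, -2.01, -3.02, -3.30, -3.61, -4.33, -4.99]

def correct(measured):
    # correct a raw reading by adding the offset of its sub-range
    index = int((measured - RANGE_MIN) / BIN_WIDTH)
    index = max(0, min(NUM_BINS - 1, index))   # clamp to the table bounds
    return measured + offsets[index]

correct(73.3)   # 70.28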