SSRS How to test for null - ssrs-2017

I have a calculated field in which I need to find the difference of two numerator/denominator ratios, and it is generating #Error or #Value. I was testing for NULL (or a value > 0), but it is not working (see the code below). If the numerator and denominator are null, the difference should also be null.
=IIF(Fields!Abandoned.Value > 0, Fields!Abandoned.Value / Fields!Offered.Value,"") - iif(Fields!Abandoned.Value > 0, (Fields!Abandoned.Value + fields!Abandoned_ring.Value) / (Fields!Offered.Value+Fields!Abandoned_ring.Value),"")
=iif(Fields!SVLNumerator.Value >0,Fields!SVLNumerator.Value /FIelds!Offered.Value,"") - iif(Fields!SVLNumerator.Value >0,Fields!SVLNumerator.Value / (Fields!Offered.Value + Fields!Abandoned_ring.Value),"")

Related

Checking for a counterexample to a conjecture on even deficient-perfect numbers using Pari-GP

I am trying to check for counterexamples to the conjecture stated in this MSE question, using the Pari-GP interpreter of Sage Cell Server.
I reproduce the statement of the conjecture here: If N > 8 is an even deficient-perfect number and Q = N/(2N - sigma(N)), then Q is prime.
Here, sigma(N) is the classical sum of divisors of N.
I am using the following code:
for(x=9, 1000, if(((Mod(x,(2*x - sigma(x))) == 0)) && ((fromdigits(Vecrev(digits(x / (2*x - sigma(x)))))) == (x / (2*x - sigma(x)))) && !(isprime((x / (2*x - sigma(x))))), print(x,factor(x))))
However, the Pari-GP interpreter of Sage Cell Server would not accept it, and instead gives the following error message:
*** at top-level: for(x=9,1000,if(((Mod(x,(2*x-sigma(x)))==0))&&
*** ^----------------------------
*** Mod: impossible inverse in %: 0.
What am I doing wrong?
The problem with your loop is that 2*x - sigma(x) is not always positive: it is zero whenever x is perfect (x = 28 is the first such value in your range), and Mod with a zero modulus raises exactly the "impossible inverse" error you are seeing. Here's a better implementation of your algorithm:
{
  forfactored(X = 9, 10^7,
    my (s = sigma(X), t = 2*X[1] - s);
    if (t <= 0, next);
    my ([q, r] = divrem(X[1], t));
    if (r == 0 && fromdigits(Vecrev(digits(q))) == q && !ispseudoprime(q),
        print(X)))
}
It's a bit more readable, but most importantly it avoids factoring the same x over and over again: each time you write sigma(x), the interpreter needs to factor x (it is not clever enough to compute common subexpressions only once). In fact, this version doesn't perform a single factorization: forfactored runs a sieve instead, and the loop variable X contains [x, factor(x)]. This is about 3 times faster than the original implementation in this range.
I let it run up to 10^9 (about 10 minutes); there was no further counterexample.
I got it to work myself.
Here is the code that I used:
for(x=9, 10000000, if((2*x > sigma(x)) && ((Mod(x,(2*x - sigma(x))) == 0)) && ((fromdigits(Vecrev(digits(x / (2*x - sigma(x)))))) == (x / (2*x - sigma(x)))) && !(isprime((x / (2*x - sigma(x))))), print(x,factor(x))))
The search returns the odd counterexample N = 9018009, which is expected.
It did not return any even counterexamples in the specified range.

sympy: rootfinding of parameterized function in range

I'm having trouble finding the root of a parameterized quintic polynomial. Background: I want to find the parameter s_f such that for any given parameter d_f, the curvature of the polynomial is smaller than a threshold (yeah... sounds complex, but the math is rather straightforward).
# define quintic polynomial (jerk-minimized trajectory)
# see http://courses.shadmehrlab.org/Shortcourse/minimumjerk.pdf
from sympy import symbols, diff, simplify, solveset, Interval, Rational, S
from IPython.display import display  # display() assumes a Jupyter/IPython session

s = symbols('s', real=True, positive=True)
s_f = symbols('s_f', real=True, positive=True, nonzero=True)
d_0 = 0
d_f = symbols('d_f', real=True, positive=True, nonzero=True)
d_of_s = d_0 + (d_f - d_0) * ( 10*(s/s_f)**3 - 15*(s/s_f)**4 + 6*(s/s_f)**5 )
display(d_of_s)
# define curvature of d_of_s
# see https://en.wikipedia.org/wiki/Curvature#In_terms_of_a_general_parametrization
y = d_of_s
dy = diff(d_of_s, s)
ddy = diff(dy, s)
x = s
dx = diff(x, s) # evaluates to 1
ddx = diff(dx, s) # evaluates to 0
k = (dx*ddy - dy*ddx) / ((dx*dx + dy*dy)**Rational(3,2))
# the goal is to find s_f for any given d_f, such that k(s) < some_threshold
# strategy: find the roots of the derivative of k in the range s ∈ [0, s_f]
dk = diff(k, s)
dk = simplify(dk)
display(dk)
# now solve
res = solveset(dk, s, Interval(0, s_f).intersection(S.Reals))
display(res)
The function dk(s, d_f, s_f) has two roots in the interval s ∈ [0, s_f]; however, solveset returns this:
ConditionSet(s, Eq(5400*d_f**2*s**4*(-s**2 + 2*s*s_f - s_f**2)*(2*s**2 - 3*s*s_f + s_f**2)**2 + (900*d_f**2*s**4*(s**2 - 2*s*s_f + s_f**2)**2 + s_f**10)*(6*s**2 - 6*s*s_f + s_f**2), 0), Interval(0, s_f))
.. which is, as far as I know, equivalent to: "I can't solve this, there is an infinite number of results." Well, that is true for the function in general: limit(dk, s, -oo) and limit(dk, s, +oo) are both zero. But since I stated the domain interval, why am I not getting the two roots I'm expecting? I'd also expect a more granular result:
- a set containing the roots for s < 0;
- a set containing the roots for s > s_f;
- the two roots for s ∈ [0, s_f].
I started with solve() and a lot of different assumptions on my symbols. I get different results for different assumptions, but no combination seems to yield what I need. When I state no assumptions, I get back a set with a huge condition and 8 roots that don't seem real or correct. In general, the constraints are:
- all symbols are real
- s_f > 0
- d_f > 0
- s ∈ [0, s_f] (domain range .. the polynomial is only evaluated in this interval)
I guess the problem is that I'm not setting up my solveset call correctly:
- how do I specify that s_f and d_f are real? AFAIK the symbol assumptions are ignored when using solveset
- how do I specify intervals and assumptions for the symbols of a multivariate function other than the domain variable?
This is what d_of_s looks like for s_f = 1, d_f = 1.
And this is what dk(s) looks like (I plotted outside the domain range to visualize the problem).
Substitute in the known values of d_f and s_f and use real_roots to find the real roots of the numerator of dk. Keep the ones that have a value in the range of interest:
>>> s_fi = 3
>>> [i for i in real_roots(dk.subs(d_f,2).subs(s_f, s_fi).as_numer_denom()[0])
... if 0 <= i.n(2) <= s_fi]
[CRootOf(800*s**10 - 12000*s**9 + 74000*s**8 - 240000*s**7 + 432000*s**6 - 410400*s**5
+ 162000*s**4 - 4374*s**2 + 13122*s - 6561, 1), CRootOf(800*s**10 - 12000*s**9 +
74000*s**8 - 240000*s**7 + 432000*s**6 - 410400*s**5 + 162000*s**4 - 4374*s**2 +
13122*s - 6561, 2)]
I kept the CRootOf instances because they can be computed to arbitrary precision, e.g. to 3 digits:
>>> [i.n(3) for i in _]
[0.433, 2.57]

Numerically stable evaluation of sqrt(x+a) - sqrt(x)

Is there an elegant way of evaluating the following expression in a numerically stable way for the full parameter range x, a >= 0?
f(x,a) = sqrt(x+a) - sqrt(x)
Also, is there any programming language or library that provides this kind of function? If yes, under what name? I have no specific problem involving the above expression right now, but I have encountered it many times in the past and always thought that this problem must have been solved before!
Yes, there is! Provided that at least one of x and a is positive, you can use:
f(x, a) = a / (sqrt(x + a) + sqrt(x))
which is perfectly numerically stable, but hardly worth a library function in its own right. Of course, when x = a = 0, the result should be 0.
Explanation: sqrt(x + a) - sqrt(x) is equal to (sqrt(x + a) - sqrt(x)) * (sqrt(x + a) + sqrt(x)) / (sqrt(x + a) + sqrt(x)). Now multiply the first two terms to get sqrt(x+a)^2 - sqrt(x)^2, which simplifies to a.
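Written out in full, that rationalization is just

\[
\sqrt{x+a}-\sqrt{x}
  = \frac{\bigl(\sqrt{x+a}-\sqrt{x}\bigr)\bigl(\sqrt{x+a}+\sqrt{x}\bigr)}{\sqrt{x+a}+\sqrt{x}}
  = \frac{(x+a)-x}{\sqrt{x+a}+\sqrt{x}}
  = \frac{a}{\sqrt{x+a}+\sqrt{x}},
\]

which is where the a in the numerator comes from.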
Here's an example demonstrating the stability: the troublesome case for the original expression is where x + a and x are very close in value (or equivalently when a is much smaller in magnitude than x). For example, if x = 1 and a is small, we know from a Taylor expansion around 1 that sqrt(1 + a) should be 1 + a/2 - a^2/8 + O(a^3), so sqrt(1 + a) - sqrt(1) should be close to a/2 - a^2/8. Let's try that for a particular choice of small a. Here's the original function (written in Python, in this case, but you can treat it as pseudocode):
from math import sqrt

def f(x, a):
    return sqrt(x + a) - sqrt(x)
and here's the stable version:
def g(x, a):
    if a == 0:
        return 0.0
    else:
        return a / (sqrt(x + a) + sqrt(x))
Now let's see what we get with x = 1 and a = 2e-10:
>>> a = 2e-10
>>> f(1, a)
1.000000082740371e-10
>>> g(1, a)
9.999999999500001e-11
The value we should have got (up to machine accuracy) is a/2 - a^2/8; for this particular a, the cubic and higher-order terms are insignificant in the context of IEEE 754 double-precision floats, which provide only around 16 significant decimal digits. Let's compute that value for comparison:
>>> a/2 - a**2/8
9.999999999500001e-11

Normalizing constant for beta distribution with discrete prior: R code query

I am currently going through Bayesian Thinking with R by Jim Albert. I have a query about his code for his example with a beta likelihood and discrete prior. His code for calculating the posterior is:
pdisc <- function(p, prior, data) {
  s = data[1]  # successes
  f = data[2]  # failures
  #############
  p1 = p + 0.5 * (p == 0) - 0.5 * (p == 1)
  like = s * log(p1) + f * log(1 - p1)
  like = like * (p > 0) * (p < 1) - 999 * ((p == 0) * (s > 0) + (p == 1) * (f > 0))
  like = exp(like - max(like))
  #############
  product = like * prior
  post = product/sum(product)
  return(post)
}
My query is about the highlighted bit of code for calculating the likelihood and what the logic behind it is (not explained in the book). I'm aware of the pdf for the beta distribution, and that the log likelihood will be proportional to s * log(p1) + f * log(1 - p1) but it is not clear what the following 2 lines are doing - I imagine it's something to do with the normalizing constant, but again there isn't an explanation for this in the book.
The line
like = like * (p > 0) * (p < 1) - 999 * ((p == 0) * (s > 0) + (p == 1) * (f > 0))
takes care of the edge cases when you have prior probability at 0 or 1. Basically, if p=0 and any successes are observed then like=-999 and if p=1 and any failures are observed then like=-999. I would have preferred to use -Inf rather than -999 as that is what the log likelihood is in those cases.
The second line
like = exp(like - max(like))
is a numerically stable way to exponentiate when only the relative differences between the logged values matter. If the values in like were very negative, e.g. because you had lots of successes and failures, then exp(like) could be represented as a vector of zeros in the computer. Only the relative differences matter here because you renormalize the product to sum to 1 when constructing the posterior probabilities.
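As a small standalone illustration of that point (a sketch in R, not code from the book):

like <- c(-800, -805, -810)        # very negative log-likelihood values
exp(like)                          # all three underflow to 0 in double precision
exp(like - max(like))              # exp(0), exp(-5), exp(-10): relative sizes preserved
prior <- rep(1/3, 3)
product <- exp(like - max(like)) * prior
product / sum(product)             # the constant shift cancels after renormalization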

translating matlab script to R

I've just been working through converting some MATLAB scripts to work in R; however, having never used MATLAB in my life, and not exactly being an expert in R, I'm having some trouble.
Edit: It's a script I was given, designed to correct temperature measurements for lag generated by insulation mass effects. My understanding is that it looks at the rate of change of the temperature and attempts to adjust for errors generated by the response time of the sensor. Unfortunately there is no literature available to me to give an indication of the numbers I should expect from the function, and the only way to find out will be to test it experimentally at a later date.
The original script:
function [Tc, dT] = CTD_TempTimelagCorrection(T0,Tau,t)
N1 = Tau/t;
Tc = T0;
N = 3;
for j=ceil(N/2):numel(T0)-ceil(N/2)
    A = nan(N,1);
    % Compute weights
    for k=1:N
        A(k) = (1/N) + N1 * ((12*k - (6*(N+1))) / (N*(N^2 - 1)));
    end
    A = A./sum(A);
    % Verify unity
    if sum(A) ~= 1
        disp('Error: Sum of weights is not unity');
    end
    Comp = nan(N,1);
    % Compute components
    for k=1:N
        Comp(k) = A(k)*T0(j - (ceil(N/2)) + k);
    end
    Tc(j) = sum(Comp);
    dT = Tc - T0;
end
and here's where I've managed to get to:
CTD_TempTimelagCorrection <- function(temp,Tau,t){
  ## Define which equation to use based on duration of lag and frequency
  ## With ESM2 profiler sampling # 2hz: N1>tau/t = TRUE
  N1 = Tau/t
  Tc = temp
  N = 3
  for(i in ceiling(N/2):length(temp)-ceiling(N/2)){
    A = matrix(nrow=N,ncol=1)
    # Compute weights
    for(k in 1:N){
      A[k] = (1/N) + N1 * ((12*k - (6*(N+1))) / (N*(N^2 - 1)))
    }
    A = A/sum(A)
    # Verify unity
    if(sum(A) != 1){
      print("Error: Sum of weights is not unity")
    }
    Comp = matrix(nrow=N,ncol=1)
    # Compute components
    for(k in 1:N){
      Comp[k] = A[k]*temp[i - (ceiling(N/2)) + k]
    }
    Tc[i] = sum(Comp)
    dT = Tc - temp
  }
  return(dT)
}
I think the problem is the Comp[k] line; could someone point out what I've done wrong? I'm not sure I can select the elements of the array in such a way.
By the way, Tau = 1, t = 0.5, and temp (or T0) will be a vector.
Thanks
Edit: apparently my description is too brief in explaining my code samples; I'm not really sure what more I could write that would be relevant and not just waste people's time. Is this enough, Mr Filter?
The error is as follows:
Error in Comp[k] = A[k] * temp[i - (ceiling(N/2)) + k] :
replacement has length zero
In addition: Warning message:
In Comp[k] = A[k] * temp[i - (ceiling(N/2)) + k] :
number of items to replace is not a multiple of replacement length
If you write print(i - (ceiling(N/2)) + k) before that line, you will see that you are using incorrect indices in temp[i - (ceiling(N/2)) + k], which means that nothing is returned to be inserted into Comp[k]. I assume this problem is due to MATLAB allowing 0 as an index while R does not, and to the way negative indices are handled (they don't work the same way in the two languages). You need to implement a fix to return the correct indices.
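For what it's worth, one concrete source of the bad indices is operator precedence in the loop header: in R the sequence operator : binds tighter than binary minus, so ceiling(N/2):length(temp)-ceiling(N/2) is evaluated as (ceiling(N/2):length(temp)) - ceiling(N/2), and the loop starts at 0. A minimal sketch of the difference (temp here is just a made-up example vector; parenthesizing the upper bound reproduces the MATLAB range ceil(N/2):numel(T0)-ceil(N/2)):

temp <- c(10.0, 10.2, 10.4, 10.6, 10.8)      # example data, not from the question
N <- 3

# Original loop header: ':' is evaluated before '-', so the sequence starts at 0,
# and temp[0] later yields numeric(0) ("replacement has length zero").
ceiling(N/2):length(temp) - ceiling(N/2)     # 0 1 2 3

# Parenthesizing the upper bound mirrors the MATLAB loop bounds.
ceiling(N/2):(length(temp) - ceiling(N/2))   # 2 3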

Resources