I have a question regarding SymPy in a Jupyter notebook. My main concern is how to display equations compactly with the SymPy package. Here is the code:
from sympy import Symbol

alpha = Symbol('\u03B1')
beta = Symbol('\u03B2')
theta = Symbol('\u03B8')
gamma = Symbol('\u03B3')
delta = Symbol('\u03B4')
beta = theta/((1-alpha)*(1-theta)+theta)
gamma = (beta*(alpha+(1-alpha)*delta)-alpha)/((1-alpha)*delta)
When I print gamma, I get the fully substituted expression in terms of α, θ, and δ. The problem is that I want to see the β symbol in the result, not its substituted version. Does anyone have any idea? Thanks in advance.
How can we numerically solve these equations in R when E_{μ,σ}(X) = 1 and Var_{μ,σ}(X) = 1? I am interested in finding the values of μ and σ.
Here α = (a − μ)/σ and β = (b − μ)/σ. I used the following code, but I'm not getting an answer. Is there another code or method I can use to get what I want?
library(rootSolve)

mubar <- 1
sigmabar <- 1
a <- 0.5
b <- 5.5

# x = c(mu, sigma); standardized bounds are (a - mu)/sigma and (b - mu)/sigma
model <- function(x) c(
  F1 = mubar - x[1] + x[2] * ((pnorm((b - x[1])/x[2]) - pnorm((a - x[1])/x[2])) /
                              (dnorm((b - x[1])/x[2]) - dnorm((a - x[1])/x[2]))),
  F2 = sigmabar^2 - x[2]^2 * (1 -
         (((b - x[1])/x[2] * pnorm((b - x[1])/x[2]) - (a - x[1])/x[2] * pnorm((a - x[1])/x[2])) /
          (dnorm((b - x[1])/x[2]) - dnorm((a - x[1])/x[2]))) -
         ((pnorm((b - x[1])/x[2]) - pnorm((a - x[1])/x[2])) /
          (dnorm((b - x[1])/x[2]) - dnorm((a - x[1])/x[2])))^2))
(ss <- multiroot(f = model, start = c(1, 1)))
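For what it's worth, here is a minimal sketch (my own, not the asker's code; model2 and ss2 are names I introduce) that solves the same two conditions using the standard truncated-normal moment formulas, in which the density φ (dnorm) sits in the numerator of the mean and the CDF Φ (pnorm) in the denominator; note that the code above appears to have those two roles swapped, which may be why multiroot fails to converge:

# Standard truncated-normal moments on [a, b], with
#   alpha = (a - mu)/sigma, beta = (b - mu)/sigma, Z = pnorm(beta) - pnorm(alpha):
#   E(X)   = mu + sigma * (dnorm(alpha) - dnorm(beta)) / Z
#   Var(X) = sigma^2 * (1 + (alpha*dnorm(alpha) - beta*dnorm(beta))/Z
#                         - ((dnorm(alpha) - dnorm(beta))/Z)^2)
library(rootSolve)

a <- 0.5; b <- 5.5
mubar <- 1; sigmabar <- 1

model2 <- function(x) {
  mu <- x[1]; sigma <- x[2]
  alpha <- (a - mu) / sigma
  beta  <- (b - mu) / sigma
  Z <- pnorm(beta) - pnorm(alpha)
  m <- mu + sigma * (dnorm(alpha) - dnorm(beta)) / Z
  v <- sigma^2 * (1 + (alpha * dnorm(alpha) - beta * dnorm(beta)) / Z -
                    ((dnorm(alpha) - dnorm(beta)) / Z)^2)
  c(F1 = m - mubar, F2 = v - sigmabar^2)
}

(ss2 <- multiroot(f = model2, start = c(1, 1)))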
I want to convert my artificial neural network implementations to the new TensorFlow 2 platform, where Keras is an integral part (tf.keras). Are there any recommended sources that explain how to implement ANNs using TensorFlow 2 / tf.keras from within R?
Furthermore, why is there a separate keras package by F. Chollet available, when Keras is, as mentioned, now part of TensorFlow?
Sorry for such basic questions, but my own searches have unfortunately been unsuccessful.
From the original TensorFlow documentation I extracted the following Python code:
from tensorflow import keras

input1 = keras.layers.Input(shape=(16,))
x1 = keras.layers.Dense(8, activation='relu')(input1)
input2 = keras.layers.Input(shape=(32,))
x2 = keras.layers.Dense(8, activation='relu')(input2)
added = keras.layers.add([x1, x2])
out = keras.layers.Dense(4)(added)
model = keras.models.Model(inputs=[input1, input2], outputs=out)
My own R conversions are
library(tensorflow)
k <- tf$keras
l <- k$layers
input1 <- k$layers$Input(shape = c(16,?))
x1 <- k$layers$Dense(units = 8, activation = "relu") (input1)
input2 <- k$layers$Input(shape = c(32,?))
x2 <- k$layers$Dense(units = 8, activation = "relu") (input2)
added <- k$layers$add(inputs = c(x1,x2))
I hope my question doesn't seem too stupid, but I'm having trouble translating a Python tuple (or scalar) into its R equivalent. So my question: how must the shape argument of the input layers be written in R?
I think the following page should answer your question: https://blogs.rstudio.com/ai/posts/2019-10-08-tf2-whatchanges/.
In essence, your code should stay the same if you are using the keras R package at version 2.2.4.1 or above. For more details, refer to the linked page.
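As for the shape argument itself, here is a minimal sketch (my own, untested against every version): reticulate converts an unnamed R list into a Python list, which tf.keras accepts for shape, and reticulate::tuple() builds an actual Python tuple; the keras R package's layer_input(shape = c(16)) performs this conversion for you.

library(tensorflow)
library(reticulate)

k <- tf$keras

# either form stands in for the Python tuples (16,) and (32,)
input1 <- k$layers$Input(shape = list(16L))               # Python list [16]
input2 <- k$layers$Input(shape = reticulate::tuple(32L))  # Python tuple (32,)

x1 <- k$layers$Dense(units = 8L, activation = "relu")(input1)
x2 <- k$layers$Dense(units = 8L, activation = "relu")(input2)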
I want to use the equivalent of MATLAB's interp2 function in Julia.
I tried the GR module but failed.
I'm currently using Julia 0.6.4.
I hope you can help me.
Have a look at Interpolations.jl. This example is equivalent to the interp2 function:
using Interpolations

A = rand(8, 20)
knots = ([x^2 for x = 1:8], [0.2y for y = 1:20])
itp = interpolate(knots, A, Gridded(Linear()))
itp(4,1.2) # approximately A[2,6]
http://juliamath.github.io/Interpolations.jl/latest/control/#Gridded-interpolation-1
You might also find the Dierckx.jl package helpful for 2-D spline fitting; see for instance Spline2D.
I'm estimating an SVAR in R, but the A-B form results are very different from those in EViews, and I'm not sure why. Also, the option I think is right gives me an error message. Could anyone help me?
Here is the R code I'm using:
library(vars)

# restriction matrix for A: NA = free parameter to estimate, 0 = excluded, 1 = normalized
resA <- matrix(NA, nrow = 5, ncol = 5)
resA[2, 4] <- resA[2, 5] <- resA[3, 4] <- resA[3, 5] <- 0
resA[4, 2] <- resA[4, 3] <- resA[4, 5] <- 0
resA[5, 2] <- resA[5, 3] <- resA[5, 4] <- 0
resA[1, 1] <- resA[2, 2] <- resA[3, 3] <- resA[4, 4] <- resA[5, 5] <- 1
resA

model <- VAR(vardata, p = 2, type = "const")
summary(model)

stt <- matrix(0.1, nrow = 1, ncol = 10)
model1 <- SVAR(model, Amat = resA, lrtest = TRUE, estmethod = "scoring",
               start = stt, conv.crit = 0.0001, max.iter = 500)
summary(model1)

irf.gap <- irf(model1, impulse = "gap", boot = FALSE, n.ahead = 15, runs = 100)
plot(irf.gap)
The problem is the last command, irf: it gives me a shape reversed from EViews'. Since EViews only mentions that it uses a Cholesky decomposition with d.f. adjustment (which should mainly matter for the confidence intervals) and labels the output "Response to Cholesky One S.D. Innovations +/- 2 S.E.", I guessed the problem comes from the one-S.D./two-S.E. convention, but I'm still not sure what the R command irf actually does.
By the way, the R package I'm using is vars (library(vars)); in EViews I used the default IRF settings.
Update:
The problem happened because the irf command computes the structural impulse response function, which differs from EViews' Cholesky decomposition.
Any link with the steps to manually compute the EViews version of the IRF would be really appreciated!
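In the meantime, here is a minimal sketch of what I understand EViews' Cholesky IRF to be (my own attempt, not verified against EViews): each moving-average coefficient matrix from vars::Phi() is post-multiplied by the lower-triangular Cholesky factor of the reduced-form residual covariance.

library(vars)

# lower-triangular Cholesky factor of the residual covariance;
# EViews may apply a degrees-of-freedom adjustment to this covariance
P <- t(chol(cov(residuals(model))))

Phis <- Phi(model, nstep = 15)   # K x K x 16 array of MA coefficient matrices

Theta <- array(NA, dim = dim(Phis))
for (h in seq_len(dim(Phis)[3])) {
  Theta[, , h] <- Phis[, , h] %*% P
}
# Theta[i, j, h] is the response of variable i at horizon h - 1
# to a one-standard-deviation shock in variable j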
Suppose somebody draws me a histogram and I want to smooth it and obtain the smoothed function. Is there a way to do so in R? (The histogram does not come from data, so kernel density estimators do not seem suitable; please tell me if you think I am wrong about this.)
So far, I have chosen to fit a parametric distribution to my histogram. To do so, I minimize the integrated squared error between the histogram and a beta distribution. Here is my code, where h is a piecewise-constant function with support [0, 1].
h <- function(x) (x > 0 & x < 1) * 1

fit.beta <- function(h) {
  dist <- function(alpha, beta) {
    diff2 <- function(x) (h(x) - dbeta(x, alpha, beta))^2
    return(integrate(diff2, 0, 1))
  }
  res <- constrOptim(theta = c(1, 1), f = dist, grad = NULL,
                     ui = matrix(c(1, 1), 1, 2), ci = c(0, 0))
  return(res)
}
And R says:
Error in dbeta(x, alpha, beta) :
argument "beta" is missing, with no default
I don't understand why R doesn't accept dbeta(x, alpha, beta). I also tried dbeta(x, shape1 = alpha, shape2 = beta); it doesn't work either. Could you help me?
I found the solution to the syntax problem: constrOptim optimizes only over the first argument of f, so it works if the objective function takes a single (vector) argument.
fit.norm <- function(h) {
  dist <- function(ab) {
    diff2 <- function(x) (h(x) - dnorm(x, ab[1], ab[2]))^2
    return(integrate(diff2, 0, 1)$value)
  }
  res <- constrOptim(theta = c(0.5, 1), f = dist, grad = NULL,
                     ui = rbind(c(1, 0), c(-1, 0), c(0, 1)),
                     ci = c(0, -1, 0))
  return(res)
}
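Applying the same single-argument trick to the original beta fit gives something like the following sketch (it assumes the piecewise-constant h defined above; I also restrict both shape parameters to be greater than 1 so that dbeta stays bounded on (0, 1) and the squared difference remains integrable near the endpoints):

fit.beta <- function(h) {
  dist <- function(ab) {
    # ab[1] = shape1 (alpha), ab[2] = shape2 (beta)
    diff2 <- function(x) (h(x) - dbeta(x, ab[1], ab[2]))^2
    integrate(diff2, 0, 1)$value
  }
  # keep both shapes strictly above 1: ui %*% ab >= ci
  constrOptim(theta = c(2, 2), f = dist, grad = NULL,
              ui = diag(2), ci = c(1, 1))
}

fit.beta(h)$par   # fitted shape parameters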