Simulation problem in a fermentor with Monod kinetics - Scilab

I am simulating a non-linear model (a fermentor with Monod kinetics), but the output of the model stays fixed at the equilibrium point (the initial conditions). Q is the input, and I used Xcos to generate a step input (as a vector) with initial value 2, final value 2.2 and step time 20. I am attaching my figure. Thanks in advance.
Output figure
funcprot(0)
t1 = A.time            // time stamps of the step input generated in Xcos
Q = A.values(:,1)      // step input values (vector)
function dxdt=f(t,x)
    S = x(1);          // substrate concentration
    X = x(2);          // biomass concentration
    dxdt = [(Q/20)*(0.02-S) - (0.4/0.67)*(S*X)/(0.015+S)
            (-Q*X/20) + 0.4*((S*X)/(0.015+S))]
endfunction
t = linspace(0,100,999);
x0 = [0.005; 0.0101];  // initial conditions (equilibrium point)
x = ode(x0, 0, t, f)
scf(0)
plot(t1, x(2,:))
xlabel('time')
ylabel('X')
ax = gca();            // get the handle on the current axes
ax.data_bounds = [0 0; 100 2.5]; // axis limits

Related

Solve equation of mean field theory with Scilab

I am trying to solve the mean-field equation s = tanh(zJs/(k_B T))
using the Newton-Raphson method. When I run the program I get an error:
plot: Wrong size for input arguments #2 and #3: Incompatible dimensions.
and a blank graph. Please help me figure out what the problem in my code is.
z=4 // no. of nearest neighbours
m=1; // value of J/K
T=[0.1:0.1:8] // values of temperature
s(1)=-0.5; // initial value
n=100; // no. of iterations
for i=1:n // run the loop
    f = s(i)-tanh(z*m*s(i)./T); // equation of mean field theory
    l = derivat(f); // derivative of f
    s(i+1) = s(i)-(f/l); // Newton-Raphson update
    if abs(s(i+1)-s(i)) <= 10^-8 then
    end
    s(i) = s(i+1)
    i = i+1; // increment i
end
plot(T,s,'.r')
What I understand after reading about the mean field equation is that you want to solve s = tanh(z*m*s/T) for different values of T, then plot s versus T. Here is how you can do it in Scilab (no need to code Newton's method yourself; you can use fsolve, see the help page of this function):
function out = eq(s,T)
    // residual of the mean field equation; z and m are picked up from the calling environment
    out = s - tanh(z*m*s/T)
endfunction
z = 4;
m = 1;
T = [0.1:0.1:8];
for i = 1:length(T)
    s(i) = fsolve(-0.5, list(eq, T(i)))  // solve eq(s,T(i)) = 0 starting from s = -0.5
end
clf()
plot(T,s)
xlabel('T')
ylabel('s')

Beta regression model in R

Please again accept my apologies for my limited knowledge of R; I'm trying to get better! I'm a biologist and my statistical knowledge is sadly low.
I have the following data set:
Perc_Reacting,Pulses,IndMutant,Proportion
93,1,1,0.93
81,2,1,0.81
73,3,1,0.73
64,4,1,0.64
73,5,1,0.73
68,6,1,0.68
64,7,1,0.64
65,8,1,0.65
50,9,1,0.5
68,10,1,0.68
57,11,1,0.57
50,12,1,0.5
62,13,1,0.62
44,14,1,0.44
54,15,1,0.54
56,16,1,0.56
50,17,1,0.5
42,18,1,0.42
42,19,1,0.42
29,20,1,0.29
96,1,0,0.96
100,2,0,1
92,3,0,0.92
96,4,0,0.96
92,5,0,0.92
92,6,0,0.92
84,7,0,0.84
96,8,0,0.96
91,9,0,0.91
82,10,0,0.82
86,11,0,0.86
82,12,0,0.82
91,13,0,0.91
85,14,0,0.85
83,15,0,0.83
70,16,0,0.7
74,17,0,0.74
64,18,0,0.64
68,19,0,0.68
78,20,0,0.78
The first and last columns are the same, one expressed in % and the other as a 0-1 proportion.
I need to run a beta regression model, but when I try to create the model an error appears:
model.beta<-betareg(C_elegans$Proportion~C_elegans$Pulses)
Error in betareg(C_elegans$Proportion ~ C_elegans$Pulses) :
invalid dependent variable, all observations must be in (0, 1)
Could you help me create a beta regression model for this data, and show how to make relevant plots demonstrating that it fits well?
Also, I need to propose a linear regression model for this data; can anyone let me know how you think it could best be done?
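The error means that betareg() requires every response value to lie strictly inside the open interval (0, 1); the row with Proportion = 1 violates this. A minimal sketch of one common workaround (the Smithson-Verkuilen squeeze, assuming the data frame is called C_elegans as in the question; the Prop_adj column name is just for illustration) might look like:

library(betareg)

n <- nrow(C_elegans)
# squeeze the response into (0, 1) so that exact 0s and 1s become admissible
C_elegans$Prop_adj <- (C_elegans$Proportion * (n - 1) + 0.5) / n

model.beta <- betareg(Prop_adj ~ Pulses, data = C_elegans)
summary(model.beta)

# quick visual check of the fit
plot(C_elegans$Pulses, C_elegans$Proportion)
points(C_elegans$Pulses, fitted(model.beta), col = "red", pch = 16)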
Here are the results of fitting the last three columns to a flat plane, "Proportion = a + (b * Pulses) + (c * IndMutant)", with parameters a = 1.046829, b = -0.01865038 and c = -0.2585, yielding R-squared = 0.876 and RMSE = 0.064 (RMSE here is in absolute, not relative, error).

How to remove two data points from a data set that have a large influence on the regression model

I have found two outlier data points in my data set, but I don't know how to remove them. All of the guides I have found online emphasize plotting the data, but my question does not require plotting; it only involves fitting a regression model. I am having great difficulty finding out how to remove the two data points from my data set and then fit a new model to the reduced data.
Here is the code that I have written and the outliers that I found:
library(alr4)
library(MASS)
data(lathe1)
head(lathe1)
y=lathe1$Life
x1=lathe1$Speed
x2=lathe1$Feed
x1_square=(x1)^2
x2_square=(x2)^2
#part A (Box-Cox method show log transformation)
y.regression=lm(y~x1+x2+I(x1^2)+I(x2^2)+(x1*x2))  # I() is needed for squared terms inside a formula
mod=boxcox(y.regression, data=lathe1, lambda = seq(-1, 1, length=10))
best.lam=mod$x[which(mod$y==max(mod$y))]
best.lam
#part B (null-hypothesis F-test)
y.regression1_Reduced=lm(log(y)~1)
y.regression1=lm(log(y)~x1+x2+x1_square+x2_square+(x1*x2))
anova(y.regression1_Reduced, y.regression1)
#part D (F-test of log(Y) without beta1)
y.regression2=lm(log(y)~x2+x2_square)
anova(y.regression1_Reduced, y.regression2)
#part E (Cook's distance and refit)
cooks.distance(y.regression1)
Outliers (observation index: Cook's distance):
9:  0.7611370235
10: 0.7088115474
I think you may be able (if execution time / data size allow it) to pass through your data with a loop and copy or remove elements according to your criteria to obtain the desired result, e.g.:
corpus_list_without_outliers = []
for elem in corpus_list:
    if elem.speed <= 10000:  # elem.<any_param_name> below an arbitrary outlier threshold
        corpus_list_without_outliers.append(elem)  # keep it because it is OK :)
print(corpus_list_without_outliers)
# regression algorithm after
# regression algorithm after
This is how I'd see the situation, but you can invert the if above and remove elements instead, to avoid creating a second list, e.g.:
for elem in list(corpus_list):    # iterate over a copy so removal is safe
    if elem.speed > 10000:        # elem.<any_param_name> above the outlier threshold
        corpus_list.remove(elem)  # drop it because it is an outlier :(
print(corpus_list)
# regression algorithm after
Hope it helped you!
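Since the question itself is in R, a minimal sketch of dropping the two high-influence rows found with cooks.distance() and refitting (reusing the model terms from part B above; the 0.5 cutoff is only an illustrative threshold) might look like:

library(alr4)
data(lathe1)

# Cook's distances for the log-transformed model from part B
fit_full <- lm(log(Life) ~ Speed + Feed + I(Speed^2) + I(Feed^2) + Speed:Feed,
               data = lathe1)
cd <- cooks.distance(fit_full)
drop_idx <- which(cd > 0.5)        # here this picks out observations 9 and 10

# remove those rows and refit the same model on the reduced data
lathe1_clean <- lathe1[-drop_idx, ]
fit_clean <- lm(log(Life) ~ Speed + Feed + I(Speed^2) + I(Feed^2) + Speed:Feed,
                data = lathe1_clean)
summary(fit_clean)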

How does the ODE function in R do the calculation

I am using the ode function in R (deSolve) to solve this equation:
library(deSolve)
FluidH <- function(t, state, parameters) {
  with(as.list(c(state, parameters)), {
    dh <- Qin/A - (5073.3*h^2 + 6430.1*h)/(60*A)
    list(c(dh))
  })
}
parameters <- c(Qin = 10, A = 6200)
state <- c(h = 0.35)
time <- seq(0, 2000, by = 1)
out <- ode(y = state, func = FluidH, parms = parameters, times = time)
I might be missing something in the math, but when I try to calculate h by hand, starting from the initial state, I don't get the same numbers as the output of the function.
For example, to calculate h at time 1: h = h0 + dh*dt -> h = 0.35 + 10/6200 - (5073.3*0.35^2 + 6430.1*0.35)/(60*6200) = 0.3438924348,
while the output of ode gives 0.343973044412394.
Can anyone tell me what I am missing?
You computed an explicit Euler step with step size dt = 1. The solver uses a higher-order method with (usually) a smaller step size that is adapted to meet the default error tolerances of 1e-6 for relative and absolute error. The step size of 1 that you give only determines where the numerical solution is sampled for output; internally the solver may take many more, or sometimes even fewer, steps and interpolate the output values.
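To make the difference concrete, here is a minimal sketch (reusing the model from the question) that compares the hand Euler step with deSolve's fixed-step "euler" method and the default adaptive solver; the exact digits of the adaptive result depend on the tolerances.

library(deSolve)

FluidH <- function(t, state, parameters) {
  with(as.list(c(state, parameters)), {
    dh <- Qin/A - (5073.3*h^2 + 6430.1*h)/(60*A)
    list(c(dh))
  })
}
parameters <- c(Qin = 10, A = 6200)
state <- c(h = 0.35)
time <- seq(0, 2000, by = 1)

# one explicit Euler step with dt = 1, as in the hand calculation
h0 <- 0.35
h0 + 10/6200 - (5073.3*h0^2 + 6430.1*h0)/(60*6200)   # approx. 0.3438924

# fixed-step Euler through deSolve reproduces the hand value at t = 1
out_euler <- ode(y = state, times = time, func = FluidH, parms = parameters,
                 method = "euler")
out_euler[2, "h"]

# the default solver (lsoda) is higher order and error-controlled
out_lsoda <- ode(y = state, times = time, func = FluidH, parms = parameters)
out_lsoda[2, "h"]                                    # approx. 0.343973, as in the question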

Solution of varying coefficients ODE

I have a set of observed raw data and use a 2nd-order ODE to fit the data:
y'' + b1(t) y' + b0(t) y = 0
The coefficients b1 and b0 are time-dependent, and I use principal differential analysis (PDA) (R package fda, function pda.fd) to estimate b1(t) and b0(t).
To check the validity of the estimates of b1(t) and b0(t), I use a collocation method (R package bvpSolve, function bvpcol) to obtain a numerical solution of the ODE and compare it with the smoothed curve fit of the raw data.
My problem is that the numerical solution from bvpcol captures the shape of the fitted curve but not its values; the two differ by a roughly constant multiple.
(Since I am not allowed to post images, please see the link for the figure.)
See the figure of my output: the gray dots are the raw data, the red line is the Fourier expansion of the raw data, the green line is the numerical solution from bvpcol, and the blue line is the green line divided by 1.62. The green line captures the shape, but its values are a constant multiple of the Fourier expansion.
I fitted several other data sets and saw a similar situation, each time with a different constant. I am wondering whether this comes from the numerical solution of the ODE or from something else, and how to solve the problem so that the numerical solution (green) agrees well with the true Fourier expansion.
Any help and ideas are appreciated!
Here are the raw data and code:
The RData file is here.
library(fda)
library(bvpSolve)
# load the data
load('y.RData')
tvec = 1:length(y)
tvec = (tvec-min(tvec))/(max(tvec)-min(tvec))
# create basis
# NOTE: nbasis is not defined anywhere in the post before this call
fbasis = create.fourier.basis(c(0,1),nbasis=nbasis)
bbasis = create.bspline.basis(c(0,1),norder=8,nbasis=47)
bfdPar = fdPar(bbasis)
yfd = smooth.basis(tvec,y,fbasis)$fd
yfdlist = list(yfd)
bwtlist = rep(list(bfdPar),2)
# PDA fit
bwt = pda.fd(yfdlist,bwtlist)$bwtlist
# output of estimated coefficients
beta0.fd <- bwt[[1]]$fd
beta1.fd <- bwt[[2]]$fd
# define the varying coefficients as functions of t
fbeta0 <- function(t) eval.fd(t,beta0.fd)
fbeta1 <- function(t) eval.fd(t,beta1.fd)
# define the 2nd-order ODE as a first-order system
fun2 <- function(t,y,pars) {
  with(as.list(c(y,pars)),{
    beta0 = pars[[1]]
    beta1 = pars[[2]]
    dy1 = y[2]
    dy2 = -beta1(t)*y[2]-beta0(t)*y[1]
    return(list(c(dy1,dy2)))
  })
}
# BVP
# NOTE: p1 is not defined in the post; it appears to be the vector of raw observations
yinit <- c(p1[1],NA)
yend <- c(p1[length(p1)],NA)
t <- seq(tvec[1],tvec[length(tvec)],0.005)
col <- bvpcol(yini=yinit,yend=yend,x=t,func=fun2,parms=c(fbeta0,fbeta1),atol=1e-5,islin=TRUE)
# plot output
plot(col[,1],col[,2],col='green',type='l')
points(tvec,p1,col='darkgray')
lines(yfd,col='red',lwd=2)
lines(col[,1],col[,2],col='green',type='l')
lines(col[,1],col[,2]/1.62,col='blue',type='l',lwd=2,lty=4)
legend('topleft',col=c('green','darkgray','red','blue'),
       legend=c('ODE solution','raw data','basis curve fitting','ODE solution/1.62'),lty=1)
