Thanks to the truly amazing community, my project group is one step closer to mimicking realistic calcium baseline noise.
I simulated a typical calcium movement in a mathematical model:
Thanks to the community, I was able to add random noise to the unrealistic baseline:
However, the noise dynamics are too fast. Is there a way to slow the noise down and create broader noise peaks instead of these sharp spikes? I'll add an actual measurement to show what I mean:
If this question is too specific, I apologize and will delete the post.
Best wishes and many thanks!
Please make your question and examples reproducible so that others can help. That being said, it looks like the baseline is just a random normal, probably created with something like x <- rnorm(500). One way to make this less jumpy is to calculate a moving average. You could use a package like TTR or zoo to do this, or you can create your own function. For example:
x <- rnorm(500)
plot(x, type = "l")
# stats::filter() is base R's linear filter; qualifying it avoids a
# clash with dplyr::filter() if the tidyverse is loaded
ma <- function(x, n = 5){ stats::filter(x, rep(1/n, n), sides = 2) }
plot(ma(x), type = "l")
plot(ma(x, 10), type = "l")  # a wider window gives a smoother series
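If you would rather not roll your own, zoo (mentioned above) does the same thing; a minimal sketch, assuming only zoo::rollmean():
library(zoo)
x <- rnorm(500)
# k is the window width; fill = NA pads the ends so the result keeps
# the same length as the input
plot(rollmean(x, k = 10, fill = NA), type = "l")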
I see your point now. I have two suggestions for this case; maybe they will be of help:
Try adding noise to only a subset of your baseline (the following uses 10%):
baseline.index <- which(App[, 2] == min(App[, 2]))
baseline.index.subset <- sample(x = baseline.index, size = 0.1 * length(baseline.index), replace = FALSE)
noise <- rnorm(length(baseline.index.subset))
App[baseline.index.subset, 2] <- App[baseline.index.subset, 2] + noise
And try playing a bit with the mean and standard deviation of the noise, e.g.:
noise <- rnorm(length(baseline.index.subset), mean = 0, sd = 0.1)
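If the added noise itself still looks too spiky, you could also low-pass it before adding it, so the fluctuations become slower and broader. A sketch combining this with the moving average from the first answer (the 10-point window and the sd are arbitrary starting points to tune):
noise <- rnorm(length(baseline.index), mean = 0, sd = 0.1)
noise <- as.numeric(stats::filter(noise, rep(1/10, 10), sides = 2))  # smooth it
noise[is.na(noise)] <- 0  # the moving average leaves NAs at the edges
App[baseline.index, 2] <- App[baseline.index, 2] + noise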
Let us know if this helps!
I'm working on calculating the robust working range of a process. For this purpose I'm building models from DOE data and simulating data with a Monte Carlo approach. Filtering the data with a criterion on the response leads to an allowed space (see the plots for better visualization).
In the example below there are 3 variables, and the goal is to calculate the biggest possible axis-aligned cube within that allowed space. This would describe the working range of the process. The coding step just puts every variable on the same range (-1 to 1).
library(tidyverse)
library(MASS)
library(ggplot2)
library(gridExtra)
library(rgl)
df<-data.frame(
X1=runif(100,0,2),
X2=runif(100,10,30),
X3=runif(100,5,75))%>%
mutate(Y1=2*X1-2*X2+X3)
f1<-Y1~X1+X2+X3
model1<- lm(f1, data=df)
# Monte Carlo sample of the factor space (vectorized; equivalent to
# growing m.c row by row in a loop, just faster)
n <- 10000
m.c <- data.frame(X1 = runif(n, 0, 2),
                  X2 = runif(n, 10, 30),
                  X3 = runif(n, 5, 75))
m.c_coded <- m.c %>%
  mutate(predict1 = predict(model1, newdata = .)) %>%
  mutate(X1 = (X1 - 1) / 1,     # code each factor to the range -1..1
         X2 = (X2 - 20) / 10,
         X3 = (X3 - 40) / 35)
Space<- m.c_coded%>%
filter(predict1<=0)
p1<-ggplot(Space)+
geom_point(aes(X1, X2))+
xlim(-1,1)+
ylim(-1,1)
p2<-ggplot(Space)+
geom_point(aes(X1, X3))+
xlim(-1,1)+
ylim(-1,1)
p3<-ggplot(Space)+
geom_point(aes(X2, X3))+
xlim(-1,1)+
ylim(-1,1)
grid.arrange(arrangeGrob(p1,p2,p3, nrow = 1), nrow = 1)
MODR_plot3D <- plot3d(x = Space$X1, y = Space$X2, z = Space$X3, type = "p",
                      xlim = c(-1, 1), ylim = c(-1, 1), zlim = c(-1, 1))
There are specialized programs (DOE software) that can calculate this so-called design space, but I want to implement it in my R script. Sadly, I have no idea how to calculate the position (edges) of this cube. My approach would be to find the point with the maximum distance to the surface (the center of the cube).
Does anyone have an idea how I can calculate this cube in a proper way? If possible, I would like to extend this to the n-dimensional case.
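Not a full answer, but here is a brute-force sketch of your own suggestion: search for the center with the maximum feasible half-width. The helper names decode and cube_ok are mine, and checking only the corners of a candidate cube is exact here solely because the fitted model is linear; a nonlinear response would need interior points too.
# all 8 corner directions of a cube in coded units
corners <- as.matrix(expand.grid(c(-1, 1), c(-1, 1), c(-1, 1)))
# map coded units back to natural units so predict() can be used
decode <- function(M) data.frame(X1 = M[, 1] * 1  + 1,
                                 X2 = M[, 2] * 10 + 20,
                                 X3 = M[, 3] * 35 + 40)
# a cube is feasible if it stays inside the coded box and every corner
# satisfies the response criterion
cube_ok <- function(center, h) {
  pts <- sweep(corners * h, 2, center, `+`)
  all(abs(pts) <= 1) && all(predict(model1, newdata = decode(pts)) <= 0)
}
# coarse grid search over centers, binary search on the half-width h
best <- list(h = 0, center = NULL)
centers <- seq(-0.9, 0.9, by = 0.2)
for (c1 in centers) for (c2 in centers) for (c3 in centers) {
  ctr <- c(c1, c2, c3)
  lo <- 0; hi <- 1
  for (i in 1:20) {
    mid <- (lo + hi) / 2
    if (cube_ok(ctr, mid)) lo <- mid else hi <- mid
  }
  if (lo > best$h) best <- list(h = lo, center = ctr)
}
best  # center and half-width of the largest axis-aligned cube found
For n dimensions, only corners, decode, and the nested loops over the centers would need to be generalized.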
Using R, before assessing some accuracy metric on my own manual implementation of the DFT, I wanted to sanity-check how well stats::fft() performs by doing the following:
# (sR, freq1, freq2, t and N are as defined in the full listing below)
sig.ts = ts( sin(2*pi*freq1*t) + sin(2*pi*freq2*t) );
sig.rt = fft(fft(sig.ts)/N, inverse=TRUE);
#the two plots align so perfectly that you can't see them both
max(abs(sig.ts - sig.rt)) / max(sig.ts);
#arbitrary crude accuracy metric = 1.230e-15 - EXCELLENT!
But I wanted to write the code for DFT myself, to ensure I understand it, then invert it in the hopes that it would be the same:
##The following is the slow DFT for now, not the FFT...
sR = 102.4; #the number of Hz at which we sample
freq1=3; freq2=12; #frequency(ies) of the wave
t = seq(1/sR,10, 1/sR);
sig.ts = ts( sin(2*pi*freq1*t) + sin(2*pi*freq2*t) );
N=length(t); kk=seq(0,N/2-1, 1); nn=seq(0,N-1, 1);
for(k in kk){
sig.freqd[k]=0;
for(n in nn){
sig.freqd[k] = sig.freqd[k] + sig.ts[n+1]*exp(-j*2*pi*n*k/N); } }
sig.freqd = (1/N)*sig.freqd; #for Normalization
#Checking the "accuracy" of my manual implementation of DFT...
sig.freqd_inv=Re(fft(sig.freqd, inverse=TRUE));
plot(t[1:100], window(sig.ts,end=100), col="black", type="l",lty=1,lwd=1, xaxt="n");
lines(t[1:100],window(sig.freqd_inv,end=100), col="red", type="l",lty=1,lwd=1, xaxt="n");
axis(1, at=seq(round(t[1],1),round(t[length(t)],1), by=0.1), las=2);
max(abs(sig.ts[1:(N/2-1)] - sig.freqd_inv)) / max(sig.ts[1:(N/2-1)]); #the metric here =1.482 unfortunately
Even without the metric, the plot makes it obvious that something is off here: the recovered signal has lower amplitude, is possibly out of phase, and is more jagged. From all my self-study, I will admit I am a bit confused about how sensitive this all is to vector length, as well as how to ensure that the imaginary component's phase information is taken into account when plotting.
Bottom line: any insight into what's wrong with my DFT algorithm would be helpful. I don't want to just black-box my use of functions; I want to understand these things more deeply before moving on to more complicated ones.
Thanks,
Christian
The main issues arise from the signal indexing. First, to get a full transform usable by R's fft(..., inverse = TRUE), you would need to compute all N coefficients (even if the coefficients above N/2-1 could be obtained by symmetry).
Then you should realize that array indexing in R is 1-based, so when indexing sig.freqd[k] the index k should start at 1 instead of 0. Since the exponent in exp(-1i*2*pi*n*k/N) should still start from n=0 and k=0, you'll need to adjust the indices:
kk=seq(1,N, 1); nn=seq(1,N, 1);
sig.freqd=complex(N);   #pre-allocate, otherwise the first sig.freqd[k] errors
for(k in kk){
sig.freqd[k]=0i;
for(n in nn){
sig.freqd[k] = sig.freqd[k] + sig.ts[n]*exp(-1i*2*pi*(n-1)*(k-1)/N);
}
}
I've also changed your j to 1i, since that's the notation R recognizes for the imaginary unit (R was complaining about j when trying your posted sample as-is). If you had defined j = 1i, that change doesn't affect the results.
Note also that R's fft is unnormalized. So to obtain the same result for the forward transform, your DFT implementation should not include the 1/N normalization. On the other hand, you will need to add this factor as a final step in order to get the full-circle forward+backward transform to match the original signal.
With these changes you should have the following code:
##The following is the slow DFT for now, not the FFT...
sR = 102.4; #the number of Hz at which we sample
freq1=3; freq2=12; #frequency(ies) of the wave
t = seq(1/sR,10, 1/sR);
sig.ts = ts( sin(2*pi*freq1*t) + sin(2*pi*freq2*t) );
N=length(t); kk=seq(1,N, 1); nn=seq(1,N, 1);
sig.freqd=complex(N);   #pre-allocate the coefficient vector
for(k in kk){
sig.freqd[k]=0i;
for(n in nn){
sig.freqd[k] = sig.freqd[k] + sig.ts[n]*exp(-1i*2*pi*(n-1)*(k-1)/N);
}
}
#Checking the "accuracy" of my manual implementation of DFT...
sig.freqd_inv=(1/N)*Re(fft(sig.freqd, inverse=TRUE));  #note the 1/N normalization on the way back
plot(t[1:100], window(sig.ts,end=100), col="black", type="l",lty=1,lwd=2, xaxt="n");
lines(t[1:100],window(sig.freqd_inv,end=100), col="red", type="l",lty=2,lwd=1, xaxt="n");
axis(1, at=seq(round(t[1],1),round(t[length(t)],1), by=0.1), las=2);
max(abs(sig.ts - sig.freqd_inv)) / max(sig.ts)
This should yield a metric around 1.814886e-13, which is probably more in line with what you were expecting. The corresponding plot should also show the original signal and the round-trip signal overlapping:
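As a side note (not needed for the fix): the double loop can also be written as a single matrix product, which is much faster in R. A sketch reusing N and sig.ts from above:
# same unnormalized DFT as the double loop, as one matrix-vector product;
# outer() builds the N x N matrix of n*k index products
W <- exp(-1i * 2 * pi * outer(0:(N - 1), 0:(N - 1)) / N)
sig.freqd.vec <- as.vector(W %*% as.numeric(sig.ts))
max(Mod(sig.freqd.vec - sig.freqd))  # should be ~0 versus the loop version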
I have a newbie question related to basic math. I am trying to make a sine wave with "unequal radians" (at least I believe that is what I am trying to do). In other words, I need a function that is "faster" for the first couple of periods and gradually slows down (the "cycles" get wider/longer) as x approaches infinity. Here is the code and a sketch of what I am trying to do:
x <- seq(1, 30, by=0.1) # my x
z <- ifelse(x <= 10, 3, ifelse(x <= 20, 2, 1)) # discrete value to modify x
y <- sin(z*x) # my y(x)
plot(y, type="l") # plot y(x)
and the sketch (the result of the plot):
Ignore the "double peak" and other distortions; they are a result of z being a discrete variable. I would like to make z continuous and have each cycle widen smoothly. What mathematical function should I use here? I tried a damped sine wave, but that is not quite what I am going for.
Direct transcription from the Wikipedia page linked by @keziah: here's a function
chirp <- function(t,phi0=0,f0=1,k=1) sin(phi0 + 2*pi*(f0*t+k*t^2/2))
phi0 is the initial phase
f0 is the initial frequency
k is the rate of frequency change, or "chirpyness"
par(las=1,bty="l",mfrow=c(1,2))
curve(chirp(x),from=0,to=5,n=501)
curve(chirp(x,k=-1,f0=5),from=0,to=5,n=501)
This isn't really a full answer, as I can't give you code off the top of my head, but what you're looking for here is a chirp. There are a few different types of chirp, depending on the rate of change of phase that you want, but I'm guessing you probably want a linear chirp (see the Wikipedia page).
R may well have a function/module that can provide this already.
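One caveat with the linear chirp: with k < 0 the instantaneous frequency f0 + k*t eventually crosses zero and the wave "reverses". If the slowing should continue smoothly as x approaches infinity, a hyperbolic chirp is an option; a sketch in the style of the chirp() function above (the name hyp_chirp is mine):
# instantaneous frequency f0/(1 + k*t): starts at f0, decays toward 0
# but never goes negative; the phase is its integral, (f0/k)*log(1 + k*t)
hyp_chirp <- function(t, f0 = 5, k = 1) sin(2*pi * (f0/k) * log(1 + k*t))
curve(hyp_chirp(x), from = 0, to = 30, n = 2001)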
How do I plot a decision boundary from a weight vector?
My original data is 2-dimensional but not linearly separable, so I used a polynomial transformation of order 2, which left me with a 6-dimensional weight vector.
Here's the code I used to generate my data:
# generate ring-shaped classes: convert polar samples to Cartesian
polar2cart <- function(theta, R, x, y){
  x <- x + cos(theta) * R
  y <- y + sin(theta) * R
  c <- matrix(x, ncol = 1000)
  rbind(c, y)   # 2 x 1000 matrix: first row x, second row y
}
cart2polar <- function(x, y){
  r <- sqrt(x^2 + y^2)
  t <- atan(y / x)
  c(r, t)
}
R=5
eps=5
sep=-5
c1<-polar2cart(pi*runif(1000,0,1),runif(1000,0,eps)+R,0,0)
c2<-polar2cart(-pi*runif(1000,0,1),runif(1000,0,eps)+R,R+eps/2,-sep)
data <- data.frame("x" = append(c1[1,], c2[1,]), "y" = append(c1[2,], c2[2,]))
labels <- append(rep(1,1000), rep(-1, 1000))
and here's how it is displayed (using ggplot2):
Thank you in advance.
EDIT: I'm sorry if I didn't provide enough information about the weight vector. The algorithm I'm using is pocket, a variation of the perceptron, which means the output weight vector is the normal vector that determines the hyperplane in the feature space, plus the bias w0. Therefore the hyperplane equation is w0 + w1*z1 + ... + w5*z5 = 0, where the zi are the variables. Now, since I used a polynomial transformation of order 2 to go from the 2-dimensional space to a 5-dimensional feature space, my variables are (z1, z2, z3, z4, z5) = (x, y, x^2, x*y, y^2), and thus the equation for my decision boundary is:
w0 + w1*x + w2*y + w3*x^2 + w4*x*y + w5*y^2 = 0
So basically, my question is: how do I go about drawing my decision boundary given this weight vector?
PS: I found a solution while waiting; it might not be the best approach, but it gives the expected results. I'll share it as soon as I finish my project if anyone is interested. Meanwhile, I'd love to hear a better alternative.
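In the meantime, one common way to draw such a boundary (a sketch, not necessarily the poster's solution): evaluate the weighted feature sum on a grid and plot its zero contour. The weights below are made up for illustration; substitute whatever the pocket algorithm returns.
library(ggplot2)
# hypothetical weights for (1, x, y, x^2, x*y, y^2); these describe the circle x^2 + y^2 = 25
w <- c(-25, 0, 0, 1, 0, 1)
# evaluate the decision function on a grid covering the data
grid <- expand.grid(x = seq(min(data$x), max(data$x), length.out = 300),
                    y = seq(min(data$y), max(data$y), length.out = 300))
grid$score <- w[1] + w[2]*grid$x + w[3]*grid$y +
              w[4]*grid$x^2 + w[5]*grid$x*grid$y + w[6]*grid$y^2
# points coloured by class, zero contour drawn as the boundary
ggplot(cbind(data, label = factor(labels)), aes(x, y)) +
  geom_point(aes(colour = label), size = 0.5) +
  geom_contour(data = grid, aes(z = score), breaks = 0, colour = "black")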
I'm currently working on a project where I need to identify lines which have the same kind of shape, e.g.:
yrefer = c(0.2900,0.3189,0.4097,0.3609,0.3762,0.5849,0.7144)
For example, take a look at the following plot: I want R to recognize the two red lines as a fitting shape, even if there is a little deviation (say 0.05) from the reference line (in blue).
So I want to write code that checks, based on a list of y coordinates, whether those y coordinates fit the yrefer line, where a deviation of 0.05 is permitted.
I'm not sure if this is possible in R, but if it is I know there are people here that can help me out.
Notice: what I mean by a deviation of 0.05:
let's say we have a line which is:
1.2900 1.3189 1.4097 1.3609 1.3762 1.5849 1.7144
This would be exactly the same line, just 1 higher than the yrefer line. By a deviation of 0.05 I mean that individual y coordinates may differ up to 0.05 from what you would expect them to be: in this example I expect every coordinate to be 1 higher than the corresponding yrefer coordinate, but if one of them were only 0.98 higher I would still accept it as a "fitting" line, because its deviation is < 0.05.
To clarify, I drew some possibilities (there are a lot more, of course!) that should be accepted as correct for the first y-value:
I hope it's clear, if not let me know!
I don't think Johannes' answer generalizes, e.g.:
y_ref = c(0, 0, 0)
y_test = c(.03, .03, -.06)  # test_line(y_test) fails, even though:
y_test = y_test + .011      # after this constant shift,
abs(y_test - y_ref)         # every pointwise deviation is within .05
test_line(y_test)           # yet it still reports "Failed"
I think you want something like:
d1 = diff(y_test)           # successive differences of the test line
d2 = diff(y_ref)            # successive differences of the reference
s = c(0, cumsum(d2 - d1))   # y_ref - y_test, up to a constant shift
max(s) - min(s)             # shouldn't be >= .1
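Packed into a function in the style of the test_line answer quoted below (the name test_line2 and the tol argument are mine):
test_line2 <- function(y_test, y_ref, tol = 0.05) {
  d <- y_ref - y_test
  # a vertical shift keeping every residual within tol exists exactly
  # when the spread of the pointwise differences is at most 2 * tol
  if (max(d) - min(d) > 2 * tol) message("Failed") else message("Passed")
}
test_line2(c(.03, .03, -.06), c(0, 0, 0))  # Passed (a shift of .011 works)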
Just account for the different y mean.
newline <- c(1.25, 1.3189, 1.4097, 1.4609, 1.3762, 1.5249, 1.7144)
newline2 <- newline + mean(yrefer) - mean(newline)  # remove the vertical offset
sd(newline2 - yrefer)  # can use var or whatever spread measure you want here
This can all be packed into a function like:
lindev <- function(x){
  newline2 <- x + mean(yrefer) - mean(x)
  sd(newline2 - yrefer)
}
lindev(c(1.25, 1.3189, 1.4097, 1.4609, 1.3762, 1.5249, 1.7144))
Note this will only work if the x coordinates are the same.
y_ref <- c(0.2900,0.3189,0.4097,0.3609,0.3762,0.5849,0.7144)
y_test_1 <- c(1.2900, 1.3187, 1.4097, 1.3609, 1.3762, 1.5849, 1.7144)
y_test_2 <- c(1.2900, 1.2189, 1.4097, 1.3609, 1.3762, 1.5849, 1.7144)
test_line <- function(y_test) {
overall_deviation <- mean(y_test - y_ref)
residuals <- y_test - y_ref - overall_deviation
if (any(abs(residuals) > 0.05)) message("Failed")
else message("Passed")
}
test_line(y_test_1)
test_line(y_test_2)