I need to filter the imaginary part of a forward Fourier transform.
I've been trying to use filter.complex, but R keeps saying the function filter.complex does not exist.
When I use plain filter instead, I get the following warning message:
imaginary parts discarded in coercion
Please tell me if I'm missing something.
This is an example:
x = fft(rec-mean(rec))/sqrt(length(rec))
y = fft(soi-mean(soi))/sqrt(length(soi))
fyx = filter.complex(y * Conj(x), rep(1, 15), sides = 2, circular = TRUE)
I tried unsuccessfully to find a filter.complex function (and I'm reasonably good at searching such things out). I don't think it exists. I think you may have gotten hold of some old S code that was set up for dispatching to the complex class. If I trim the command to just:
fyx = filter(y * Conj(x), rep(1, 15), sides = 2, circular = TRUE)
..., I get the identical message. It apparently comes from deeper code called at the C level, since the message is not visible in the R code that appears when you just type "filter" at the command line. Searching Google for the warning message shows it's probably coming from: http://svn.r-project.org/R/trunk/src/main/coerce.c .
It is just a warning, and not necessarily evidence of wrong-doing on your part.
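In fact, the warning is easy to reproduce in isolation, since coercing a complex value to double triggers the same message from coerce.c:
as.double(complex(real = 1, imaginary = 2))
# [1] 1
# Warning message: imaginary parts discarded in coercion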
I figured out what was wrong. The function 'filter.complex' was not defined in any package.
So, I defined it as follows:
filter.complex <- function(x, ...) {
  complex(real = filter(Re(x), ...), imaginary = filter(Im(x), ...))
}
This filtered the imaginary part that I needed.
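A quick sanity check on synthetic data (just a sketch; any complex vector will do):
z <- complex(real = rnorm(100), imaginary = rnorm(100))
zf <- filter.complex(z, rep(1, 5), sides = 2, circular = TRUE)
str(zf)  # a complex vector this time, with no coercion warning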
I am using the synth package in R to implement a synthetic control method, where I use the dataprep() function to construct the appropriate matrices to be passed to synth(). That is, I call dataprep() as follows:
dataprep_out <- dataprep(foo = csv_data,
                         predictors = vars_dep,
                         predictors.op = "mean",
                         time.predictors.prior = 2000:2010,
                         dependent = "Log_gdp",
                         unit.variable = "REG_FACTOR",
                         unit.names.variable = "REG_ID",
                         time.variable = "Year",
                         treatment.identifier = my_factor_treated,
                         controls.identifier = my_controls,
                         time.optimize.ssr = 2000:2010,
                         time.plot = 2000:2017
)
after which I call synth():
synth_out <- synth(data.prep.obj = dataprep_out)
This works fine and gives me the results I expect. However, when I repeat the same piece of code for another treated observation but with exactly the same controls (i.e., my_factor_treated is the only argument in dataprep() that has changed), I get the following error upon calling synth():
Error in svd(c) : infinite or missing values in 'x'.
I am struggling to find the cause of this error, not least because I am unsure which object is being passed to the svd() function during the execution of synth(). None of the columns in the objects returned by dataprep() contain only zeroes, and they contain no Inf values (which makes sense, because otherwise this error should have occurred for the first treated observation as well, right?).
I would appreciate if someone could tell me why this error occurs and how I can prevent it. I have checked out multiple related questions but haven't been able to find my answer. Thanks!
PS. I am not sure how to provide a suitable MWE since I guess my problem is data-related and I won't be able to share the dataset that I am using.
I encountered the same issue, and like you I confirmed that there were no missing values or all-zero columns in my data set. Later I realized it is caused by the optimization algorithm used in generating the weights. One thing you can try is to add the argument optimxmethod = "All" to the synth() call. This will try all available methods and report the one with the best performance.
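For example, reusing the dataprep_out object from the question:
synth_out <- synth(data.prep.obj = dataprep_out, optimxmethod = "All")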
I'm using rbacon to create an age model. I am using the default core that comes with the package, "MSB2K". The documentation says that to add a slump, you insert the argument slump = c(), which requires pairs of depths, e.g., slump = c(10, 15, 60, 67) for slumps at 67-60 and 15-10 cm core depth.
I tried the following code, but it gives me an error:
Bacon("MSB2K", slump = c(30, 34))
Error in if (!is.na(hiatus.depths)[1]) { :
missing value where TRUE/FALSE needed
Does anyone know why this might be?
I have zero domain knowledge here, but technically, it appears that to use the slump argument, one has to pair it with the hiatus.depths argument, and the latter has to be within the range of the former. For example,
Bacon("MSB2K", slump=c(30,34), hiatus.depths=32)
will work. I suppose the details are in the associated primary literature.
This is probably a very silly question, but how can I check whether a function written by myself will work or not?
I'm writing a fairly involved function that calls many other functions and loops, and I was wondering if there are any ways to check for errors/bugs, or simply to check whether the function will work. Do I just create a simple fake data frame and test on it?
As suggested by other users in the comment, I have added the part of the function that I have written. So basically I have a data frame with good and bad data, and bad data are marked with flags. I want to write a function that allows me to produce plots as usual (with the flag points) when user sets flag.option to 1, and remove the flag points from the plot when user sets flag.option to 0.
AIR.plot <- function(mydata, flag.option) {
  if (flag.option == 1) {
    par(mfrow = c(2, 1))  # two stacked panels
    # mean CO2 concentration per timestamp
    conc <- tapply(mydata$CO2, format(mydata$date, "%Y-%m-%d %T"), mean)
    dates <- seq(mydata$date[1], mydata$date[nrow(mydata)], length.out = length(conc))
    # fall back to an empty plot if plotting fails
    tryCatch(
      plot(dates, conc,
           type = "p",
           col = "blue",
           xlab = "day",
           ylab = "CO2"),
      error = function(e) plot.new()
    )
    barplot(mydata$lines, horiz = TRUE, col = c("red", "blue")) # this is just a small bar plot on the bottom that specifies which sample-taking line (red or blue) is providing the samples
  } else if (flag.option == 0) {
    # I haven't figured out how to write this part yet but essentially I want to remove all
    # of the rows with flags on
  }
}
Thanks in advance; I'm not an experienced R user yet, so please help me.
Before we (meaning, at my workplace) release any code to our production environment we run through a series of testing procedures to make sure our code behaves the way we want it to. It usually involves several people with different perspectives on the code.
Ideally, such verification should start before you write any code. Some questions you should be able to answer are:
What should the code do?
What inputs should it accept? (including type, ranges, etc)
What should the output look like?
How will it handle missing values?
How will it handle NULL values?
How will it handle zero-length values?
If you prepare a list of requirements and write your documentation before you begin writing any code, the probability of success goes up pretty quickly. Naturally, as you begin writing your code, you may find that your requirements need to be adjusted, or the function arguments need to be modified. That's okay, but document those changes when they happen.
While you are writing your function, use a package like assertthat or checkmate to write as many argument checks as you need in your code. Some of the best, most reliable code where I work consists of about 100 lines of argument checks and 3-4 lines of what the code actually is intended to do. It may seem like overkill, but you prevent a lot of problems from bad inputs that you never intended for users to provide.
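For instance, argument checks at the top of the AIR.plot() function from the question might look like the following sketch (the column names come from the question; adapt the checks to your own requirements):
library(assertthat)

AIR.plot <- function(mydata, flag.option) {
  # fail fast, with readable messages, before any plotting happens
  assert_that(is.data.frame(mydata))
  assert_that(has_name(mydata, "date"), has_name(mydata, "CO2"))
  assert_that(is.numeric(flag.option), flag.option %in% c(0, 1))
  # ... the plotting code from the question goes here ...
}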
When you've finished writing your function, you should at this point have a list of requirements and clearly documented expectations of your arguments. This is where you make use of the testthat package.
Write tests that verify all of the requirements you wrote are met.
Write tests that verify unintended inputs fail with an error rather than quietly returning a result.
Write tests that verify you get the output you intended on your test data.
Write tests that test any edge cases you can think of.
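As a sketch, assuming the argument checks from the assertthat example above are in place, and using a made-up data frame in the layout the question describes:
library(testthat)

fake <- data.frame(
  date  = as.POSIXct("2020-01-01") + 3600 * (0:9),
  CO2   = rnorm(10, mean = 400),
  lines = rep(1:2, 5)
)

test_that("invalid flag.option values are rejected", {
  expect_error(AIR.plot(fake, flag.option = 2))
  expect_error(AIR.plot(fake, flag.option = "yes"))
})

test_that("valid input runs without warnings", {
  expect_warning(AIR.plot(fake, flag.option = 1), regexp = NA)
})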
It can take a long time to write all of these tests, but once it is done, any further development is easier to check since anything that violates your existing requirements should fail the test.
That being said, I'm really bad at following this process in my own work. I have the tendency to write code, then document what I did. But the best code I've written has been where I've planned it out conceptually, wrote my documentation, coded, and then tested against my documentation.
As @antoine-sac pointed out in the links, some things cannot be checked programmatically; for example, whether your function terminates.
Looking at it pragmatically, have a look at the packages assertthat and testthat. assertthat will help you insert checks of intermediate results "in between"; testthat is for writing proper tests. Yes, the usual way of writing tests is to create a small test example, including test data.
I'm using the mice package to interpolate some missing values. I've successfully been using mice in many cases without any problem. However I am now facing an unprecedented problem, that is, after the first iteration I get the following error:
mice(my_data)
iter imp variable
1 1 sunlight
Error in cor(xobs[, keep, drop = FALSE], use = "all.obs") : 'x' is empty
I have tried to look in the documentation but I cannot find anything useful. I looked up the error on the internet and found this https://stat.ethz.ch/pipermail/r-help/2015-December/434914.html but I was unable to find the answer to the problem described.
Sadly I cannot provide a working example of the data, since my_data contains private data that I do not own and therefore cannot make publicly available. my_data is a dplyr data frame (tibble); however, it looks like there's no difference between using a tibble and a base data frame.
Could anyone please explain to me what is happening and (possibly) how to fix it? Thank you.
EDIT: added some more info on traceback:
cor(xobs[, keep, drop = FALSE], use = "all.obs")
4: remove.lindep(x, y, ry, ...)
3: sampler(p, data, m, imp, r, visitSequence, c(from, to), printFlag, ...)
2: mice::mice(my_data)
Very possibly, some columns in the input data are so highly correlated that certain imputation methods are not applicable.
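One quick way to spot candidate column pairs (a rough check, not an official mice diagnostic; the 0.99 cutoff is arbitrary):
num_cols <- my_data[sapply(my_data, is.numeric)]
cor_mat <- cor(num_cols, use = "pairwise.complete.obs")
which(abs(cor_mat) > 0.99 & upper.tri(cor_mat), arr.ind = TRUE)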
I've built a large function which calls numerous gbm functions in a big loop. All I'm trying to do is increase the thickness of the tickmarks in rug() which is called by gbm.plot.
I was hoping to use (e.g.)
body(gbm.plot)[[24]][[4]][[3]][[3]][[3]][[3]][[2]]$ylab <- "change value"
From this page's examples, which I've used successfully elsewhere. But the section in question in gbm.plot sits inside an if statement, so as.list doesn't nicely recurse the lines (because arguably it's all one huge long line). You can get to them by just manually [[trying]][[successive]][[combinations]] until you get to the right place, but since I'm trying to insert a piece of code, lwd = 6, into a bracketed statement, rather than assigning a value to a named subobject, I'm not sure how to get trace to do this.
?trace says:
When the at argument is supplied, it can be a vector of integers referring to the substeps of the body of the function (this only works if the body of the function is enclosed in { ...}. In this case tracer is not called on entry, but instead just before evaluating each of the steps listed in at. (Hint: you don't want to try to count the steps in the printed version of a function; instead, look at as.list(body(f)) to get the numbers associated with the steps in function f.)
The at argument can also be a list of integer vectors. In this case, each vector refers to a step nested within another step of the function. For example, at = list(c(3,4)) will call the tracer just before the fourth step of the third step of the function
So I tried pasting the whole line with the lwd bit added in, hoping that it would overwrite it with the small addition:
trace (gbm.plot, quote(rug(quantile(data[, gbm.call$gbm.x[variable.no]], probs = seq(0, 1, 0.1), na.rm = TRUE),lwd=6)), at=c(22,4,7,3,3,3,2))
...as well as putting objects in & out of {brackets}, all to no avail. Does anyone know either the correct way of using trace for this, or can suggest a better way? Thanks
P.S. it needs to be done programmatically, so users can load the function, which will load the vanilla gbm functions from CRAN and then make tweaks as required.
EDIT: found a workaround. But the generalisable question remains: how can one insert elements into the if-statement part of a function? E.g. from
rug(quantile(data[, gbm.call$gbm.x[variable.no]], probs=seq(0, 1, 0.1), na.rm=TRUE))
to
rug(quantile(data[, gbm.call$gbm.x[variable.no]], probs=seq(0, 1, 0.1), na.rm=TRUE),lwd=6)
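For the generalisable question, one approach (a sketch on a toy function, not on gbm.plot itself) is to index into the function body with body() until you reach the call inside the if branch, and then add the argument with $<-, which works on call objects:
f <- function(x) {
  if (x > 0) {
    rug(quantile(x))
  }
}
# body(f)[[2]] is the if statement; [[3]] is its consequent { } block;
# [[2]] inside that block is the rug() call
cl <- body(f)[[2]][[3]][[2]]
cl$lwd <- 6  # adds lwd = 6 as a named argument to the call
body(f)[[2]][[3]][[2]] <- cl
f  # the body now shows rug(quantile(x), lwd = 6)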