I'm trying to project some variables based on specific scalars, but I'm having some trouble putting it together. The general idea is to make a linear projection (extrapolation) based on the average change rate of each variable, in order to fill in the missing values at the end of the data. The main data looks like this:
| t   | com | var1 | var2 | ...
|-----|-----|------|------|
| 1   | 1   | 2.2  | 5.8  |
| 1   | 2   | 2.4  | 6.2  |
| ... | ... | ...  | ...  |
| 1   | 38  | 1.8  | 6.4  |
| 2   | 1   | 2.0  | 7.2  |
| ... | ... | ...  | ...  |
| 73  | 1   | 1.2  | 9.2  |
| ... | ... | ...  | ...  |
| 73  | 38  | 1.4  | 10.2 |
| 74  | 1   | NA   | NA   |
| ... | ... | ...  | ...  |
| 104 | 38  | NA   | NA   |
Basically 38 observations over 104 periods for a bunch of variables, but some variables stop having values after t = 73. (The "..." rows are there to convey the actual data size, not to represent missing values.)
I also have the scalars I need by com stored as Tx_Var1:
| com | tx_var1 |
|-----|---------|
| 1   | 2.3     |
| 2   | 1.7     |
| ... | ...     |
| 38  | 4.5     |
for every variable I need. These scalars are simply the average change rate of each variable by com, so the actual solution may not need to use them; I just built them because I'm trying to solve this step by step.
What I'm looking for is a way to complete the main data for these variables when t > 73 and the variable is NA, using var1 = lag(var1) * (1 + tx_var1), and this would have to be done by com.
I believe I need to mutate() after group_by(com), but I don't know how to pull the scalar value from Tx_Var1, Tx_Var2, etc. into the code, or how to combine that with the t and NA restrictions. I'm also not sure how to handle the NA restriction together with the lag(var) part, because each row needs the previously completed value. I'm currently looking into tidyr's complete() and fill() functions to see if there's a less complicated way to make this work.
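To make the goal concrete, here is a rough base-R sketch of the idea with made-up toy numbers (column names guessed from the tables above). The key point is that lag() inside a single mutate() won't chain the projections, because each projected value depends on the one just filled in, so a simple loop over the missing rows is one way:

```r
# Toy version of the setup: made-up numbers, same structure as above
tx  <- data.frame(com = 1:2, tx_var1 = c(0.1, 0.2))
dat <- expand.grid(t = 1:5, com = 1:2)
dat$var1 <- ifelse(dat$t >= 4, NA, dat$t + dat$com)  # NAs at the end

# attach each com's growth rate to every row, then order by com and t
dat <- merge(dat, tx, by = "com")
dat <- dat[order(dat$com, dat$t), ]

# each projected value depends on the one just filled in,
# so fill the NAs in order, row by row
for (i in which(is.na(dat$var1))) {
  dat$var1[i] <- dat$var1[i - 1] * (1 + dat$tx_var1[i])
}
```

This assumes the data is complete up to the first NA within each com, so the loop never reads across a group boundary.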
Any help on this problem would be greatly appreciated.
Thanks,
Hi everyone! I hope you are having a great day.
Aim and context
My two dataframes are built from different methods, but measure the same parameters on the same signals.
I'd like to match every signal in the first dataframe with the same signal in the second dataframe, to compare the parameter values and evaluate the methods against each other.
I would gratefully appreciate any help, as I reached my beginner’s limits in R coding but also in dataframe management.
Basically, I would like to find matches in two separate dataframes and treat each match as referring to the same entity (for instance by creating an ID variable), in order to perform statistical analysis for paired data.
I could have made the matches by hand on a spreadsheet, but because there are hundreds of entries and more comparisons to come, I'd like to automate the matching and the creation of the dataframe.
To give you an idea, my dataframes look like this:
DF1
| Recording | Selection | Start (ms) | Freq.max (kHz) |
|-----------|-----------|------------|----------------|
| 001       | 1         | 11.3       | 42.4           |
| 001       | 2         | 122.9      | 46.2           |
| 001       | 3         | 232.3      | 47.5           |
| 002       | 1         | 22.9       | 30.9           |
| 002       | 2         | 512.4      | 31.3           |
My second dataframe would look something like this:
DF2
| Recording | Selection | Start (ms) | Freq.max (kHz) |
|-----------|-----------|------------|----------------|
| 001       | 1         | 10.9       | 41.8           |
| 001       | 2         | 122.1      | 44.5           |
| 001       | 3         | 231.3      | 44.4           |
| 002       | 1         | 513.0      | 30.2           |
My ideas
I thought about identifying each signal, but:
An ID built from "Recording + Selection" (001_1, 001_2, ...) would not work, because some signals are not detected by both methods.
So I'd want to use the start position to identify the signals, but rounding to the closest or upper/lower value would not match all the signals.
Hmisc::find.matches() function
I tried the find.matches() function from the Hmisc package, which gives the matches between your columns given the tolerance threshold you input.
find <- find.matches(DF_method1$start_one, DF_method2$start_two, tol=(2))
(I arbitrarily chose a tolerance of 2 ms for two detections to be considered the same signal.)
The output looks like this:
Matches:
Match #1 Match #2 Match #3
[1,] 1 7 0
[2,] 2 42 0
[3,] 3 0 0
[4,] 4 0 0
[5,] 0 0 0
[6,] 5 0 0
[7,] 22 6 0
I feel like it is coming together, but I am stuck on these two questions:
How can I find the closest match within each recording, rather than comparing all signals across all recordings? (In the example above, all first matches are correctly identified except row 7, which is matched with row 22 from a different recording.) How could I run the function within each recording?
How can I create a dataframe from the output? It would contain only the signals that had a match, together with their related parameter values.
I feel like this function gets close to my aim but if you have any other suggestion, I am all ears.
Thanks a lot
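For what it's worth, a base-R sketch of both steps might look like the following. The column names and toy data are assumed from the tables above, and match_one() is a hypothetical helper, not part of Hmisc:

```r
# toy data mirroring the tables above (column names simplified)
DF1 <- data.frame(Recording = c("001", "001", "001", "002", "002"),
                  Start     = c(11.3, 122.9, 232.3, 22.9, 512.4),
                  Freq.max  = c(42.4, 46.2, 47.5, 30.9, 31.3))
DF2 <- data.frame(Recording = c("001", "001", "001", "002"),
                  Start     = c(10.9, 122.1, 231.3, 513.0),
                  Freq.max  = c(41.8, 44.5, 44.4, 30.2))

# hypothetical helper: for each row of a, take the nearest Start in b
# within the tolerance; rows with no match are dropped
match_one <- function(a, b, tol = 2) {
  idx <- vapply(a$Start, function(s) {
    d <- abs(b$Start - s)
    if (min(d) <= tol) which.min(d) else NA_integer_
  }, integer(1))
  keep <- !is.na(idx)
  data.frame(a[keep, ],
             Start2    = b$Start[idx[keep]],
             Freq.max2 = b$Freq.max[idx[keep]])
}

# run the matching within each recording, then stack the results
recs <- intersect(unique(DF1$Recording), unique(DF2$Recording))
paired <- do.call(rbind, lapply(recs, function(r) {
  match_one(DF1[DF1$Recording == r, ], DF2[DF2$Recording == r, ])
}))
```

With the toy data, the unmatched signal (Start 22.9 in recording 002) is dropped, and the remaining rows pair each DF1 signal with its nearest DF2 start within 2 ms, ready for paired comparisons.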
Is this a bug in data.table::fread (version 1.9.2) or misplaced user expectation/error?
Consider this trivial example where I have a table of values, TAB-separated, with possibly missing values. If values are missing in the first column, fread gets upset, but if missing values are elsewhere I get the data.table I expect:
# Data with a missing value in the first column (third row)
# and in the last column (second row):
12	876	19
23	39	
	15	20
fread("12\t876\t19\n23\t39\t\n\t15\t20")
# Error in fread("12\t876\t19\n23\t39\t\n\t15\t20") :
#   Not positioned correctly after testing format of header row. ch=' '
# Data with missing values in the last column, rows two and three:
12	876	19
23	39	
15	20	
fread("12\t876\t19\n23\t39\t\n15\t20\t")
#    V1  V2 V3
# 1: 12 876 19
# 2: 23  39 NA
# 3: 15  20 NA
# Returns as expected.
Is this a bug, or is it not possible to have missing values in the first column (or is my data somehow malformed)?
I believe this is the same bug that I reported here.
The most recent version that I know will work with this type of input is Rev. 1180. You could checkout and build that version by adding #1180 to the end of the svn checkout command.
svn checkout svn://svn.r-forge.r-project.org/svnroot/datatable/#1180
If you're not familiar with checking out and building packages, see here
But a lot of great features, bug fixes, and enhancements have been implemented since Rev. 1180. (The development version at the time of this writing is Rev. 1272.) So a better solution is to replace the R/fread.R and src/fread.c files with the versions from Rev. 1180 or older, and then re-build the package.
You can find those files online without checking them out here (sorry, I can't figure out how to post links that include '*', so you have to copy/paste):
fread.R:
http://r-forge.r-project.org/scm/viewvc.php/*checkout*/pkg/R/fread.R?revision=988&root=datatable
fread.c:
http://r-forge.r-project.org/scm/viewvc.php/*checkout*/pkg/src/fread.c?revision=1159&root=datatable
Once you've rebuilt the package, you'll be able to read your tsv file.
> fread("12\t876\t19\n23\t39\t\n\t15\t20")
V1 V2 V3
1: 12 876 19
2: 23 39 NA
3: NA 15 20
The downside to doing this is that the old version of fread() does not pass a newer test -- you won't be able to read fields that have quotes in the middle.
> fread('A,B,C\n1.2,Foo"Bar,"a"b\"c"d"\nfo"o,bar,"b,az""\n')
Error in fread("A,B,C\n1.2,Foo\"Bar,\"a\"b\"c\"d\"\nfo\"o,bar,\"b,az\"\"\n") :
Not positioned correctly after testing format of header row. ch=','
With newer versions of fread, you would get this
> fread('A,B,C\n1.2,Foo"Bar,"a"b\"c"d"\nfo"o,bar,"b,az""\n')
A B C
1: 1.2 Foo"Bar a"b"c"d
2: fo"o bar b,az"
So, for now, which version "works" depends on whether you're more likely to have missing values in the first column, or quotes in fields. For me, it's the former, so I'm still using the old code.
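In the meantime, if you just need the values into R, base R's read.delim() copes with the empty first field. This is a stopgap, not a data.table solution, so you lose fread's speed (you could convert the result with as.data.table() afterwards):

```r
# the same TAB-separated input that trips up fread 1.9.2
txt <- "12\t876\t19\n23\t39\t\n\t15\t20"

# read.delim() treats the empty fields as NA, including in the first column
dat <- read.delim(text = txt, header = FALSE)
dat
#   V1  V2 V3
# 1 12 876 19
# 2 23  39 NA
# 3 NA  15 20
```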
I'm having trouble running a Friedman test over my data.
I'm trying to run a Friedman test using this command:
friedman.test(mean ~ isi | expId, data=monoSum)
On the following database (https://www.dropbox.com/s/2ox0y1b4gwld0ai/monoSum.csv):
> monoSum
expId isi N mean
1 m80B1 1 10 100.000000
2 m80B1 2 10 73.999819
3 m80B1 3 10 45.219362
4 m80B1 4 10 116.566174
. . . . .
18 m80L2 2 10 82.945491
19 m80L2 3 10 57.675480
20 m80L2 4 10 207.169277
. . . . . .
25 m80M2 1 10 100.000000
26 m80M2 2 10 49.752687
27 m80M2 3 10 19.042592
28 m80M2 4 10 150.411035
It gives me back the error:
Error in friedman.test.default(c(100, 73.9998193095267, 45.2193621626293, :
not an unreplicated complete block design
I figure it gives the error because the value of mean is always 100 when monoSum$isi == 1. Is this correct?
However, monoSum$isi == 1 is always 100 because it is the control group that all the other monoSum$isi groups are normalized against. I cannot assume a normal distribution, so I cannot run a repeated-measures ANOVA…
Is there a way to run a friedman test on this data or am I missing a very essential point here?
Many thanks in advance!
I don't get an error if I run your dataset:
Friedman rank sum test
data: mean and isi and expId
Friedman chi-squared = 17.9143, df = 3, p-value = 0.0004581
However, you have to make sure that expId and isi are coded as factors. Run these commands:
monoSum$expId <- factor(monoSum$expId)
monoSum$isi <- factor(monoSum$isi)
Then run the test again. This has worked for me with a similar problem.
I know this is pretty old but for future generations (see also: me when I forget and google this again):
You can determine which values are missing in your dataframe by running table(groups, blocks), or in the case of this question table(monoSum$isi, monoSum$expId). This will return a table of 0s and 1s. The missing records are in the cells with 0s.
I ran into this problem after trying to remove the blocks that had incomplete results; taking a subset of the data did not remove the blocks for some reason.
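A tiny made-up illustration of the check:

```r
# one block is missing a group, so the design is not a complete block design
dat <- data.frame(block = c(1, 1, 2), group = c("A", "B", "A"))
tb <- table(dat$group, dat$block)
tb
#     1 2
#   A 1 1
#   B 1 0   <- block 2 has no observation for group B
```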
Just thought I would mention that I found this post because I was getting a similar error message. The above suggestions did not solve it. Strangely, I had to sort my dataframe so that, block by block, the groups appeared in the same order (i.e. I could not have the following:
Block 1 A
Block 1 B
Block 2 B
Block 2 A
It has to appear as A, B, A, B).
I ran into the same cryptic error message in R, though in my case it was resolved when I applied as.matrix() to what was originally a dataframe imported from a CSV file with read.csv().
I also had a missing data point in my original data set, and I found that when my data was transformed into a matrix for the friedman.test() call, the entire row containing the missing data point was omitted automatically.
Using the function as.matrix() to transform my dataframe is the magic that got the function to run for me.
I had this exact error too with my dataset.
It turns out that the function friedman.test() accepts data frames (e.g. those created by data.frame()) but not tibbles (those created by dplyr and other tidyverse tools). The solution for me was to convert my dataset to a data frame first.
library(dplyr)  # for %>% and select()
D_fri <- D_all %>% dplyr::select(FrustrationEpisode, Condition, Participant)
D_fri <- as.data.frame(D_fri)
str(D_fri)  # confirm the object is now a 'data.frame'
friedman.test(FrustrationEpisode ~ Condition | Participant, D_fri)
I ran into this problem too. Fixed mine by removing the NAs.
# My data (called layers) looks like:
| resp.no | av.l.all | av.baseem | av.base |
| 1 | 1.5 | 1.3 | 2.3 |
| 2 | 1.4 | 3.2 | 1.4 |
| 3 | 2.5 | 2.8 | 2.9 |
...
| 1088 | 3.6 | 1.1 | 3.3 |
# Remove NAs
layers1 <- na.omit(layers)
# Re-organise the data so the scores are stacked, with a column holding the
# original column name as a factor
# (gather() is from tidyr, convert_as_factor() from rstatix)
layers2 <- layers1 %>%
  gather(key = "layertype", value = "score", av.l.all, av.baseem, av.base) %>%
  convert_as_factor(resp.no, layertype)
# Data now looks like this
| resp.no | layertype | score |
| 1 | av.l.all | 1.5 |
| 1 | av.baseem | 1.3 |
| 1 | av.base | 2.3 |
| 2 | av.l.all | 1.4 |
...
| 1088 | av.base | 3.3 |
# Then do Friedman test
friedman.test(score ~ layertype | resp.no, data = layers2)
Just want to share what my problem was. My ID factor did not have the correct levels after doing pivot_longer(). Because of this, the same error was given. I reset the levels and it worked: df$ID <- as.factor(as.character(df$ID))
Reviving an old thread with new information. I ran into a similar problem after removing NAs. My group and block were factors before the NA removal. However, after removing NAs, the factors retained the levels before the removal even though some levels were no longer in the data!
Running the friedman.test() with the as.matrix() trick (e.g., friedman.test(a ~ b | c, as.matrix(df))) was fine but running frdAllPairsExactTest() or friedman_effsize() would throw the not an unreplicated complete block design error. I ended up re-factoring the group and block (i.e., dropping the levels that were no longer in the data, df$block <- factor(df$block)) to make things work. After the re-factor, I did not need the as.matrix() trick, either.
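A small made-up example of the stale-levels trap and the re-factor fix:

```r
# complete design: 4 blocks, each with groups A, B, C
set.seed(1)
df <- data.frame(y     = rnorm(12),
                 group = factor(rep(c("A", "B", "C"), times = 4)),
                 block = factor(rep(1:4, each = 3)))

sub <- df[df$block != 4, ]   # drop one block entirely
nlevels(sub$block)           # still 4: the empty level "4" is retained

sub$block <- droplevels(sub$block)   # or factor(sub$block)
nlevels(sub$block)                   # now 3
friedman.test(y ~ group | block, data = sub)   # complete design, runs fine
```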
I'm running into a problem when trying to use the readHTMLTable function in the R package XML. When running
library(XML)
library(RCurl)  # getURL() comes from RCurl, not XML
baseurl <- "http://www.pro-football-reference.com/teams/"
team <- "nwe"
year <- 2011
theurl <- paste(baseurl,team,"/",year,".htm",sep="")
readurl <- getURL(theurl)
readtable <- readHTMLTable(readurl)
I get the error message:
Error in names(ans) = header :
'names' attribute [27] must be the same length as the vector [21]
I'm running 64 bit R 2.15.1 through R Studio 0.96.330. It seems there are several other questions that have been asked about the readHTMLTable() function, but none addressed this specific question. Does anyone know what's going on?
When readHTMLTable() complains about the 'names' attribute, it's a good bet that it's having trouble matching the data with what it's parsed for header values. The simplest way around this is to simply turn off header parsing entirely:
table.list <- readHTMLTable(theurl, header=F)
Note that I changed the name of the return value from "readtable" to "table.list". (I also skipped the getURL() call since 1. it didn't work for me and 2. readHTMLTable() knows how to handle URLs). The reason for the change is that, without further direction, readHTMLTable() will hunt down and parse every HTML table it can find on the given page, returning a list containing a data.frame for each.
The page you've pointed it at is fairly rich, with 8 separate tables:
> length(table.list)
[1] 8
If you were only interested in a single table on the page, you can use the which argument to specify it and receive its contents as a data.frame directly.
This could also cure your original problem if it had choked on a table you're not interested in. Many pages still use tables for navigation, search boxes, etc., so it's worth taking a look at the page first.
But this is unlikely to be the case in your example, since it actually choked on all but one of them. In the unlikely event that the stars aligned and you were only interested in the successfully-parsed third table on the page (passing statistics), you could grab it like this, keeping header parsing on:
> passing.df = readHTMLTable(theurl, which=3)
> print(passing.df)
No. Age Pos G GS QBrec Cmp Att Cmp% Yds TD TD% Int Int% Lng Y/A AY/A Y/C Y/G Rate Sk Yds NY/A ANY/A Sk% 4QC GWD
1 12 Tom Brady* 34 QB 16 16 13-3-0 401 611 65.6 5235 39 6.4 12 2.0 99 8.6 9.0 13.1 327.2 105.6 32 173 7.9 8.2 5.0 2 3
2 8 Brian Hoyer 26 3 0 1 1 100.0 22 0 0.0 0 0.0 22 22.0 22.0 22.0 7.3 118.7 0 0 22.0 22.0 0.0
I have a dataset that I need to sort by participant (RECORDING_SESSION_LABEL) and by trial_number. However, when I sort the data using R none of the sort functions I have tried put the variables in the correct numeric order that I want. The participant variable comes out ok but the trial ID variable comes out in the wrong order for what I need.
using:
fix_rep[order(as.numeric(fix_rep$RECORDING_SESSION_LABEL), as.numeric(fix_rep$trial_number)), ]
Participant ID comes out as:
118 118 118 etc. 211 211 211 etc. 306 306 306 etc.(which is fine)
trial_number comes out as:
1 1 10 10 11 11 12 12 13 13 14 14 15 15 16 16 17 17 18 18 19 19 2 2 20 20 .... (which is not what I want - it seems to be sorting lexically rather than numerically)
What I would like is trial_number to be order like this within each participant number:
1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 10 11 11 ....
I have checked that these variables are not factors and are numeric, and I have also tried without the as.numeric(), but with no joy. Looking around, I saw suggestions that sort() and mixedsort() might do the trick in place of order(), but both came up with errors. I am slowly pulling my hair out over what I think should be a simple thing. Can anybody help shed some light on how to do this?
Even though you claim it is not a factor, it does behave exactly as if it were a factor. Testing if something is a factor can be tricky since a factor is just an integer vector with a levels attribute and a class label. If it is a factor, your code needs to have a call to as.character() nested inside the as.numeric():
fix_rep[order(as.numeric(fix_rep$RECORDING_SESSION_LABEL), as.numeric(as.character(fix_rep$trial_number))), ]
To be really sure if it's a factor, I recommend the str() function:
str(trial_number)
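Here's a quick illustration of the difference, with a made-up vector:

```r
# a factor that merely looks numeric
trial_number <- factor(c(1, 10, 2, 20, 3))

as.numeric(trial_number)                # 1 4 2 5 3   (internal level codes)
as.numeric(as.character(trial_number))  # 1 10 2 20 3 (the actual values)

# ordering by the level codes scrambles the result;
# going through as.character() first gives true numeric order
order(as.numeric(as.character(trial_number)))   # 1 3 5 2 4
```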
I think it may be worthwhile for you to design your own function in this case. It wouldn't be too hard: basically a bubble-sort algorithm with a few alterations. You could convert each number to a string and first sort the strings into bins by their number of digits (easily done by checking string length), then, in a similar fashion, sort the numbers within each bin by converting the least significant digits back to numeric and comparing them. If you're interested, I could come up with some code for this; however, it looks like the two answers above me have beaten me to the punch with built-in functions. I've never used those functions, so I'm not sure they'll work as you intend, but there's no use in reinventing the wheel.