Transpose 250,000 rows into columns in R

I always transpose with the t() command in R, but it is not running at all on a big data file (250,000 rows and 200 columns). Any ideas?
I need to calculate the correlation between the 2nd row (PTBP1) and all other rows (excluding 8 rows, including the header). To do this I transpose the rows to columns and then use the cor function, but I am stuck at the transpose step. Any help would be really appreciated!
I copied this example from another Stack Overflow post (it discusses almost the same problem, but there seems to be no answer yet):
ID A B C D E F G H I [200 columns]
Row0$-1 0.08 0.47 0.94 0.33 0.08 0.93 0.72 0.51 0.55
Row02$1 0.37 0.87 0.72 0.96 0.20 0.55 0.35 0.73 0.44
Row03$ 0.19 0.71 0.52 0.73 0.03 0.18 0.13 0.13 0.30
Row04$- 0.08 0.77 0.89 0.12 0.39 0.18 0.74 0.61 0.57
Row05$- 0.09 0.60 0.73 0.65 0.43 0.21 0.27 0.52 0.60
Row06-$ 0.60 0.54 0.70 0.56 0.49 0.94 0.23 0.80 0.63
Row07$- 0.02 0.33 0.05 0.90 0.48 0.47 0.51 0.36 0.26
Row08$_ 0.34 0.96 0.37 0.06 0.20 0.14 0.84 0.28 0.47
........
250,000 rows

Use a matrix instead. The only advantage of a data frame over a matrix is the capacity to hold different classes in its columns, and you clearly do not have that situation, since a transposed data frame could not support such a result.

I don't get why you want to transpose the data.frame. If you just use cor it doesn't matter whether your data is in rows or columns.
Actually, it is one of the major advantages of R that it doesn't matter whether your data fits the classical row-column pattern that SPSS and other programs require.
There are numerous ways to correlate the first row with all other rows (I don't get which rows you want to exclude). One is a loop (here the loop is implicit in the call to one of the *apply family of functions):
lapply(2:(dim(fn)[1]), function(x) cor(fn[1,], fn[x,]))
Note that I expect your data.frame to be called fn. To skip some rows, change the 2 to the row number you want to start from. Furthermore, I would probably use vapply here.
I hope this answer points you in the right direction, which is: do not use t() if you don't absolutely need it.
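To make the idea concrete, here is a minimal self-contained sketch of the vapply variant; the data frame fn and its dimensions are made up for illustration (the real one has 250,000 rows):

```r
# Correlate the first row with every other row, no t() required.
# fn is a toy stand-in: 4 rows (variables) x 10 columns (observations).
set.seed(1)
fn <- as.data.frame(matrix(runif(4 * 10), nrow = 4))

cors <- vapply(2:nrow(fn),
               function(i) cor(unlist(fn[1, ]), unlist(fn[i, ])),
               numeric(1))
length(cors)  # one correlation per remaining row: 3
```

unlist() is used because indexing a data frame row with fn[1, ] returns a one-row data frame, not a numeric vector; on a real matrix you could pass fn[1, ] directly.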

Related

Creating an igraph from weighted correlation matrix csv

First of all, I'd like to say that I'm completely new to R, and I'm just trying to accomplish this one task.
What I'm trying to do is create a network diagram from a weighted matrix. I made an example:
The CSV is a simple correlation matrix that looks like this:
,A,B,C,D,E,F,G
A,1,0.9,0.64,0.43,0.38,0.33,0.33
B,0.9,1,0.64,0.33,0.43,0.38,0.38
C,0.64,0.64,1,0.59,0.69,0.64,0.64
D,0.43,0.33,0.59,1,0.28,0.23,0.28
E,0.38,0.43,0.69,0.28,1,0.95,0.9
F,0.33,0.38,0.64,0.23,0.95,1,0.9
G,0.33,0.38,0.64,0.28,0.9,0.9,1
I tried to draw the desired result by myself and came up with this:
To be more precise, I drew the diagram first, then, using a ruler, took note of the distances, worked out an equation to get the weights, and made the CSV table.
The higher the value, the closer the two points are to each other.
However, whatever I do, the best result I get is this:
And this is how I'm trying to accomplish it, using this tutorial:
First of all, I import my matrix:
> matrix <- read.csv(file = 'test_dataset.csv')
But after printing the matrix out with head(), the last line of the matrix is already somehow cut off:
> head(matrix)
ï.. A B C D E F G
1 A 1.00 0.90 0.64 0.43 0.38 0.33 0.33
2 B 0.90 1.00 0.64 0.33 0.43 0.38 0.38
3 C 0.64 0.64 1.00 0.59 0.69 0.64 0.64
4 D 0.43 0.33 0.59 1.00 0.28 0.23 0.28
5 E 0.38 0.43 0.69 0.28 1.00 0.95 0.90
6 F 0.33 0.38 0.64 0.23 0.95 1.00 0.90
> dim(matrix)
[1] 7 8
I then proceed with removing the first column so the matrix is square again...
> matrix <- data.matrix(matrix)[,-1]
> head(matrix)
A B C D E F G
[1,] 1.00 0.90 0.64 0.43 0.38 0.33 0.33
[2,] 0.90 1.00 0.64 0.33 0.43 0.38 0.38
[3,] 0.64 0.64 1.00 0.59 0.69 0.64 0.64
[4,] 0.43 0.33 0.59 1.00 0.28 0.23 0.28
[5,] 0.38 0.43 0.69 0.28 1.00 0.95 0.90
[6,] 0.33 0.38 0.64 0.23 0.95 1.00 0.90
> dim(matrix)
[1] 7 7
Then I create the graph and try to plot it:
> network <- graph_from_adjacency_matrix(matrix, weighted=T, mode="undirected", diag=F)
> plot(network)
And the result above appears...
So, after spending the last few hours googling and trying way, way more things, this is the closest I've been able to get.
I'm asking for your help, thank you very much!
This is all fine.
head() just prints the first 6 rows of a matrix or data frame; if you want to see all of it, use print() or just the name of the matrix variable.
graph_from_adjacency_matrix produces a link between two nodes if the value is non-zero. That's why you are getting every node linked to every other node.
To get what that tutorial is doing you need to add a line like
matrix[matrix<0.5] <- 0
to remove the edges for correlations below a cut off before you create the graph.
It's still not going to produce a chart like your hand-drawn one (where closeness roughly tracks the correlation); it will just clump nodes together if they are above the 0.5 correlation cut-off.
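Here is the cut-off step on a tiny stand-in matrix (the values are illustrative, not from the question's CSV); you would then pass the thresholded matrix to graph_from_adjacency_matrix() exactly as in the question:

```r
# A small symmetric correlation matrix standing in for the CSV.
m <- matrix(c(1.0, 0.9, 0.3,
              0.9, 1.0, 0.2,
              0.3, 0.2, 1.0), nrow = 3,
            dimnames = list(c("A", "B", "C"), c("A", "B", "C")))

m[m < 0.5] <- 0               # zero out weak correlations before building the graph
sum(m[upper.tri(m)] > 0)      # edges that would remain: 1 (only A-B)
```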

R Function to get Confidence Interval of Difference Between Means

I am trying to find a function that lets me easily get the confidence interval of the difference between two means.
I am pretty sure t.test has this functionality, but I haven't been able to make it work. Below is a screenshot of what I have tried so far:
Image
This is the dataset I am using
Indoor Outdoor
1 0.07 0.29
2 0.08 0.68
3 0.09 0.47
4 0.12 0.54
5 0.12 0.97
6 0.12 0.35
7 0.13 0.49
8 0.14 0.84
9 0.15 0.86
10 0.15 0.28
11 0.17 0.32
12 0.17 0.32
13 0.18 1.55
14 0.18 0.66
15 0.18 0.29
16 0.18 0.21
17 0.19 1.02
18 0.20 1.59
19 0.22 0.90
20 0.22 0.52
21 0.23 0.12
22 0.23 0.54
23 0.25 0.88
24 0.26 0.49
25 0.28 1.24
26 0.28 0.48
27 0.29 0.27
28 0.34 0.37
29 0.39 1.26
30 0.40 0.70
31 0.45 0.76
32 0.54 0.99
33 0.62 0.36
and I have been trying to use the t.test function that was installed via
install.packages("ggpubr")
I am pretty new to R, so sorry if there is a simple answer to this question. I have searched around quite a bit and haven't been able to find what I am looking for.
Note: The output I am looking for is Between -1.224 and 0.376
Edit:
The CI of difference between means I am looking for is if a random 34th datapoint was added to the chart by picking a random value in the Indoor column and a random value in the Outdoor column and duplicating it. Running the t.test will output the correct CI for the difference of means for the given sample size of 33.
How can I go about doing this pretending the sample size is 34?
There's probably something more convenient in the standard library, but it's pretty easy to calculate. Given your df variable, we can just do:
# calculate mean of difference
d_mu <- mean(df$Indoor) - mean(df$Outdoor)
# calculate SD of difference
d_sd <- sqrt(var(df$Indoor) + var(df$Outdoor))
# calculate 95% CI of this
d_mu + d_sd * qt(c(0.025, 0.975), nrow(df)*2)
giving me: -1.2246 0.3767
Mostly for @AkselA: I often find it helpful to check my work by sampling from simpler distributions; in this case I'd do something like:
a <- mean(df$Indoor) + sd(df$Indoor) * rt(1000000, nrow(df)-1)
b <- mean(df$Outdoor) + sd(df$Outdoor) * rt(1000000, nrow(df)-1)
quantile(a - b, c(0.025, 0.975))
which gives me answers much closer to the CI I gave in the comment.
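The calculation above can be run end to end with simulated stand-in data (the column names and sample size mirror the question; the values are made up):

```r
# Simulated stand-in for the Indoor/Outdoor data frame (33 rows, as in the question).
set.seed(42)
df <- data.frame(Indoor  = rnorm(33, mean = 0.23, sd = 0.13),
                 Outdoor = rnorm(33, mean = 0.64, sd = 0.36))

d_mu <- mean(df$Indoor) - mean(df$Outdoor)         # mean of the difference
d_sd <- sqrt(var(df$Indoor) + var(df$Outdoor))     # SD of the difference
ci   <- d_mu + d_sd * qt(c(0.025, 0.975), nrow(df) * 2)
ci  # a symmetric interval around the mean difference
```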
Even though I always find the approach of manually calculating the results, as shown by @Sam Mason, the most insightful, some people want a shortcut. And sometimes it's also OK to be lazy :)
So among the different ways to calculate CIs, this is IMHO the most comfortable:
DescTools::MeanDiffCI(Indoor, Outdoor)
Here's a reprex:
IV <- diamonds$price
DV <- rnorm(length(IV), mean = mean(IV), sd = sd(IV))
DescTools::MeanDiffCI(IV, DV)
gives
meandiff lwr.ci upr.ci
-18.94825 -66.51845 28.62195
This is calculated with 999 bootstrapped samples by default. If you want 1000 or more, you can just pass that in the argument R:
DescTools::MeanDiffCI(IV, DV, R = 1000)

Drop down or input implementation in a R Script

I am still fairly new to writing code, especially in R, and I am trying to find simple ways to speed up running my code.
If this is my data
Month X Y Z
Jan 0.73 0.15 0.57
Feb 0.69 0.35 0.97
April 0.62 0.72 0.25
Jan 1.00 0.80 0.60
Oct 0.49 0.03 0.09
Feb 0.46 0.09 0.99
Aug 0.29 0.35 0.66
Mar 0.64 0.46 0.66
Dec 0.29 0.67 0.38
Dec 0.12 0.82 0.35
Jan 1.00 0.84 0.23
Mar 0.64 0.83 0.30
Is it possible to write code for my script that creates a message box, input, or drop-down list which can filter the data based on a column? For example, I would like to create a new data frame that only has the information for the month "Jan", filtered through a drop-down list as the code runs.
Thank you
Drop-down menus are unfortunately not optimal in R. Also, I recommend the package data.table for sorting and filtering.
library(data.table)
Data <- data.table(Data)
To filter out only "Jan"
New.Data <- Data[Month=="Jan"]
To sort by, let's say, X:
New.Data[order(as.numeric(X))]
If you provide more detail about what kinds of sorting and filtering you want to do, I might be able to provide a more thorough answer.
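As a dependency-free sketch of the same filter, readline() can serve as a minimal stand-in for a drop-down in an interactive session (the table below is a toy version of the question's data; all values are illustrative):

```r
# Toy version of the question's data.
Data <- data.frame(Month = c("Jan", "Feb", "Jan", "Oct"),
                   X     = c(0.73, 0.69, 1.00, 0.49))

# In an interactive session you could prompt the user instead:
# pick <- readline("Month to keep: ")
pick <- "Jan"                            # hard-coded for a non-interactive run

New.Data <- Data[Data$Month == pick, ]   # base-R equivalent of Data[Month == "Jan"]
nrow(New.Data)  # 2
```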

Unable to tweak the findAssocs() in tm package in R

I was trying to find associations between the top 10 most frequent words and the rest of the frequent words in the input text.
When I look at the individual output of findAssocs():
findAssocs(dtm, "good", corlimit=0.4)
It gives the output clearly by printing the word 'good' with which associations have been sought.
$good
better got hook next content fit person
0.44 0.44 0.44 0.44 0.43 0.43 0.43
But when I try to automate this process for a character vector holding the top 10 words:
t10 <- c("busi", "entertain", "topic", "interact", "track", "content", "paper", "media", "game", "good")
the output is a list of correlations for each of those elements BUT WITHOUT THE WORD WITH WHICH THE ASSOCIATIONS HAVE BEEN SOUGHT. The sample output is below (please notice that the word at t10[i] is not printed, unlike the individual output above, where 'good' was clearly printed):
for(i in 1:10) {
t10_words[i] <- as.list(findAssocs(dtm, t10[i], corlimit=0.4))
}
> t10_words
[[1]]
littl descript disrupt enter model
0.50 0.48 0.48 0.48 0.48
[[2]]
immers anyth effect full holodeck iot problem say startrek such suspect wow
0.68 0.48 0.48 0.48 0.48 0.48 0.48 0.48 0.48 0.48 0.48 0.48
[[3]]
area captur give overal like alon avid begin
0.51 0.47 0.47 0.47 0.44 0.43 0.43 0.43
circuit cloud collaboration communic communiti concis confus defin
0.43 0.43 0.43 0.43 0.43 0.43 0.43 0.43
discord doesnt drop enablesupport esport event everi everyon
0.43 0.43 0.43 0.43 0.43 0.43 0.43 0.43
How do I print the output along with the actual association word?
Can somebody please help me with this?
Thanks.
After running your for loop, add the following piece of code:
names(t10_words) <- t10
This will name the lists with the words specified in t10.
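The naming pattern can be demonstrated on a stand-in list (findAssocs needs a corpus and a document-term matrix, so the correlations below are made up to mirror the question's output shape). Note also that, depending on your tm version, findAssocs may accept the whole vector of terms at once and return a named list directly, avoiding the loop:

```r
# Stand-in for the loop's output: an unnamed list of correlation vectors.
t10 <- c("busi", "entertain")
t10_words <- list(c(littl = 0.50, descript = 0.48),
                  c(immers = 0.68, anyth = 0.48))

names(t10_words) <- t10    # attach the query words as list names
names(t10_words)           # "busi" "entertain"
```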

Remove selected cases based on a criteria

I would like to remove cases from a data frame based on whether they contain a particular pattern. For example, in the data frame below I would like to remove all the rows whose names contain (Intercept), iyeareducc, ibphdtdep, or gender_R22 (or, alternatively, select the rows that contain _carrier1 or adri).
OR CI P
apoee4_carrier.(Intercept) 1.96 0.97-3.94 0.06
apoee4_carrier.apoee4_carrier1 1.03 0.77-1.37 0.84
apoee4_carrier.iyeareducc 0.86 0.82-0.9 0.00
apoee4_carrier.ibphdtdep 1.01 0.96-1.05 0.81
apoee4_carrier.gender_R22 0.87 0.67-1.12 0.28
BDNF_carrier.(Intercept) 2.05 1.01-4.14 0.04
BDNF_carrier.BDNF_carrier1 0.87 0.66-1.14 0.33
BDNF_carrier.iyeareducc 0.86 0.82-0.9 0.00
BDNF_carrier.ibphdtdep 1.00 0.96-1.05 0.82
BDNF_carrier.gender_R22 0.87 0.67-1.12 0.28
adri.(Intercept) 1.60 0.78-3.31 0.20
adri.adri 1.03 1-1.06 0.04
adri.iyeareducc 0.89 0.84-0.94 0.00
adri.ibphdtdep 1.00 0.95-1.04 0.87
adri.gender_R22 0.87 0.67-1.12 0.27
While I could use a sequence to subset out the rows I require, like so:
dat[(seq(2,nrow(dat),5)),]
OR CI P
apoee4_carrier.apoee4_carrier1 1.03 0.77-1.37 0.84
BDNF_carrier.BDNF_carrier1 0.87 0.66-1.14 0.33
adri.adri 1.03 1-1.06 0.04
this will only work if the row pattern is the same throughout the entire data frame, which may not necessarily be the case, as this data frame is created from a list of data frames that have been rbind-ed together.
Thanks.
You can use grep to select the rows you want/don't want:
dat[-grep("Intercept|iyeareducc|ibphdtdep|gender", rownames(dat)),]
grep returns the numbers of the rows whose names contain at least one of your search strings (the | between the strings means "OR"). Putting a minus sign in front of grep tells R to return only the rows of dat that are not matched by grep.
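A minimal reproduction of the negative grep (three rows, with names copied from the question, stand in for the full table):

```r
# Toy data frame with the question's row names; only the OR column is kept.
dat <- data.frame(OR = c(1.96, 1.03, 0.86),
                  row.names = c("apoee4_carrier.(Intercept)",
                                "apoee4_carrier.apoee4_carrier1",
                                "apoee4_carrier.iyeareducc"))

# drop = FALSE keeps the result a data frame even with a single column.
keep <- dat[-grep("Intercept|iyeareducc|ibphdtdep|gender", rownames(dat)), ,
            drop = FALSE]
rownames(keep)  # only "apoee4_carrier.apoee4_carrier1" survives
```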
