Convert non-numeric rows and columns to zero - r

I have this data from an r package, where X is the dataset with all the data
library(ISLR)
data("Hitters")
X=Hitters
head(X)
here is one part of the data:
AtBat Hits HmRun Runs RBI Walks Years CAtBat CHits CHmRun CRuns CRBI CWalks League Division PutOuts Assists Errors Salary NewLeague
-Andy Allanson 293 66 1 30 29 14 1 293 66 1 30 29 14 A E 446 33 20 NA A
-Alan Ashby 315 81 7 24 38 39 14 3449 835 69 321 414 375 N W 632 43 10 475.0 N
-Alvin Davis 479 130 18 66 72 76 3 1624 457 63 224 266 263 A W 880 82 14 480.0 A
-Andre Dawson 496 141 20 65 78 37 11 5628 1575 225 828 838 354 N E 200 11 3 500.0 N
-Andres Galarraga 321 87 10 39 42 30 2 396 101 12 48 46 33 N E 805 40 4 91.5 N
-Alfredo Griffin 594 169 4 74 51 35 11 4408 1133 19 501 336 194 A W 282 421 25 750.0 A
I want to convert all the columns and rows with non-numeric values to zero. Is there any simple way to do this?
I found an example of how to remove the rows for a single column, but with that approach I would have to repeat it for every column manually.
Is there any function in R that does this for all columns and rows?

To remove non-numeric columns, perhaps something like this?
df %>%
  select(which(sapply(., is.numeric)))
# AtBat Hits HmRun Runs RBI Walks Years CAtBat CHits CHmRun
#-Andy Allanson 293 66 1 30 29 14 1 293 66 1
#-Alan Ashby 315 81 7 24 38 39 14 3449 835 69
#-Alvin Davis 479 130 18 66 72 76 3 1624 457 63
#-Andre Dawson 496 141 20 65 78 37 11 5628 1575 225
#-Andres Galarraga 321 87 10 39 42 30 2 396 101 12
#-Alfredo Griffin 594 169 4 74 51 35 11 4408 1133 19
# CRuns CRBI CWalks PutOuts Assists Errors Salary
#-Andy Allanson 30 29 14 446 33 20 NA
#-Alan Ashby 321 414 375 632 43 10 475.0
#-Alvin Davis 224 266 263 880 82 14 480.0
#-Andre Dawson 828 838 354 200 11 3 500.0
#-Andres Galarraga 48 46 33 805 40 4 91.5
#-Alfredo Griffin 501 336 194 282 421 25 750.0
or
df %>%
  select(-which(sapply(., function(x) is.character(x) | is.factor(x))))
Or much neater (thanks to @AntoniosK):
df %>% select_if(is.numeric)
Update
To additionally replace NAs with 0, you can do
df %>% select_if(is.numeric) %>% replace(is.na(.), 0)
# AtBat Hits HmRun Runs RBI Walks Years CAtBat CHits CHmRun
#-Andy Allanson 293 66 1 30 29 14 1 293 66 1
#-Alan Ashby 315 81 7 24 38 39 14 3449 835 69
#-Alvin Davis 479 130 18 66 72 76 3 1624 457 63
#-Andre Dawson 496 141 20 65 78 37 11 5628 1575 225
#-Andres Galarraga 321 87 10 39 42 30 2 396 101 12
#-Alfredo Griffin 594 169 4 74 51 35 11 4408 1133 19
# CRuns CRBI CWalks PutOuts Assists Errors Salary
#-Andy Allanson 30 29 14 446 33 20 0.0
#-Alan Ashby 321 414 375 632 43 10 475.0
#-Alvin Davis 224 266 263 880 82 14 480.0
#-Andre Dawson 828 838 354 200 11 3 500.0
#-Andres Galarraga 48 46 33 805 40 4 91.5
#-Alfredo Griffin 501 336 194 282 421 25 750.0

library(ISLR)
data("Hitters")
d = head(Hitters)
library(dplyr)
d %>%
  mutate_if(function(x) !is.numeric(x), function(x) 0) %>% # if a column is non-numeric, fill it with zeros
  mutate_all(function(x) ifelse(is.na(x), 0, x))           # replace any NA element with 0
# AtBat Hits HmRun Runs RBI Walks Years CAtBat CHits CHmRun CRuns CRBI CWalks League Division PutOuts Assists Errors Salary NewLeague
# 1 293 66 1 30 29 14 1 293 66 1 30 29 14 0 0 446 33 20 0.0 0
# 2 315 81 7 24 38 39 14 3449 835 69 321 414 375 0 0 632 43 10 475.0 0
# 3 479 130 18 66 72 76 3 1624 457 63 224 266 263 0 0 880 82 14 480.0 0
# 4 496 141 20 65 78 37 11 5628 1575 225 828 838 354 0 0 200 11 3 500.0 0
# 5 321 87 10 39 42 30 2 396 101 12 48 46 33 0 0 805 40 4 91.5 0
# 6 594 169 4 74 51 35 11 4408 1133 19 501 336 194 0 0 282 421 25 750.0 0
If you want to avoid function(x), you can use the formula shorthand:
d %>%
  mutate_if(Negate(is.numeric), ~0) %>%
  mutate_all(~ifelse(is.na(.), 0, .))
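For reference, the same two steps can be done in base R without dplyr. This is a sketch on a small made-up data frame (the same idea applies to Hitters):

```r
# Toy data frame: a numeric column with an NA, and a factor column
d <- data.frame(a = c(1, NA, 3), f = factor(c("x", "y", "x")))

# Numeric columns: replace NA with 0; non-numeric columns: overwrite with 0
d[] <- lapply(d, function(x) if (is.numeric(x)) replace(x, is.na(x), 0) else 0)
d
#   a f
# 1 1 0
# 2 0 0
# 3 3 0
```

The `d[] <- lapply(...)` idiom keeps the data.frame shape while replacing each column; the scalar 0 is recycled to the column length.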

You can get the numeric columns with sapply/inherits.
X <- Hitters
inx <- sapply(X, inherits, c("integer", "numeric"))
Y <- X[inx]
Then it wouldn't make much sense to remove the rows with non-numeric entries, since they were already removed, but you could do
inx <- apply(Y, 1, function(y) all(inherits(y, c("integer", "numeric"))))
Y[inx, ]

I can't make a DESeq2 analysis

I need to do a DESeq2 analysis on my dataset for a homework assignment, but I'm really new to this package (I have never used it before).
When I run the following code:
counts <- read.table("ProstateCancerCountData.txt",sep="", header=TRUE, row.names=1)
metadat<- read.table("mart_export.txt",sep=",", header=TRUE, row.names=1)
counts <- as.matrix(counts)
dds <- DESeqDataSetFromMatrix(countData = counts, colData = metadat, design = ~ GC.content+ Gene.type)
I get this error:
Erreur dans DESeqDataSetFromMatrix(countData = counts, colData = metadat, :
ncol(countData) == nrow(colData) n'est pas TRUE
I don't know how to fix it.
These are the two datasets I have to use for the analysis:
head(counts)
N_10 T_10 N_11 T_12 N_13 T_13 N_14 T_14 N_1 T_1 N_2 T_2 N_3
ENSG00000000003 401 442 1155 1095 788 754 852 938 774 520 808 648 891
ENSG00000000005 0 7 23 9 5 2 45 5 11 10 56 8 7
ENSG00000000419 112 96 424 468 385 452 751 491 247 222 509 363 706
ENSG00000000457 13 121 327 165 40 204 290 199 70 121 104 151 352
ENSG00000000460 24 66 162 137 71 159 174 156 86 94 120 91 166
ENSG00000000938 96 128 218 372 126 129 538 320 117 129 157 238 177
T_3 N_4 N_5 T_6 N_7 T_7 N_8 T_8 N_9 T_9
ENSG00000000003 1071 2059 737 1006 1146 653 1299 1306 1522 490
ENSG00000000005 0 18 0 7 1 4 1 2 0 3
ENSG00000000419 622 988 307 402 294 323 535 518 573 322
ENSG00000000457 333 328 58 153 138 115 179 200 86 85
ENSG00000000460 152 162 100 100 101 148 128 78 83 109
ENSG00000000938 86 113 410 230 64 76 93 61 121 68
head(metadat)
Chromosome.scaffold.name Gene.start..bp. Gene.end..bp.
ENSG00000271782 1 50902700 50902978
ENSG00000232753 1 103817769 103828355
ENSG00000225767 1 50927141 50936822
ENSG00000202140 1 50965430 50965529
ENSG00000207194 1 51048076 51048183
ENSG00000252825 1 51215968 51216025
GC.content Gene.type
ENSG00000271782 35.48 lincRNA
ENSG00000232753 33.99 lincRNA
ENSG00000225767 38.99 antisense
ENSG00000202140 43.00 misc_RNA
ENSG00000207194 37.96 snRNA
ENSG00000252825 36.21 snRNA
Thank you for your help and your insight.
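The error message says that DESeqDataSetFromMatrix() requires ncol(countData) == nrow(colData): colData must describe the samples (the columns of the count matrix), one row per sample, whereas mart_export.txt describes genes. A minimal sketch with toy stand-ins (the sample names and the reading of N_/T_ as normal/tumour are assumptions, not from the original data):

```r
# Toy stand-in: 3 genes x 4 samples, mimicking the column naming above
counts <- matrix(1:12, nrow = 3,
                 dimnames = list(paste0("gene", 1:3),
                                 c("N_1", "T_1", "N_2", "T_2")))

# colData needs one row per *column* of countData, describing each sample --
# a gene-level table like mart_export.txt cannot play that role.
samples <- data.frame(condition = ifelse(grepl("^T", colnames(counts)),
                                         "Tumour", "Normal"),
                      row.names = colnames(counts))

# This is exactly the check that failed:
stopifnot(ncol(counts) == nrow(samples))

# dds <- DESeq2::DESeqDataSetFromMatrix(counts, samples, design = ~ condition)
```

The DESeq2 call itself is left commented out since the package may not be installed; the point is only the shape of colData.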
EDIT:
Thank you for your previous answer.
I switched to another dataset for this homework, but I have run into another error.
This is my new dataset:
head(mycounts)
R1L1Kidney R1L2Liver R1L3Kidney R1L4Liver R1L6Liver
ENSG00000177757 2 1 0 0 1
ENSG00000187634 49 27 43 34 23
ENSG00000188976 73 34 77 56 45
ENSG00000187961 15 8 15 13 11
ENSG00000187583 1 0 1 1 0
ENSG00000187642 4 0 5 0 2
R1L7Kidney R1L8Liver R2L2Kidney R2L3Liver R2L6Kidney
ENSG00000177757 2 0 1 1 3
ENSG00000187634 41 35 42 25 47
ENSG00000188976 68 55 70 42 82
ENSG00000187961 13 12 12 20 15
ENSG00000187583 3 0 0 2 3
ENSG00000187642 12 1 9 4 9
head(myfactors)
Tissue TissueRun
R1L1Kidney Kidney Kidney_1
R1L2Liver Liver Liver_1
R1L3Kidney Kidney Kidney_1
R1L4Liver Liver Liver_1
R1L6Liver Liver Liver_1
R1L7Kidney Kidney Kidney_1
When I build my DESeq object, I want to include both Tissue and TissueRun to account for the batch, but I get an error:
dds2 <- DESeqDataSetFromMatrix(countData = mycounts, colData = myfactors, design = ~ Tissue + TissueRun)
Error in checkFullRank(modelMatrix) :
the model matrix is not full rank, so the model cannot be fit as specified.
One or more variables or interaction terms in the design formula are linear
combinations of the others and must be removed.
Please read the vignette section 'Model matrix not full rank':
vignette('DESeq2')
Thank you for your help
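This second error is about the design, not the data: in the rows shown above, each Tissue level maps to a single TissueRun level, so TissueRun carries the same information as Tissue and ~ Tissue + TissueRun is redundant. A small check with base R's model.matrix (using made-up factor values in the pattern shown above) makes the collinearity visible:

```r
# Mimic the myfactors table: one run per tissue
myfactors <- data.frame(
  Tissue    = c("Kidney", "Liver", "Kidney", "Liver", "Liver", "Kidney"),
  TissueRun = c("Kidney_1", "Liver_1", "Kidney_1", "Liver_1", "Liver_1", "Kidney_1")
)

mm <- model.matrix(~ Tissue + TissueRun, myfactors)
qr(mm)$rank < ncol(mm)  # TRUE: the columns are linearly dependent
```

Dropping one of the two terms, or encoding the run as a separate Run factor that is not nested inside Tissue, restores a full-rank design.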

r - lapply/loop or outer with two indices

I have a dataset v1 where I want to get data of certain grid boxes.
Here's an extract from v1:
"V1" "V2" "V3" "V4" "V5" "V6" "V7" "V8" "V9" "V10" "V11" "V12" "V13" "V14" "V15" "V16" "V17" "V18"
43 1 0 69 60 9 19501201 1080 0 1 641 30 0 291 272 136 29 3650
43 1 1 69 60 9 19501201 884 0 1 705 30 3 290 293 136 29 3650
43 1 2 70 61 9 19501201 553 293 1 1090 30 6 264 468 138 31 3650
43 1 3 71 62 9 19501201 416 290 1 1240 30 9 303 503 140 33 3650
43 1 4 72 63 9 19501201 396 287 1 1160 30 12 334 444 142 35 3650
43 1 5 73 64 9 19501201 163 285 1 1440 30 15 377 687 144 37 3650
43 1 6 74 66 9 19501201 29 475 1 1490 30 18 386 674 146 41 3650
43 1 7 74 67 9 19501201 -257 222 1 1960 30 21 444 875 146 43 3650
43 1 8 74 68 9 19501202 -216 222 1 1850 30 0 438 806 146 45 3650
43 1 9 74 69 9 19501202 -393 222 1 1950 30 3 444 847 146 47 3650
43 1 10 74 70 9 19501202 -500 222 1 2130 30 6 457 901 146 49 3650
The dataset v1 has columns for the longitudes (V16) and the latitudes (V17) of the boundary conditions you see below.
For example, I need to filter between 80°W-30°E (V16) and 25°N-75°N (V17) in boxes of 5° each.
I want to keep all the other columns for the rows that fall inside each box.
These are my boundary conditions:
lon1_i <- seq(-80,25, by=5)
lon2_i <- seq(-75,30, by=5)
lat1_i <- seq(25,70, by=5)
lat2_i <- seq(30,75, by=5)
So the first grid box has all the info in -80° to -75° and 25°-30°, then the second box contains the data from -75° to -70° and 30°-35°. And so on until the last box of 25°-30°E and 70°-75°N.
I tried to use a for loop with two indices:
for (i in 1:22) {
  for (k in 1:10) {
    test[[i]][[k]] <- v1 %>%
      filter(between(V16, lon1_i[[i]], lon2_i[[i]]), between(V17, lat1_i[[k]], lat2_i[[k]])) %>%
      group_by(group = cumsum(V3 == 0))
  }
}
And with outer:
test <- outer(seq(lon1_i), seq(lon2_i), seq(lat1_i), seq(lat2_i),
              function(i, j) v1 %>%
                filter(between(V16, lon1_i[i], lon2_i[i]),
                       between(V17, lat1_i[j], lat2_i[j])) %>%
                group_by(group = cumsum(V3 == 0)))
Also lapply:
test <- lapply(seq(22, 10), function(x) v1 %>%
  filter(between(V16, lon1_i[x], lon2_i[x]), between(V17, lat1_i[x], lat2_i[x])) %>%
  group_by(group = cumsum(V3 == 0)))
The output should be in the form of new data tables/lists, so I guess 22 × 10 of them given my chosen coordinates.
Is it possible with these functions/types of loops? I would much appreciate some help on this. Thanks!
It looks like you have a table of points in v1 and a list of areas describing the boundaries. I would use a spatial join to filter a table of points, e.g. with the st_within function of the R package sf.
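Alternatively, sticking with the two-index idea from the question, here is a minimal base R sketch (with a small made-up v1, since the original file isn't reproducible here) that builds one data frame per (longitude box, latitude box) pair, 22 × 10 = 220 in total:

```r
lon1_i <- seq(-80, 25, by = 5); lon2_i <- seq(-75, 30, by = 5)
lat1_i <- seq(25, 70, by = 5);  lat2_i <- seq(30, 75, by = 5)

# Toy stand-in for v1: three points with longitude V16 and latitude V17
v1 <- data.frame(V3 = c(0, 1, 2), V16 = c(-78, -72, 28), V17 = c(27, 33, 72))

# One (i, k) pair per grid box; Map() iterates over both indices at once
grid <- expand.grid(i = seq_along(lon1_i), k = seq_along(lat1_i))
test <- Map(function(i, k) {
  v1[v1$V16 >= lon1_i[i] & v1$V16 <= lon2_i[i] &
     v1$V17 >= lat1_i[k] & v1$V17 <= lat2_i[k], , drop = FALSE]
}, grid$i, grid$k)

length(test)     # 220 boxes
nrow(test[[1]])  # 1: the point (-78, 27) falls in the first box
```

The cumsum(V3 == 0) grouping from the question can then be applied to each element of test.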

Split a data.frame into n random groups with x rows each

Assume a data.frame as follows:
df <- data.frame(name = paste0("Person", rep(1:30)),
                 number = sample(1:100, 30, replace = TRUE),
                 focus = sample(1:500, 30, replace = TRUE))
I want to split the above data.frame into 9 groups, each with 9 observations. Each person can be assigned to multiple groups (sampling with replacement), so that every group still gets its full 9 observations (9 groups × 9 observations require 81 rows while the df has only 30).
The output will ideally be a large list of data.frames.
Are there any efficient ways of doing this? This is just a sample data.frame; the actual df has ~10k rows and will require 1000 groups, each with 30 rows.
Many thanks.
Is this what you are looking for?
res <- replicate(1000, df[sample.int(nrow(df), 30, TRUE), ], FALSE)
The df I used:
df <- data.frame(name = paste0("Person", rep(1:1e4)),
                 number = sample(1:100, 1e4, replace = TRUE),
                 focus = sample(1:500, 1e4, replace = TRUE))
Output
> res[1:3]
[[1]]
name number focus
529 Person529 5 351
9327 Person9327 4 320
1289 Person1289 78 164
8157 Person8157 46 183
6939 Person6939 38 61
4066 Person4066 26 103
132 Person132 34 39
6576 Person6576 36 397
5376 Person5376 47 456
6123 Person6123 10 18
5318 Person5318 39 42
6355 Person6355 62 212
340 Person340 90 256
7050 Person7050 19 198
1500 Person1500 42 208
175 Person175 34 30
3751 Person3751 99 441
3813 Person3813 93 492
7428 Person7428 72 142
6840 Person6840 58 45
6501 Person6501 95 499
5124 Person5124 16 159
3373 Person3373 38 36
5622 Person5622 40 203
8761 Person8761 9 225
6252 Person6252 75 444
4502 Person4502 58 337
5344 Person5344 24 233
4036 Person4036 59 265
8764 Person8764 45 1
[[2]]
name number focus
8568 Person8568 87 360
3968 Person3968 67 468
4481 Person4481 46 140
8055 Person8055 73 286
7794 Person7794 92 336
1110 Person1110 6 434
6736 Person6736 4 58
9758 Person9758 60 49
9356 Person9356 89 300
9719 Person9719 100 366
4183 Person4183 5 124
1394 Person1394 87 346
2642 Person2642 81 449
3592 Person3592 65 358
579 Person579 21 395
9551 Person9551 39 495
4946 Person4946 73 32
4081 Person4081 98 270
4062 Person4062 27 150
7698 Person7698 52 436
5388 Person5388 89 177
9598 Person9598 91 474
8624 Person8624 3 464
392 Person392 82 483
5710 Person5710 43 293
4942 Person4942 99 350
3333 Person3333 89 91
6789 Person6789 99 259
7115 Person7115 100 320
1431 Person1431 77 263
[[3]]
name number focus
201 Person201 100 272
4674 Person4674 27 410
9728 Person9728 18 275
9422 Person9422 2 396
9783 Person9783 45 37
5552 Person5552 76 109
3871 Person3871 49 277
3411 Person3411 64 24
5799 Person5799 29 131
626 Person626 31 122
3103 Person3103 2 76
8043 Person8043 90 384
3157 Person3157 90 392
7093 Person7093 11 169
2779 Person2779 83 2
2601 Person2601 77 122
9003 Person9003 50 163
9653 Person9653 4 235
9361 Person9361 100 391
4273 Person4273 83 383
4725 Person4725 35 436
2157 Person2157 71 486
3995 Person3995 25 258
3735 Person3735 24 221
303 Person303 81 407
4838 Person4838 64 198
6926 Person6926 90 417
6267 Person6267 82 284
8570 Person8570 67 317
2670 Person2670 21 342
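The one-liner generalises directly; a parametrised sketch with a toy df (the trailing FALSE in the answer is replicate's simplify argument, written out here for clarity):

```r
set.seed(1)
df <- data.frame(name   = paste0("Person", 1:30),
                 number = sample(1:100, 30, replace = TRUE))

n_groups   <- 1000  # how many groups to draw
group_size <- 30    # rows per group, sampled with replacement

res <- replicate(n_groups,
                 df[sample.int(nrow(df), group_size, replace = TRUE), ],
                 simplify = FALSE)

length(res)     # 1000
nrow(res[[1]])  # 30
```

Because sampling is with replacement, the same row can appear several times within one group, which is exactly what the question asks for.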

Merge rows with duplicate IDs

I would like to merge and sum the values of each row that contains duplicated IDs.
For example, the data frame below contains a duplicated symbol 'LOC102723897'. I would like to merge these two rows and sum the value within each column, so that one row appears for the duplicated symbol.
> head(y$genes)
SM01 SM02 SM03 SM04 SM05 SM06 SM07 SM08 SM09 SM10 SM11 SM12 SM13 SM14 SM15 SM16 SM17 SM18 SM19 SM20 SM21 SM22
1 32 29 23 20 27 105 80 64 83 80 94 58 122 76 78 70 34 32 45 42 138 30
2 246 568 437 343 304 291 542 457 608 433 218 329 483 376 410 296 550 533 537 473 296 382
3 30 23 30 13 20 18 23 13 31 11 15 27 36 21 23 25 26 27 37 27 31 16
4 1450 2716 2670 2919 2444 1668 2923 2318 3867 2084 1121 2175 3022 2308 2541 1613 2196 1851 2843 2078 2180 1902
5 288 366 327 334 314 267 550 410 642 475 219 414 679 420 425 308 359 406 550 398 399 268
6 34 59 62 68 42 31 49 45 62 51 40 32 30 39 41 75 54 59 83 99 37 37
SM23 SM24 SM25 SM26 SM27 SM28 SM29 SM30 Symbol
1 41 23 57 160 84 67 87 113 LOC102723897
2 423 535 624 304 568 495 584 603 LINC01128
3 31 21 49 13 33 31 14 31 LINC00115
4 2453 3041 3590 2343 3450 3725 3336 3850 NOC2L
5 403 347 468 478 502 563 611 577 LOC102723897
6 45 51 56 107 79 105 92 131 PLEKHN1
> dim(y)
[1] 12928 30
I attempted to use plyr to merge rows based on the 'Symbol' column, but it's not working.
> ddply(y$genes,"Symbol",numcolwise(sum))
> dim(y)
[1] 12928 30
> length(y$genes$Symbol)
[1] 12928
> length(unique(y$genes$Symbol))
[1] 12896
You can group by Symbol and sum all the columns.
library(dplyr)
df %>% group_by(Symbol) %>% summarise_all(sum)
using data.table
library(data.table)
setDT(df)[ , lapply(.SD, sum),by="Symbol"]
We can just use aggregate from base R
aggregate(.~ Symbol, df, FUN = sum)
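All three answers collapse duplicated Symbols the same way; on a small made-up frame (values loosely echoing the question's data) the effect is easy to verify:

```r
genes <- data.frame(Symbol = c("LOC102723897", "LINC01128", "LOC102723897"),
                    SM01   = c(32, 246, 288),
                    SM02   = c(29, 568, 366))

# One row per Symbol; numeric columns are summed within each group
merged <- aggregate(. ~ Symbol, genes, FUN = sum)
merged
#         Symbol SM01 SM02
# 1    LINC01128  246  568
# 2 LOC102723897  320  395
```

Note that aggregate reorders the output by the grouping column, whereas the data.table version keeps the first-appearance order.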

Technical error of measurement in between two columns

I have the following data frame:
data_2
sex age seca1 chad1 DL alog1 dig1 scifirst1 crimetech1
1 F 19 1800 1797 180 70 69 421 424
2 F 19 1682 1670 167 69 69 421 423
3 F 21 1765 1765 178 80 81 421 423
4 F 21 1829 1833 181 74 72 421 419
5 F 21 1706 1705 170 103 101 439 440
6 F 18 1607 1606 160 76 76 440 439
7 F 19 1578 1576 156 50 48 422 422
8 F 19 1577 1575 156 61 61 439 441
9 F 21 1666 1665 166 52 51 439 441
10 F 17 1710 1716 172 65 65 420 420
11 F 28 1616 1619 161 66 65 426 428
12 F 22 1648 1644 165 58 57 426 429
13 F 19 1569 1570 155 55 54 419 420
14 F 19 1779 1777 177 55 54 422 422
15 M 18 1773 1772 179 70 69 420 419
16 M 18 1816 1809 181 81 80 442 440
17 M 19 1766 1765 178 77 76 425 425
18 M 19 1745 1741 174 76 76 421 423
19 M 18 1716 1714 170 71 70 445 446
20 M 21 1785 1783 179 64 63 446 445
21 M 19 1850 1854 185 71 72 422 421
22 M 31 1875 1880 188 95 95 419 420
23 M 26 1877 1877 186 106 106 420 420
24 M 19 1836 1837 185 100 100 426 423
25 M 18 1825 1823 182 85 85 444 439
26 M 19 1755 1754 174 79 78 420 419
27 M 26 1658 1658 165 69 69 421 421
28 M 20 1816 1818 183 84 83 439 440
29 M 18 1755 1755 175 67 67 429 422
I wish to compute the technical error of measurement (TEM) between "alog1" and "dig1", which has the following formula:
TEM = √(D / 2n)
where D is the sum of the squared differences between alog1 and dig1, and n is 29 (the number of subjects).
I'm not sure how to compute the sum of the differences squared between the two columns in the first place. Please help.
Probably with
n <- nrow(data_2) # 29
D <- sum((data_2$alog1 - data_2$dig1)^2) # sum of squared differences
TEM <- sqrt(D / (2 * n))
data_3 <- cbind(data_2, TEM) # TEM is a single number, recycled as a constant column
Check the formula of TEM; maybe I didn't understand it correctly.
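To sanity-check the formula on numbers small enough to verify by hand (made-up measurement pairs, not from data_2):

```r
alog1 <- c(70, 69, 80, 74)
dig1  <- c(69, 69, 81, 72)

n <- length(alog1)          # 4 pairs
D <- sum((alog1 - dig1)^2)  # 1 + 0 + 1 + 4 = 6
TEM <- sqrt(D / (2 * n))    # sqrt(6 / 8) ~ 0.866
```

Note the parentheses around 2 * n: without them, `D / 2 * n` evaluates as (D / 2) * n, which is a different number.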
