Let's make a dummy dataset
ll = data.frame(rbind(c(2,3,5), c(3,4,6), c(9,4,9)))
colnames(ll)<-c("b", "c", "a")
> ll
b c a
1 2 3 5
2 3 4 6
3 9 4 9
P = data.frame(cbind(c(3,5), c(4,6), c(8,7)))
colnames(P)<-c("a", "b", "c")
> P
a b c
1 3 4 8
2 5 6 7
I want to create a new data frame where the values in each column of ll are turned into 0 when they are less than the corresponding values of a, b, & c in the first row of P; in other words, I'd like to see
> new_ll
b c a
1 0 0 5
2 0 0 6
3 9 0 9
so I tried it this way
nn=c("a", "b", "c")
new_ll = sapply(nn, function(i)
ll[,paste0(i)][ll[,paste0(i)] < P[,paste0(i)][1]] <- 0)
But it doesn't work for some reason! I must be making a silly mistake in my script!! Any idea?
> new_ll
a b c
0 0 0
You can find the values in ll that are smaller than the first row of P with an apply:
t(apply(ll, 1, function(x) x<P[1,][colnames(ll)]))
[,1] [,2] [,3]
[1,] TRUE TRUE FALSE
[2,] TRUE TRUE FALSE
[3,] FALSE TRUE FALSE
Here, the first row of P is ordered to match ll, then the elements are compared.
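To make the ordering step concrete (a sketch using the same ll and P as defined in the question): indexing the first row of P by colnames(ll) rearranges it from a, b, c order into ll's b, c, a order before the comparison.

```r
ll <- data.frame(b = c(2, 3, 9), c = c(3, 4, 4), a = c(5, 6, 9))
P  <- data.frame(a = c(3, 5), b = c(4, 6), c = c(8, 7))

P[1, ][colnames(ll)]  # the first row of P, reordered to match ll's columns
#   b c a
# 1 4 8 3
```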
Credit to Ananda Mahto for recognizing that apply is not required:
ll < c(P[1, names(ll)])
b c a
[1,] TRUE TRUE FALSE
[2,] TRUE TRUE FALSE
[3,] FALSE TRUE FALSE
The TRUE values show where you want to substitute with 0:
ll[ ll < c(P[1, names(ll)]) ] <- 0
ll
b c a
1 0 0 5
2 0 0 6
3 9 0 9
To fix your code, you want something like this:
do.call(cbind, lapply(names(ll), function(i) {
ll[,i][ll[,i] < P[,i][1]] <- 0
return(ll[i])}))
b c a
1 0 0 5
2 0 0 6
3 9 0 9
What's changed? First, sapply is changed to lapply and the function returns a vector for each iteration. Second, the names are presented in the correct order for the expected results. Third, the results are put together with cbind to get the final matrix. As a bonus, the redundant calls to paste0 have been removed.
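To see why the original sapply attempt returned only zeros: the body of the function is a single assignment, the assignment modifies a local copy of ll, and the value of an assignment expression in R is its right-hand side, i.e. 0. A minimal sketch (using the same ll and P as in the question):

```r
ll <- data.frame(b = c(2, 3, 9), c = c(3, 4, 4), a = c(5, 6, 9))
P  <- data.frame(a = c(3, 5), b = c(4, 6), c = c(8, 7))

res <- sapply(c("a", "b", "c"), function(i) {
  # modifies a *local copy* of ll; the expression itself evaluates to 0
  ll[, i][ll[, i] < P[, i][1]] <- 0
})
res  # a named vector of zeros, matching the puzzling output above
ll   # the original ll is untouched
```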
You could also try mapply, which applies a function to corresponding elements of its arguments. Here, ll and P are both data.frames, so the function is applied column by column, with recycling. I matched the column names of P to those of ll (similar to @Matthew Lundberg), checked which elements of each column of ll are less than the corresponding column of P (the single row of P gets recycled), and got back a logical index. The elements matching the condition are then assigned 0.
indx <- mapply(`<`, ll, P[1,][names(ll)])
new_ll <- ll
new_ll[indx] <- 0
new_ll
# b c a
#1 0 0 5
#2 0 0 6
#3 9 0 9
If you know that ll and P are numeric, you can also do it as
llm <- as.matrix(ll)
pv <- as.numeric(P[1, colnames(llm)])
llm[sweep(llm, 2, pv, `<`)] <- 0
data.frame(llm)
# b c a
# 1 0 0 5
# 2 0 0 6
# 3 9 0 9
Related
I'm cleaning up some survey data in R, assigning variables 1 or 0 based on the responses to a question. Say I had a question with 3 options (a, b, c), and I had a data frame with the responses and logical variables:
df <- data.frame(a = rep(0,3), b = rep(0,3), c = rep(0,3), response = I(list(c(1),c(1,2),c(2,3))))
So I want to change the 0's to 1's if the response matches the column index (ie 1=a, 2=b, 3=c).
This is fairly easy to do with a loop:
for (i in 1:nrow(df)) df[i, df[i, "response"][[1]]] <- 1
Is there any way to do this with an apply/lapply/sapply/etc? Something like:
df <- sapply(df,function(x) x[x["response"][[1]]] <- 1)
Or should I stick with a loop?
You can use matrix indexing, from ?[:
A third form of indexing is via a numeric matrix with the one column
for each dimension: each row of the index matrix then selects a single
element of the array, and the result is a vector. Negative indices are
not allowed in the index matrix. NA and zero values are allowed: rows
of an index matrix containing a zero are ignored, whereas rows
containing an NA produce an NA in the result.
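As a minimal illustration of that indexing form (a toy matrix, not the survey data):

```r
m <- matrix(1:9, nrow = 3)
# each row of the index matrix is a (row, column) pair
idx <- cbind(c(1, 3), c(2, 1))
m[idx]  # picks out m[1, 2] and m[3, 1]
# [1] 4 3
```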
# construct a matrix representing the index where the value should be one
idx <- with(df, cbind(rep(seq_along(response), lengths(response)), unlist(response)))
idx
# [,1] [,2]
#[1,] 1 1
#[2,] 2 1
#[3,] 2 2
#[4,] 3 2
#[5,] 3 3
# do the assignment
df[idx] <- 1
df
# a b c response
#1 1 0 0 1
#2 1 1 0 1, 2
#3 0 1 1 2, 3
Or you can try this:
library(tidyr)
library(dplyr)
df1 = df %>% mutate(Id = row_number()) %>% unnest(response)
df[,1:3]=table(df1$Id,df1$response)
a b c response
1 1 0 0 1
2 1 1 0 1, 2
3 0 1 1 2, 3
Perhaps this helps
df[1:3] <- t(sapply(df$response, function(x) as.integer(names(df)[1:3] %in% names(df)[x])))
df
# a b c response
#1 1 0 0 1
#2 1 1 0 1, 2
#3 0 1 1 2, 3
Or a compact option is
library(qdapTools)
df[1:3] <- mtabulate(df$response)
I have a data.frame that looks like this:
A C G T
1 6 0 14 0
2 0 0 20 0
3 14 0 6 0
4 14 0 6 0
5 6 0 14 0
(actually, I have 1800 of them, with varying numbers of rows...)
Just to explain what you are looking at:
Each row is one SNP, so it can either be one base (A,C,G,T) or another base (A,C,G,T)
SNP1’s Major allele is “G”, which appears in 14 individuals, the minor allele is “A”, which appears in 6 out of the 20 individuals in the dataset.
The 14 individuals that show G at SNP1 are the same that show A at SNP3, so there are two possibilities for the combination of bases along the 5 rows: one would be GGAAG and one would be AGGGA.
These can (theoretically) be built from the colnames of all the cells containing either 6 or 14 in the corresponding row, resulting in something like this:
A C G T 14 6
1 6 0 14 0 G A
2 0 0 20 0 G G
3 14 0 6 0 A G
4 14 0 6 0 A G
5 6 0 14 0 G A
Is there an elegant way to achieve something like this?
I have a piece of code from the answer to a somewhat related question that will return positions of a specific value within a matrix.
mat <- matrix(c(1:3), nrow = 4, ncol = 4)
[,1] [,2] [,3] [,4]
[1,] 1 2 3 1
[2,] 2 3 1 2
[3,] 3 1 2 3
[4,] 1 2 3 1
find <- function(mat, value) {
  nr <- nrow(mat)
  val_match <- which(mat == value)
  out <- matrix(NA, nrow = length(val_match), ncol = 2)
  out[, 1] <- (val_match - 1) %% nr + 1
  out[, 2] <- (val_match - 1) %/% nr + 1
  return(out)
}
find(mat, 2)
     [,1] [,2]
[1,]    2    1
[2,]    1    2
[3,]    4    2
[4,]    3    3
[5,]    2    4
I think I can figure out how to adjust this so it returns the colname from the original data.frame, but it requires the value it is looking for as input. There are potentially several such values in one data snippet (as seen in the example above: 14 and 6), and they differ for each snippet of my data.
In some of them, there are no duplicates at all.
In addition, if one of the values hits 20, then the corresponding colname is automatically the one to choose (as seen in row 2 on the example above).
EDIT
I have tried the code suggested by thelatemail, and it works fine on some of the data, but not on all of them.
This one, for example, produces results that I don't fully understand:
subset looks like this:
A C G T
1 0 0 3 1
2 0 9 0 3
3 3 0 0 2
4 0 3 0 2
5 2 0 0 3
6 0 2 0 3
sel <- subset > 0
ord <- order(row(subset)[sel], -subset[sel])
haplo1 <- split(names(subset)[col(subset)[sel]][ord], row(subset)[sel][ord])
This produces
1
[1] "G" "T"
2
[1] "C" "T"
3
[1] "A" "T"
4
[1] "C" "T"
5
[1] "T" "A"
6
[1] "T" "C"
Since there is a 3 in every row, I don't understand why these are not all in one of these possibilities (which would result in GTACTT and TCTTAC instead).
I have also realized that I have a lot of missing alleles, where only one or two individuals were found to have a base at these loci.
Can a column for "missing" be included somehow? I tried to just tack one on, which gave me an error about non-matching row numbers.
In order to get my minimum function to work, I had to convert zeros to NA. For some reason, na.rm=TRUE doesn't work with which.min.
See if this is helpful for you:
A <- c(6,0,14,14,6)
C <- c(0,0,0,0,0)
G <- c(14,20,6,6,14)
T <- c(0,0,0,0,0)
mymatrix <- as.matrix(cbind(A,C,G,T))
mymatrix <- ifelse(mymatrix == 0, NA, mymatrix)
mymatrix
major_allele <- colnames(mymatrix)[apply(mymatrix,1,which.max)] ; head(major_allele)
minor_allele <- colnames(mymatrix)[apply(mymatrix,1,which.min)] ; head(minor_allele)
myds<-as.data.frame(cbind(mymatrix,major_allele,minor_allele))
myds
> myds
A C G T major_allele minor_allele
1 6 <NA> 14 <NA> G A
2 <NA> <NA> 20 <NA> G G
3 14 <NA> 6 <NA> A G
4 14 <NA> 6 <NA> A G
5 6 <NA> 14 <NA> G A
Here's an attempt that will work for however many hits there are in each row. It returns a list object, which is probably appropriate for differing lengths of results per row.
sel <- dat > 0
ord <- order(row(dat)[sel], -dat[sel])
split(names(dat)[col(dat)[sel]][ord], row(dat)[sel][ord] )
#List of 5
# $ 1: chr [1:2] "G" "A"
# $ 2: chr "G"
# $ 3: chr [1:2] "A" "G"
# $ 4: chr [1:2] "A" "G"
# $ 5: chr [1:2] "G" "A"
Where dat was:
dat <- read.table(text="
A C G T
1 6 0 14 0
2 0 0 20 0
3 14 0 6 0
4 14 0 6 0
5 6 0 14 0
", header=TRUE)
I am exporting data from R with the command:
write.table(output, file = "data.raw", na = "-9999", sep = "\t", row.names = FALSE, col.names = FALSE)
It exports my data correctly, but it exports all of the logical variables as TRUE and FALSE.
I need to read the data into another program that can only process numeric values. Is there an efficient way to convert logical columns to numeric 1s and 0s during the export? I have a large number of logical variables, so I was hoping to loop through all the variables in the data.table automatically.
Alternatively, my output object is a data.table. Is there an efficient way to convert all the logical variables in a data.table into numeric variables?
In case it is helpful, here is some code to generate a data.table with logical variables in it (not a large number, but enough to use in example code):
DT = data.table(cbind(1:100, rnorm(100) > 0))
DT[ , V3:= V2 == 1 ]
DT[ , V4:= V2 != 1 ]
For a data.frame, you could convert all logical columns to numeric with:
# The data
set.seed(144)
dat <- data.frame(V1=1:100,V2=rnorm(100)>0)
dat$V3 <- dat$V2 == 1
head(dat)
# V1 V2 V3
# 1 1 FALSE FALSE
# 2 2 TRUE TRUE
# 3 3 FALSE FALSE
# 4 4 FALSE FALSE
# 5 5 FALSE FALSE
# 6 6 TRUE TRUE
# Convert all to numeric
cols <- sapply(dat, is.logical)
dat[,cols] <- lapply(dat[,cols], as.numeric)
head(dat)
# V1 V2 V3
# 1 1 0 0
# 2 2 1 1
# 3 3 0 0
# 4 4 0 0
# 5 5 0 0
# 6 6 1 1
In data.table syntax:
# Data
set.seed(144)
DT = data.table(cbind(1:100,rnorm(100)>0))
DT[,V3 := V2 == 1]
DT[,V4 := FALSE]
head(DT)
# V1 V2 V3 V4
# 1: 1 0 FALSE FALSE
# 2: 2 1 TRUE FALSE
# 3: 3 0 FALSE FALSE
# 4: 4 0 FALSE FALSE
# 5: 5 0 FALSE FALSE
# 6: 6 1 TRUE FALSE
# Converting
(to.replace <- names(which(sapply(DT, is.logical))))
# [1] "V3" "V4"
for (var in to.replace) DT[, (var):= as.numeric(get(var))]
head(DT)
# V1 V2 V3 V4
# 1: 1 0 0 0
# 2: 2 1 1 0
# 3: 3 0 0 0
# 4: 4 0 0 0
# 5: 5 0 0 0
# 6: 6 1 1 0
Simplest way of doing this: multiply your matrix by 1.
For example:
A <- matrix(c(TRUE,FALSE,TRUE,TRUE,TRUE,FALSE,FALSE,TRUE),ncol=4)
A
# [,1] [,2] [,3] [,4]
# [1,] TRUE TRUE TRUE FALSE
# [2,] FALSE TRUE FALSE TRUE
B <- 1*A
B
# [,1] [,2] [,3] [,4]
# [1,] 1 1 1 0
# [2,] 0 1 0 1
(You could also add zero: B <- 0 + A)
What about just a:
dat <- data.frame(le = letters[1:10], lo = rep(c(TRUE, FALSE), 5))
dat
le lo
1 a TRUE
2 b FALSE
3 c TRUE
4 d FALSE
5 e TRUE
6 f FALSE
7 g TRUE
8 h FALSE
9 i TRUE
10 j FALSE
dat$lo <- as.numeric(dat$lo)
dat
le lo
1 a 1
2 b 0
3 c 1
4 d 0
5 e 1
6 f 0
7 g 1
8 h 0
9 i 1
10 j 0
Another approach could be with dplyr, creating a new column so you retain the original one in case (who knows) your data gets imported back into R.
library(dplyr)
dat <- dat %>% mutate(lon = as.numeric(lo))
dat
Source: local data frame [10 x 3]
le lo lon
1 a TRUE 1
2 b FALSE 0
3 c TRUE 1
4 d FALSE 0
5 e TRUE 1
6 f FALSE 0
7 g TRUE 1
8 h FALSE 0
9 i TRUE 1
10 j FALSE 0
Edit: Loop
I don't know how well this code performs, but it checks every column and converts to numeric only those that are logical. Of course, if your TRUE and FALSE values are not logicals but character strings (a remote possibility), my code won't work.
for(i in 1:ncol(dat)){
if (is.logical(dat[, i])) dat[, i] <- as.numeric(dat[, i])
}
If there are multiple columns, you could use set (using #josilber's example)
library(data.table)
Cols <- which(sapply(dat, is.logical))
setDT(dat)
for(j in Cols){
set(dat, i=NULL, j=j, value= as.numeric(dat[[j]]))
}
As Ted Harding pointed out in the R-help mailing list, one easy way to convert logical objects to numeric is to perform an arithmetic operation on them. Convenient ones would be * 1 and + 0, which will keep the TRUE/FALSE == 1/0 paradigm.
For your mock data (I've changed the code a bit to use regular R packages and to reduce size):
df <- data.frame(cbind(1:10, rnorm(10) > 0))
df$X3 <- df$X2 == 1
df$X4 <- df$X2 != 1
The dataset you get has a mixture of numeric and boolean variables:
X1 X2 X3 X4
1 1 0 FALSE TRUE
2 2 0 FALSE TRUE
3 3 1 TRUE FALSE
4 4 1 TRUE FALSE
5 5 1 TRUE FALSE
6 6 0 FALSE TRUE
7 7 0 FALSE TRUE
8 8 1 TRUE FALSE
9 9 0 FALSE TRUE
10 10 1 TRUE FALSE
Now let
df2 <- 1 * df
(If your dataset contains character or factor variables, you will need to apply this operation to a subset of df filtering out those variables)
df2 is equal to
X1 X2 X3 X4
1 1 0 0 1
2 2 0 0 1
3 3 1 1 0
4 4 1 1 0
5 5 1 1 0
6 6 0 0 1
7 7 0 0 1
8 8 1 1 0
9 9 0 0 1
10 10 1 1 0
Which is 100% numeric, as str(df2) will show you.
Now you can safely export df2 to your other program.
One line solution
Using the following code we take all the logical columns and make them numeric.
library(dplyr)
library(magrittr)
dat %<>% mutate_if(is.logical, as.numeric)
The same as @saebod, but with the usual pipe.
library(dplyr)
dat <- dat %>% mutate_if(is.logical, as.numeric)
I have a question which is similar to this one - Fast minimum distance (interval) between elements of 2 logical vectors (take 2) but it has some important differences.
Say I have a vector:
x <- c("A", "B", "C", "A", "D", "D", "A", "B", "A")
What I would like to do is:
For every element, calculate the minimum distance between it and the next element of each different type in the forward direction only. If for any element, no element of a particular type occurs in the forward direction then a 0 should be returned. The returned data will look like this:
Desired Output Table-
N x A B C D
1 A 3 1 2 4
2 B 2 6 1 3
3 C 1 5 0 2
4 A 3 4 0 1
5 D 2 3 0 1
6 D 1 2 0 0
7 A 2 1 0 0
8 B 1 0 0 0
9 A 0 0 0 0
The first column/var simply refers to the element order. The second col/var is the element at that position. Then there are four cols/vars - each one being a unique element that occurs in the vector.
The numbers in each of these four cols/vars are the minimum distance from that row's element to the next occurring element of each type in the FORWARD direction only. If a '0' is entered, that means that that element does not occur after that row's element in the vector.
How to achieve this?
My first thought was to try and mimic some aspects of the question above. To that end, I used a grepl function to turn the vector into four separate logical vectors indicating the presence/absence of each element.
xA<-grepl("A", x) # TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE TRUE
xB<-grepl("B", x) # FALSE TRUE FALSE FALSE FALSE FALSE FALSE TRUE FALSE
xC<-grepl("C", x) # FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE
xD<-grepl("D", x) # FALSE FALSE FALSE FALSE TRUE TRUE FALSE FALSE FALSE
I then tried the "Flodel" function and the second function provided by GG using library(data.table).
For example, to compute the minimum distances from all "As" to a "D":
flodel <- function(x, y) {
xw <- which(x)
yw <- which(y)
i <- findInterval(xw, yw, all.inside = TRUE)
pmin(abs(xw - yw[i]), abs(xw - yw[i+1L]), na.rm = TRUE)
}
flodel(xA,xD)
> [1] 4 1 1 3
#GG's data.table option
wxA <- data.table(x = which(xA))
wxD <- data.table(y = which(xD), key = "y")
wxD[wxA, abs(x - y), roll = "nearest"]
# y V1
#1: 1 4
#2: 4 1
#3: 7 1
#4: 9 3
Both of these options find the minimum distance for all A's to a D. However, it is in ANY direction, not the FORWARD direction only. GG's data.table option is on the surface more attractive to me as it returns data showing the position of each element (the 'y' column of the output) which would make it easy to package into a nice summary table (such as my desired output table above).
I have tried to work out alternative ways of using the 'roll' argument in data.table, but I haven't managed to solve this.
Thanks for any suggestions.
Another way that seems valid:
levs = sort(unique(x))
do.call(rbind,
lapply(seq_along(x),
function(n)
match(levs, x[-seq_len(n)], 0)))
# [,1] [,2] [,3] [,4]
# [1,] 3 1 2 4
# [2,] 2 6 1 3
# [3,] 1 5 0 2
# [4,] 3 4 0 1
# [5,] 2 3 0 1
# [6,] 1 2 0 0
# [7,] 2 1 0 0
# [8,] 1 0 0 0
# [9,] 0 0 0 0
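To see why the match call gives forward distances: for position n, x[-seq_len(n)] is everything after position n, and match(levs, ., 0) returns the position of the first occurrence of each level in that remainder (0 if it never occurs again), which is exactly the distance wanted. For example, at n = 1:

```r
x <- c("A", "B", "C", "A", "D", "D", "A", "B", "A")
levs <- sort(unique(x))  # "A" "B" "C" "D"

rest <- x[-seq_len(1)]   # everything after position 1
match(levs, rest, 0)     # first forward occurrence of each level
# [1] 3 1 2 4
```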
I'm not really sure how efficient this is, but it seems to work. How about
x <- c("A", "B", "C", "A", "D", "D", "A", "B", "A")
#find indexes for each value
locations<-split(seq_along(x), x)
#for each index, find the distance from the next highest
# index in the locations list
t(sapply(seq_along(x), function(i) sapply(locations, function(l)
if(length(z<-l[l>i])>0) z[1]-i else 0)))
This will return
A B C D
[1,] 3 1 2 4
[2,] 2 6 1 3
[3,] 1 5 0 2
[4,] 3 4 0 1
[5,] 2 3 0 1
[6,] 1 2 0 0
[7,] 2 1 0 0
[8,] 1 0 0 0
[9,] 0 0 0 0
I would like to ask if anyone knows a simple way to solve this kind of problem:
I need to generate all combinations of A numbers taken from a set B (0,1,2...B), with their sum = C.
i.e. if A=2, B=3, C=2:
Solution in this case:
(1,1);(0,2);(2,0)
So the vectors are length 2 (A), sum of all its items is 2 (C), possible values for each of vectors elements come from the set {0,1,2,3} (maximum is B).
A functional version since I already started before SO updated:
A=2
B=3
C=2
myfun <- function(a=A, b=B, c=C) {
out <- do.call(expand.grid, lapply(1:a, function(x) 0:b))
return(out[rowSums(out)==c,])
}
> myfun()
Var1 Var2
3 2 0
6 1 1
9 0 2
z <- expand.grid(0:3,0:3)
z[rowSums(z)==2, ]
  Var1 Var2
3    2    0
6    1    1
9    0    2
If you wanted to do the expand grid programmatically this would work:
z <- expand.grid(rep(list(0:B), A))
You need to expand as a list so that the items remain separate. rep(0:3, 3) would not return 3 separate sequences. So for A=3:
> z <- expand.grid(rep(list(0:3), 3))
> z[rowSums(z)==2, ]
Var1 Var2 Var3
3 2 0 0
6 1 1 0
9 0 2 0
18 1 0 1
21 0 1 1
33 0 0 2
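To see why the list is needed, compare the two forms directly (tiny hypothetical values):

```r
rep(0:1, 2)        # one flat vector, not two dimensions
# [1] 0 1 0 1
rep(list(0:1), 2)  # a list of two separate sequences, one per grid dimension
# [[1]]
# [1] 0 1
#
# [[2]]
# [1] 0 1
expand.grid(rep(list(0:1), 2))  # 2 x 2 = 4 combinations
#   Var1 Var2
# 1    0    0
# 2    1    0
# 3    0    1
# 4    1    1
```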
Using the nifty partitions() package, and more interesting values of A, B, and C:
library(partitions)
A <- 2
B <- 5
C <- 7
comps <- t(compositions(C, A))
ii <- apply(comps, 1, FUN=function(X) all(X %in% 0:B))
comps[ii, ]
# [,1] [,2]
# [1,] 5 2
# [2,] 4 3
# [3,] 3 4
# [4,] 2 5
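As a cross-check (not part of the original answer), the same four rows can be reproduced with the base-R expand.grid approach from earlier in this thread:

```r
A <- 2; B <- 5; C <- 7
g <- expand.grid(rep(list(0:B), A))
g[rowSums(g) == C, ]
#    Var1 Var2
# 18    5    2
# 23    4    3
# 28    3    4
# 33    2    5
```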