Subsetting columns by the row names of another dataframe - R

I need to subset the columns of a dataframe based on the row names of another dataframe (in R).
I'm trying to select the representative species of the Brazilian Amazon by subsetting a large Brazilian database according to the percentage of representative locations, information which is held in another dataframe.
> a <- data.frame("John" = c(2,1,1,2), "Dora" = c(1,1,3,2), "camilo" = c(1:4),"alex"=c(1,2,1,2))
> a
John Dora camilo alex
1 2 1 1 1
2 1 1 2 2
3 1 3 3 1
4 2 2 4 2
> b <- data.frame("SN" = 1:3, "Age" = c(15,31,2), "Name" = c("John","Dora","alex"))
> b
SN Age Name
1 1 15 John
2 2 31 Dora
3 3 2 alex
> result <- a[,rownames(b)[1:3]]
Error in `[.data.frame`(a, , rownames(b)[1:3]) :
undefined columns selected
I want to get this dataframe
John Dora alex
1 2 1 1
2 1 1 2
3 1 3 1
4 2 2 2

The simple a[,b$Name] does not work when b$Name is a factor (the default in R versions before 4.0). Be careful: it won't throw an error, but you will get the wrong answer, because the factor is indexed by its integer codes rather than by name!
This is easy to fix by using a[,as.character(b$Name)] instead.
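A minimal runnable sketch of the fix, with Name forced to a factor here to mimic the pre-4.0 stringsAsFactors default:

```r
# Reproduce the setup; Name is forced to a factor to mimic older R defaults
a <- data.frame(John = c(2,1,1,2), Dora = c(1,1,3,2),
                camilo = 1:4, alex = c(1,2,1,2))
b <- data.frame(SN = 1:3, Age = c(15,31,2),
                Name = factor(c("John","Dora","alex")))

# Indexing by the factor directly would use its integer codes (wrong columns);
# converting to character selects the columns by name
result <- a[, as.character(b$Name)]
names(result)
# [1] "John" "Dora" "alex"
```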

Related

Table of Raw Data to Adjacency Matrix/Sociomatrix

I have a data table arranged like so:
ID Category 1 Category 2 Category 3
Name 1 Example 1 Example 2 Example 3
Name 2 Example 1 Example 2 Example 4
Name 3 Example 5 Example 6 Example 4
.... .... .... .....
I'm trying to turn it into a table like this:
Name 1 Name 2 Name 3 ....
Name 1 0 2 0
Name 2 2 0 1
Name 3 0 1 0
....
Where each cell in the output table represents how many of the categories were the same when compared between IDs. This could also be how many of the categories were different, either one will work. I've looked into adjacency matrices and sociomatrices on stack overflow, as well as some of the matrix matching recommendations, but I don't think that my data table is set up properly. Does anyone have any recommendations on how this should be done?
EDIT: Ah, apologies. I'm using R as my program. Left that bit out
You can do this by putting your data into a long format first, at which point it becomes a pretty straightforward exercise:
# your data
tdf <- data.frame(ID = paste0("Name ", 1:3), cat1 = paste0("Example ", c(1,1,5)),
cat2 = paste0("Example ", c(2,2,6)),
cat3= paste0("Example ", c(3,4,4)))
tdf
#> ID cat1 cat2 cat3
#> 1 Name 1 Example 1 Example 2 Example 3
#> 2 Name 2 Example 1 Example 2 Example 4
#> 3 Name 3 Example 5 Example 6 Example 4
# the categories are extraneous, what matters is the relationship of ID to
# the Example values, so we melt the df to long format using the
# melt function from the package reshape2
lfd <- reshape2::melt(tdf, id.vars = "ID")
#> Warning: attributes are not identical across measure variables; they will
#> be dropped
# create an affiliation matrix
adj1 <- as.matrix(table(lfd$ID, lfd$value))
adj1
#>
#> Example 1 Example 2 Example 3 Example 4 Example 5 Example 6
#> Name 1 1 1 1 0 0 0
#> Name 2 1 1 0 1 0 0
#> Name 3 0 0 0 1 1 1
# Adjacency matrix is simply the product
id_id_adj_mat <- adj1 %*% t(adj1)
# Set the diagonal to zero (currently diagonal displays degree of each node)
diag(id_id_adj_mat) <- 0
id_id_adj_mat
#>
#> Name 1 Name 2 Name 3
#> Name 1 0 2 0
#> Name 2 2 0 1
#> Name 3 0 1 0
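The same construction also works in base R without reshape2; here is a sketch that builds the long format by hand with rep() and unlist() (same example data as above):

```r
tdf <- data.frame(ID   = paste0("Name ", 1:3),
                  cat1 = paste0("Example ", c(1,1,5)),
                  cat2 = paste0("Example ", c(2,2,6)),
                  cat3 = paste0("Example ", c(3,4,4)))

# long format by hand: repeat the IDs once per category column
long <- data.frame(ID    = rep(tdf$ID, times = ncol(tdf) - 1),
                   value = unlist(tdf[-1], use.names = FALSE))

aff <- as.matrix(table(long$ID, long$value))  # affiliation matrix
adj <- aff %*% t(aff)                         # shared-category counts
diag(adj) <- 0                                # zero out the self-counts
adj
```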

How to keep User ID using Rtsne package

I want to use t-SNE to visualize user variables, but I also want to be able to join the result back to each user's social information.
Unfortunately, the output of Rtsne doesn't seem to return the user id.
The data looks like this:
client_id recency frequen monetary
1 2 1 1 1
2 3 3 1 2
3 4 1 1 2
4 5 3 1 1
5 6 4 1 2
6 7 5 1 1
and the Rtsne output:
x y
1 -6.415009 -0.4726438
2 -16.027732 -9.3751709
3 17.947615 0.2561859
4 1.589996 13.8016613
5 -9.332319 -13.2144419
6 10.545698 8.2165265
and the code:
tsne = Rtsne(rfm[, -1], dims=2, check_duplicates=F)
Rtsne preserves the input order of the dataframe you pass to it, so you can bind the id column back on. Note that the coordinates are in tsne$Y (capital Y):
Tsne_with_ID = cbind.data.frame(rfm[,1], tsne$Y)
and then just fix the first column name:
colnames(Tsne_with_ID)[1] <- colnames(rfm)[1]

Count rows matching index in another data frame

I have two data frames.
The first describes a set of households:
#df1
street house etc
1 1 ...
1 2 ...
2 1 ...
2 2 ...
2 3 ...
3 1 ...
The second describes the individuals who live in those houses
#df2
street house person etc
1 1 1 ...
1 1 2 ...
1 2 1 ...
1 2 2 ...
1 2 3 ...
3 1 1 ...
I would like to add a new column to df1 called "member_count" and populate this column with the number of rows in df2 matching both "street" and "house". What is the most readable way of accomplishing this with base R?
tmpdf <- data.frame(table(df2$street, df2$house))
names(tmpdf) <- c("street", "house", "member_count")
tmpdf[1:2] <- lapply(tmpdf[1:2], function(x) as.numeric(as.character(x)))  # table() made these factors
df1 <- merge(df1, tmpdf, by = c("street", "house"), all.x = TRUE)
df1$member_count[is.na(df1$member_count)] <- 0  # pairs absent from df2
In base R, perhaps the easiest way is
df1$membercount <- mapply(function(s,h) nrow(df2[df2$street==s & df2$house==h,]),
df1$street,df1$house)
df1
street house membercount
1 1 1 2
2 1 2 3
3 2 1 0
4 2 2 0
5 2 3 0
6 3 1 1
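For completeness, a self-contained version of the table() + merge() approach on the example data. Two caveats are handled explicitly: table() returns its dimnames as factors, and merge(..., all.x = TRUE) leaves NA for street/house pairs that never appear in df2:

```r
df1 <- data.frame(street = c(1,1,2,2,2,3), house = c(1,2,1,2,3,1))
df2 <- data.frame(street = c(1,1,1,1,1,3),
                  house  = c(1,1,2,2,2,1),
                  person = c(1,2,1,2,3,1))

tmpdf <- data.frame(table(df2$street, df2$house))
names(tmpdf) <- c("street", "house", "member_count")
# convert the factor keys back to numbers so they match df1's columns
tmpdf[1:2] <- lapply(tmpdf[1:2], function(x) as.numeric(as.character(x)))

df1 <- merge(df1, tmpdf, by = c("street", "house"), all.x = TRUE)
df1$member_count[is.na(df1$member_count)] <- 0  # no residents recorded
df1[order(df1$street, df1$house), ]
```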

Conditionally dropping duplicates from a data.frame

I am trying to figure out how to subset my dataset according to repeated values of the variable s, taking into account also the id associated with each row.
Suppose my dataset is:
dat <- read.table(text = "
id s
1 2
1 2
1 1
1 3
1 3
1 3
2 3
2 3
3 2
3 2",
header=TRUE)
What I would like to do is, for each id, to keep only the first row for which s = 3. The result with dat would be:
id s
1 2
1 2
1 1
1 3
2 3
3 2
3 2
I have tried using both duplicated() and which() in order to then apply subset(), but I am not getting anywhere. The main problem is that it is not sufficient to isolate the first row of each "block" of s = 3 rows, because in some cases (as here between id = 1 and id = 2) the 3's of one id run straight into those of the next. Which strategy would you adopt?
Like this:
subset(dat, s != 3 | s == 3 & !duplicated(dat))
# id s
# 1 1 2
# 2 1 2
# 3 1 1
# 4 1 3
# 7 2 3
# 9 3 2
# 10 3 2
Note that subset can be dangerous to work with (see Why is `[` better than `subset`?), so the longer but safer version would be:
dat[dat$s != 3 | dat$s == 3 & !duplicated(dat), ]
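As a sanity check, the same logic (in the safer `[` form, slightly simplified: rows with s != 3 are kept regardless, so the s == 3 condition before the & is redundant) runs end-to-end on the example data:

```r
dat <- read.table(text = "
id s
1 2
1 2
1 1
1 3
1 3
1 3
2 3
2 3
3 2
3 2", header = TRUE)

# keep every s != 3 row, plus the first occurrence of each (id, s = 3) row
res <- dat[dat$s != 3 | !duplicated(dat), ]
res$s
# [1] 2 2 1 3 3 2 2
```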

Calculating the occurrences of numbers in the subsets of a data.frame

I have a data frame in R similar to the one below. My real 'df' dataframe is actually much bigger than this one, but I have simplified things as much as possible to avoid confusion.
So here's the data frame.
id <-c(1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3)
a <-c(3,1,3,3,1,3,3,3,3,1,3,2,1,2,1,3,3,2,1,1,1,3,1,3,3,3,2,1,1,3)
b <-c(3,2,1,1,1,1,1,1,1,1,1,2,1,3,2,1,1,1,2,1,3,1,2,2,1,3,3,2,3,2)
c <-c(1,3,2,3,2,1,2,3,3,2,2,3,1,2,3,3,3,1,1,2,3,3,1,2,2,3,2,2,3,2)
d <-c(3,3,3,1,3,2,2,1,2,3,2,2,2,1,3,1,2,2,3,2,3,2,3,2,1,1,1,1,1,2)
e <-c(2,3,1,2,1,2,3,3,1,1,2,1,1,3,3,2,1,1,3,3,2,2,3,3,3,2,3,2,1,3)
df <-data.frame(id,a,b,c,d,e)
df
Basically what I would like to do is to get the occurrences of numbers for each column (a,b,c,d,e) and for each id group (1,2,3) (for this latter grouping see my column 'id').
So, for column 'a' and for id number '1' (for the latter see column 'id') the code would be something like this:
as.numeric(table(df[1:10,2]))
##The results are:
[1] 3 7
Just to briefly explain my results: in column 'a' (and regarding only those records which have number '1' in column 'id') number '1' occurred 3 times and number '3' occurred 7 times.
Another example: for column 'a' and for id number '2' (for the latter grouping see again column 'id'):
as.numeric(table(df[11:20,2]))
##After running the codes the results are:
[1] 4 3 3
Let me explain again: in column 'a' (regarding only those observations which have number '2' in column 'id'), number '1' occurred 4 times, number '2' occurred 3 times and number '3' occurred 3 times.
So this is what I would like to do: calculate the occurrences of numbers for each custom-defined subset (and then collect these values into a data frame). I know it is not a difficult task, but the PROBLEM is that I will have to change the input 'df' dataframe on a regular basis, and hence both the overall number of rows and columns might change over time...
What I have done so far is to separate the 'df' dataframe by columns, like this:
for (z in (2:ncol(df))) assign(paste("df",z,sep="."),df[,z])
So df.2 will refer to df$a, df.3 will equal df$b, df.4 will equal df$c, etc. But I'm really stuck now and I don't know how to move forward...
Is there a proper, "automatic" way to solve this problem?
How about -
> library(reshape)
> dftab <- table(melt(df,'id'))
> dftab
, , value = 1
variable
id a b c d e
1 3 8 2 2 4
2 4 6 3 2 4
3 4 2 1 5 1
, , value = 2
variable
id a b c d e
1 0 1 4 3 3
2 3 3 3 6 2
3 1 4 5 3 4
, , value = 3
variable
id a b c d e
1 7 1 4 5 3
2 3 1 4 2 4
3 5 4 4 2 5
So to get the number of '3's in column 'a' and group '1' (the indices are [id, variable, value])
you could just do
> dftab['1','a','3']
[1] 7
A combination of tapply and apply can create the data you want:
tapply(df$id, df$id, function(x) apply(df[df$id == x[1], -1], 2, table))
However, when a group doesn't contain every value, as with column 'a' in group 1 (which has no 2's), the result for that id group will be a list rather than a nice table (matrix).
$`1`
$`1`$a
1 3
3 7
$`1`$b
1 2 3
8 1 1
$`1`$c
1 2 3
2 4 4
$`1`$d
1 2 3
2 3 5
$`1`$e
1 2 3
4 3 3
$`2`
a b c d e
1 4 6 3 2 4
2 3 3 3 6 2
3 3 1 4 2 4
$`3`
a b c d e
1 4 2 1 5 1
2 1 4 5 3 4
3 5 4 4 2 5
I'm sure someone will have a more elegant solution than this, but you can cobble it together with a simple function and dlply from the plyr package.
ColTables <- function(df) {
counts <- list()
for(a in names(df)[names(df) != "id"]) {
counts[[a]] <- table(df[a])
}
return(counts)
}
results <- dlply(df, "id", ColTables)
This gets you back a list - the first "layer" of the list will be the id variable; the second the table results for each column for that id variable. For example:
> results[['2']]['a']
$a
1 2 3
4 3 3
For id variable = 2, column = a, per your above example.
A way to do it is using the aggregate function, but you have to add a column to your dataframe
> df$freq <- 0
> aggregate(freq~a+id,df,length)
a id freq
1 1 1 3
2 3 1 7
3 1 2 4
4 2 2 3
5 3 2 3
6 1 3 4
7 2 3 1
8 3 3 5
Of course you can write a function to do it, so it's easier to do it frequently, and you don't have to add a column to your actual data frame
> frequency <- function(df,groups) {
+ relevant <- df[,groups]
+ relevant$freq <- 0
+ aggregate(freq~.,relevant,length)
+ }
> frequency(df,c("b","id"))
b id freq
1 1 1 8
2 2 1 1
3 3 1 1
4 1 2 6
5 2 2 3
6 3 2 1
7 1 3 2
8 2 3 4
9 3 3 4
You didn't say how you'd like the data. The by function might give you the output you like.
by(df, df$id, function(x) lapply(x[,-1], table))
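Finally, if you only need one column at a time, a plain two-way table in base R already gives the per-id counts; a sketch with just the id and a columns from the example:

```r
id <- c(1,1,1,1,1,1,1,1,1,1, 2,2,2,2,2,2,2,2,2,2, 3,3,3,3,3,3,3,3,3,3)
a  <- c(3,1,3,3,1,3,3,3,3,1, 3,2,1,2,1,3,3,2,1,1, 1,3,1,3,3,3,2,1,1,3)
df <- data.frame(id, a)

# rows are id groups, columns are the observed values of a
tab <- with(df, table(id, a))
tab
#    a
# id  1 2 3
#   1 3 0 7
#   2 4 3 3
#   3 4 1 5
```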
