How can I calculate the sparsity of a data.frame in R?

I have a data.frame structured like this:
A B C D E
F 1 0 7 0 0
G 0 0 0 1 1
H 1 1 0 0 0
I 1 2 1 0 0
L 1 0 0 0 0
and I want to calculate the sparsity (i.e. the percentage of 0 values) of this data.frame.
How can I do this?

sum(df == 0)/(dim(df)[1]*dim(df)[2])
[1] 0.6
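Equivalently, since df == 0 is a logical matrix, taking its mean gives the same proportion in one step (a minimal alternative; add na.rm = TRUE if the data contain missing values):
mean(df == 0)
[1] 0.6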

Related

Make an adjacency matrix in R

I want to make an adjacency matrix from a dataframe (mydf) consisting of several rows, with the following rules:
List all letters as a square matrix.
Count and sum the number of connections from source to the rest of the columns (p1 p2 p3 p4 p5) in the corresponding rows. For example, b is connected with a 5 times (rows 2 and 8).
If a letter is not included in source, its connection values should be zero.
The dataframe is:
mydf <- data.frame(p1 = c('a','a','a','b','g','b','c','c','d'),
                   p2 = c('b','c','d','c','d','e','d','e','e'),
                   p3 = c('a','a','c','c','d','d','d','a','a'),
                   p4 = c('a','a','b','c','c','e','d','a','b'),
                   p5 = c('a','b','c','d','I','b','b','c','z'),
                   source = c('a','b','c','d','e','e','a','b','d'))
The adjacency matrix should be as follows:
a b c d e g I z
a 4 2 1 3 0 0 0 0
b 5 1 3 0 1 0 0 0
c 1 1 2 1 0 0 0 0
d 1 2 3 2 1 0 0 1
e 0 2 1 3 2 1 1 0
g 0 0 0 0 0 0 0 0
I 0 0 0 0 0 0 0 0
z 0 0 0 0 0 0 0 0
I have hundreds of columns and thousands of rows, so I would appreciate the fastest way to do this in R.
In base R, we can use table():
vals <- unlist(mydf[-ncol(mydf)])
table(factor(rep(mydf$source, ncol(mydf) - 1), levels = unique(vals)), vals)
# vals
# a b c d e g I z
# a 4 2 1 3 0 0 0 0
# b 5 1 3 0 1 0 0 0
# g 0 0 0 0 0 0 0 0
# c 1 1 2 1 0 0 0 0
# d 1 2 3 2 1 0 0 1
# e 0 2 1 3 2 1 1 0
# I 0 0 0 0 0 0 0 0
# z 0 0 0 0 0 0 0 0
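Note that levels = unique(vals) keeps the levels in order of first appearance, which is why the rows above are not alphabetical. If the alphabetical ordering of the expected output matters, sort the levels and apply them to both dimensions; a small tweak of the same call:
lv <- sort(unique(vals))
table(factor(rep(mydf$source, ncol(mydf) - 1), levels = lv), factor(vals, levels = lv))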
With the tidyverse, we can do:
library(dplyr)
library(tidyr)
mydf %>%
  pivot_longer(cols = -source) %>%
  count(source, value) %>%
  pivot_wider(names_from = value, values_from = n) %>%
  complete(source = names(.)[-1]) %>%
  mutate_all(~replace_na(., 0))
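If a plain matrix with row names is preferred (matching the base R result), the tibble produced by this pipeline can be converted afterwards; a sketch, assuming the pipeline above was assigned to res and the tibble package is available:
library(tibble)
res %>%
  column_to_rownames("source") %>%
  as.matrix()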

Automatic subsetting of a dataframe on the basis of a prediction matrix

I have created a prediction matrix for a large dataset as follows:
library(mice)
dfpredm <- quickpred(df, mincor=.3)
A B C D E F G H I J
A 0 1 1 1 0 1 0 1 1 0
B 1 0 0 0 1 0 1 0 0 1
C 0 0 0 1 1 0 0 0 0 0
D 1 0 1 0 0 1 0 1 0 1
E 0 1 0 1 0 1 1 0 1 0
F 0 0 1 0 0 0 1 0 0 0
G 0 1 0 1 0 0 0 0 0 0
H 1 0 1 0 0 1 0 0 0 1
I 0 1 0 1 1 0 1 0 0 0
J 1 0 1 0 0 1 0 1 0 0
I would like to create a subset of the original df on the basis of dfpredm.
More specifically I would like to do the following:
Let's assume that my dependent variable is F.
According to the prediction matrix F is correlated with C and G.
In addition, C and G are best predicted by D,E and B,D respectively.
The idea is now to create a subset of df based on the dependent variable F, keeping the columns whose value in the F row is 1.
Fpredictors <- df[,(dfpredm["F",]) == 1]
But I also want to do the same for the variables whose value in the F row is 1. I am thinking of first getting the column names like this:
Fpredcol <-colnames(dfpredm[,(dfpredm["c241",]) == 1])
And then doing a for loop with these column names?
For this specific example I would like to end up with the subset:
dfsub <- df[,c("F","C","G","B","E","D")]
I would however like to automate this process. Could anyone show me how to do this?
Here is one strategy that seems like it would work for you:
library(magrittr)  # provides the %>% pipe used below

# columns flagged 1 in the given predictor's row
first_preds <- function(dat, predictor) {
  cols <- which(dat[predictor, ] == 1)
  names(dat)[cols]
}

# wrap first_preds() for getting best and second best predictors
first_and_second_preds <- function(dat, predictor) {
  matches <- first_preds(dat, predictor)
  matches <- c(matches, unlist(lapply(matches, function(x) first_preds(dat, x))))
  c(predictor, matches) %>% unique()
}
dat[first_and_second_preds(dat, "F")] # order is not exactly the same as your output
F C G D E B
A 1 1 0 1 0 1
B 0 0 1 0 1 0
C 0 0 0 1 1 0
D 1 1 0 0 0 0
E 1 0 1 1 0 1
F 0 1 1 0 0 0
G 0 0 0 1 0 1
H 1 1 0 0 0 0
I 0 0 1 1 1 1
J 1 1 0 0 0 0
Not sure if the ordering in the result is important, but you could add the logic if it is.
Using dat from here (a kinder way to share small R data on SO):
dat <- read.table(
text = "A B C D E F G H I J
A 0 1 1 1 0 1 0 1 1 0
B 1 0 0 0 1 0 1 0 0 1
C 0 0 0 1 1 0 0 0 0 0
D 1 0 1 0 0 1 0 1 0 1
E 0 1 0 1 0 1 1 0 1 0
F 0 0 1 0 0 0 1 0 0 0
G 0 1 0 1 0 0 0 0 0 0
H 1 0 1 0 0 1 0 0 0 1
I 0 1 0 1 1 0 1 0 0 0
J 1 0 1 0 0 1 0 1 0 0",
header = TRUE
)
Something a little more general that lets you supply self-selected predictors directly:
all_preds <- function(dat, predictors) {
  unlist(lapply(predictors, function(x) names(dat)[which(dat[x, ] == 1)]))
}
dat[all_preds(dat, c("A", "B"))]
B C D F H I A E G J
A 1 1 1 1 1 1 0 0 0 0
B 0 0 0 0 0 0 1 1 1 1
C 0 0 1 0 0 0 0 1 0 0
D 0 1 0 1 1 0 1 0 0 1
E 1 0 1 1 0 1 0 0 1 0
F 0 1 0 0 0 0 0 0 1 0
G 1 0 1 0 0 0 0 0 0 0
H 0 1 0 1 0 0 1 0 0 1
I 1 0 1 0 0 0 0 1 1 0
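To get the subset of the original data (dfsub in the question) rather than a subset of the prediction matrix itself, the same helper can be applied to dfpredm and the resulting names used to index df; a sketch, assuming dfpredm is the matrix returned by quickpred() and wrapping it in as.data.frame() so the name-based indexing inside the helpers works:
dfsub <- df[first_and_second_preds(as.data.frame(dfpredm), "F")]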

R Frequency table with condition

I have a dataframe with two columns, "CaseID" and "Event", and want to know how often an Event with ID=X is followed by an Event with ID=Y. But I am only interested in consecutive events with the same CaseID.
The command
df <- data.frame(CaseID = c(1,1,1,2,2,2,3,3,3),
                 Event = c("A","B","C","A","B","D","B","C","E"))
df
table(df[1:(nrow(df) - 1), 2], df[2:nrow(df), 2])
results in
CaseID Event
1 1 A
2 1 B
3 1 C
4 2 A
5 2 B
6 2 D
7 3 B
8 3 C
9 3 E
A B C D E
A 0 2 0 0 0
B 0 0 2 1 0
C 1 0 0 0 1
D 0 1 0 0 0
E 0 0 0 0 0
C -> A and D -> B have different CaseIDs and should be 0, so what I am looking for is:
B C D E
A 2 0 0 0
B 0 2 1 0
C 0 0 0 1
D 0 0 0 0
E 0 0 0 0
Is there any elegant way to add a condition to the table-command, based on two consecutive rows?
We can tabulate only the consecutive Events that share the same CaseID:
> x <- diff(df$CaseID) == 0
> table(df[1:(nrow(df) - 1), 2][x], df[2:nrow(df), 2][x])
A B C D E
A 0 2 0 0 0
B 0 0 2 1 0
C 0 0 0 0 1
D 0 0 0 0 0
E 0 0 0 0 0
In case CaseID might be non-numeric:
x <- df$CaseID[-1] == df$CaseID[-length(df$CaseID)]
table(df[1:(nrow(df) - 1), 2][x], df[2:nrow(df), 2][x])
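The same subsetting reads a little more directly with head() and tail(), which drop the last and the first element respectively; this is an equivalent formulation of the answer above, not a different method:
x <- df$CaseID[-1] == df$CaseID[-nrow(df)]
table(head(df$Event, -1)[x], tail(df$Event, -1)[x])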

How to edit this matrix in R?

I have a matrix named dx:
a b c d e f g h
cat 0 0 0 0 0 0 0 0
dog 1 0 1 0 0 0 0 1
fish 1 1 1 0 0 0 0 0
egg 0 0 0 0 0 0 0 0
How do I delete the rows that are all zeros, like cat and egg, so that I end up with only this:
a b c d e f g h
dog 1 0 1 0 0 0 0 1
fish 1 1 1 0 0 0 0 0
You can try something like this:
m <- matrix(c(1,1,1,0,
              0,0,0,0,
              1,0,1,0,
              0,0,0,0,
              1,1,1,1), ncol = 4, byrow = TRUE)
m[rowSums(abs(m)) != 0, ]
# keep rows that contain at least one non-zero value
zeros_removed <- apply(dx, 1, function(row) any(row != 0))
dx[zeros_removed, ]
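A vectorised alternative that avoids apply(): keep only the rows with at least one non-zero entry (drop = FALSE preserves the matrix shape even when a single row survives):
dx[rowSums(dx != 0) > 0, , drop = FALSE]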

Delete columns from a square matrix that sum to zero along with corresponding rows

I have a binary transition matrix. I want to delete rows associated with columns that sum to zero. For example, if
A B C D E
A 0 0 0 1 0
B 1 0 0 1 0
C 0 0 1 1 0
D 0 0 1 0 0
E 0 0 1 1 0
columns B and E sum to zero. I know how to get rid of the columns like this,
> a.adj=a[,!!colSums(a)]
> a.adj
A C D
A 0 0 1
B 1 0 1
C 0 1 1
D 0 1 0
E 0 1 1
but how can I delete rows B and E at the same time, to get
A C D
A 0 0 1
C 0 1 1
D 0 1 0
If the rownames and colnames are in the same order
indx <- !!colSums(a)
a[indx,indx]
# A C D
#A 0 0 1
#C 0 1 1
#D 0 1 0
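For readers puzzled by the double negation: !!colSums(a) coerces the column sums to logical, so any non-zero sum becomes TRUE and indx can index rows and columns in one step. An equivalent, arguably clearer spelling:
indx <- colSums(a) != 0
a[indx, indx]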
Use names to select both columns and rows
> ind <- colnames(a[,!!colSums(a)])
> a[ind, ind]
A C D
A 0 0 1
C 0 1 1
D 0 1 0
