R igraph adjacency matrix weighted graph – plot is not weighted

I am trying to plot a weighted graph of terms used in tweets. Basically I made a term-document matrix, removed sparse terms, built an adjacency matrix of the remaining words, and would like to plot it.
I can't figure out where the problem is. I tried to do it exactly like on: http://www.rdatamining.com/examples/text-mining
Here's my code:
tweet_corpus = Corpus(VectorSource(df$CONTENT))
tdm = TermDocumentMatrix(
  tweet_corpus,
  control = list(
    removePunctuation = TRUE,
    stopwords = c("hehe", "haha", stopwords_phil, stopwords("english"), stopwords("spanish")),
    removeNumbers = TRUE, tolower = TRUE)
)
m = as.matrix(tdm)
termDocMatrix <- m
termDocMatrix[5:10,1:20]
Docs
Terms 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
aabutin 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
aad 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
aaf 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
aali 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
aannacm 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
aantukin 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
myTdm2 <- removeSparseTerms(tdm, sparse =0.98)
m2 <- as.matrix(myTdm2)
m2[5:10,1:20]
Docs
Terms 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
filipino 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
give 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0
god 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
good 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
guy 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0
haiyan 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
myTdm2
<<TermDocumentMatrix (terms: 34, documents: 27395)>>
Non-/sparse entries: 39769/891661
Sparsity : 96%
Maximal term length: 9
Weighting : term frequency (tf)
termDocMatrix2 <- m2
termDocMatrix2[termDocMatrix2>=1] <- 1
termMatrix2 <- termDocMatrix2 %*% t(termDocMatrix2)
termMatrix2[5:10,5:10]
Terms
Terms disaster give god good guy test
disaster 623 6 53 11 4 19
give 6 592 98 16 8 6
god 53 98 2679 135 38 29
good 11 16 135 816 21 5
guy 4 8 38 21 637 5
test 19 6 29 5 5 610
g2 <- graph.adjacency(termMatrix2, weighted=T, mode="undirected")
g2 <- simplify(g2)
V(g2)$label <- V(g2)$name
V(g2)$degree <- degree(g2)
set.seed(3952)
layout1 <- layout.fruchterman.reingold(g2)
plot(g2, layout=layout1)
plot(g2, layout=layout.kamada.kawai)
V(g2)$label.cex <- 2.2 * V(g2)$degree / max(V(g2)$degree)+ .2
V(g2)$label.color <- rgb(0, 0, .2, .8)
V(g2)$frame.color <- NA
egam <- (log(E(g2)$weight)+.4) / max(log(E(g2)$weight)+.4)
E(g2)$color <- rgb(.5, .5, 0, egam)
E(g2)$width <- egam
plot(g2, layout=layout1)
This then looks like: [first plot omitted]
but I would like to have something like this: [second plot omitted]
Apparently the weighting doesn't work - but why?!
Thank you guys in advance!

Even though your graph is weighted, the layout algorithm does not use the weights unless you explicitly tell it to do so. Try this:
layout1 <- layout.fruchterman.reingold(g2, weights=E(g2)$weight)
However, if your weights are wildly varying in terms of magnitude, it is usually better to use the logarithm of the weights (plus some constant to make all of them strictly positive) as the input of the layout algorithm.
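For instance, a minimal sketch using g2 from the question (the exact shift added to the log-weights is an assumption; tune it so that all values stay strictly positive):
# shift the log-weights so every value is strictly positive before layout
layout_weights <- log(E(g2)$weight)
layout_weights <- layout_weights - min(layout_weights) + 0.4
layout1 <- layout.fruchterman.reingold(g2, weights = layout_weights)
plot(g2, layout = layout1)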

Related

Multiplying multiple columns with each other into a new dataframe in R

I want to multiply many of my binary variables into new columns, so called interactive variables. My dataset is structured like this:
YearCountry <- data.frame(Time = c("2000", "2001", "2002", "2003",
                                   "2000", "2001", "2002", "2003",
                                   "2000", "2001", "2002", "2003"),
                          AL = c(1,1,1,1,0,0,0,0,0,0,0,0),
                          FR = c(0,0,0,0,1,1,1,1,0,0,0,0),
                          UK = c(0,0,0,0,0,0,0,0,1,1,1,1),
                          Y2000d = c(1,0,0,0,1,0,0,0,1,0,0,0),
                          Y2001d = c(0,1,0,0,0,1,0,0,0,1,0,0),
                          Y2002d = c(0,0,1,0,0,0,1,0,0,0,1,0),
                          Y2003d = c(0,0,0,1,0,0,0,1,0,0,0,1))
YearCountry
Time AL FR UK Y2000d Y2001d Y2002d Y2003d
1 2000 1 0 0 1 0 0 0
2 2001 1 0 0 0 1 0 0
3 2002 1 0 0 0 0 1 0
4 2003 1 0 0 0 0 0 1
5 2000 0 1 0 1 0 0 0
6 2001 0 1 0 0 1 0 0
7 2002 0 1 0 0 0 1 0
8 2003 0 1 0 0 0 0 1
9 2000 0 0 1 1 0 0 0
10 2001 0 0 1 0 1 0 0
11 2002 0 0 1 0 0 1 0
12 2003 0 0 1 0 0 0 1
I need to multiply the binary variable for each of the countries (AL, FR, UK) with each of the binary year variables so that I get #countries x #years new variables. In this case I have three countries and four years, which gives 12 new variables. My full data contains 105 countries/regions and stretches over twenty years, so I need a general formula. I want data that looks like this:
Interact <- data.frame(Time = c("2000", "2001", "2002", "2003",
                                "2000", "2001", "2002", "2003",
                                "2000", "2001", "2002", "2003"),
                       Y2000xAL = c(1,0,0,0,0,0,0,0,0,0,0,0),
                       Y2001xAL = c(0,1,0,0,0,0,0,0,0,0,0,0),
                       Y2002xAL = c(0,0,1,0,0,0,0,0,0,0,0,0),
                       Y2003xAL = c(0,0,0,1,0,0,0,0,0,0,0,0),
                       Y2000xFR = c(0,0,0,0,1,0,0,0,0,0,0,0),
                       Y2001xFR = c(0,0,0,0,0,1,0,0,0,0,0,0),
                       Y2002xFR = c(0,0,0,0,0,0,1,0,0,0,0,0),
                       Y2003xFR = c(0,0,0,0,0,0,0,1,0,0,0,0),
                       Y2000xUk = c(0,0,0,0,0,0,0,0,1,0,0,0),
                       Y2001xUK = c(0,0,0,0,0,0,0,0,0,1,0,0),
                       Y2002xUK = c(0,0,0,0,0,0,0,0,0,0,1,0),
                       Y2003xUK = c(0,0,0,0,0,0,0,0,0,0,0,1))
Interact
Time Y2000xAL Y2001xAL Y2002xAL Y2003xAL Y2000xFR Y2001xFR Y2002xFR Y2003xFR Y2000xUk Y2001xUK Y2002xUK Y2003xUK
1 2000 1 0 0 0 0 0 0 0 0 0 0 0
2 2001 0 1 0 0 0 0 0 0 0 0 0 0
3 2002 0 0 1 0 0 0 0 0 0 0 0 0
4 2003 0 0 0 1 0 0 0 0 0 0 0 0
5 2000 0 0 0 0 1 0 0 0 0 0 0 0
6 2001 0 0 0 0 0 1 0 0 0 0 0 0
7 2002 0 0 0 0 0 0 1 0 0 0 0 0
8 2003 0 0 0 0 0 0 0 1 0 0 0 0
9 2000 0 0 0 0 0 0 0 0 1 0 0 0
10 2001 0 0 0 0 0 0 0 0 0 1 0 0
11 2002 0 0 0 0 0 0 0 0 0 0 1 0
12 2003 0 0 0 0 0 0 0 0 0 0 0 1
Here's an approach with dplyr::across. We can make the final result into a plain data.frame with purrr::invoke, as demonstrated in this answer.
library(dplyr)
library(purrr)
library(stringr)
YearCountry %>%
  mutate(across(AL:UK, ~ . * select(cur_data(), Y2000d:Y2003d))) %>%
  select(-(Y2000d:Y2003d)) %>%
  invoke(.f = data.frame) %>%
  rename_with(~ str_replace(., "\\.", ""))
Time ALY2000d ALY2001d ALY2002d ALY2003d FRY2000d FRY2001d FRY2002d FRY2003d UKY2000d UKY2001d UKY2002d UKY2003d
1 2000 1 0 0 0 0 0 0 0 0 0 0 0
2 2001 0 1 0 0 0 0 0 0 0 0 0 0
3 2002 0 0 1 0 0 0 0 0 0 0 0 0
4 2003 0 0 0 1 0 0 0 0 0 0 0 0
5 2000 0 0 0 0 1 0 0 0 0 0 0 0
6 2001 0 0 0 0 0 1 0 0 0 0 0 0
7 2002 0 0 0 0 0 0 1 0 0 0 0 0
8 2003 0 0 0 0 0 0 0 1 0 0 0 0
9 2000 0 0 0 0 0 0 0 0 1 0 0 0
10 2001 0 0 0 0 0 0 0 0 0 1 0 0
11 2002 0 0 0 0 0 0 0 0 0 0 1 0
12 2003 0 0 0 0 0 0 0 0 0 0 0 1
1) model.matrix We split the names by the number of characters in them (the country names have 2 characters and the year dummies have 6) and paste pluses between them. (Alternately, use Plus(grep("^..$", nms, value = TRUE)) to get the country names and use that in place of spl["2"], and similarly Plus(grep("^Y....d$", nms, value = TRUE)) in place of spl["6"]; a sketch of that variant follows the compact version below.) The split-and-paste step gives:
c(`2` = "AL+FR+UK", `6` = "Y2000d+Y2001d+Y2002d+Y2003d")
and from that the formula:
~(AL + FR + UK):(Y2000d + Y2001d + Y2002d + Y2003d) + 0
and then compute its model matrix.
The formula could also be expanded to one accepted by lm by modifying the sprintf format, so we might not even need to create the model matrix. For example, if we had a response vector R then we could write s <- sprintf("R ~ (%s)*(%s)", spl["2"], spl["6"]); fo <- formula(s); lm(fo, YearCountry) to include all variables and the interactions of countries and years as well as an intercept.
Plus <- function(x) paste(x, collapse = "+")
nms <- names(YearCountry)[-1]
spl <- sapply(split(nms, nchar(nms)), Plus)
s <- sprintf("~ (%s):(%s)+0", spl["2"], spl["6"])
fo <- formula(s)
model.matrix(fo, YearCountry)
giving this matrix:
AL:Y2000d AL:Y2001d AL:Y2002d AL:Y2003d FR:Y2000d FR:Y2001d FR:Y2002d FR:Y2003d UK:Y2000d UK:Y2001d UK:Y2002d UK:Y2003d
1 1 0 0 0 0 0 0 0 0 0 0 0
2 0 1 0 0 0 0 0 0 0 0 0 0
3 0 0 1 0 0 0 0 0 0 0 0 0
4 0 0 0 1 0 0 0 0 0 0 0 0
5 0 0 0 0 1 0 0 0 0 0 0 0
6 0 0 0 0 0 1 0 0 0 0 0 0
7 0 0 0 0 0 0 1 0 0 0 0 0
8 0 0 0 0 0 0 0 1 0 0 0 0
9 0 0 0 0 0 0 0 0 1 0 0 0
10 0 0 0 0 0 0 0 0 0 1 0 0
11 0 0 0 0 0 0 0 0 0 0 1 0
12 0 0 0 0 0 0 0 0 0 0 0 1
attr(,"assign")
[1] 1 2 3 4 5 6 7 8 9 10 11 12
Alternately we can write it compactly like this:
Plus <- function(x) paste(x, collapse = "+")
nms <- names(YearCountry)
s <- sprintf("~ (%s):(%s)+0", Plus(nms[2:4]), Plus(nms[5:8]))
fo <- formula(s)
model.matrix(fo, YearCountry)
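A sketch of the grep() variant mentioned in (1), selecting the columns by name pattern instead of by name length (assuming the country columns are exactly two letters and the year dummies follow the Yxxxxd pattern):
Plus <- function(x) paste(x, collapse = "+")
nms <- names(YearCountry)[-1]
countries <- Plus(grep("^..$", nms, value = TRUE))      # two-letter country names
years <- Plus(grep("^Y....d$", nms, value = TRUE))      # Yxxxxd year dummies
fo <- formula(sprintf("~ (%s):(%s)+0", countries, years))
model.matrix(fo, YearCountry)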
2) eList Another approach is to use list comprehensions. With the eList package we can do this:
library(eList)
DF(for(i in YearCountry[2:4]) for(j in YearCountry[5:8]) i*j)
giving this data frame. Use as.matrix(...) on it if you want a matrix.
AL.Y2000d AL.Y2001d AL.Y2002d AL.Y2003d FR.Y2000d FR.Y2001d FR.Y2002d FR.Y2003d UK.Y2000d UK.Y2001d UK.Y2002d UK.Y2003d
1 1 0 0 0 0 0 0 0 0 0 0 0
2 0 1 0 0 0 0 0 0 0 0 0 0
3 0 0 1 0 0 0 0 0 0 0 0 0
4 0 0 0 1 0 0 0 0 0 0 0 0
5 0 0 0 0 1 0 0 0 0 0 0 0
6 0 0 0 0 0 1 0 0 0 0 0 0
7 0 0 0 0 0 0 1 0 0 0 0 0
8 0 0 0 0 0 0 0 1 0 0 0 0
9 0 0 0 0 0 0 0 0 1 0 0 0
10 0 0 0 0 0 0 0 0 0 1 0 0
11 0 0 0 0 0 0 0 0 0 0 1 0
12 0 0 0 0 0 0 0 0 0 0 0 1
3) listcompr listcompr is another list comprehension package. Note that the development version of this package is needed in order to use bycol=. Replace gen.named.matrix with gen.named.data.frame if you want a data frame.
# devtools::install_github("patrickroocks/listcompr")
library(listcompr)
nms <- names(YearCountry)
gen.named.matrix("{nms[i]}.{nms[j]}", YearCountry[[i]] * YearCountry[[j]],
                 i = 2:4, j = 5:8, bycol = TRUE)

Adding multiple columns in between columns in a data frame using a For Loop

outputdata (df)
Store.No Task
1 70
2 50
3 20
I am trying to add 53 columns after the 'Task' column, using its position rather than its name. I then want the column names to run from 1 to 53, with 0 in every row. The rows in this example go to row number 3, but that could vary, so would it be possible to use the nrow function to specify the number of rows rather than hard-coding it?
outputdata- Desired Outcome
Store.No Task 1 2 3 4 5 6 7 8 9 10 ...53
1 70 0 0 0 0 0 0 0 0 0 0
2 50 0 0 0 0 0 0 0 0 0 0
3 20 0 0 0 0 0 0 0 0 0 0
Code used
x <- 1
y <- 0
for (i in 1:53){
  outputdata <- add_column(outputdata, x = 0, .after = Fo+y)
  y <- y + 1
  x <- x + 1
}
The problem I'm getting is that the columns are being named x, x.1, x.2, x.3, x.4 ... x.53 rather than 1, 2, 3, 4 ... 53; I'm not too sure why this is.
I am still quite new to R, so if there is a far more efficient way of doing this then please let me know.
Many thanks
You do not need to loop to do this:
as.data.frame(cbind(df, matrix(0, nrow = nrow(df), ncol = 53)))
Store.No Task Third Fourth 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
1 1 70 4 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 2 50 5 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 3 20 6 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
matrix will create a matrix with 53 columns and 3 rows filled with 0
cbind will add this matrix to the end of your data
as.data.frame will convert it to a dataframe
Update
To insert these zero columns positionally you can subset your df into two parts: df[, 1:2] are the first and second columns, while df[,3:ncol(df)] are the third to end of your dataframe.
as.data.frame(cbind(df[, 1:2], matrix(0, nrow = nrow(df), ncol = 53), df[, 3:ncol(df)]))
Store.No Task 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
1 1 70 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 2 50 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 3 20 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 Third Fourth
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 7
2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 8
3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6 9
add_column
Alternatively you can use the add_column function from the tibble package as you were in your post using the .after argument to insert after the second column:
library(tibble)
tibble::add_column(df, as.data.frame(matrix(0, nrow = nrow(df), ncol = 53)), .after = 2)
Note: this function will fix the column names to add a "V" before any column name that starts with a number. So 1 will become V1.
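If you want the inserted columns named 1 to 53 instead, one possible follow-up (my own suggestion, not part of the original answer) is to strip the automatic "V" prefix afterwards:
library(tibble)
out <- tibble::add_column(df, as.data.frame(matrix(0, nrow = nrow(df), ncol = 53)), .after = 2)
# strip the "V" prefix from the generated V1..V53 names, leaving 1..53
names(out) <- sub("^V(\\d+)$", "\\1", names(out))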
Data
df <- data.frame(Store.No = 1:3,
                 Task = c(70, 50, 20),
                 Third = 4:6,
                 Fourth = 7:9)

EDITED: spreading data based on column match

I have an empty data frame I am trying to populate.
Df1 looks like this:
col1 col2 col3 col4 important_col
1 82 193 104 86 120
2 85 68 116 63 100
3 78 145 10 132 28
4 121 158 103 15 109
5 48 175 168 190 151
6 91 136 156 180 155
Df2 looks like this:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
A data frame full of 0's.
I combine the data frames to make df_fin.
What I am trying to do now is something similar to a dummy-variable approach… I have the values in important_col and want to spread this column out, so that if important_col = 28 a 1 is put in column 28.
How can I go about creating this?
EDIT: I added a comment to illustrate what I am trying to achieve. I paste it here also.
Say that important_col is countries; then the column names would be all the countries in the world, in this example all 241 of them. However, the data I have already collected might only contain 200 of these countries. So one-hot encoding here would give me 200 columns, but I am potentially missing 41 countries. So if a new user from a country not currently in the data comes along and inputs their country, it wouldn't be recognised.
Smaller example:
col1 col2 col3 col4 important_col 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
1 11 14 3 11 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 1 1 19 15 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 3 17 10 10 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 13 10 8 17 10 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
5 18 5 3 18 19 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 11 10 9 5 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 5 11 18 16 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
8 5 8 13 8 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
9 10 1 7 16 12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
10 4 17 17 3 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Expected output:
col1 col2 col3 col4 important_col 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
1 11 14 3 11 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 1 1 19 15 4 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 3 17 10 10 6 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 13 10 8 17 10 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
5 18 5 3 18 19 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
6 11 10 9 5 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
7 5 11 18 16 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
8 5 8 13 8 6 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
9 10 1 7 16 12 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
10 4 17 17 3 4 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
The number of columns is greater than the number of potential entries into important_col. Using the countries example the columns would be all countries in the world and the important_col would consist of a subset of these countries.
Code to generate the above:
df1 <- data.frame(replicate(5, sample(1:20, 10, rep=TRUE)))
colnames(df1) <- c("col1", "col2", "col3", "col4", "important_col")
df2 <- data.frame(replicate(20, sample(0:0, nrow(df1), rep=TRUE)))
colnames(df2) <- gsub("X", "", colnames(df2))
df_fin <- cbind(df1, df2)
df_fin
Does this solve the problem:
Data:
set.seed(123)
df1 <- data.frame(replicate(5, sample(1:20, 10, rep=TRUE)))
colnames(df1) <- c("col1", "col2", "col3", "col4", "important_col")
df2 <- data.frame(replicate(20, sample(0:0, nrow(df1), rep=TRUE)))
colnames(df2) <- gsub("X", "", colnames(df2))
df_fin <- cbind(df1, df2)
Result:
vecp <- colnames(df2)                 # all possible column labels ("1" .. "20")
imp_col <- df1$important_col
# one row of labels per observation, so each row can be compared with its important_col
m <- matrix(vecp, byrow = TRUE, nrow = length(imp_col), ncol = length(vecp))
d <- ifelse(m == imp_col, 1, 0)       # 1 where the label equals that row's important_col
df_fin <- cbind(df1, d)
Output:
col1 col2 col3 col4 important_col 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
1 6 20 18 20 3 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 16 10 14 19 9 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
3 9 14 13 14 9 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
4 18 12 20 16 8 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
5 19 3 14 1 4 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 1 18 15 10 3 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 11 5 11 16 5 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
8 18 1 12 5 10 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
9 12 7 6 7 6 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
10 10 20 3 5 18 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
What you are trying to do is one-hot encoding, which you can easily achieve using model.matrix.
The example below should point you in the right direction:
df <- data.frame(important_col = as.factor(c(1:3)))
df
important_col
1 1
2 2
3 3
as.data.frame(model.matrix(~.-1, df))
important_col1 important_col2 important_col3
1 1 0 0
2 0 1 0
3 0 0 1
Like Sonny mentioned, model.matrix() should do the job. One potential problem is that you have to add back columns for values that did not show up in your important_col, as in the following case:
df <- data.frame(important_col = as.factor(c(1:3, 5)))
df
important_col
1 1
2 2
3 3
4 5
as.data.frame(model.matrix(~.-1, df))
important_col1 important_col2 important_col3 important_col5
1 1 0 0 0
2 0 1 0 0
3 0 0 1 0
4 0 0 0 1
Column important_col4 is missing in the second example because important_col does not contain the value 4. You have to add column 4 back if you need it for analysis.
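One way to avoid adding columns back by hand (my own suggestion, not from the answers above) is to declare the full set of factor levels up front; model.matrix then emits a column for every level, including those absent from the data:
# declare all possible levels (here 1..5), even those absent from the data
df <- data.frame(important_col = factor(c(1:3, 5), levels = 1:5))
as.data.frame(model.matrix(~ . - 1, df))
# important_col4 now appears as an all-zero column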

Turn a long data structure into a wide matrix structure

I do have the following data structure...
ID value
1 1 1
2 1 63
3 1 2
4 1 58
5 2 3
6 2 4
7 3 34
8 3 25
Now I want to turn it into a kind of dyadic data structure: every pair of IDs sharing the same value should be connected.
I tried several options, and:
df_wide <- dcast(df, ID ~ value)
... have brought me a long way down the road...
ID 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 39 40
1 1001 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 1006 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 1007 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 2 0 0
4 1011 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
5 1018 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 1020 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
7 1030 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0
8 1036 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
My main problem now is turning it into a proper matrix so I can get an igraph object out of it.
df_wide_matrix <- data.matrix(df_wide)
df_aus_wide_g <- graph.edgelist(df_wide_matrix ,directed = TRUE)
doesn't get me there...
I also tried to transform it into an adjacency matrix...
df_wide_matrix <- get.adjacency(graph.edgelist(as.matrix(df_wide), directed=FALSE))
... but it didn't work either
If you want to create an edge between all IDs with the same value, try something like this instead. First merge the data frame onto itself by the value. Then, remove the value column, and remove all (undirected) edges that are duplicate or just points. Finally, convert to a two-column matrix and create the edges.
res <- merge(df, df, by='value', all=FALSE)[,c('ID.x','ID.y')]
res <- res[res$ID.x<res$ID.y,]
resg <- graph.edgelist(as.matrix(res))

Losing observations when I use reshape in R

I have data set
> head(pain_subset2, n= 50)
PatientID RSE SE SECODE
1 1001-01 0 0 0
2 1001-01 0 0 0
3 1001-02 0 0 0
4 1001-02 0 0 0
5 1002-01 0 0 0
6 1002-01 1 2a 1
7 1002-02 0 0 0
8 1002-02 0 0 0
9 1002-02 0 0 0
10 1002-03 0 0 0
11 1002-03 0 0 0
12 1002-03 1 1 1
> dim(pain_subset2)
[1] 817 4
> table(pain_subset2$RSE)
0 1
788 29
> table(pain_subset2$SE)
0 1 2a 2b 3 4 5
788 7 5 1 6 4 6
> table(pain_subset2$SECODE)
0 1
788 29
I want to create a matrix with n * 6 (n = # of PatientID, columns = the 6 levels of SE).
When I use reshape, I lose many observations:
> dim(p)
[1] 246 9
My code:
p <- reshape(pain_subset2, timevar = "SE", idvar = c("PatientID","RSE"),v.names = "SECODE", direction = "wide")
p[is.na(p)] <- 0
> table(p$RSE)
0 1
226 20
Compared with the table of RSE above, I lost 9 patients having RSE = 1.
This is the output I have:
PatientID RSE SECODE.0 SECODE.2a SECODE.1 SECODE.5 SECODE.3 SECODE.2b SECODE.4
1 1001-01 0 0 0 0 0 0 0 0
3 1001-02 0 0 0 0 0 0 0 0
5 1002-01 0 0 0 0 0 0 0 0
6 1002-01 1 0 1 0 0 0 0 0
7 1002-02 0 0 0 0 0 0 0 0
10 1002-03 0 0 0 0 0 0 0 0
12 1002-03 1 0 0 1 0 0 0 0
13 1002-04 0 0 0 0 0 0 0 0
15 1003-01 0 0 0 0 0 0 0 0
18 1003-02 0 0 0 0 0 0 0 0
21 1003-03 0 0 0 0 0 0 0 0
24 1003-04 0 0 0 0 0 0 0 0
27 1003-05 0 0 0 0 0 0 0 0
30 1003-06 0 0 0 0 0 0 0 0
32 1003-07 0 0 0 0 0 0 0 0
35 1004-01 0 0 0 0 0 0 0 0
36 1004-01 1 0 0 0 1 0 0 0
40 1004-02a 0 0 0 0 0 0 0 0
Does anyone know what is happening here? I would really appreciate any help.
Thanks in advance, best.
Try:
library(dplyr)
library(tidyr)
pain_subset2 %>%
  spread(SE, SECODE)
