I'm trying to build a clustering model following the k-means method using both continuous and categorical variables.
The goal is to create clusters based on gender, age, occupation, billing plan, cell phone, and the usage of some applications.
I'm struggling with how to process the categorical data. I know I should turn them into dummies, but I'm not quite sure how to do that for all the categorical variables at once.
Thank you
The table looks like:
ID    Gender  Age  Occupation  Plan      Cell phone  Amazon Prime GB DL  Apple Music GB DL  Audible DB DL
C001  NR      56   Student     Archaius  SAMSUNG                      0                  0    0.498829165
C002  M       25   Management  Malawi    HUAWEI                       0                  0    1
C003  H       32   Professor   Archaius  Apple                        0                  0    0.632005841
One possible solution for creating dummy variables from categorical variables is the "fastDummies" package:
library(fastDummies)

df <- data.frame(NR1 = c(1, 2, 3),
                 NR2 = c(0.1, 0.5, 0.7),
                 FA1 = factor(c("A", "B", "C")),
                 FA2 = factor(c("5", "6", "7")))
str(df)
'data.frame': 3 obs. of 4 variables:
$ NR1: num 1 2 3
$ NR2: num 0.1 0.5 0.7
$ FA1: Factor w/ 3 levels "A","B","C": 1 2 3
$ FA2: Factor w/ 3 levels "5","6","7": 1 2 3
# one variable per factor level
fastDummies::dummy_cols(df)
NR1 NR2 FA1 FA2 FA1_A FA1_B FA1_C FA2_5 FA2_6 FA2_7
1 1 0.1 A 5 1 0 0 1 0 0
2 2 0.5 B 6 0 1 0 0 1 0
3 3 0.7 C 7 0 0 1 0 0 1
# encoding with n-1 columns per factor (if all dummies for a factor are 0, the observation belongs to the omitted first level)
fastDummies::dummy_cols(df, remove_first_dummy = TRUE)
NR1 NR2 FA1 FA2 FA1_B FA1_C FA2_6 FA2_7
1 1 0.1 A 5 0 0 0 0
2 2 0.5 B 6 1 0 1 0
3 3 0.7 C 7 0 1 0 1
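Applied to a table like the one in the question, you can dummy-code all the categorical columns in one call via select_columns and then scale everything before running k-means. This is only a sketch: the data frame customers and the column names Gender, Occupation, Plan and CellPhone are assumptions standing in for your real data, and the number of clusters (4) is arbitrary.

library(fastDummies)

# assumed column names; replace with the ones in your own data frame
cat_cols <- c("Gender", "Occupation", "Plan", "CellPhone")

customers_dummy <- dummy_cols(customers,
                              select_columns = cat_cols,
                              remove_first_dummy = TRUE)

# drop the ID and the original categorical columns before clustering
num_data <- customers_dummy[, setdiff(names(customers_dummy), c("ID", cat_cols))]

# k-means is distance-based, so scale the columns first
km <- kmeans(scale(num_data), centers = 4)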
I'm in the process of cleaning some data for a survival analysis, and I am trying to make it so that an individual only has a single, sustained transition from symptom present (ss=1) to symptom remitted (ss=0). An individual must have a complete, sustained remission in order for it to count as a remission. Statistical problems/issues aside, I'm wondering how I can go about addressing the issues detailed below.
I've been trying to break the problem apart into smaller, more manageable operations and objects; however, the solutions I keep coming to force me to use conditional logic based on the rows immediately above and below a missing value and, quite frankly, I'm at a bit of a loss as to how to do this. I would love a little guidance if you think you know of a good technique I can use or experiment with, or if you know of any good search terms I can use when looking up a solution.
The details are below:
#Fake dataset creation
id <- c(1,1,1,1,1,1,1,2,2,2,2,2,2,2,3,3,3,3,3,3,3,4,4,4,4,4,4,4)
time <-c(0,1,2,3,4,5,6,0,1,2,3,4,5,6,0,1,2,3,4,5,6,0,1,2,3,4,5,6)
ss <- c(1,1,1,1,NA,0,0,1,1,0,NA,0,0,0,1,1,1,1,1,1,NA,1,1,0,NA,NA,0,0)
mydat <- data.frame(id, time, ss)
The goal here is to find a way to fill the NA values in ss so that:
ss for ID #1 looks like this: 1,1,1,1,1,0,0
ss for ID #2 looks like this: 1,1,0,0,0,0,0
ss for ID #3 looks like this: 1,1,1,1,1,1,NA (no change, because the row with the NA will be deleted eventually)
ss for ID #4 looks like this: 1,1,1,1,1,0,0 (this one requires multiple changes, and I expect it is the most challenging to tackle).
I don't really think you have considered all the edge cases: what to do with two NAs in a row at the end of a period, or with 4 or 5 NAs in a row? The following will, however, give you the requested result in your tiny test case, using the na.locf function from zoo:
library(zoo)

# carry values forward, unless the series ends in NA
fillNA <- function(vec) {
  if (is.na(tail(vec, 1))) vec else na.locf(vec)
}
> mydat$locf <- with(mydat, ave(ss, id, FUN=fillNA))
> mydat
id time ss locf
1 1 0 1 1
2 1 1 1 1
3 1 2 1 1
4 1 3 1 1
5 1 4 NA 1
6 1 5 0 0
7 1 6 0 0
8 2 0 1 1
9 2 1 1 1
10 2 2 0 0
11 2 3 NA 0
12 2 4 0 0
13 2 5 0 0
14 2 6 0 0
15 3 0 1 1
16 3 1 1 1
17 3 2 1 1
18 3 3 1 1
19 3 4 1 1
20 3 5 1 1
21 3 6 NA NA
22 4 0 1 1
23 4 1 1 1
24 4 2 0 0
25 4 3 NA 0
26 4 4 NA 0
27 4 5 0 0
28 4 6 0 0
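For readers who prefer dplyr, the same per-id fill can be written as a grouped mutate. This is just a sketch reproducing the ave()/fillNA() logic above (the column name locf_dplyr is mine), not a different method:

library(dplyr)
library(zoo)

# same rule as fillNA(): fill forward within each id,
# unless that id's series ends in NA
mydat <- mydat %>%
  group_by(id) %>%
  mutate(locf_dplyr = if (is.na(last(ss))) ss else na.locf(ss, na.rm = FALSE)) %>%
  ungroup()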
I'm using R to analyze an undirected network of individuals with ethnicities as attributes. I want to create a tie accounts table, or "preference matrix": a square matrix where the values of ethnicity are arrayed on both dimensions, and each cell tells you how many ties correspond to that type of relationship (so from this you can calculate the probability of one group throwing ties to another group, but I just want to use it as an argument to igraph's preference.game function). Here's what I tried:
# I create a variable for ethnicity by assigning the names of my vertices to their corresponding ethnicities
eth <- atts$Ethnicity[match(V(mahmudNet)$name,atts$Actor)]
# I create an adjacency matrix from my network data
mat <- as.matrix(get.adjacency(mahmudNet))
# I create the dimensions for my preference matrix from the Ethnicity values
eth.value <- unique(sort(eth))
# I create an empty matrix using these dimensions
eth.mat <- array(NA,dim=c(length(eth.value),length(eth.value)))
# I create a function that will populate the empty cells of the matrix
for (i in eth.value) {
  for (j in eth.value) {
    eth.mat[i, j] <- sum(mat[eth == i, eth == j])
  }
}
My problem is at the end, I think. I need to figure out an expression that tells R how to populate the cells. The expression I put doesn't seem to work, but I want it so that potentially I could write
a <- sum(mat[eth=="White", eth=="Black"])
And then "a" would return the sum of all the cells in the adjacency matrix that correspond to a White-Black relationship.
Here's a sample of my data:
# data frame with Ethnicity attributes:
Actor Ethnicity
1 Sultan Mahmud of Siak 2
2 Daeng Kemboja 1
3 Raja Kecik of Trengganu 1
4 Raja Alam 2
5 Tun Dalam 2
6 Raja Haji 1
7 The Suliwatang 1
8 Punggawa Miskin 1
9 Tengku Selangor 1
10 Tengku Raja Said 1
11 Datuk Bendahara 2
12 VOC 3
13 King of Selangor 1
14 Dutch at Batavia 3
15 Punggawa Tua 2
16 Raja Tua Encik Andak 1
17 Raja Indera Bungsu 2
18 Sultan of Jambi 2
19 David Boelen 3
20 Datuk Temenggong 2
21 Punggawa Opu Nasti 1
# adjacency matrix with relations
                   Daeng Kemboja Punggawa Opu Nasti Raja Haji Daeng Cellak Daeng Kecik
Daeng Kemboja                  0                  1         1            1           1
Punggawa Opu Nasti             1                  0         1            0           0
Raja Haji                      1                  1         0            0           0
Daeng Cellak                   1                  0         0            0           0
Daeng Kecik                    1                  0         0            0           0
This is a simple job for table, once you have your data in the right shape.
First a sample dataset:
# fake ethnicity data by actor
actor_eth <- data.frame(actor = letters[1:10],
                        eth = sample(1:3, 10, replace = TRUE))

# fake adjacency matrix
adj_mat <- matrix(rbinom(100, 1, .5), ncol = 10)
dimnames(adj_mat) <- list(letters[1:10], letters[1:10])

# blank out the lower triangle & diagonal,
# so the random ties are not asymmetric & there are no self-ties
adj_mat[lower.tri(adj_mat)] <- NA
diag(adj_mat) <- NA
Here's our fake adjacency matrix:
a b c d e f g h i j
a NA 1 1 1 0 0 1 1 0 1
b NA NA 0 1 0 1 0 0 1 0
c NA NA NA 1 1 0 0 1 0 0
d NA NA NA NA 1 0 0 1 1 0
e NA NA NA NA NA 0 0 1 0 1
f NA NA NA NA NA NA 1 1 0 1
g NA NA NA NA NA NA NA 1 1 0
h NA NA NA NA NA NA NA NA 0 0
i NA NA NA NA NA NA NA NA NA 1
j NA NA NA NA NA NA NA NA NA NA
Here's our fake eth table:
actor eth
1 a 3
2 b 3
3 c 3
4 d 2
5 e 1
6 f 3
7 g 3
8 h 3
9 i 1
10 j 2
So what you want to do is: 1) put this in long format, so you have a bunch of rows with a source actor and a target actor, each representing a tie; 2) replace the actor names with their ethnicities, so you have ties with source/target ethnicity; 3) use table to make a cross tab.
# use `melt` to put this in long form, omitting rows showing "non-connections"
library(reshape2)
actor_ties <- subset(melt(adj_mat), value == 1)

# now replace the actor names with their ethnicities to create a data.frame
# of ties by ethnicity
eth_ties <-
  data.frame(source_eth = with(actor_eth, eth[match(actor_ties$Var1, actor)]),
             target_eth = with(actor_eth, eth[match(actor_ties$Var2, actor)]))

# now here's your cross tab
table(eth_ties)
Result:
target_eth
source_eth 1 2 3
1 0 2 1
2 2 0 1
3 3 5 9
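Since the question mentions passing this to igraph's preference.game (called sample_pref in newer igraph versions), note that the function expects a symmetric matrix of linking probabilities rather than raw tie counts, so the cross tab needs to be symmetrized and rescaled first. A rough sketch, assuming every ethnicity code actually appears in actor_eth:

library(igraph)

# symmetric matrix of rough linking probabilities from the tie counts
tie_counts <- as.matrix(table(eth_ties))
tie_counts <- tie_counts + t(tie_counts)   # symmetrize for undirected ties
pref <- tie_counts / max(tie_counts)       # crude rescaling into [0, 1]

# simulate a comparable network with the observed group sizes and preferences
g <- preference.game(nodes = nrow(actor_eth),
                     types = nrow(pref),
                     type.dist = as.numeric(table(actor_eth$eth)),
                     pref.matrix = pref,
                     directed = FALSE)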
I have a data frame that I'm working with that contains experimental data. For the purposes of this post we can limit the discussion to five columns: ExperimentID, ROI, isContrast, isTreated, and Value. ROI is a text-based factor that indicates where a region of interest is drawn, e.g. 'ROI_1', 'ROI_2', etc. isTreated and isContrast are binary fields indicating whether or not some treatment was applied. I want to make a scatter plot comparing the values of, e.g., 'ROI_1' vs. 'ROI_2', which means I need the data paired in such a way that when I plot it the first X value is from Experiment_1 and ROI_1, the first Y value is from Experiment_1 and ROI_2, the next X value is from Experiment_2 and ROI_1, the next Y value is from Experiment_2 and ROI_2, and so on. I only want to make this comparison for common values of isContrast and isTreated (i.e. one plot for each combination of these variables, so 4 plots altogether).
Subsetting doesn't solve my problem because data from different experiments/ROIs was sometimes entered out of numerical order.
The following code produces a mock data set to demonstrate the problem
expID = c('Bob','Bob','Bob','Bob','Lisa','Lisa','Lisa','Lisa','Alice','Alice','Alice','Alice','Joe','Joe','Joe','Joe','Bob','Bob','Alice','Alice','Lisa','Lisa')
treated = c(0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,0,0,0,0)
contrast = c(0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1)
val = c(1,2,3,4,1,2,3,4,1,2,3,4,1,2,3,4,6,7,8,9,10,11)
roi = c(rep('A',16),'B','B','B','B','B','B')
myFrame = data.frame(ExperimentID=expID,isTreated = treated, isContrast= contrast,Value = val, ROI=roi)
ExperimentID isTreated isContrast Value ROI
1 Bob 0 0 1 A
2 Bob 0 1 2 A
3 Bob 1 0 3 A
4 Bob 1 1 4 A
5 Lisa 0 0 1 A
6 Lisa 0 1 2 A
7 Lisa 1 0 3 A
8 Lisa 1 1 4 A
9 Alice 0 0 1 A
10 Alice 0 1 2 A
11 Alice 1 0 3 A
12 Alice 1 1 4 A
13 Joe 0 0 1 A
14 Joe 0 1 2 A
15 Joe 1 0 3 A
16 Joe 1 1 4 A
17 Bob 0 0 6 B
18 Bob 0 1 7 B
19 Alice 0 0 8 B
20 Alice 0 1 9 B
21 Lisa 0 0 10 B
22 Lisa 0 1 11 B
Now let's say I want to scatter plot the values for A vs. B. That is to say, I want to plot x vs. y where {(x,y)} = {(Bob's Value from ROI A, Bob's Value from ROI B), (Alice's Value from ROI A, Alice's Value from ROI B), ...} etc., and these all must have the same values for isTreated and isContrast for the comparison to make sense. Now, if I just go and subset, I'll get something like:
> x= myFrame$Value[(myFrame$ROI == 'A') & (myFrame$isTreated == 0) & (myFrame$isContrast == 0)]
> x
[1] 1 1 1 1
> y= myFrame$Value[(myFrame$ROI == 'B') & (myFrame$isTreated == 0) & (myFrame$isContrast == 0)]
> y
[1] 6 8 10
Now, as you can see, the values in x correspond to the first rows of Bob, Lisa, Alice and Joe, respectively, but the values in y correspond to Bob, Alice and Lisa, respectively, and there is no value for Joe.
So say I ignored the value for Joe, because that data is missing for B, and just decided to plot the first 3 values of x vs. the first 3 values of y. The data are still out of order, because x = (Bob, Lisa, Alice) but y = (Bob, Alice, Lisa) in terms of where the values are coming from. So I would like to know how to make vectors such that the order is correct and the plot makes sense.
Similar to @Matthew's answer, with ggplot:
The idea is to reshape your data so that the values from ROI=A and ROI=B are in different columns. This can be done (with your sample data) as follows:
library(reshape2)
zz <- dcast(myFrame,
            value.var = "Value",
            formula = ExperimentID + isTreated + isContrast ~ ROI)
zz
ExperimentID isTreated isContrast A B
1 Alice 0 0 1 8
2 Alice 0 1 2 9
3 Alice 1 0 3 NA
4 Alice 1 1 4 NA
5 Bob 0 0 1 6
6 Bob 0 1 2 7
7 Bob 1 0 3 NA
8 Bob 1 1 4 NA
9 Joe 0 0 1 NA
10 Joe 0 1 2 NA
11 Joe 1 0 3 NA
12 Joe 1 1 4 NA
13 Lisa 0 0 1 10
14 Lisa 0 1 2 11
15 Lisa 1 0 3 NA
16 Lisa 1 1 4 NA
Notice that your sample data is rather sparse (lots of NAs).
To plot:
library(ggplot2)
ggplot(zz,aes(x=A,y=B,color=factor(isTreated))) +
geom_point(size=4)+facet_wrap(~isContrast)
This produces a scatter plot of A against B, colored by isTreated and faceted by isContrast (plot not shown).
The reason there are no blue points is that, in your sample data, there are no occurrences of isTreated=1 and ROI=B.
Something like this, perhaps:
myFrameReshaped <- reshape(myFrame, timevar = 'ROI', direction = 'wide',
                           idvar = c('ExperimentID', 'isTreated', 'isContrast'))
plot(Value.B ~ Value.A, data=myFrameReshaped)
To condition by the isTreated and isContrast variables, lattice comes in handy:
library(lattice)
xyplot(Value.B~Value.A | isTreated + isContrast, data=myFrameReshaped)
Values that are not present for one of the conditions give NA, and are not plotted.
head(myFrameReshaped)
## ExperimentID isTreated isContrast Value.A Value.B
## 1 Bob 0 0 1 6
## 2 Bob 0 1 2 7
## 3 Bob 1 0 3 NA
## 4 Bob 1 1 4 NA
## 5 Lisa 0 0 1 10
## 6 Lisa 0 1 2 11
I have a variable that is a factor:
$ year : Factor w/ 8 levels "2003","2004",..: 4 6 4 2 4 1 3 3 7 2 ...
I would like to create 8 dummy variables, named "2003", "2004" etc that take the value 0 or 1 depending on the value that the variable "year" takes. The nearest I could come up with is
dt1 <- cbind (dt1, model.matrix(~dt1$year - 1) )
But this has the unfortunate consequences that:
1. The dummy variables are named dt1$year2003, etc., not just "2003", "2004", etc.
2. NA rows are omitted altogether by model.matrix (so the above command fails due to different lengths when NA is present in the year variable).
Of course I can get around these problems with more code, but I like my code to be as concise as possible (within reason) so if anyone can suggest better ways to make the dummy variables I would be obliged.
This is as concise as I could get. The na.action option takes care of the NA values (I would rather do this with an argument than with a global options setting, but I can't see how). The naming of the columns is pretty deeply hard-coded; I don't see any way to override it within model.matrix ...
options(na.action = na.pass)
dt1 <- data.frame(year = factor(c(NA, 2003:2005)))
dt2 <- setNames(cbind(dt1, model.matrix(~ year - 1, data = dt1)),
                c("year", levels(dt1$year)))
dt2
As pointed out above, you may run into trouble in some contexts with column names that are not legal R variable names.
year 2003 2004 2005
1 <NA> NA NA NA
2 2003 1 0 0
3 2004 0 1 0
4 2005 0 0 1
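If you would rather avoid the global option, one possible workaround (a sketch; check it against your own data) is to pass na.action to model.frame() directly and build the design matrix from the resulting model frame:

dt1 <- data.frame(year = factor(c(NA, 2003:2005)))

# model.frame() takes na.action as an argument, so the NA row is kept
mf <- model.frame(~ year - 1, data = dt1, na.action = na.pass)
dt2 <- setNames(cbind(dt1, model.matrix(~ year - 1, data = mf)),
                c("year", levels(dt1$year)))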
You could use ifelse(), which won't omit NA rows (but I guess you might not count it as being "as concise as possible"):
dt1 <- data.frame(year=factor(rep(2003:2010, 10))) # example data
dt1 <- within(dt1, yr2003<-ifelse(year=="2003", 1, 0))
dt1 <- within(dt1, yr2004<-ifelse(year=="2004", 1, 0))
dt1 <- within(dt1, yr2005<-ifelse(year=="2005", 1, 0))
# ...
head(dt1)
# year yr2003 yr2004 yr2005
# 1 2003 1 0 0
# 2 2004 0 1 0
# 3 2005 0 0 1
# 4 2006 0 0 0
# 5 2007 0 0 0
# 6 2008 0 0 0
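If you want one 0/1 column per level without writing an ifelse() for every year, a compact variant of the same idea (just a sketch) is to loop over the factor levels with sapply(); comparisons against NA stay NA, so NA rows are not dropped:

# one 0/1 column per level, columns named by the level itself
yearDummies <- sapply(levels(dt1$year), function(l) as.integer(dt1$year == l))
dt1 <- cbind(dt1, yearDummies)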
The caret package provides a very simple function, dummyVars(), to create dummy variables, especially when you have more than one factor variable. But you have to make sure the target variables are factors; e.g. if your Sales$year is numeric, you have to convert it to a factor: as.factor(Sales$year).
Suppose we have the original dataset 'Sales' as follows:
year Sales Region
1 2010 3695.543 North
2 2010 9873.037 West
3 2008 3579.458 West
4 2005 2788.857 North
5 2005 2952.183 North
6 2008 7255.337 West
7 2005 5237.081 West
8 2010 8987.096 North
9 2008 5545.343 North
10 2008 1809.446 West
Now we can create dummy variables for both factor variables (year and Region) simultaneously:
> library(lattice)
> library(ggplot2)
> library(caret)
> Salesdummy <- dummyVars(~ ., data = Sales, levelsOnly = TRUE)
> Sdummy <- predict(Salesdummy, Sales)
The outcome will be:
2005 2008 2010 Sales RegionNorth RegionWest
1 0 0 1 3695.543 1 0
2 0 0 1 9873.037 0 1
3 0 1 0 3579.458 0 1
4 1 0 0 2788.857 1 0
5 1 0 0 2952.183 1 0
6 0 1 0 7255.337 0 1
7 1 0 0 5237.081 0 1
8 0 0 1 8987.096 1 0
9 0 1 0 5545.343 1 0
10 0 1 0 1809.446 0 1
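One small follow-up: the object returned by predict() here is a matrix; if you need a data frame for later steps (e.g. to feed into clustering or a model), converting it is a one-liner:

# convert the dummy-coded matrix to a data frame for downstream use
SdummyDF <- as.data.frame(predict(Salesdummy, newdata = Sales))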