How can I manually enter a value in R?

I am trying to enter a date-of-birth value into a dataset of senators. I rarely touch for loops, so I may be doing this wrong, but this is what I have so far:
ID <- c("A000055","B001303","M001201")
D.O.B. <- c("1965-07-22",NA,"1951-11-07")
leg_complete <- data.frame("name","D.O.B.")
for(id in leg_complete)
if(ID=="M001201") {
D.O.B. <- "1956-11-14"
} else {
break
}
Whenever I run the code and open the dataset, I get "No data available in the table." Is a for loop even the best move for entering a single value, or should I use a different function?

Welcome to SO, @fdob!
I think you have a couple of issues in the code you posted above.
First, I think you meant to use ID instead of name in constructing your dataframe.
Second, you don't need the quotes around the variable names either; quoting them builds a one-row data frame of the literal strings rather than using your vectors.
So I think you were trying to do this instead:
leg_complete <- data.frame(ID,D.O.B.)
If that assumption is correct, then to reassign a single value in the data frame you can index the D.O.B. column directly:
leg_complete$D.O.B.[leg_complete$ID == "M001201"] <- "1956-11-14"
For reference, constructing the frame with data.frame(ID, D.O.B.) gives the following:
> str(leg_complete)
'data.frame': 3 obs. of 2 variables:
$ ID : Factor w/ 3 levels "A000055","B001303",..: 1 2 3
$ D.O.B.: Factor w/ 2 levels "1951-11-07","1965-07-22": 2 NA 1
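Note that in the str() output above D.O.B. came out as a factor, so assigning a brand-new date that way raises an "invalid factor level" warning on older R versions. A minimal sketch of the whole fix, assuming you are happy to keep D.O.B. as plain character (which is the default behaviour from R 4.0 onwards), would be:
# Build the frame with character columns so new dates can be assigned freely
ID <- c("A000055", "B001303", "M001201")
D.O.B. <- c("1965-07-22", NA, "1951-11-07")
leg_complete <- data.frame(ID, D.O.B., stringsAsFactors = FALSE)

# Overwrite the D.O.B. of the matching senator by logical indexing; no loop needed
leg_complete$D.O.B.[leg_complete$ID == "M001201"] <- "1956-11-14"
leg_complete
#        ID     D.O.B.
# 1 A000055 1965-07-22
# 2 B001303       <NA>
# 3 M001201 1956-11-14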

Related

R: NAs show up when writing to a data frame, occurs inside second condition of if/else statement

I'm trying to run some code in R that reads data from one data frame and writes it to a new data frame depending on conditionals about what text occurs in the first "read" data frame.
For some reason, when I write to the new data frame I get "NA" values for the characters in the final column, but this only happens with the second if statement when I try to write the text "Byeeeeee"; "HiThere" comes up fine.
Any insight into why this is working for the first if statement, but not the second?
input file is "testWayang", output file is "outputWayang"
Thanks!!!
outputWayang<-data.frame(startId=numeric(), studId=numeric(), sessNum=numeric(), startTime=numeric(), problemId=numeric(), action=character())
for(i in 1:50){
if(testWayang[i,4]=="beginProblem"){
outputWayang = rbind(outputWayang, c(testWayang[i,1], testWayang[i,2], testWayang[i,3], testWayang[i,7], testWayang[i+1,9], as.character("HiThere")))
}
else if(testWayang[i,4]=="studentFeedback"){
outputWayang = rbind(outputWayang, c(testWayang[i,1], testWayang[i,2], testWayang[i,3], testWayang[i,7], testWayang[i,9], as.character("Byeeeeeee")))
}
write.csv(outputWayang, file="place I write my file to on my computer", append=TRUE)
}
You are attempting to create a vector with the c function that has both numeric and character values. That will fail you. It's also the case that R newbies often fail to recognize problems related to R's default creation of factor variables. You might be able to rbind a list to an existing dataframe. Let's try with a simple example:
rbind( data.frame(a=1,b="y"), c(2,"n"))
a b
1 1 y
2 2 <NA>
Warning message:
In `[<-.factor`(`*tmp*`, ri, value = "n") :
invalid factor level, NA generated
So that replicates your failure. It's also the case that you inadvertently coerced the first column, which you might expect to be numeric class, to character class:
> str( rbind( data.frame(a=1,b="y"), c(2,"n")) )
'data.frame': 2 obs. of 2 variables:
$ a: chr "1" "2"
$ b: Factor w/ 1 level "y": 1 NA
Just using a list also fails because of the factor issue mentioned above, but if you create the dataframe with a factor that has all possible levels then you get success:
rbind( data.frame(a=1,b=factor("y",levels=c("y","n"))), list(2,"n"))
a b
1 1 y
2 2 n
I'm not sure what else might be going on, since you didn't include enough data to investigate further.
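If it helps, here is a hedged sketch of how the loop could be rewritten to sidestep the factor problem altogether, assuming the numeric columns of testWayang really are numeric: build each row as a one-row data frame with character (not factor) columns, and write the file once after the loop. The output filename is just a placeholder.
# Keep action as plain character so rbind() never hits invalid factor levels
outputWayang <- data.frame(startId = numeric(), studId = numeric(),
                           sessNum = numeric(), startTime = numeric(),
                           problemId = numeric(), action = character(),
                           stringsAsFactors = FALSE)

for (i in 1:50) {
  if (testWayang[i, 4] == "beginProblem") {
    outputWayang <- rbind(outputWayang,
                          data.frame(startId = testWayang[i, 1], studId = testWayang[i, 2],
                                     sessNum = testWayang[i, 3], startTime = testWayang[i, 7],
                                     problemId = testWayang[i + 1, 9], action = "HiThere",
                                     stringsAsFactors = FALSE))
  } else if (testWayang[i, 4] == "studentFeedback") {
    outputWayang <- rbind(outputWayang,
                          data.frame(startId = testWayang[i, 1], studId = testWayang[i, 2],
                                     sessNum = testWayang[i, 3], startTime = testWayang[i, 7],
                                     problemId = testWayang[i, 9], action = "Byeeeeeee",
                                     stringsAsFactors = FALSE))
  }
}

# write.csv() ignores append = TRUE anyway, so write the result once at the end
write.csv(outputWayang, file = "outputWayang.csv", row.names = FALSE)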

Loading and expanding contingency table

I have a data file that represents a contingency table that I need to work with. The problem is I can't figure out how to load it properly.
Data structure:
Rows: individual churches
1st Column: Name of the church
2nd - 12th column: Mean age of followers
Every cell: Number of people who follows corresponding church and are correspondingly old.
(In the original data set only the age range was available, e.g. 60-69, so to enable computation with it I decided to replace each range with its mean age, e.g. 64.5 instead of 60-69.)
Data sample:
name;7;15;25
catholic;25000;30000;15000
hinduism;5000;2000;3000
...
I tried to simply load the data and make them a 'table' so I could expand it but it didn't work (only produced something really weird).
dataset <- read.table("C:/.../dataset.csv", sep=";", quote="\"")
dataset_table <- as.table(as.matrix(dataset))
When I tried to use the data as they were to produce a simple graph, it didn't work either.
barplot(dataset[2,2:4])
Error in barplot.default(dataset[2,2:4]) : 'height' must be a vector or a matrix
Classing dataset[2,2:4] showed me that it is a 'list' which I don't understand (I guess it is because dataset is data.frame and not table).
If someone could point me into the right direction how to properly load the data as a table and then work with them, I'd be forever grateful :).
If your file is already a contingency table, don't use as.table().
df <- read.table(header=T,sep=";",text="name;7;15;25
catholic;25000;30000;15000
hinduism;5000;2000;3000")
colnames(df)[-1] <- substring(colnames(df)[-1],2)
barplot(as.matrix(df[2,2:4]), col="lightblue")
The transformation of colnames(...) is because R doesn't like column names that start with a number, so it prepends X. This code just gets rid of that.
EDIT (Response to OP's comment)
If you want to convert the df defined above to a table suitable for use with expand.table(...) you have to set dimnames(...) and names(dimnames(...)) as described in the documentation for expand.table(...).
tab <- as.matrix(df[-1])
dimnames(tab) <- list(df$name,colnames(df)[-1])
names(dimnames(tab)) <- c("name","age")
library(epitools)
x.tab <- expand.table(tab)
str(x.tab)
# 'data.frame': 80000 obs. of 2 variables:
# $ name: Factor w/ 2 levels "catholic","hinduism": 1 1 1 1 1 1 1 1 1 1 ...
# $ age : Factor w/ 3 levels "7","15","25": 1 1 1 1 1 1 1 1 1 1 ...
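If you would rather avoid the epitools dependency, a hedged base-R sketch of the same expansion, repeating each name/age combination according to its cell count with rep(), could look like this:
# Expand the count matrix into one row per person (tab as built above)
counts <- as.vector(tab)   # cell counts in column-major order
x.tab2 <- data.frame(
  name = rep(rep(rownames(tab), times = ncol(tab)), times = counts),
  age  = rep(rep(colnames(tab), each  = nrow(tab)), times = counts)
)
str(x.tab2)
# 'data.frame': 80000 obs. of 2 variables (same contents as x.tab above,
# apart from row order and the columns being character rather than factor)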

How to delete columns in a dataset that have binomial variables

The dataset named data has both categorical and continuous variables. I would like to delete the categorical variables.
I tried:
data.1 <- data[,colnames(data)[[3L]]!=0]
No error is printed, but the categorical variables stay in data.1. Where is the problem?
The summary of "head(data)" is
id 1,2,3,4,...
age 45,32,54,23,...
status 0,1,0,0,...
...
(more variables like as I wrote above)
All variables are defined as "Factor".
What are you trying to do with that code? First of all, colnames(data) is not a list, so using [[]] doesn't make sense. Second, the only thing you test is whether the third column name is not equal to zero. As a column name can never start with a number, that's pretty much always true. So your code translates to:
data1 <- data[,TRUE]
Not what you intend to do.
I suppose you know the meaning of binomial. One way of doing this is defining your own function is.binomial() like this:
is.binomial <- function(x, na.action = c('na.omit', 'na.fail', 'na.pass')){
FUN <- match.fun(match.arg(na.action))
length(unique(FUN(x)))==2
}
in case you want to take care of NAs. You can then apply this to your dataframe:
data.1 <- data[!sapply(data,is.binomial)]
This way you drop all binomial columns, i.e. columns with only two distinct values.
@Shimpei Morimoto,
I think you need a different approach.
Are the categorical variables defined in the dataframe as factors?
If so you can use:
data.1 <- data[, !sapply(data, is.factor)]  # sapply, not apply: apply() would coerce the data frame to a matrix and lose the factor classes
The test you perform now is if the colname number 3L is not 0.
I think this is not the case.
Another approach is
data.1 <- data[,-3L]
but that works only if column 3 is the only column holding categorical variables.
I think you're getting there, with your last comment to @Mischa Vreeburg. It might make sense (as you suggest) to reformat your original data file, but you should also be able to solve the problem within R. I can't quite replicate the undefined columns error you got.
Construct some data that look as much like your data as possible:
X <- read.csv(textConnection(
"id,age,pre.treat,status
1,'27', 0,0
2,'35', 1,0
3,'22', 0,1
4,'24', 1,2
5,'55', 1,3
, ,yes(vs)no,"),
quote="\"'")
Take a look:
str(X)
'data.frame': 6 obs. of 4 variables:
$ id : int 1 2 3 4 5 NA
$ age : int 27 35 22 24 55 NA
$ pre.treat: Factor w/ 3 levels " 0"," 1","yes(vs)no": 1 2 1 2 2 3
$ status : int 0 0 1 2 3 NA
Define @Joris Meys's function:
is.binomial <- function(x,na.action=c('na.omit','na.fail','na.pass')) {
FUN <- match.fun(match.arg(na.action))
length(unique(FUN(x)))==2
}
Try it out: you'll see that it does not detect pre.treat as binomial, and keeps all the variables.
sapply(X,is.binomial)
X1 <- X[!sapply(X,is.binomial)]
names(X1)
## keeps everything
We can drop the last row and try again:
X2 <- X[-nrow(X),]
sapply(X2,is.binomial)
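With the extraneous last row gone, pre.treat is left with only two distinct values, so, to sketch the expected outcome under the same definitions as above, it would now be flagged and dropped:
sapply(X2, is.binomial)
#        id       age pre.treat    status
#     FALSE     FALSE      TRUE     FALSE
X3 <- X2[!sapply(X2, is.binomial)]
names(X3)
## [1] "id"     "age"    "status"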
It is true in general that R does not expect "extraneous" information such as level IDs to be in the same column as the data themselves. On the one hand, you can do even better in the R world by simply leaving the data as their original, meaningful values ("no", "yes", or "healthy", "sick" rather than 0, 1); on the other hand the data take up slightly more space if stored as a text file, and, more important, it becomes harder to incorporate other meta-data such as units in the file along with the data ...

data imported as class "null" - unable to perform statistics, unable to change class

I am having rather lengthy problems concerning my data set, and I believe that my trouble traces back to importing the data. I have looked at many other questions and answers, as well as as many help sites as I can find, but I can't seem to make anything work. I am attempting to run some t-tests on my data and have thus far been unable to do so. I believe the root cause is that the data is imported as class NULL. I've tried to include as much information here as I can to show what I am working with and the types of issues I am having (in case the issue is in some other area).
An overview of my data and what I've been doing so far is this:
Example File data (as displayed in R after reading data from .csv file):
Part Q001 Q002 LA003 Q004 SA005 D106
1 5 3 text 99 text 3
2 3 text 2 text 2
3 2 4 3 text 5
4 99 5 text 2 2
5 4 2 1 text 3
So in my data, the "answers" are 1 through 5. 99 represents a question that was answered N/A. Blanks represent unanswered questions. The 'text' entries are long and short answers/comments from a survey. All of them are stored in a large data set of over 150 participants (Part) and over 300 questions (labeled either Q, LA, SA, or D depending on whether the question has a 1-5 answer, a long answer, a short answer, or is demographic (also numeric answers, 0 through 6 or so)).
When I import the data, I need to have it disregard any blank or 99 answers so they do not interfere with statistics. I also don't care about the comments, so I filter all of them out.
EDIT: data file looks like:
Part,Q001,Q002,LA003,Q004,SA005,D006
1,5,3,text,99,text,3
2,3,,text,2,text,2
etc...
I am using the following lines to read the data:
data.all <- read.table("data.csv", header=TRUE, sep=",", na.strings = c("","99"))
data <- data.all[, !(colnames(data.all) %in% c("LA003", "SA005")
now, when I type
class(data$Q001)
I get NULL
I need these to be numeric. I can use summary(data) to get the means and such, but when I try to run t-tests, I get errors involving NULL.
I tried to turn this column into numerics by using
data<-sapply(data,as.numeric)
and I tried
data[,1]<-as.numeric(as.character(data[,1]))
(and with 2 instead of 1, but I don't really understand the sapply syntax, I saw it in several other answers and was trying to make it work)
when I then type
class(data$Q001)
I get "Error: $ operator is invalid for atomic vectors
If I do not try to use sapply, and I try to run a ttest, I've created subsets such as
data.2<-subset(data, D106 == "2")
data.3<-subset(data, D106 == "3")
and I use
t.test(data.2$Q001~data.3$Q001, na.rm=TRUE)
and I get "invalid type (NULL) for variable 'data.2$Q001'
I tried using the different syntax, trying to see if I can get anything to work, and
t.test(data.2$Q001, data.3$Q001, na.rm=TRUE)
gives "In is.na(d) : is.na() applied to non-(list or vector) of type 'NULL'" and "In mean.default(x) : argument is not numeric or logical: returning NA"
So, now that I think I've been clear about what I'm trying to do and some of the things I've tried...
How can I import my data so that numbers (specifically any number in a column with a header starting with Q) are accurately read as numbers and do not get a NULL class applied to them? What do I need to do in order to get my data properly imported so I can run t-tests on it? I've used t-tests on plenty of data in the past, but it has always been data I recorded manually in Excel (and thus had only one column of numbers with no blanks or NAs) and I've never had an issue, so I just do not understand what it is about this data set that keeps it from working. Any assistance in the right direction is much appreciated!
This works for me:
> z <- read.table(textConnection("Part,Q001,Q002,LA003,Q004,SA005,D006
+ 1,5,3,text,99,text,3
+ 2,3,,text,2,text,2
+ "),header=TRUE,sep=",",na.strings=c("","99"))
> str(z)
'data.frame': 2 obs. of 7 variables:
$ Part : int 1 2
$ Q001 : int 5 3
$ Q002 : int 3 NA
$ LA003: Factor w/ 1 level "text": 1 1
$ Q004 : int NA 2
$ SA005: Factor w/ 1 level "text": 1 1
$ D006 : int 3 2
> z2 <- z[,!(colnames(z) %in% c("LA003","SA005"))]
> str(z2)
'data.frame': 2 obs. of 5 variables:
$ Part: int 1 2
$ Q001: int 5 3
$ Q002: int 3 NA
$ Q004: int NA 2
$ D006: int 3 2
> z2$Q001
[1] 5 3
> class(z2$Q001)
[1] "integer"
The only thing I can think of is that your second command (which was missing some terminating parentheses and brackets) didn't work at all, you missed seeing the error message, and you are referring to some previously defined data object that doesn't have the same columns defined. For example, class(z$QQQ) is NULL following the above example.
edit: it appears that the original problem was some weird/garbage characters in the header that messed up the name of the first column. Manually renaming the column (names(data)[1] <- "Q001") seems to have fixed the problem.
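Following up on that edit: if the weird/garbage characters were a UTF-8 byte-order mark written by the spreadsheet program (a common culprit, though that is only an assumption here), you may be able to strip it at import time instead of renaming the column afterwards:
# "UTF-8-BOM" tells read.table to drop a leading byte-order mark from the file
data.all <- read.table("data.csv", header = TRUE, sep = ",",
                       na.strings = c("", "99"),
                       fileEncoding = "UTF-8-BOM")
names(data.all)[1]   # the first column name should now read cleanly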

How do I handle multiple kinds of missingness in R?

Many surveys have codes for different kinds of missingness. For instance, a codebook might indicate:
0-99 Data
-1 Question not asked
-5 Do not know
-7 Refused to respond
-9 Module not asked
Stata has a beautiful facility for handling these multiple kinds of missingness, in that it allows you to assign a generic . to missing data, but more specific kinds of missingness (.a, .b, .c, ..., .z) are allowed as well. All the commands which look at missingness report answers for all the missing entries however specified, but you can sort out the various kinds of missingness later on as well. This is particularly helpful when you believe that refusal to respond has different implications for the imputation strategy than does question not asked.
I have never run across such a facility in R, but I would really like to have this capability. Are there any ways of marking several different types of NA? I could imagine creating more data (either a vector of length nrow(my.data.frame) containing the types of missingness, or a more compact index of which rows had what types of missingness), but that seems pretty unwieldy.
I know what you're looking for, and that is not implemented in R. I don't know of a package where it is implemented, but it's not too difficult to code it yourself.
A workable way is to add a dataframe to the attributes, containing the codes. To prevent doubling the whole dataframe and save space, I'd add the indices in that dataframe instead of reconstructing a complete dataframe.
For example:
NACode <- function(x,code){
Df <- sapply(x,function(i){
i[i %in% code] <- NA
i
})
id <- which(is.na(Df))
rowid <- id %% nrow(x)
colid <- id %/% nrow(x) + 1
NAdf <- data.frame(
id,rowid,colid,
value = as.matrix(x)[id]
)
Df <- as.data.frame(Df)
attr(Df,"NAcode") <- NAdf
Df
}
This allows you to do:
> Df <- data.frame(A = 1:10,B=c(1:5,-1,-2,-3,9,10) )
> code <- list("Missing"=-1,"Not Answered"=-2,"Don't know"=-3)
> DfwithNA <- NACode(Df,code)
> str(DfwithNA)
'data.frame': 10 obs. of 2 variables:
$ A: num 1 2 3 4 5 6 7 8 9 10
$ B: num 1 2 3 4 5 NA NA NA 9 10
- attr(*, "NAcode")='data.frame': 3 obs. of 4 variables:
..$ id : int 16 17 18
..$ rowid: int 6 7 8
..$ colid: num 2 2 2
..$ value: num -1 -2 -3
The function can also be adjusted to add an extra attribute that gives you the label for the different values; see also this question. You could transform back with:
ChangeNAToCode <- function(x,code){
NAval <- attr(x,"NAcode")
for(i in which(NAval$value %in% code))
x[NAval$rowid[i],NAval$colid[i]] <- NAval$value[i]
x
}
> Dfback <- ChangeNAToCode(DfwithNA,c(-2,-3))
> str(Dfback)
'data.frame': 10 obs. of 2 variables:
$ A: num 1 2 3 4 5 6 7 8 9 10
$ B: num 1 2 3 4 5 NA -2 -3 9 10
- attr(*, "NAcode")='data.frame': 3 obs. of 4 variables:
..$ id : int 16 17 18
..$ rowid: int 6 7 8
..$ colid: num 2 2 2
..$ value: num -1 -2 -3
This lets you change back only the codes you want, if that is ever necessary. The function can be adapted to restore all codes when no argument is given. Similar functions can be constructed to extract data based on the code; I guess you can figure that one out yourself.
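For completeness, here is a hedged sketch of one such extraction helper (the function name is made up for illustration); it simply filters the "NAcode" attribute that NACode() stored:
whichNACode <- function(x, code){
  NAval <- attr(x, "NAcode")
  # rows of the bookkeeping frame whose original value matched one of the codes
  NAval[NAval$value %in% code, c("rowid", "colid", "value")]
}
> whichNACode(DfwithNA, c(-2, -3))
  rowid colid value
2     7     2    -2
3     8     2    -3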
In one line: using attributes and indices might be a nice way of doing it.
The most obvious way seems to use two vectors:
Vector 1: a data vector, where all missing values are represented using NA. For example, c(2, 50, NA, NA)
Vector 2: a factor indicating the type of data. For example, factor(c(1, 1, -1, -7)), where level 1 indicates a correctly answered question.
Having this structure would give you a great deal of flexibility, since all the standard na.rm arguments still work with your data vector, but you can use more complex concepts with the factor vector.
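A minimal sketch of that idea, reusing the example values above and the codes from the question's codebook:
# Data vector: every kind of missingness becomes NA
answers <- c(2, 50, NA, NA)
# Parallel factor: records which kind of missingness (or a valid answer) each entry was
na.type <- factor(c(1, 1, -1, -7),
                  levels = c(1, -1, -5, -7, -9),
                  labels = c("answered", "not asked", "do not know",
                             "refused", "module not asked"))

mean(answers, na.rm = TRUE)     # standard NA handling still works on the data vector
answers[na.type != "refused"]   # while the factor lets you separate the kinds of missingness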
Update following questions from @gsk3
"Data storage will dramatically increase": the data storage will double. However, if doubling the size causes a real problem, it may be worth thinking about other strategies.
"Programs don't automatically deal with it": that's a strange comment. Some functions handle NAs in a sensible way by default. However, you want to treat the NAs differently, so that implies you will have to do something bespoke. If you want to analyse only the data where the NAs are "Question not asked", then just use a data frame subset.
"Now you have to manipulate two vectors together every time you want to conceptually manipulate a variable": I suppose I envisaged a data frame of the two vectors. I would subset the data frame based on the second vector.
"There's no standard implementation, so my solution might differ from someone else's": true. However, if an off-the-shelf package doesn't meet your needs, then (almost) by definition you want to do something different.
I should state that I have never analysed survey data (although I have analysed large biological data sets). My answers above appear quite defensive, but that's not my intention. I think your question is a good one, and I'm interested in other responses.
This is more than just a "technical" issue. You should have a thorough statistical background in missing value analysis and imputation. One solution requires playing with R and GGobi. You can assign extremely negative values to several types of NA (put NAs into the margin), and do some diagnostics "manually". You should bear in mind that there are three types of NA:
MCAR - missing completely at random, where P(missing|observed,unobserved) = P(missing)
MAR - missing at random, where P(missing|observed,unobserved) = P(missing|observed)
MNAR - missing not at random (or non-ignorable), where P(missing|observed,unobserved) cannot be quantified in any way.
IMHO this question is more suitable for CrossValidated.
But here's a link from SO that you may find useful:
Handling missing/incomplete data in R--is there function to mask but not remove NAs?
You can dispense with NA entirely and just use the coded values. You can then also roll them up to a global missing value. I often prefer to code without NA, since NA can cause problems in coding and I like to be able to control exactly what is going into the analysis. I have also used the string "NA" to represent NA, which often makes things easier.
-Ralph Winters
I usually use them as values, as Ralph already suggested, since the type of missing value seems to be data, but on one or two occasions where I mainly wanted it for documentation I have used an attribute on the value, e.g.
> a <- NA
> attr(a, 'na.type') <- -1
> print(a)
[1] NA
attr(,"na.type")
[1] -1
That way my analysis is clean but I still keep the documentation. But as I said: usually I keep the values.
Allan.
I'd like to add to the "statistical background" component here. Statistical Analysis with Missing Data is a very good read on this.
