How to fill in observations using other observations (R or Stata)
I have a dataset like this:
ID dum1 dum2 dum3 var1
1 0 1 . hi
1 0 . 0 hi
2 1 . . bye
2 0 0 1 .
What I'm trying to do is fill in missing values using information from other observations with the same ID. So my end product would be something like:
ID dum1 dum2 dum3 var1
1 0 1 0 hi
1 0 1 0 hi
2 1 0 1 bye
2 0 0 1 bye
Is there any way I can do this in R or Stata?
This continues discussion of Stata solutions. The solution by @Pearly Spencer looks backward and forward from observations with missing values and so is fine for the example with just two observations per group, and possibly fine for some other situations.
An alternative approach makes use, as appropriate, of the community-contributed commands mipolate and stripolate from SSC, as explained also at https://www.statalist.org/forums/forum/general-stata-discussion/general/1308786-mipolate-now-available-from-ssc-new-program-for-interpolation.
Examples first, then commentary:
clear
input ID dum1a dum2a dum3a str3 var1a
1 0 1 . "hi"
1 0 . 0 "hi"
2 1 . . "bye"
2 0 0 1 ""
2 0 1 . ""
end
gen long obsno = _n
foreach v of var dum*a {
    quietly count if missing(`v')
    if r(N) > 0 capture noisily mipolate `v' obsno, groupwise by(ID) generate(`v'_2)
}
foreach v of var var*a {
    quietly count if missing(`v')
    if r(N) > 0 capture noisily stripolate `v' obsno, groupwise by(ID) generate(`v'_2)
}
list
+----------------------------------------------------------------+
| ID dum1a dum2a dum3a var1a obsno dum3a_2 var1a_2 |
|----------------------------------------------------------------|
1. | 1 0 1 . hi 1 0 hi |
2. | 1 0 . 0 hi 2 0 hi |
3. | 2 1 . . bye 3 1 bye |
4. | 2 0 0 1 4 1 bye |
5. | 2 0 1 . 5 1 bye |
+----------------------------------------------------------------+
Notes:
The groupwise option of mipolate and stripolate uses the rule: replace missing values within groups with the non-missing value in that group if and only if there is only one distinct non-missing value in that group. Thus if the non-missing values in a group are all 1, or all 42, or whatever it is, then interpolation uses 1 or 42 or whatever it is. If the non-missing values in a group are 0 and 1, then no go.
The variable obsno created here plays no role in that interpolation and is needed solely to match the general syntax of mipolate.
There is no assumption here that groups consist of just two observations or have the same number of observations. A common playground for these problems is data on families whenever some variables were recorded only for certain family members but it is desired to spread the values recorded to other family members. Naturally, in real data families often have more than two members and the number of family members will vary.
This question exposed a small bug in mipolate, groupwise and stripolate, groupwise: it doesn't exit as appropriate if there is nothing to do, as in dum1a where there are no missing values. In the code above, this is trapped by asking for interpolation if and only if missing values are counted. At some future date, the bug will be fixed and the code in this answer simplified accordingly, or so I intend as program author.
mipolate, groupwise and stripolate, groupwise both exit with an error message if any group is found with two or more distinct non-missing values; no interpolation is then done for any groups, even if some groups are fine. That is the point of the code capture noisily: the error message for dum2a is not echoed above. As program author I am thinking of adding an option whereby such groups will be ignored but that interpolation will take place for groups with just one distinct non-missing value.
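For readers who want the groupwise rule outside Stata, here is a minimal pure-Python sketch of the same logic; the function name fill_groupwise and the exact error behavior are my own framing, not part of mipolate itself:

```python
def fill_groupwise(values, groups):
    """Within each group, replace None with that group's single distinct
    non-missing value; refuse (like mipolate, groupwise) if a group holds
    two or more distinct non-missing values."""
    distinct = {}
    for g, v in zip(groups, values):
        if v is not None:
            distinct.setdefault(g, set()).add(v)
    for g, s in distinct.items():
        if len(s) > 1:
            raise ValueError(f"group {g} has {len(s)} distinct non-missing values")
    return [v if v is not None else next(iter(distinct.get(g, {None})))
            for g, v in zip(groups, values)]

# dum3a from the example: groups 1,1,2,2,2 with values ., 0, ., 1, .
print(fill_groupwise([None, 0, None, 1, None], [1, 1, 2, 2, 2]))
# [0, 0, 1, 1, 1]
```

Note that, as with the current mipolate behavior described above, a column like dum2a (group 2 contains both 0 and 1) raises an error rather than being partially filled.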
Assuming your data is in df
library(dplyr)
df %>%
  group_by(ID) %>%
  mutate(dum1 = dum1[dum1 != "."][1],
         dum2 = dum2[dum2 != "."][1],
         dum3 = dum3[dum3 != "."][1],
         var1 = var1[var1 != "."][1])
Using your toy example:
clear
input ID dum1a dum2a dum3a str3 var1a
1 0 1 . "hi"
1 0 . 0 "hi"
2 1 . . "bye"
2 0 0 1 "."
end
replace var1a = "" if var1a == "."
sort ID dum2a
list
+------------------------------------+
| ID dum1a dum2a dum3a var1a |
|------------------------------------|
1. | 1 0 1 . hi |
2. | 1 0 . 0 hi |
3. | 2 0 0 1 |
4. | 2 1 . . bye |
+------------------------------------+
In Stata you can do the following:
ds ID, not
local varlist `r(varlist)'

foreach var of local varlist {
    generate `var'b = `var'
    bysort ID (`var'): replace `var'b = cond(!missing(`var'[_n-1]), `var'[_n-1], ///
                                             `var'[_n+1]) if missing(`var')
}
list ID dum?ab var?ab
+----------------------------------------+
| ID dum1ab dum2ab dum3ab var1ab |
|----------------------------------------|
1. | 1 0 1 0 hi |
2. | 1 0 1 0 hi |
3. | 2 0 0 1 bye |
4. | 2 1 0 1 bye |
+----------------------------------------+
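The cond() line is a simple neighbor lookup within the sort order: take the previous observation's value if it is non-missing, otherwise the next one's. A rough pure-Python analogue of that rule for a single ID group (my own sketch, not a translation of the by: machinery):

```python
def fill_neighbors(values):
    """Fill None entries from the previous original value if present,
    otherwise from the next original value (mirrors the cond() rule)."""
    out = list(values)
    for i, v in enumerate(values):
        if v is None:
            if i > 0 and values[i - 1] is not None:
                out[i] = values[i - 1]
            elif i + 1 < len(values) and values[i + 1] is not None:
                out[i] = values[i + 1]
    return out

# dum3a within ID 1, sorted: ., 0  ->  0, 0
print(fill_neighbors([None, 0]))  # [0, 0]
```

As noted at the top of this discussion, this one-step look-back/look-forward rule is only guaranteed to fill everything when each group has at most two observations.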
Related
Does the stem function in R handle large counts correctly?
I am wondering if the stem function in R is producing the stem-and-leaf plot correctly for this example. The code

X <- c(rep(1,1000), 2:15)
stem(X, width = 20)

produces the output

  The decimal point is at the |

   1 | 00000000+980
   2 | 0
   3 | 0
   4 | 0
   5 | 0
   6 | 0
   7 | 0
   8 | 0
   9 | 0
  10 | 0
  11 | 0
  12 | 0
  13 | 0
  14 | 0
  15 | 0

There are 1000 ones in the data, but the output of the stem function seems to indicate that there are 988 ones (if you count the zeros in the first row and add 980). Instead of +980, I think it should display +992 at the end of the first row. Is there an error in the stem function or am I not reading the output correctly?
Calculating mean grade of students' peers
I have one dataset which includes all the points of students and other variables. I further have an adjacency matrix which records which student is a peer of which other student. Now I would like to use the second matrix (the network) to calculate the mean peer points for each student. Everyone can have a different number of peers. To calculate the mean, I recalculated the simple 0/1 matrix into percentages, where the denominator is the number of peers a student has. The second matrix then looks something like this:

        ID1   ID2   ID3   ID4   ID5
ID1       0     0     0     0     1
ID2       0     0   0.5   0.5     0
ID3       0   0.5     0     0   0.5
ID4       0   0.5     0     0   0.5
ID5    0.33     0  0.33  0.33     0

The points of each student are a simple variable in another dataset, and I would like to have the peers' average points as a second variable:

ID    Points   Peers
ID1       45      11
ID2       42    33.5
ID3       25    26.5
ID4       60    26.5
ID5       11   43.33

Are there any commands in Stata for this problem? I am currently looking into the Stata package nwcommands, but I am unsure whether it can help. I could use solutions for Stata or R.
Without getting too creative, you can accomplish what you are trying to do with reshape, collapse and a couple of merges in Stata. Generally speaking, data in long format is easier to work with for this type of exercise. Below is an example which produces the desired result.

/* Set-up data for example */
clear
input int(id points)
1 45
2 42
3 25
4 60
5 11
end

tempfile points
save `points'

clear
input int(StudentId id1 id2 id3 id4 id5)
1 0 0 0 0 1
2 0 0 1 1 0
3 0 1 0 0 1
4 0 1 0 0 1
5 1 0 1 1 0
end
/* End data set-up */

* Reshape peers data to long form
reshape long id, i(Student) j(PeerId)
drop if id == 0    // drop if student is not a peer of `StudentId'

* create id variable to use in merge
replace id = PeerId

* Merge to points data to get peer points
merge m:1 id using `points', nogen

* collapse data to the student level, sum peer points
collapse (sum) PeerPoints = points (count) CountPeers = PeerId, by(StudentId)

* merge back to points data to get student points
rename StudentId id
merge 1:1 id using `points', nogen

gen peers = PeerPoints / CountPeers
li id points peers

     +------------------------+
     | id   points      peers |
     |------------------------|
  1. |  1       45         11 |
  2. |  2       42       42.5 |
  3. |  3       25       26.5 |
  4. |  4       60       26.5 |
  5. |  5       11   43.33333 |
     +------------------------+

In the above code, I reshape your peer data into long form and keep only student-peer pairs. I then merge this data to the points data to get the points of each student's peers. From here, I collapse the data back to the student level, totaling peer points and peer count in the process. At this point, you have the total points of each student's peers and the number of peers each student has. Now, you simply merge back to the points data to get the subject student's points, and divide total peer points (PeerPoints) by the number of peers (CountPeers) for average peer points.
nwcommands is an outstanding package I have never used or studied, so I will just try the problem from first principles. This is all matrix algebra, but given a matrix and a variable, I would approach it like this in Stata:

clear
scalar third = 1/3
mat M = (0,0,0,0,1\0,0,0.5,0.5,0\0,0.5,0,0,0.5\0,0.5,0,0,0.5\third,0,third,third,0)
input ID Points Peers
1 45 11
2 42 33.5
3 25 26.5
4 60 26.5
5 11 43.33
end

gen Wanted = 0
quietly forval i = 1/5 {
    forval j = 1/5 {
        replace Wanted = Wanted + M[`i', `j'] * Points[`j'] in `i'
    }
}

list

     +--------------------------------+
     | ID   Points   Peers     Wanted |
     |--------------------------------|
  1. |  1       45      11         11 |
  2. |  2       42    33.5       42.5 |
  3. |  3       25    26.5       26.5 |
  4. |  4       60    26.5       26.5 |
  5. |  5       11   43.33   43.33334 |
     +--------------------------------+

Small points: using 0.33 for 1/3 doesn't give enough precision. You'll have similar problems with 1/6 and 1/7, for example. Also, I get that the peers of 2 are 3 and 4, so their average is (25 + 60)/2 = 42.5, not 33.5.

EDIT: A similar approach starts with a data structure very like that imagined by @ander2ed:

clear
input int(id points id1 id2 id3 id4 id5)
1 45 0 0 0 0 1
2 42 0 0 1 1 0
3 25 0 1 0 0 1
4 60 0 1 0 0 1
5 11 1 0 1 1 0
end

gen wanted = 0
quietly forval i = 1/5 {
    forval j = 1/5 {
        replace wanted = wanted + id`j'[`i'] * points[`j'] in `i'
    }
}

egen count = rowtotal(id1-id5)
replace wanted = wanted/count

list

     +--------------------------------------------------------------+
     | id   points   id1   id2   id3   id4   id5     wanted   count |
     |--------------------------------------------------------------|
  1. |  1       45     0     0     0     0     1         11       1 |
  2. |  2       42     0     0     1     1     0       42.5       2 |
  3. |  3       25     0     1     0     0     1       26.5       2 |
  4. |  4       60     0     1     0     0     1       26.5       2 |
  5. |  5       11     1     0     1     1     0   43.33333       3 |
     +--------------------------------------------------------------+
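Both Stata loops above compute a matrix-vector product: each student's wanted value is a weighted sum of everyone's points. A minimal pure-Python sketch of the same computation on the 0/1 adjacency matrix (peer_means is my own name; dividing by the row total does the 1/3-style normalization exactly, so the precision issue never arises):

```python
def peer_means(adj, points):
    """adj[i][j] = 1 if student j is a peer of student i (0/1 rows);
    returns the mean points of each student's peers."""
    out = []
    for row in adj:
        n = sum(row)
        out.append(sum(a * p for a, p in zip(row, points)) / n if n else None)
    return out

# 0/1 peer matrix and points from the question
adj = [[0, 0, 0, 0, 1],
       [0, 0, 1, 1, 0],
       [0, 1, 0, 0, 1],
       [0, 1, 0, 0, 1],
       [1, 0, 1, 1, 0]]
points = [45, 42, 25, 60, 11]
print(peer_means(adj, points))  # [11.0, 42.5, 26.5, 26.5, 43.33...]
```

This reproduces the listing above, including 42.5 rather than 33.5 for student 2.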
Get frequency counts for a subset of elements in a column
I may be missing some elegant ways in Stata to get to this example, which has to do with electrical parts and observed monthly failures etc.

clear
input str3 (PartID Type FailType)
ABD A 4
BBB S 0
ABD A 3
ABD A 4
ABC A 2
BBB A 0
ABD B 1
ABC B 7
BBB C 1
BBB D 0
end

I would like to group by (bysort) each PartID and record the highest frequency for FailType within each PartID type. Ties can be broken arbitrarily; preferably, the lower one should be picked.

I looked at groups etc., but do not know how to peel off certain elements from the result set. So that is a major question for me. If you execute a query, how do you select only the elements you want for the next computation? Something like n(0) is the count, n(1) is the mean, etc. I was able to use contract, bysort etc. and create a separate dataset which I then merged back into the main set with an extra column. There must be something simple using gen or egen so that there is no need to create an extra dataset.

The expected results here will be:

PartID Freq
ABD    4    #(4 occurs twice)
ABC    2    #(tie broken with minimum)
BBB    0    #(0 occurs 3 times)

Please let me know how I can pick off specific elements that I need from a result set (can be from duplicates report, tab etc.).

Part II - Clarification: Perhaps I should have clarified and split the question into two parts. For example, suppose I issue this follow-up command after running your code: tabdisp Type, c(Freq). It may print out a nice table. Can I then use that (derived) table to perform more computations programmatically? For example, get the first row of the table:

----------------------
     Type |       Freq
----------+-----------
        A |         -1
        B |         -1
        C |         -1
        D |         -3
        S |         -3
----------------------
I found this difficult to follow (see comment on question), but some technique is demonstrated here. The numbers of observations in subsets of observations defined by by: are given by _N. The rest is sorting tricks. Negating the frequency is a way to select the highest frequency and the lowest Type, which I think is what you are after when splitting ties. Negating back gets you the positive frequencies.

clear
input str3 (PartID Type FailType)
ABD A 4
BBB S 0
ABD A 3
ABD A 4
ABC A 2
BBB A 0
ABD B 1
ABC B 7
BBB C 1
BBB D 0
end

bysort PartID FailType: gen Freq = -_N
bysort PartID (Freq Type): gen ToShow = _n == 1
replace Freq = -Freq

list PartID Type FailType Freq if ToShow

     +---------------------------------+
     | PartID   Type   FailType   Freq |
     |---------------------------------|
  1. |    ABC      A          2      1 |
  3. |    ABD      A          4      2 |
  7. |    BBB      A          0      3 |
     +---------------------------------+
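The negate-and-sort trick above is one way to compute a per-group mode with a deterministic tie-break. The same computation, sketched in pure Python with collections.Counter (modal_failtype is my own name; ties are broken by the lower FailType, matching the expected output in the question):

```python
from collections import Counter

def modal_failtype(pairs):
    """pairs: iterable of (PartID, FailType). Returns {PartID: FailType
    with the highest frequency, ties broken by the lower FailType} -
    the same rule as sorting on negated Freq in the Stata answer."""
    counts = {}
    for part, ft in pairs:
        counts.setdefault(part, Counter())[ft] += 1
    return {part: min(c.items(), key=lambda kv: (-kv[1], kv[0]))[0]
            for part, c in counts.items()}

data = [("ABD", 4), ("BBB", 0), ("ABD", 3), ("ABD", 4), ("ABC", 2),
        ("BBB", 0), ("ABD", 1), ("ABC", 7), ("BBB", 1), ("BBB", 0)]
print(modal_failtype(data))  # {'ABD': 4, 'BBB': 0, 'ABC': 2}
```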
Is preprocessing file with awk needed or it can be done directly in R?
I used to process CSV files with awk. Here is my first script:

tail -n +2 shifted_final.csv | awk -F, 'BEGIN {old=$2} {if($2!=old){print $0; old=$2;}}' | less

This script looks for repeating values in the 2nd column (the value on line n is the same as on lines n+1, n+2, ...) and prints only the first occurrence. For example, if you feed the following input:

ord,orig,pred,as,o-p
1,0,0,1.0,0
2,0,0,1.0,0
3,0,0,1.0,0
4,0,0,0.0,0
5,0,0,0.0,0
6,0,0,0.0,0
7,0,0,0.0,0
8,0,0,0.0,0
9,0,0,0.0,0
10,0,0,0.0,0
11,0,0,0.0,0
12,0,0,0.0,0
13,0,0,0.0,0
14,0,0,0.0,0
15,0,0,0.0,0
16,0,0,0.0,0
17,0,0,0.0,0
18,0,0,0.0,0
19,0,0,0.0,0
20,0,0,0.0,0
21,0,0,0.0,0
22,0,0,0.0,0
23,4,0,0.0,4
24,402,0,1.0,402
25,0,0,1.0,0

then the output will be:

1,0,0,1.0,0
23,4,0,0.0,4
24,402,0,1.0,402
25,0,0,1.0,0

EDIT: I've made this a bit more challenging by adding a 2nd script. The second script does the same but prints the last duplicate occurrence:

tail -n +2 shifted_final.csv | awk -F, 'BEGIN {old=$2; line=$0} {if($2==old){line=$0}else{print line; old=$2; line=$0}} END {print $0}' | less

Its output will be:

22,0,0,0.0,0
23,4,0,0.0,4
24,402,0,1.0,402
25,0,0,1.0,0

I suppose R is a powerful language which should handle such tasks, but I've found only questions about calling awk scripts from R etc. How can I do this in R?
Regarding the update to your question, a more general solution, thanks to @nicola:

Idx.first <- c(TRUE, tbl$orig[-1] != tbl$orig[-nrow(tbl)])

R> tbl[Idx.first,]
#    ord orig pred as o.p
# 1    1    0    0  1   0
# 23  23    4    0  0   4
# 24  24  402    0  1 402
# 25  25    0    0  1   0

If you want to use the last occurrence of a value in a run, rather than the first, just append TRUE to @nicola's indexing expression instead of prepending it:

Idx.last <- c(tbl$orig[-1] != tbl$orig[-nrow(tbl)], TRUE)

R> tbl[Idx.last,]
#    ord orig pred as o.p
# 22  22    0    0  0   0
# 23  23    4    0  0   4
# 24  24  402    0  1 402
# 25  25    0    0  1   0

In either case, tbl$orig[-1] != tbl$orig[-nrow(tbl)] compares the 2nd through nth values of column 2 with the 1st through (n-1)th values of column 2. The result is a logical vector, where TRUE elements indicate a change in consecutive values. Since the comparison has length n-1, pushing an extra TRUE onto the front (case 1) selects the first occurrence in each run, whereas appending an extra TRUE at the back (case 2) selects the last occurrence in each run.

Data:

tbl <- read.table(text = "ord,orig,pred,as,o-p
1,0,0,1.0,0
2,0,0,1.0,0
3,0,0,1.0,0
4,0,0,0.0,0
5,0,0,0.0,0
6,0,0,0.0,0
7,0,0,0.0,0
8,0,0,0.0,0
9,0,0,0.0,0
10,0,0,0.0,0
11,0,0,0.0,0
12,0,0,0.0,0
13,0,0,0.0,0
14,0,0,0.0,0
15,0,0,0.0,0
16,0,0,0.0,0
17,0,0,0.0,0
18,0,0,0.0,0
19,0,0,0.0,0
20,0,0,0.0,0
21,0,0,0.0,0
22,0,0,0.0,0
23,4,0,0.0,4
24,402,0,1.0,402
25,0,0,1.0,0", header = TRUE, sep = ",")
For the (updated) question, you could use for example (thanks to @nrussell for his comment and suggestion):

idx <- c(1, cumsum(rle(tbl[,2])[[1]])[-1])
tbl[idx,]
#    ord orig pred as o.p x
# 1    1    0    0  1   0 1
# 23  23    4    0  0   4 2
# 24  24  402    0  1 402 3
# 25  25    0    0  1   0 4

It will return the first row of each 'block' of identical values in column orig:

rle(tbl[,2])[[1]] computes the run lengths of each new (different from the previous) value that appears in column orig
cumsum(...) computes the cumulative sum of those run lengths
finally, c(1, cumsum(...)[-1]) replaces the first number in that vector with a 1, so that the very first line of the data is always present
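The same first-of-run / last-of-run selection that the awk scripts and the R indexing perform is also a natural fit for Python's itertools.groupby; a hedged sketch (rows here are (ord, orig) pairs for brevity, and the function names are my own):

```python
from itertools import groupby

def first_of_runs(rows, key=lambda r: r[1]):
    """Keep the first row of each run of equal key values
    (like the first awk script, or the rle/cumsum indexing)."""
    return [next(grp) for _, grp in groupby(rows, key=key)]

def last_of_runs(rows, key=lambda r: r[1]):
    """Keep the last row of each run of equal key values
    (like the second awk script)."""
    return [list(grp)[-1] for _, grp in groupby(rows, key=key)]

rows = [(1, 0), (2, 0), (3, 0), (23, 4), (24, 402), (25, 0)]
print(first_of_runs(rows))  # [(1, 0), (23, 4), (24, 402), (25, 0)]
print(last_of_runs(rows))   # [(3, 0), (23, 4), (24, 402), (25, 0)]
```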
Construct new variable from >3 categorical variables (+maintain column names) for mosaic plot in Stata
My question is an extension of that found here: Construct new variable from given 5 categorical variables in Stata

I am an R user and I have been struggling to adjust to the Stata syntax. Also, I'm used to being able to Google for R documentation/examples online and haven't found as many resources for Stata, so I've come here.

I have a dataset where the rows represent individual people and the columns record various attributes of those people. There are 5 categorical variables (white, hispanic, black, asian, other) that have binary response data, 0 or 1 ("No" or "Yes"). I want to create a mosaic plot of race vs response data using the spineplot package. However, I believe I must first combine all 5 categorical variables into a single categorical variable with 5 levels that maintains the labels (so I can see the response rate for each ethnicity). I've been playing around with the egen function but haven't been able to get it to work. Any help would be appreciated.

Edit: Added a depiction of what my data looks like and what I want it to look like. My data right now:

person_id,black,asian,white,hispanic,responded
1,0,0,1,0,0
2,1,0,0,0,0
3,1,0,0,0,1
4,0,1,0,0,1
5,0,1,0,0,1
6,0,1,0,0,0
7,0,0,1,0,1
8,0,0,0,1,1

What I want is to produce a table through the tabulate command like the following:

respond,              black, asian, white, hispanic
responded to survey |    20,    30,    25,    10, 15
did not respond     |    15,    20,    21,    23, 33
It seems like you want a single indicator variable rather than multiple {0,1} dummies. The easiest way is probably with a loop; another option is to use cond() to generate a new indicator variable (note that you may want to catch respondents for whom all the race dummies are 0 in an 'other' group), label its values (and the values of responded), and then create your frequency table:

clear
input person_id black asian white hispanic responded
1 0 0 1 0 0
2 1 0 0 0 0
3 1 0 0 0 1
4 0 1 0 0 1
5 0 1 0 0 1
6 0 1 0 0 0
7 0 0 1 0 1
8 0 0 0 1 1
9 0 0 0 0 1
end

gen race = "other"
foreach v of varlist black asian white hispanic {
    replace race = "`v'" if `v' == 1
}

label define race2 1 "black" 2 "asian" 3 "white" 4 "hispanic" 99 "other"
gen race2:race2 = cond(black == 1, 1,    ///
                  cond(asian == 1, 2,    ///
                  cond(white == 1, 3,    ///
                  cond(hispanic == 1, 4, 99))))

label define responded 0 "did not respond" 1 "responded to survey"
label values responded responded

tab responded race

with the result

                    |                          race
          responded |     asian      black   hispanic      other      white |     Total
--------------------+-------------------------------------------------------+----------
    did not respond |         1          1          0          0          1 |         3
responded to survey |         2          1          1          1          1 |         6
--------------------+-------------------------------------------------------+----------
              Total |         3          2          1          1          2 |         9

tab responded race2 yields the same results with a different ordering (by the actual values of race2 rather than the alphabetical ordering of the value labels).
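The foreach loop's dummy-to-label collapse is generic one-hot decoding; a small pure-Python sketch of the same step (collapse_dummies is my own name; like the Stata loop, it assumes the dummies are mutually exclusive, and anyone with all zeros falls into 'other'):

```python
def collapse_dummies(row, categories, default="other"):
    """row: dict of {category: 0/1}; return the first category flagged 1,
    else the default - mirrors the foreach loop over the race dummies."""
    for c in categories:
        if row.get(c) == 1:
            return c
    return default

people = [
    {"black": 0, "asian": 0, "white": 1, "hispanic": 0},
    {"black": 1, "asian": 0, "white": 0, "hispanic": 0},
    {"black": 0, "asian": 0, "white": 0, "hispanic": 0},
]
races = [collapse_dummies(p, ["black", "asian", "white", "hispanic"]) for p in people]
print(races)  # ['white', 'black', 'other']
```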