So, I'm working with the ENIGH database, which stands for "National Survey of Household Income and Expenses" in Spanish. It is a survey conducted by the Mexican government and, like most surveys of its kind, it works with weights.
What I'm trying to do is calculate the mean, maximum, and minimum household income by decile; in other words, the income of each 10% of households, grouped by income.
To be honest, I haven't gotten very far, but this is what I have so far:
I need my svydesign object
Convert that into a table using svytable
Arrange using desc() on my income variable
ENIGH_design <- svydesign(id = ~upm, strata = ~est_dis, weights = ~factor_hog, data = ENIGH)
ENIGH_table <- svytable(~ing_cor, ENIGH_design)
Here is where it gets tricky: supposing I have 100 rows, I can't just take the first 10 of them because, once weights are taken into account, they might represent 9% or 20% (I'm just throwing out numbers) of the actual population.
I could use cut() on my income variable, but then I would be ignoring the weights, and the results would only be representative of the sample, not the total population.
I think the best approach would be a combination of:
mutate() to create a new variable based on income
if() in conjunction with mutate() to define which decile each row falls into
group_by() and mean() to calculate what I'm aiming for
This way I will have an extra variable that I can use to calculate whatever I want against any other variable I wish. But again, I haven't defined my groups, so it's pretty much useless.
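Roughly, the shape of what I'm after is the sketch below (untested; svyquantile's return format differs across survey-package versions, so extracting the breakpoints via coef() may need adjusting):
library(survey)
library(dplyr)
# weighted decile breakpoints from the design object (sketch, not verified)
qs <- svyquantile(~ing_cor, ENIGH_design, quantiles = seq(0.1, 0.9, by = 0.1))
breaks <- c(-Inf, as.numeric(coef(qs)), Inf)
ENIGH %>%
  mutate(decile = cut(ing_cor, breaks = breaks, labels = 1:10)) %>%
  group_by(decile) %>%
  summarise(mean_inc = weighted.mean(ing_cor, factor_hog),  # weighted mean per decile
            min_inc  = min(ing_cor),
            max_inc  = max(ing_cor))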
Thank you for reading. Thank you for your help.
Database available: https://www.inegi.org.mx/programas/enigh/nc/2016/default.html#Datos_abiertos
Here is a glimpse of how my DB looks:
folioviv foliohog ubica_geo est_dis upm factor ing_cor
100587003 1 10010000 2 610 180 22,723
100587004 1 10010000 2 610 180 17,920
100587005 1 10010000 2 610 180 27,506
100587006 1 10010000 2 610 180 56,236
100605201 1 10010000 2 620 178 41,587
100605202 1 10010000 2 620 178 135,437
100605203 1 10010000 2 620 178 62,386
100605205 1 10010000 2 620 178 103,502
100605206 1 10010000 2 620 178 27,323
100606301 1 10010000 3 630 223 68,042
100606302 1 10010000 3 630 223 98,537
100606305 1 10010000 3 630 223 53,237
100606306 1 10010000 3 630 223 132,861
100609801 1 10010000 3 640 232 190,033
100609802 1 10010000 3 640 232 28,654
100609805 1 10010000 3 640 232 74,408
100631401 1 10010000 1 650 171 80,761
100711503 1 10010000 1 770 184 38,640
100711504 1 10010000 1 770 184 81,672
There are many more columns, but they aren't necessary for this exercise.
Make a table (dataframe or data.table or tibble) that looks like this:
> dt
folioviv factor ing_tri
1 247 30000
2 200 15000
3 150 50000
incomes <- rep(dt$ing_tri, times = dt$factor)
deciles <- quantile(incomes, probs = seq(0.1, 1, by = 0.1), names = TRUE)
If I were you, I would try names = FALSE to make the result easier to manipulate. Otherwise, it will be a named vector, and that's a bit annoying.
Oh, and in case you want to compute the mean, just do mean(incomes).
PS: The column folioviv is not actually necessary, but you may want to put it there just in case.
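And if you then want the per-decile summaries, here is a small sketch continuing from the incomes vector above:
# group the weight-expanded incomes into deciles, then summarise each group
breaks <- quantile(incomes, probs = seq(0, 1, by = 0.1), names = FALSE)
grp <- cut(incomes, breaks = breaks, include.lowest = TRUE, labels = 1:10)
tapply(incomes, grp, mean)   # mean income per decile
tapply(incomes, grp, min)    # minimum per decile
tapply(incomes, grp, max)    # maximum per decile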
I am working with dplyr and sample_n in R, trying to get evenly sized groups of rows to work with in my data frame.
So, I have a data set, head of data as follows:
> head(SEH)
Time.Level Demo.Age SEH.Total
92 PRE 12 110
335 PRE 12 80
720 MID 14 85
196 MID 11 95
408 POST 18 60
184 POST 10 99
I separated the data into three different data frames according to time level, so I have SEH.pre, SEH.mid, and SEH.post. I then run describe and I know I have uneven groups of pre, mid, and post. So I want to randomly sample the pre, mid, and post groups down to an even size. For example, the SEH.pre and SEH.mid group n sizes are below:
> describe(SEH.pre)
vars n
Time.Level* 1 887
Demo.Age 2 883
SEH.Total 3 887
> describe(SEH.mid)
vars n
Time.Level* 1 894
Demo.Age 2 872
SEH.Total 3 894
So now I run sample_n on SEH.pre, thinking that I can re-sample to an n of 860 across all columns. I run the following command:
SEH.pre2 <- sample_n(SEH.pre, 860, replace = FALSE)
And then I run describe, and Demo.Age has fewer observations than the rest:
> describe(SEH.pre2)
vars n ...
Time.Level* 1 860
Demo.Age 2 856
SEH.Total 3 860
I feel like a big idiot, but I cannot figure out why this is. I have tried it multiple times, and Demo.Age varies from 856 to 859 but is never 860. I want all three columns to be 860. How do I do this? And why am I wrong in thinking that sample_n should create even groups out of uneven ones?
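One thing I've started to suspect while writing this: describe() reports n as the number of non-missing values, so if Demo.Age contains NAs, any sampled rows with an NA there would lower its count. A sketch of how I might check and work around that (untested):
sum(is.na(SEH.pre$Demo.Age))   # if > 0, this would explain the shortfall
# keep only complete rows before sampling so all columns report n = 860
SEH.pre2 <- sample_n(SEH.pre[complete.cases(SEH.pre), ], 860, replace = FALSE)
describe(SEH.pre2)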
I have a data set with the Levels and Trends for, say, 50 cities under 3 scenarios. Below is the sample data:
City <- paste0("City",1:50)
L1 <- sample(100:500,50,replace = T)
L2 <- sample(100:500,50,replace = T)
L3 <- sample(100:500,50,replace = T)
T1 <- runif(50,0,3)
T2 <- runif(50,0,3)
T3 <- runif(50,0,3)
df <- data.frame(City,L1,L2,L3,T1,T2,T3)
Now, across the 3 scenarios, I find the minimum Level and minimum Trend using the code below:
df$L_min <- apply(df[,2:4],1,min)
df$T_min <- apply(df[,5:7],1,min)
Now I want to check whether these minimum values are significantly different from the levels and trends, respectively: compare L_min with columns 2-4 and T_min with columns 5-7. This needs to be done for each city (row), and where the difference is significant, return which column it is significantly different from.
It would help if someone could guide me on how this can be done.
Thank you!!
I'll put my idea here; nevertheless, I'm looking forward to ideas from others.
> head(df)
City L1 L2 L3 T1 T2 T3 L_min T_min
1 City1 251 176 263 1.162313 0.07196579 2.0925715 176 0.07196579
2 City2 385 406 264 0.353124 0.66089524 2.5613980 264 0.35312402
3 City3 437 333 426 2.625795 1.43547766 1.7667891 333 1.43547766
4 City4 431 405 493 2.042905 0.93041254 1.3872058 405 0.93041254
5 City5 101 429 100 1.731004 2.89794314 0.3535423 100 0.35354230
6 City6 374 394 465 1.854794 0.57909775 2.7485841 374 0.57909775
> df$FC <- rowMeans(df[,2:4])/df[,8]
> df <- df[order(-df$FC), ]
> head(df)
City L1 L2 L3 T1 T2 T3 L_min T_min FC
18 City18 461 425 117 2.7786757 2.6577894 0.75974121 117 0.75974121 2.857550
38 City38 370 117 445 0.1103141 2.6890014 2.26174542 117 0.11031411 2.655271
44 City44 101 473 222 1.2754675 0.8667007 0.04057544 101 0.04057544 2.627063
10 City10 459 361 132 0.1529519 2.4678493 2.23373484 132 0.15295194 2.404040
16 City16 232 393 110 0.8628494 1.3995549 1.01689217 110 0.86284938 2.227273
15 City15 499 475 182 0.3679611 0.2519497 2.82647041 182 0.25194969 2.117216
Now you have the rows that are most different based on columns 2:4 at the top. Columns 5:7 can be treated in an analogous way.
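For instance, the analogous ranking for the trend columns (an illustrative sketch):
df$FC_T <- rowMeans(df[, 5:7]) / df$T_min   # fold change for trends
df <- df[order(-df$FC_T), ]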
And some tips for statistical tests:
Prefer t.test() (parametric, based on the mean) over wilcox.test() (the U / Mann-Whitney test, non-parametric, based on the median); it has more power. HOWEVER:
-The data sets should be big. Example hypothesis: Montreal has taller citizens than Quebec; t.test() will work fine when you take 100 people from each city, so we have height measurements of 200 people, 100 vs 100.
-The distribution should be close to normal in all samples, or both samples should have a similar distribution far from normal (it may be binomial, for instance). Either way, we can't use this test when one sample has a normal distribution and the other hasn't.
-The sizes of both samples should be equal, so 100 vs 100 is OK, but 87 vs 234 is not; the p-value may come out below 0.05 and still be misleading.
If your data doesn't meet the above conditions, I prefer a non-parametric test: less power, but more robust.
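As a toy illustration of the two tests on the height example (simulated samples, not real data):
set.seed(1)
montreal <- rnorm(100, mean = 178, sd = 7)   # simulated heights in cm
quebec   <- rnorm(100, mean = 175, sd = 7)
t.test(montreal, quebec)        # parametric, compares means
wilcox.test(montreal, quebec)   # non-parametric, rank-based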
Thank you jakub and Hack-R!
Yes, these are my actual data. The data I am starting from are the following:
[A] #first, longer dataset
CODE_t2 VALUE_t2
111 3641
112 1691
121 1271
122 185
123 522
124 0
131 0
132 0
133 0
141 626
142 170
211 0
212 0
213 0
221 0
222 0
223 0
231 95
241 0
242 0
243 0
244 0
311 129
312 1214
313 0
321 0
322 0
323 565
324 0
331 0
332 0
333 0
334 0
335 0
411 0
412 0
421 0
422 0
423 0
511 6
512 0
521 0
522 0
523 87
In the above table, we can see the 44 land use CODES (which I inappropriately named "class" in my first entry) for a certain city. Some values are just 0, meaning that there are no land uses of that type in that city.
Starting from this table, which displays all the land use types for t2 and their corresponding values ("VALUE_t2") I have to reconstruct the previous amount of land uses ("VALUE_t1") per each type.
To do so, I have to add and subtract the value per each land use (if not 0) by using the "change land use table" from t2 to t1, which is the following:
[B] #second, shorter dataset
CODE_t2 CODE_t1 VALUE_CHANGE1
121 112 2
121 133 12
121 323 0
121 511 3
121 523 2
123 523 4
133 123 3
133 523 4
141 231 12
141 511 37
So, in order to get VALUE_t1 from VALUE_t2, I have, for instance, to subtract 2 + 12 + 0 + 3 + 2 hectares (the first 5 values of the second, shorter table) from the value of land use type/code 121 in the first, longer table (1271 ha), and then add 2 hectares to land type 112, 12 hectares to land type 133, 3 hectares to land type 511, and 2 hectares to land type 523. I have to do that for all land use types different from 0, and later also from t1 to t0.
What I have to do is a sort of loop that would, for each land use type/code, both add and subtract the values, going from VALUE_t2 to VALUE_t1, and from VALUE_t1 to VALUE_t0.
Once I estimated VALUE_t1 and VALUE_t0, I will put the values in a simple table showing the relative variation (here the values are not real):
CODE VALUE_t0 VALUE_t2 % VAR t2-t0
code1 50 100 ((100-50)/50)*100
code2 70 80 ((80-70)/70)*100
code3 45 34 ((34-45)/45)*100
What I could do so far is:
land_code <- names(A)[-1]
land_code
A$VALUE_t1 <- for(code in land_code{
cbind(A[1], A[land_code] - B[match(A$CODE_t2, B$CODE_t2), land_code])
}
If I use the loop I get an error, while if I take it away:
A$VALUE_t1 <- cbind(A[1], A[land_code] - B[match(A$CODE_t2, B$CODE_t2), land_code])
it works, but I don't really get what I want. So far I have been working on how to get a new column containing the new "add & subtract" values, but I haven't succeeded yet. So instead I worked on getting a new column that at least matches the land use types, to then include the "add and subtract" formula.
Another problem is that, by using match, I get a shorter A$VALUE_t1 table (13 rows instead of 44), while I would like to keep all the land use types in dataset A, because I will then have to match it with the table containing VALUE_t0 (which I haven't shown here).
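For illustration, the kind of computation I'm after might look something like this (a sketch using datasets A and B above; I haven't managed to verify it, and it avoids the shortened-table problem by turning unmatched codes into zeros):
# total hectares leaving each t2 code and entering each t1 code
out_flow <- aggregate(VALUE_CHANGE1 ~ CODE_t2, data = B, sum)
in_flow  <- aggregate(VALUE_CHANGE1 ~ CODE_t1, data = B, sum)
minus <- out_flow$VALUE_CHANGE1[match(A$CODE_t2, out_flow$CODE_t2)]
plus  <- in_flow$VALUE_CHANGE1[match(A$CODE_t2, in_flow$CODE_t1)]
minus[is.na(minus)] <- 0   # codes with no outgoing change
plus[is.na(plus)]   <- 0   # codes with no incoming change
A$VALUE_t1 <- A$VALUE_t2 - minus + plus   # keeps all 44 rows of A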
Sorry that I cannot do better than this at the moment... I hope I have explained more clearly what I have to do. I am extremely grateful for any help you can provide.
Thanks a lot.
I have a data frame like this:
id status
241 1
451 3
748 3
469 2
102 3
100 1
203 2
Now what I want to do is this:
1 corresponds to 'good' , 2 corresponds to 'moderate', 3 corresponds to 'bad'.
So my output should be like this:
id status
241 good
451 bad
748 bad
469 moderate
102 bad
100 good
203 moderate
How do I do this? I tried using if/else, but it got complicated.
It sounds like you want a labelled factor. You can try:
df$status <- factor(df$status, labels=c('Good','Moderate','Bad'))
> df
id status
1 241 Good
2 451 Bad
3 748 Bad
4 469 Moderate
5 102 Bad
6 100 Good
7 203 Moderate
It depends on the type of the status column. If it is not a factor, you can do (as pointed out by Pascal):
level <- c("Good","Moderate","Bad")
df$status <- level[df$status]
data
df <- data.frame(item = c("apple","banana","orange","papaya","mango"), status = c(1,3,2,1,3), stringsAsFactors = FALSE)
And if it is set as a factor, you may (as pointed out by Jay):
df$status <- factor(df$status, labels=c('Good','Moderate','Bad'))
data
df <- data.frame(item = c("apple","banana","orange","papaya","mango"), status = factor(c(1,3,2,1,3)))
This is a part of my data: (The actual data contains about 10,000 observations with about 500 levels of SalesItem)
s1<-c('1008','1009','1012','1013','1016','1017','1018','1019','1054','1055')
s2<-c(155,153,154,150,176,165,159,143,179,150)
S<-data.frame(SalesItem=factor(s1), Sales=s2)
> str(S)
'data.frame': 10 obs. of 2 variables:
$ SalesItem: Factor w/ 10 levels "1008","1009",..: 1 2 3 4 5 6 7 8 9 10
 $ Sales : num 155 153 154 150 176 165 159 143 179 150
What I want to do is: if diff(SalesItem) equals 1, combine the levels of SalesItem into one. For example, the diff between SalesItem 1008 and 1009 is one, so I want to rename SalesItem 1009 to 1008, so that later I can compute the sum of Sales for that SalesItem as one group. Because my actual data has 10,000 rows, it is quite hard for me to do this one by one.
Is there any simple way for me to do that?
Clearly, the fact that you have converted the first column to a factor indicates that you might need those factors somewhere. So I would suggest that, instead of changing any of the columns, you add a third column to your data frame that maintains the SalesItem each value belongs to. Here are the steps for it:
> s1<-c('1008','1009','1012','1013','1016','1017','1018','1019','1054','1055')
> s2<-c(155,153,154,150,176,165,159,143,179,150)
> s1 = as.integer(s1)
> s3 = ifelse((s1-1) %in% s1, s1-1, s1)
> S <- data.frame(SalesItem=s1, Sales=s2, ItemId=s3)
Then you can just count on the basis of the ItemId column.
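For example, to total the Sales per merged item:
aggregate(Sales ~ ItemId, data = S, sum)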
This is not a terribly efficient solution, but since your data only contains 10,000 records, it is not going to be a big problem.
Set up the provided example data, but convert the SalesItem field to an integer so that the diff() operation makes sense.
> s1<-c('1008','1009','1012','1013','1016','1017','1018','1019','1054','1055')
> s2<-c(155,153,154,150,176,165,159,143,179,150)
> s1 = as.integer(s1)
> S<-data.frame(SalesItem=s1, Sales=s2)
Reorder the data frame so that the SalesItem field is in ascending order (not necessary for the current data set, but required for the solution), then find the differences.
> S = S[order(S$SalesItem),]
> d = c(0, diff(S$SalesItem))
Duplicate the SalesItem data and then filter based on the values of the differences.
> labels = s1
> #
> for (n in 1:nrow(S)) {if (d[n] == 1) labels[n] = labels[n-1]}
> S$labels = labels
The (temporary) labels field now holds the required new values for the SalesItem field. Once you are happy that this is doing the right thing, you can modify the last line in the code above to simply overwrite the existing SalesItem field.
> S
SalesItem Sales labels
1 1008 155 1008
2 1009 153 1008
3 1012 154 1012
4 1013 150 1012
5 1016 176 1016
6 1017 165 1016
7 1018 159 1016
8 1019 143 1016
9 1054 179 1054
10 1055 150 1054
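Then, for example, the sum of Sales per merged group:
aggregate(Sales ~ labels, data = S, sum)   # total Sales per merged SalesItem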