Creating a column in R based on information in other columns

My data set consists of the columns CODE, FIRE, CARBON, and DEPTH.
Examples of values in the CODE column are chic, chiq, lopc, lopq, etc.
Besides this, I have information saying that each value in the CODE column (chic, for example) belongs to either A, B, or C.
So I want to create a new column in R based on the CODE column, such that when CODE is, for example, chic, the value in the new column on the same row is A.
I hope someone can help me with this.
Thanks in advance!
(I wanted to insert a table here with an example of the data, but it didn't work; I hope what I want to create is nevertheless clear.)

You could also use (assuming your data is in a data frame df)...
df$NEW[df$CODE == "chic"] <- "A"
df$NEW[df$CODE == "chiq"] <- "B"
df
# CODE DEPTH CARBON FIRE NEW
# 1 chic 0 24.2 U A
# 2 chic 10 25.6 U A
# 3 chiq 0 20.3 B B
# 4 chiq 0 50.3 B B
# 5 chiq 10 40.5 B B
# 6 chiq 20 24.2 B B
# 7 polc 0 60.5 U <NA>
# 8 polc 10 10.2 U <NA>
# 9 polq 0 20.5 U <NA>
etc...
To recode multiple values at once, use %in% with a vector, for example:
df$NEW[df$CODE %in% c("chic", "chiq")] <- "A"
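If there are many codes, a named lookup vector keeps the whole mapping in one place. A minimal sketch (the lopc/lopq assignments below are made up for illustration; substitute your real code-to-group mapping):
map <- c(chic = "A", chiq = "B", lopc = "C", lopq = "C")  # hypothetical mapping
df$NEW <- unname(map[as.character(df$CODE)])  # codes missing from map become NA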

You can try:
yourdata$newcolumn <- sapply(as.character(yourdata$CODE),
                             switch,
                             chic = "A", code2 = "B", code3 = "C")
Of course, you'll need to adapt the code to your data, but you didn't give a lot of details...
Example:
With yourdata being the data.frame below:
CODE FIRE CARBON DEPTH
1 chic 1 3 -1.3107045
2 code2 1 3 1.0781249
3 code3 1 3 -0.6762211
4 chic 1 3 0.9196376
5 code2 2 4 -0.7161380
6 code2 2 4 -0.5506980
7 chic 2 4 0.3323771
8 chic 2 4 -0.4131832
you get the new data.frame yourdata:
CODE FIRE CARBON DEPTH newcolumn
1 chic 1 3 -1.3107045 A
2 code2 1 3 1.0781249 B
3 code3 1 3 -0.6762211 C
4 chic 1 3 0.9196376 A
5 code2 2 4 -0.7161380 B
6 code2 2 4 -0.5506980 B
7 chic 2 4 0.3323771 A
8 chic 2 4 -0.4131832 A
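One caveat with this approach: switch() returns NULL for codes that match none of the named alternatives, in which case sapply() returns a list rather than a character vector. An unnamed final argument to switch() serves as a default and avoids this:
# The unnamed last argument is the fallback for unmatched codes, keeping
# the sapply() result a plain character vector.
yourdata$newcolumn <- sapply(as.character(yourdata$CODE),
                             switch,
                             chic = "A", code2 = "B", code3 = "C",
                             NA_character_)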

Related

R - Skip columns in pmax command if they do not exist

I'd like to use the pmax command to create a new column. My code looks like this:
Master <- Master %>%
mutate(RAM = pmax(RAM1, RAM2, RAM3, RAM4, RAM5, RAM6, RAM7, RAM8, RAM9, RAM10,
RAM11, RAM12, RAM13, RAM14, RAM15, RAM16, RAM17, RAM18,
RAM19, RAM20, RAM21, RAM22, RAM23, RAM24, RAM25, RAM26,
RAM27, RAM28, RAM29, RAM30, RAM31, RAM32, RAM33, RAM34,
RAM35, RAM36, RAM37, RAM38, RAM39, RAM40, RAM41, RAM42,
RAM43, RAM44, RAM45, RAM46, RAM47, RAM48, RAM49, RAM50,
RAM51, RAM52, RAM53, RAM54, RAM55, RAM56, RAM57, RAM58,
RAM59, RAM60, RAM61, RAM62, RAM63, RAM64, RAM65, RAM66,
RAM67, RAM68, RAM69, RAM70, RAM71, RAM72, RAM73, RAM74,
RAM75, RAM76, RAM77, RAM78, RAM79, RAM80, RAM81, RAM82,
RAM83, RAM84, RAM85, RAM86, RAM87, RAM88, RAM89, RAM90,
RAM91, RAM92, na.rm =T))
In my current database, however, only the columns RAM1 to RAM8 exist. In this case, I want R to skip all the other columns mentioned in the statement and to use only columns RAM1 to RAM8 (it is okay if R displays an error message, but I don't want it to interrupt running the code).
Any ideas how to do so?
Thanks!
One way to do this would be as follows:
Set up some data to make a reproducible example
set.seed(0)
Master <- data.frame(Other=100,RAM1=1:10, RAM2=1:10, RAM3=1:10, RAM4=1:10,
RAM5=1:10, RAM6=1:10, RAM7=1:10, RAM8=rnorm(10)+5)
Master[5,5] <- NA
Select the required columns of the data frame:
Master[colnames(Master) %in% paste0("RAM",1:92)]
Use do.call to run pmax with the selected columns as arguments, adding na.rm=TRUE:
Master$RAM <- do.call(pmax, c(Master[colnames(Master) %in% paste0("RAM",1:92)], na.rm=TRUE))
Sample output:
Master
# Other RAM1 RAM2 RAM3 RAM4 RAM5 RAM6 RAM7 RAM8 RAM
#1 100 1 1 1 1 1 1 1 6.262954 6.262954
#2 100 2 2 2 2 2 2 2 4.673767 4.673767
#3 100 3 3 3 3 3 3 3 6.329799 6.329799
#4 100 4 4 4 4 4 4 4 6.272429 6.272429
#5 100 5 5 5 NA 5 5 5 5.414641 5.414641
#6 100 6 6 6 6 6 6 6 3.460050 6.000000
#7 100 7 7 7 7 7 7 7 4.071433 7.000000
#8 100 8 8 8 8 8 8 8 4.705280 8.000000
#9 100 9 9 9 9 9 9 9 4.994233 9.000000
#10 100 10 10 10 10 10 10 10 7.404653 10.000000
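A pipe-friendly variant: tidyselect's any_of() helper also skips column names that don't exist in the data. A sketch, assuming dplyr (version 1.0 or later) is available:
library(dplyr)
# any_of() silently drops the RAM names that are absent, so no error is raised
Master <- Master %>%
  mutate(RAM = do.call(pmax, c(across(any_of(paste0("RAM", 1:92))), na.rm = TRUE)))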

Frequency distribution using binCounts

I have a dataset of customer ages, and I want to make a frequency distribution with 9-year age bins.
Ages=c(83,51,66,61,82,65,54,56,92,60,65,87,68,64,51,
70,75,66,74,68,44,55,78,69,98,67,82,77,79,62,38,88,76,99,
84,47,60,42,66,74,91,71,83,80,68,65,51,56,73,55)
My desired outcome would be similar to the table shared below; variable names can differ (as you wish).
Could I use binCounts for this? If yes, could you help me with the code? I'm not sure about bx and idxs in its signature:
binCounts(x, idxs = NULL, bx, right = FALSE)
Age Count
38-46 3
47-55 7
56-64 7
65-73 14
74-82 10
83-91 6
92-100 3
Much Appreciated!
I don't know about binCounts or even the package it is in, but here is a base R approach:
data.frame(table(cut(Ages,0:7*9+37)))
Var1 Freq
1 (37,46] 3
2 (46,55] 7
3 (55,64] 7
4 (64,73] 14
5 (73,82] 10
6 (82,91] 6
7 (91,100] 3
To exactly duplicate your results:
lowerlimit <- c(37, 46, 55, 64, 73, 82, 91, 100)
Labels <- paste(head(lowerlimit, -1) + 1, lowerlimit[-1], sep = "-")  # add one to get 38, 47, etc.
group <- cut(Ages, lowerlimit, Labels)  # determine which group each age belongs to
tab <- table(group)                     # form a frequency table
as.data.frame(tab)                      # transform the table into a data frame
group Freq
1 38-46 3
2 47-55 7
3 56-64 7
4 65-73 14
5 74-82 10
6 83-91 6
7 92-100 3
All this can be combined as:
data.frame(table(cut(Ages,s<-0:7*9+37,paste(head(s+1,-1),s[-1],sep="-"))))
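As for the binCounts part of the question: binCounts() comes from the matrixStats package; x is the data vector, bx the vector of bin boundaries, right controls whether bins are right-closed, and idxs optionally restricts which elements are counted. A sketch that should reproduce the counts above, assuming matrixStats is installed:
library(matrixStats)
# Same boundaries as the cut() call; right = TRUE gives (lower, upper] bins
binCounts(Ages, bx = 0:7 * 9 + 37, right = TRUE)
# expected: 3 7 7 14 10 6 3 (matching the table above)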

Identifying maximum number and longest set of time intervals

Say I have data that look like this:
level start end
1 1 133.631 825.141
2 2 133.631 155.953
3 3 146.844 155.953
4 2 293.754 302.196
5 3 293.754 302.196
6 4 293.754 301.428
7 2 326.253 343.436
8 3 326.253 343.436
9 4 333.827 343.436
10 2 578.066 611.766
11 3 578.066 611.766
12 4 578.066 587.876
13 4 598.052 611.766
14 2 811.228 825.141
15 3 811.228 825.141
or this:
level start end
1 1 3.60353 1112.62000
2 2 3.60353 20.35330
3 3 3.60353 8.77526
4 2 72.03720 143.60700
5 3 73.50530 101.13200
6 4 73.50530 81.64660
7 4 92.19030 101.13200
8 3 121.28500 143.60700
9 4 121.28500 128.25900
10 2 167.19700 185.04800
11 3 167.19700 183.44600
12 4 167.19700 182.84600
13 2 398.12300 418.64300
14 3 398.12300 418.64300
15 2 445.83600 454.54500
16 2 776.59400 798.34800
17 3 776.59400 796.64700
18 4 776.59400 795.91300
19 2 906.68800 915.89700
20 3 906.68800 915.89700
21 2 1099.44000 1112.62000
22 3 1099.44000 1112.62000
23 4 1100.14000 1112.62000
They produce the following graphs (plots not reproduced here):
As you can see there are several time intervals at different levels. The level-1 interval always spans the entire duration of the time of interest. Levels 2+ have time intervals that are shorter.
What I would like to do is select, for each period, the maximum number of non-overlapping time intervals that together contain the maximum total time. I have marked in pink (in the plots) which ones those would be.
For small dataframes it is possible to brute force this, but obviously there should be some more logical way of doing this. I'm interested in hearing some ideas about what I should try.
EDIT:
I think one thing that could help here is the 'level' column. The results come from Kleinberg's burst detection algorithm (package 'bursts'). You will note that the levels are hierarchically organized. Levels of the same number cannot overlap. However, successively increasing levels (e.g. 2, 3, 4 in successive rows) can overlap.
In essence, I think the problem could be shortened to this. Take the levels produced, but remove level 1. This would be the vector for the 2nd example:
2 3 2 3 4 4 3 4 2 3 4 2 3 2 2 3 4 2 3 2 3 4
Then, look at the 2's: if there is at most one '3' between successive 2's, then that 2 is the longest interval. But if there are two or more 3's between successive 2's, then those 3's should be counted instead. Do this iteratively for each level. I think that should work...?
e.g.
vec <- df$level %>% as.vector() %>% .[-1]
vec
#[1] 2 3 2 3 4 4 3 4 2 3 4 2 3 2 2 3 4 2 3 2 3 4
max(vec) # 4
vec3 <- vec # need to find two or more 4's between 3's
vec3[vec3 == 3] <- NA
names(vec3) <- cumsum(is.na(vec3))
#  0  1  1  2  2  2  3  3  3  4  4  4  5  5  5  6  6  6  7  7  8  8
#  2 NA  2 NA  4  4 NA  4  2 NA  4  2 NA  2  2 NA  4  2 NA  2 NA  4
vec3.res <- which(table(vec3, names(vec3))["4", ] > 1)
which(names(vec3) == names(vec3.res) & vec3 == 4) # 5 6
The above identifies positions 5 and 6 (which equate to rows 6 and 7 in the original df) as having two 4's that lie between 3's. Perhaps something using this sort of approach might work?
OK, here is a stab at it, using your second data set to test. This might not be correct in all cases!
library(data.table)
dat <- fread("data.csv")
dat[,use:="maybe"]
make.pass <- function(dat, low, high, the.level, use) {
  check <- dat[(use != "no" & level > the.level)]
  check[, contained.by.above := (low <= start & end <= high)]
  check[, consecutive.contained.by.above :=
            (contained.by.above &
             !is.na(shift(contained.by.above, 1)) &
             shift(contained.by.above, 1)), by = level]
  if (!any(check[, consecutive.contained.by.above])) {
    # Cause a side effect where we've learned we don't care:
    dat[check[(contained.by.above), rownum], use := "no"]
    print(check)
    return("yes")
  } else {
    return("no")
  }
}
dat[,rownum:=.I]
dat[level==1,use:=make.pass(dat,start,end,level,use),by=rownum]
dat
dat[use=="maybe" & level==2,use:=make.pass(dat,start,end,level,use),by=rownum]
dat
dat[use=="maybe" & level==3,use:=make.pass(dat,start,end,level,use),by=rownum]
dat
#Finally correct for last level
dat[use=="maybe" & level==4,use:="yes"]
I wrote these last steps out so you can trace through them in your own interactive session and see what's happening (see the print output to get an idea). You can remove the print and also condense the last steps into something like lapply(1:dat[,max(level)-1], function(the.level) dat[use=="maybe" & level==the.level,use:=make.pass......]). In response to your comment: if there are an arbitrary number of levels, you will definitely want to use this formalism, and follow it with a final call to dat[use=="maybe" & level==max(level),use:="yes"].
Output:
> dat
level start end use rownum
1: 1 3.60353 1112.62000 no 1
2: 2 3.60353 20.35330 yes 2
3: 3 3.60353 8.77526 no 3
4: 2 72.03720 143.60700 no 4
5: 3 73.50530 101.13200 no 5
6: 4 73.50530 81.64660 yes 6
7: 4 92.19030 101.13200 yes 7
8: 3 121.28500 143.60700 yes 8
9: 4 121.28500 128.25900 no 9
10: 2 167.19700 185.04800 yes 10
11: 3 167.19700 183.44600 no 11
12: 4 167.19700 182.84600 no 12
13: 2 398.12300 418.64300 yes 13
14: 3 398.12300 418.64300 no 14
15: 2 445.83600 454.54500 yes 15
16: 2 776.59400 798.34800 yes 16
17: 3 776.59400 796.64700 no 17
18: 4 776.59400 795.91300 no 18
19: 2 906.68800 915.89700 yes 19
20: 3 906.68800 915.89700 no 20
21: 2 1099.44000 1112.62000 yes 21
22: 3 1099.44000 1112.62000 no 22
23: 4 1100.14000 1112.62000 no 23
level start end use rownum
On the off chance this is correct, the algorithm can roughly be described as follows:
Mark all the intervals as possible.
Start with a given level. Pick a particular interval (by=rownum) say called X. With X in mind, subset a copy of the data to all higher-level intervals.
Mark any of these that are contained in X as "contained in X".
If consecutive intervals at the same level are contained in X, X is no good because it wastes intervals. In this case label X's "use" variable as "no" so we'll never think about X again. [Note: if it's possible that non-consecutive intervals are contained in X, or that containing multiple intervals across levels could ruin X's viability, then this logic might need to be changed to count contained intervals instead of finding consecutive ones. I didn't think about this at all, but it's just occurring to me now, so use at your own risk.]
On the other hand, if X passed the test, then we've already established it's good. Mark it as a "yes." But importantly, we also have to mark any single interval contained in X as "no," or else when we iterate the step it will forget that it was contained inside a good interval and mark itself as "yes" as well. This is the side effect step.
Now, iterate, ignoring any results that we've already determined.
Finally any "maybe"s leftover at the highest level are automatically in.
Let me know what you think of this--this is a rough draft and some aspects might not be correct.
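As a different angle (a sketch, not a drop-in replacement for the hierarchy-aware logic above): if the goal is read as picking non-overlapping intervals that maximize the total covered duration, this is the classic weighted interval scheduling problem, solvable by dynamic programming. The function below (max.cover, a made-up name) is a generic illustration with duration as the weight; it ignores the level hierarchy entirely, so it may select a different set than the pink-marked one:
max.cover <- function(df) {
  df <- df[df$level > 1, ]                 # drop level 1, which spans everything
  df <- df[order(df$end), ]                # sort by end time
  n <- nrow(df)
  w <- df$end - df$start                   # weight = interval duration
  # p[j]: latest interval (in end order) finishing by the time interval j starts
  p <- vapply(seq_len(n), function(j) {
    cand <- which(df$end <= df$start[j])
    if (length(cand)) max(cand) else 0L
  }, integer(1))
  M <- numeric(n + 1)                      # M[j + 1]: best total using intervals 1..j
  for (j in seq_len(n)) M[j + 1] <- max(w[j] + M[p[j] + 1], M[j])
  keep <- logical(n); j <- n               # trace back the chosen intervals
  while (j > 0) {
    if (w[j] + M[p[j] + 1] >= M[j]) { keep[j] <- TRUE; j <- p[j] } else j <- j - 1
  }
  df[keep, ]
}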

How does one ignore the "moral" column in R

moral verw ho dog
4 1049 1 2
4 2799 1 3
2 8412 4 4
4 2122 1 3
4 2171 1 3
4 2241 1 2
4 3398 1 4
I was normalizing a dataset using
noid = data.Normalization(newx, type = "n4")
but I want to ignore JUST the "moral" column and normalize everything else.
Any help will be greatly appreciated.
As David Arenburg suggested in his comment, if you don't pass the moral column to the normalisation routine it will obviously be ignored. If you want it to be included in the normalized dataset you can do something like this:
noid <- data.Normalization(newx[-1], type="n4")  # normalize everything except the first column (moral)
noid <- cbind(moral=newx[1], noid)               # put the moral column back in front
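If moral isn't guaranteed to be the first column, selecting by name is safer. A sketch along the same lines:
keep <- "moral"
noid <- data.Normalization(newx[setdiff(names(newx), keep)], type = "n4")
noid <- cbind(newx[keep], noid)  # reattach the untouched moral column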

How to put information obtained via the cast function of the reshape package back into my original data frame in R

I have a data.frame in panel format (country-year) and I need to calculate the mean of a variable by country for each five-year period. So I just used the cast function from the reshape package, and it worked. Now I need to put this information (the mean by quinquennium) back into the old data.frame so I can run some regressions. How can I do that? Below I provide an example to illustrate what I want:
set.seed(2)
fake= data.frame(y=rnorm(20), x=rnorm(20), country=rep(letters[1:2], each=10), year=rep(1:10,2), quinquenio= rep(rep(1:2, each=5),2))
fake.m = melt.data.frame(fake, id.vars=c("country", "year", "quinquenio"))
cast(fake.m, country ~ quinquenio, mean, subset=variable=="x", na.rm=T)
Now, everything is fine and I get what I wanted: the mean of x by country and by quinquennium. Now I would like to put it back into the data.frame fake, like this:
y x country year quinquenio mean.x
1 -0.89691455 2.090819205 a 1 1 0.8880242
2 0.18484918 -1.199925820 a 2 1 0.8880242
3 1.58784533 1.589638200 a 3 1 0.8880242
4 -1.13037567 1.954651642 a 4 1 0.8880242
5 -0.08025176 0.004937777 a 5 1 0.8880242
6 0.13242028 -2.451706388 a 6 2 -0.2978375
7 0.70795473 0.477237303 a 7 2 -0.2978375
8 -0.23969802 -0.596558169 a 8 2 -0.2978375
9 1.98447394 0.792203270 a 9 2 -0.2978375
10 -0.13878701 0.289636710 a 10 2 -0.2978375
11 0.41765075 0.738938604 b 1 1 0.2146461
12 0.98175278 0.318960401 b 2 1 0.2146461
13 -0.39269536 1.076164354 b 3 1 0.2146461
14 -1.03966898 -0.284157720 b 4 1 0.2146461
15 1.78222896 -0.776675274 b 5 1 0.2146461
16 -2.31106908 -0.595660499 b 6 2 -0.8059598
17 0.87860458 -1.725979779 b 7 2 -0.8059598
18 0.03580672 -0.902584480 b 8 2 -0.8059598
19 1.01282869 -0.559061915 b 9 2 -0.8059598
20 0.43226515 -0.246512567 b 10 2 -0.8059598
I appreciate any tip in the right direction. Thanks in advance.
p.s.: the reason I need this is that I'll run a regression with quinquennial data, and for some variables (like per capita income) I have information for all years, so I decided to average them over five-year periods.
I'm sure there's an easy way to do this with reshape, but my brain defaults to plyr first:
require(plyr)
ddply(fake, c("country", "quinquenio"), transform, mean.x = mean(x))
This is quite hackish, but one way to use reshape building off your earlier work:
zz <- cast(fake.m, country ~ quinquenio, mean, subset=variable=="x", na.rm=T)
merge(fake, melt(zz), by = c("country", "quinquenio"))
though I'm positive there has to be a better solution.
Here's a more old-school base R approach using ave and with; ave returns the group means already aligned with the rows:
fake$mean.x <- with(fake, ave(x, country, quinquenio))
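For completeness, a dplyr equivalent (a sketch, assuming dplyr is available), which adds the group mean as a new column directly:
library(dplyr)
fake <- fake %>%
  group_by(country, quinquenio) %>%
  mutate(mean.x = mean(x)) %>%  # group mean, repeated within each country-quinquennium
  ungroup()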
