I am new to R.
Currently I am working on raw data that contains thousands of codes. I need to extract the code and the numbers into separate columns.
I have the data as below:
df <- data.frame(num = 1:3, CD = c("1999HZ0BT", "1998HQ1ML", "1964MN3JK"))
The output I am hoping for:
df2 <- data.frame(num = 1:3, NUMBER = c(1999, 1998, 1964), VER = c(0,1,3), CD = c("HZBT", "HQML", "MNJK"))
Thank you for your help!
You could use regular expressions and Map to apply them consecutively.
res <- setNames(data.frame(df$num,
                           Map(function(x, y) gsub(x, y, df$CD),
                               # patterns: keep the first four digits, keep the
                               # digit between the letters, then drop all digits
                               c("(\\d{4}).*", ".*\\w(\\d)\\w.*", "\\d"),
                               c("\\1", "\\1", ""))),
                c("num", "NUMBER", "VER", "CD"))
res
# num NUMBER VER CD
# 1 1 1999 0 HZBT
# 2 2 1998 1 HQML
# 3 3 1964 3 MNJK
You can use extract from tidyr.
If you want to extract based on position:
library(tidyr)
df1 <- extract(df, CD, c('NUMBER', 'CD1', 'VER', 'CD2'), '(.{4})(..)(.)(..)')
Or if you want to extract based on characters and numbers:
df1 <- extract(df, CD, c('NUMBER', 'CD1', 'VER', 'CD2'),
'(\\d+)([A-Z]+)(\\d+)([A-Z]+)')
Both of the above return
df1
# num NUMBER CD1 VER CD2
#1 1 1999 HZ 0 BT
#2 2 1998 HQ 1 ML
#3 3 1964 MN 3 JK
You can combine CD1 and CD2 using unite:
unite(df1, CD, CD1, CD2, sep = "")
# num NUMBER CD VER
#1 1 1999 HZBT 0
#2 2 1998 HQML 1
#3 3 1964 MNJK 3
Use substr (this assumes every code has the fixed 4-2-1-2 character layout):
library(dplyr)
df %>%
mutate(NUMBER = substr(CD, 1, 4),
VER = substr(CD, 7, 7),
CD = paste0(substr(CD, 5, 6), substr(CD, 8, 9)))
num CD NUMBER VER
1 1 HZBT 1999 0
2 2 HQML 1998 1
3 3 MNJK 1964 3
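The columns come out in a different order than the requested df2 (num, NUMBER, VER, CD); if that matters, a final select reorders them, sketched here on the same pipe:
df %>%
  mutate(NUMBER = substr(CD, 1, 4),
         VER = substr(CD, 7, 7),
         CD = paste0(substr(CD, 5, 6), substr(CD, 8, 9))) %>%
  select(num, NUMBER, VER, CD)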
I am trying to find the first non-NA element of column w in each group and then construct a new variable which starts from the index of that non-NA element and follows this law of motion:
k_{it+1} = k_{it} + s_{it+1} - s_{it}.
i denotes the group and t is time. k_{i1} comes from the first non-NA element of column w.
Let's say I have the following dataset:
DF <- data.frame("time"=factor(c(1999,2000,2001,2002,1999,2000,2001,2002)),
"i"=factor(c("a","a","a","a","b","b","b","b")),
"w"=c(NA,1,2,4,4,NA,3,4), "s"= c(10,20,10,22,45,30,20,40))
And I want to add a new column to it:
DF$k <- c(NA, 1, -9, 3, 4, -11, -21, -1)
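To make the law of motion concrete, here is the arithmetic for group a spelled out (the first non-NA w supplies the starting value, the s differences do the rest):
# group a: w = (NA, 1, 2, 4), s = (10, 20, 10, 22)
# first non-NA w is at t = 2000, so k_2000 = 1
# k_2001 = k_2000 + s_2001 - s_2000 = 1 + 10 - 20 = -9
# k_2002 = k_2001 + s_2002 - s_2001 = -9 + 22 - 10 = 3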
We can write a function to calculate values using the formula:
library(dplyr)
apply_fun <- function(x, y) {
  # index of the first non-NA element of x
  inds <- which.max(!is.na(x))
  # pad with NA up to that index, then cumulatively add
  # the differences of y, starting from x[inds]
  c(rep(NA, inds - 1),
    Reduce(`+`, y[(inds + 1):length(y)] - y[inds:(length(y) - 1)],
           accumulate = TRUE, init = x[inds]))
}
and then apply it by group
DF %>%
group_by(i) %>%
mutate(k = apply_fun(w, s))
# time i w s k
# <fct> <fct> <dbl> <dbl> <dbl>
#1 1999 a NA 10 NA
#2 2000 a 1 20 1
#3 2001 a 2 10 -9
#4 2002 a 4 22 3
#5 1999 b 4 45 4
#6 2000 b NA 30 -11
#7 2001 b 3 20 -21
#8 2002 b 4 40 -1
The following code works; however, I had to use a for loop, which I don't think would be fast enough for a big dataset:
apply_fun <- function(x, y) {
  inds <- which.max(!is.na(x))
  vals <- rep(NA, length(x))
  vals[inds] <- x[inds]
  for (i in (inds + 1):length(x)) {
    vals[i] <- vals[i - 1] + y[i] - y[i - 1]
  }
  vals
}
DF %>%
group_by(i) %>%
mutate(k = apply_fun(w, s))
I need to reorganize my dataframe so that I can run Krippendorff's alpha. What function or rudimentary solution could do this?
Here's what my dataframe looks like:
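(The screenshot is missing here; going by the Note in the first answer below, the input presumably has this shape:)
# hypothetical reconstruction, taken from the Note in the answer below
DF <- data.frame(Code = rep(c(1011, 2011), each = 14),
                 Transcriber = rep(c("Anna", "David", "Susan", "Anna"), each = 7),
                 Errors = 1:28)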
That is, each participant has 7 rows (for 7 observations), and each observation was assessed by two different people. I'd like my dataframe to have three columns: Code, Transcriber1, Transcriber2. Under Transcriber1 would appear the error scores of the first transcriber, whatever the name is, and under Transcriber2, the scores for the second. That is, I'd like it to look like this:
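(This image is missing as well; judging from the answers, the target shape is:)
# Code Transcriber1 Transcriber2
# 1011            1            8
# 1011            2            9
# ... one row per observation, 14 rows in total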
Any thoughts? Any help will be very much appreciated!
Thanks community!
1) dplyr/tidyr: Assuming the input DF is as in the Note at the end, create a Transcriber column with values Transcriber1 and Transcriber2 plus a Seq column with sequence numbers, and finally use spread to convert to wide form.
library(dplyr)
library(tidyr)
DF %>%
  group_by(Code) %>%
  # number the transcribers 1, 2 in order of appearance within each Code
  mutate(Transcriber = as.numeric(factor(Transcriber, levels = unique(Transcriber)))) %>%
  # add = TRUE keeps Code as a group (spelled .add = TRUE in dplyr >= 1.0)
  group_by(Transcriber = paste0("Transcriber", Transcriber), add = TRUE) %>%
  mutate(Seq = seq_along(Errors)) %>%
  ungroup %>%
  spread(Transcriber, Errors) %>%
  select(-Seq)
giving:
# A tibble: 14 x 3
Code Transcriber1 Transcriber2
<dbl> <int> <int>
1 1011 1 8
2 1011 2 9
3 1011 3 10
4 1011 4 11
5 1011 5 12
6 1011 6 13
7 1011 7 14
8 2011 15 22
9 2011 16 23
10 2011 17 24
11 2011 18 25
12 2011 19 26
13 2011 20 27
14 2011 21 28
2) Base R: A solution using only base R would be:
make_factor <- function(x) factor(x, levels = unique(x))
DF2 <- transform(DF,
Transcriber = paste0("Transcriber", ave(as.numeric(Transcriber), Code, FUN = make_factor)),
Seq = ave(Errors, Code, Transcriber, FUN = seq_along))
r <- reshape(DF2, dir = "wide", idvar = c("Seq", "Code"), timevar = "Transcriber")[-2]
names(r) <- sub("Errors.", "", names(r), fixed = TRUE)
Note
The input in reproducible form is assumed to be:
DF <- data.frame(Code = rep(c(1011, 2011), each = 14),
Transcriber = rep(c("Anna", "David", "Susan", "Anna"), each = 7),
Errors = 1:28)
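With the wide table in hand, the Krippendorff's alpha the question asks about can be computed. A sketch assuming the irr package, whose kripp.alpha expects a raters-by-subjects matrix; wide stands for the result of either approach above:
library(irr)  # assumed package, not part of the original answer
# transpose so that raters are rows and observations are columns
m <- t(as.matrix(wide[, c("Transcriber1", "Transcriber2")]))
kripp.alpha(m, method = "interval")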
I have a question that I find hard to explain with an MRE and hard to
answer simply, mostly because I don't fully understand where the problem
lies myself. So that's my apology-for-being-vague preamble.
I have a tibble with many sample and reference measurements, for which I want
to do some linear interpolation for each sample. I do this now by taking out
all the reference measurements, rescaling them to sample measurements using
approx, and then patching it back in. But because I take it out first, I
cannot do it nicely in a group_by dplyr pipe. Right now I do it with a
really ugly workaround: I add empty (NA) columns to the sample tibble and
then fill them with a for loop.
So my question is really: how can I implement the approx part within groups
into the pipe, so that I can do everything within groups? I've experimented
with dplyr::do(), and ran into the vignette on "programming with dplyr", but
searching mostly gives me broom::augment and lm stuff that I think operates
differently... (e.g. see
Using approx() with groups in dplyr). This thread also seems promising: How do you use approx() inside of mutate_at()?
Somebody on IRC recommended using a conditional mutate with case_when, but I
don't fully understand where and how to apply it in this context yet.
I think the problem lies in the fact that I want to filter out part of the data
for the following mutate operations, but the mutate operations rely on the
grouped data that I just filtered out, if that makes any sense.
Here's an MWE:
library(tidyverse) # or just dplyr, tibble
# create fake data
data <- data.frame(
# in reality a dttm with the measurement time
timestamp = c(rep("a", 7), rep("b", 7), rep("c", 7)),
# measurement cycle, normally 40 for sample, 41 for reference
cycle = rep(c(rep(1:3, 2), 4), 3),
# whether the measurement is a reference or a sample
isref = rep(c(rep(FALSE, 3), rep(TRUE, 4)), 3),
# measurement intensity for mass 44
r44 = c(28:26, 30:26, 36, 33, 31, 38, 34, 33, 31, 18, 16, 15, 19, 18, 17)) %>%
# measurement intensity for mass 45, normally also masses up to mass 49
mutate(r45 = r44 + rnorm(21, 20))
# of course this could be tidied up to "intensity" with a new column "mass"
# (44, 45, ...), but that would make making comparisons even harder...
# overview plot
data %>%
ggplot(aes(x = cycle, y = r44, colour = isref)) +
geom_line() +
geom_line(aes(y = r45), linetype = 2) +
geom_point() +
geom_point(aes(y = r45), shape = 1) +
facet_grid(~ timestamp)
# what I would like to do
data %>%
group_by(timestamp) %>%
do(target_cycle = approx(x = data %>% filter(isref) %>% pull(r44),
y = data %>% filter(isref) %>% pull(cycle),
xout = data %>% filter(!isref) %>% pull(r44))$y) %>%
unnest()
# immediately append this new column to the original dataframe for all the
# samples (!isref) and then apply another approx for those values.
# here's my current attempt for one of the timestamps
matchref <- function(dat) {
# split the data into sample gas and reference gas
ref <- filter(dat, isref)
smp <- filter(dat, !isref)
# calculate the "target cycle", the points at which the reference intensity
# 44 matches the sample intensity 44 with linear interpolation
target_cycle <- approx(x = ref$r44,
y = ref$cycle, xout = smp$r44)
# append the target cycle to the sample gas
smp <- smp %>%
group_by(timestamp) %>%
mutate(target = target_cycle$y)
# linearly interpolate each reference gas to the target cycle
ref <- ref %>%
group_by(timestamp) %>%
# this is needed because the reference has one more cycle
mutate(target = c(target_cycle$y, NA)) %>%
# filter out all the failed ones (no interpolation possible)
filter(!is.na(target)) %>%
# calculate interpolated value based on r44 interpolation (i.e., don't
# actually interpolate this value but shift it based on the 44
# interpolation)
mutate(r44 = approx(x = cycle, y = r44, xout = target)$y,
r45 = approx(x = cycle, y = r45, xout = target)$y) %>%
select(timestamp, target, r44:r45)
# add new reference gas intensities to the correct sample gasses by the target cycle
left_join(smp, ref, by = c("timestamp", "target"))
}
matchref(data)
# and because now "target" must be length 3 (the group size) or one, not 9
# I have to create this ugly for-loop
# for which I create a copy of data that has the new columns to be created
mr <- data %>%
# filter the sample gasses (since we convert ref to sample)
filter(!isref) %>%
# add empty new columns
mutate(target = NA, r44 = NA, r45 = NA)
# apply matchref for each group timestamp
for (grp in unique(data$timestamp)) {
mr[mr$timestamp == grp, ] <- matchref(data %>% filter(timestamp == grp))
}
Here's one approach that spreads the references and samples to new columns. I drop r45 for simplicity in this example.
data %>%
select(-r45) %>%
mutate(isref = ifelse(isref, "REF", "SAMP")) %>%
spread(isref, r44) %>%
group_by(timestamp) %>%
mutate(target_cycle = approx(x = REF, y = cycle, xout = SAMP)$y) %>%
ungroup
gives,
# timestamp cycle REF SAMP target_cycle
# <fct> <dbl> <dbl> <dbl> <dbl>
# 1 a 1 30 28 3
# 2 a 2 29 27 4
# 3 a 3 28 26 NA
# 4 a 4 27 NA NA
# 5 b 1 31 26 NA
# 6 b 2 38 36 2.5
# 7 b 3 34 33 4
# 8 b 4 33 NA NA
# 9 c 1 15 31 NA
# 10 c 2 19 18 3
# 11 c 3 18 16 2.5
# 12 c 4 17 NA NA
Edit: to retain r45 you can use a gather-unite-spread approach like this:
data %>%
mutate(isref = ifelse(isref, "REF", "SAMP")) %>%
gather(r, value, r44:r45) %>%
unite(ru, r, isref, sep = "_") %>%
spread(ru, value) %>%
group_by(timestamp) %>%
mutate(target_cycle_r44 = approx(x = r44_REF, y = cycle, xout = r44_SAMP)$y) %>%
ungroup
giving,
# # A tibble: 12 x 7
# timestamp cycle r44_REF r44_SAMP r45_REF r45_SAMP target_cycle_r44
# <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 a 1 30 28 49.5 47.2 3
# 2 a 2 29 27 48.8 48.7 4
# 3 a 3 28 26 47.2 46.8 NA
# 4 a 4 27 NA 47.9 NA NA
# 5 b 1 31 26 51.4 45.7 NA
# 6 b 2 38 36 57.5 55.9 2.5
# 7 b 3 34 33 54.3 52.4 4
# 8 b 4 33 NA 52.0 NA NA
# 9 c 1 15 31 36.0 51.7 NA
# 10 c 2 19 18 39.1 37.9 3
# 11 c 3 18 16 39.2 35.3 2.5
# 12 c 4 17 NA 39.0 NA NA
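If you also want the shifted reference intensities that matchref computed, the same grouped mutate can carry them; a sketch reusing the approx-shift idea from the question (the r44_shift/r45_shift names are mine, not from the original answer):
data %>%
  mutate(isref = ifelse(isref, "REF", "SAMP")) %>%
  gather(r, value, r44:r45) %>%
  unite(ru, r, isref, sep = "_") %>%
  spread(ru, value) %>%
  group_by(timestamp) %>%
  mutate(target_cycle_r44 = approx(x = r44_REF, y = cycle, xout = r44_SAMP)$y,
         # shift the reference intensities to the target cycles
         r44_shift = approx(x = cycle, y = r44_REF, xout = target_cycle_r44)$y,
         r45_shift = approx(x = cycle, y = r45_REF, xout = target_cycle_r44)$y) %>%
  ungroup()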
I'd like to expand observations from single row-per-id to multiple rows-per-id based on a given time interval:
> dput(df)
structure(list(id = c(123, 456, 789), gender = c(0, 1, 1), yr.start = c(2005,
2010, 2000), yr.last = c(2007, 2012, 2000)), .Names = c("id",
"gender", "yr.start", "yr.last"), class = c("tbl_df", "tbl",
"data.frame"), row.names = c(NA, -3L))
> df
# A tibble: 3 x 4
id gender yr.start yr.last
<dbl> <dbl> <dbl> <dbl>
1 123 0 2005 2007
2 456 1 2010 2012
3 789 1 2000 2000
I want to get id expanded into one row per year:
> dput(df_out)
structure(list(id = c(123, 123, 123, 456, 456, 456, 789), gender = c(0,
0, 0, 1, 1, 1, 1), yr = c(2005, 2006, 2007, 2010, 2011, 2012,
2000)), .Names = c("id", "gender", "yr"), class = c("tbl_df",
"tbl", "data.frame"), row.names = c(NA, -7L))
> df_out
# A tibble: 7 x 3
id gender yr
<dbl> <dbl> <dbl>
1 123 0 2005
2 123 0 2006
3 123 0 2007
4 456 1 2010
5 456 1 2011
6 456 1 2012
7 789 1 2000
I know how to melt/reshape, but I'm not sure how I can expand the years.
Thanks.
Here is a base R method.
# expand years to a list
yearList <- mapply(":", df$yr.start, df$yr.last)
Now, use this list to calculate the number of rows to repeat for each ID (the second argument of rep) and then append it as a vector (transformed from list with unlist) using cbind.
# get data.frame
cbind(df[rep(seq_along(df$id), lengths(yearList)), c("id", "gender")], yr=unlist(yearList))
id gender yr
1 123 0 2005
1.1 123 0 2006
1.2 123 0 2007
2 456 1 2010
2.1 456 1 2011
2.2 456 1 2012
3 789 1 2000
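If there are more id-level columns than just gender (the OP mentions elsewhere that the production data may carry 2 to 10 non-varying columns), the same idea extends by repeating every column except the year bounds; a sketch under that assumption:
# repeat all columns except the year bounds, then append the years
keep <- setdiff(names(df), c("yr.start", "yr.last"))
cbind(df[rep(seq_len(nrow(df)), lengths(yearList)), keep], yr = unlist(yearList))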
You could gather into long format and then fill in the missing rows via complete using tidyr.
library(dplyr)
library(tidyr)
df %>%
gather(group, yr, starts_with("yr") ) %>%
group_by(id, gender) %>%
complete(yr = full_seq(yr, period = 1) )
You can use select to get rid of the extra column.
df %>%
gather(group, yr, starts_with("yr") ) %>%
select(-group) %>%
group_by(id, gender) %>%
complete(yr = full_seq(yr, period = 1) )
# A tibble: 8 x 3
# Groups: id, gender [3]
id gender yr
<dbl> <dbl> <dbl>
1 123 0 2005
2 123 0 2006
3 123 0 2007
4 456 1 2010
5 456 1 2011
6 456 1 2012
7 789 1 2000
8 789 1 2000
(Row 8 duplicates row 7 because yr.start equals yr.last for id 789; dplyr::distinct() would drop it.)
Here is a tidyverse solution:
library(tidyverse)
df %>%
group_by(id, gender) %>%
nest() %>%
mutate(data = map(data, ~ seq(.x$yr.start, .x$yr.last))) %>%
unnest() %>%
rename(year = data)
# A tibble: 7 x 3
id gender year
<dbl> <dbl> <int>
1 123 0 2005
2 123 0 2006
3 123 0 2007
4 456 1 2010
5 456 1 2011
6 456 1 2012
7 789 1 2000
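A hedged compatibility note: tidyr 1.0 deprecated calling unnest() without naming the column, so on newer versions the same pipe would read:
df %>%
  group_by(id, gender) %>%
  nest() %>%
  mutate(data = map(data, ~ seq(.x$yr.start, .x$yr.last))) %>%
  unnest(data) %>%
  rename(year = data)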
As the OP mentions that his production data set has more than 1 M rows and he is benchmarking the different solutions, it might be worthwhile to try a data.table version:
library(data.table) # CRAN version 1.10.4 used
data.table(DF)[, .(yr = yr.start:yr.last), by = .(id, gender)]
which returns
id gender yr
1: 123 0 2005
2: 123 0 2006
3: 123 0 2007
4: 456 1 2010
5: 456 1 2011
6: 456 1 2012
7: 789 1 2000
If there are more non-varying columns than just gender it might be more efficient to do a join rather than including all those columns in the grouping parameter by =:
data.table(DF)[DF[, .(yr = yr.start:yr.last), by = id], on = "id"]
id gender yr.start yr.last yr
1: 123 0 2005 2007 2005
2: 123 0 2005 2007 2006
3: 123 0 2005 2007 2007
4: 456 1 2010 2012 2010
5: 456 1 2010 2012 2011
6: 456 1 2010 2012 2012
7: 789 1 2000 2000 2000
Note that both approaches assume that id is unique in the input data.
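If id could repeat, a temporary row counter keeps the expansions apart; a sketch (rid is a hypothetical helper column, not part of the original answer):
DT2 <- data.table(DF)[, rid := .I]  # one index per input row
DT2[, .(yr = yr.start:yr.last), by = .(rid, id, gender)][, rid := NULL][]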
Benchmarking
The OP has noted that he is surprised that the above data.table solution is five times slower than lmo's base R solution, apparently with the OP's production data set of more than 1M rows.
Also, the question has attracted 5 different answers plus additional suggestions. So, it's worthwhile to compare the solutions in terms of processing speed.
Data
As the production data set isn't available, and problem size, among other factors like the structure of the data, matters for benchmarking, sample data sets are created.
# parameters
n_rows <- 1E2
yr_range <- 10L
start_yr <- seq(2000L, length.out = 10L, by = 1L)
# create sample data set
set.seed(123L)
library(data.table)
DT <- data.table(id = seq_len(n_rows),
gender = sample(0:1, n_rows, replace = TRUE),
yr.start = sample(start_yr, n_rows, replace = TRUE))
DT[, yr.last := yr.start + sample(0:yr_range, n_rows, replace = TRUE)]
DF <- as.data.frame(DT)
str(DT)
Classes ‘data.table’ and 'data.frame': 100 obs. of 4 variables:
$ id : int 1 2 3 4 5 6 7 8 9 10 ...
$ gender : int 0 1 0 1 1 0 1 1 1 0 ...
$ yr.start: int 2005 2003 2004 2009 2004 2008 2009 2006 2004 2001 ...
$ yr.last : int 2007 2013 2010 2014 2008 2017 2013 2009 2005 2002 ...
- attr(*, ".internal.selfref")=<externalptr>
For the first run, 100 rows are created, the start year can vary between 2000 and 2009, and the span of years an individual id can cover is between 0 and 10 years. Thus, the result set should be expected to have approximately 100 * (10 / 2 + 1) = 600 rows.
Also, only one additional column gender is included, although the OP has said that the production data may have 2 to 10 non-varying columns.
Code
library(magrittr)
bm <- microbenchmark::microbenchmark(
lmo = {
yearList <- mapply(":", DF$yr.start, DF$yr.last)
res_lmo <- cbind(DF[rep(seq_along(DF$id), lengths(yearList)), c("id", "gender")],
yr=unlist(yearList))
},
hao = {
res_hao <- DF %>%
dplyr::group_by(id, gender) %>%
tidyr::nest() %>%
dplyr::mutate(data = purrr::map(data, ~ seq(.x$yr.start, .x$yr.last))) %>%
tidyr::unnest() %>%
dplyr::rename(yr = data)
},
aosmith = {
res_aosmith <- DF %>%
tidyr::gather(group, yr, dplyr::starts_with("yr") ) %>%
dplyr::select(-group) %>%
dplyr::group_by(id, gender) %>%
tidyr::complete(yr = tidyr::full_seq(yr, period = 1) )
},
jason = {
res_jason <- DF %>%
dplyr::group_by(id, gender) %>%
dplyr::do(data.frame(yr=.$yr.start:.$yr.last))
},
uwe1 = {
res_uwe1 <- DT[, .(yr = yr.start:yr.last), by = .(id, gender)]
},
uwe2 = {
res_uwe2 <- DT[DT[, .(yr = yr.start:yr.last), by = id], on = "id"
][, c("yr.start", "yr.last") := NULL]
},
frank1 = {
res_frank1 <- DT[rep(1:.N, yr.last - yr.start + 1L),
.(id, gender, yr = DT[, unlist(mapply(":", yr.start, yr.last))])]
},
frank2 = {
res_frank2 <- DT[, {
m = mapply(":", yr.start, yr.last); c(.SD[rep(.I, lengths(m))], .(yr = unlist(m)))},
.SDcols=id:gender]
},
times = 3L
)
Note that references to tidyverse functions are made explicit in order to avoid name conflicts from a cluttered namespace.
First run
Unit: microseconds
expr min lq mean median uq max neval
lmo 655.860 692.6740 968.749 729.488 1125.193 1520.899 3
hao 40610.776 41484.1220 41950.184 42357.468 42619.887 42882.307 3
aosmith 319715.984 336006.9255 371176.437 352297.867 396906.664 441515.461 3
jason 77525.784 78197.8795 78697.798 78869.975 79283.804 79697.634 3
uwe1 834.079 870.1375 894.869 906.196 925.264 944.332 3
uwe2 1796.910 1810.8810 1880.482 1824.852 1922.268 2019.684 3
frank1 981.712 1057.4170 1086.680 1133.122 1139.164 1145.205 3
frank2 994.172 1003.6115 1081.016 1013.051 1124.438 1235.825 3
For the given problem size of 100 rows, the timings clearly indicate that the dplyr/tidyr solutions are orders of magnitude slower than the base R and data.table solutions.
The results are essentially consistent:
all.equal(as.data.table(res_lmo), res_uwe1)
all.equal(res_hao, res_uwe1)
all.equal(res_jason, res_uwe1)
all.equal(res_uwe2, res_uwe1)
all.equal(res_frank1, res_uwe1)
all.equal(res_frank2, res_uwe1)
return TRUE, except for all.equal(res_aosmith, res_uwe1), which returns
[1] "Incompatible type for column yr: x numeric, y integer"
Second run
Due to the long execution times, the tidyverse solutions are skipped when benchmarking larger problem sizes.
With the modified parameters
n_rows <- 1E4
yr_range <- 100L
the result set is expected to consist of about 500,000 rows.
Unit: milliseconds
expr min lq mean median uq max neval
lmo 425.026101 447.716671 455.85324 470.40724 471.26681 472.12637 3
uwe1 9.555455 9.796163 10.05562 10.03687 10.30571 10.57455 3
uwe2 18.711805 18.992726 19.40454 19.27365 19.75091 20.22817 3
frank1 22.639031 23.129131 23.58424 23.61923 24.05685 24.49447 3
frank2 13.989016 14.124945 14.47987 14.26088 14.72530 15.18973 3
For the given problem size and structure, the data.table solutions are the fastest, while the base R approach is an order of magnitude slower. The most concise solution, uwe1, is also the fastest here.
Note that the results depend on the structure of the data, in particular the parameters n_rows and yr_range and the number of non-varying columns. If there are more of those columns than just gender, the timings might look different.
The benchmark results contradict the OP's observation on execution speed, which needs to be investigated further.
Another way is using do in dplyr, but it's slower than the base R method.
df %>%
group_by(id, gender) %>%
do(data.frame(yr=.$yr.start:.$yr.last))
# # A tibble: 7 x 3
# # Groups: id, gender [3]
# id gender yr
# <dbl> <dbl> <int>
# 1 123 0 2005
# 2 123 0 2006
# 3 123 0 2007
# 4 456 1 2010
# 5 456 1 2011
# 6 456 1 2012
# 7 789 1 2000
I have this data
> dff_all[1:10,c(2,3)]
cet_hour_of_registration country_id
1 20 SE
2 12 SE
3 11 SE
4 15 GB
5 12 SE
6 14 BR
7 23 MX
8 13 SE
9 1 BR
10 9 SE
and I want to create a variable hour with the local time. The conversions from CET to local time are as follows:
FI: +1, MX: -7, UK: -1, BR: -5.
I tried to do it with a nested IF but did not manage it.
# Create a lookup table
country_id <- c("FI", "MX", "UK", "BR", "SE")
time_diff <- c(1, -7, -1, -5, 0)
df <- data.frame(country_id, time_diff)
# This is a substitute data frame for your data.
hour_reg <- c(20, 12, 11, 15, 5)
dff_all <- data.frame(country_id, hour_reg)
# Join the tables with a dplyr function (double-check the join type for your needs)
library(dplyr)
new_table <- left_join(dff_all, df, by = "country_id")
# Make the new column: the offset is added to CET to get local time
mutate(new_table, hour = hour_reg + time_diff)
# output
  country_id hour_reg time_diff hour
1         FI       20         1   21
2         MX       12        -7    5
3         UK       11        -1   10
4         BR       15        -5   10
5         SE        5         0    5
Base package:
# A variation of the example provided by vinchinzu
# Original table
country_id <- c("FI", "MX", "UK", "BR", "SE", "SP", "RE")
hour_reg <- c(20, 12, 11, 15, 5, 3, 7)
df1 <- data.frame(country_id, hour_reg)
# Lookup table
country_id <- c("FI", "MX", "UK", "BR", "SE")
time_diff <- c(1, -7, -1, -5, 0)
df2 <- data.frame(country_id, time_diff)
# We merge them and calculate a new column (offset added to CET)
full <- merge(df1, df2, by = "country_id", all.x = TRUE)
full$hour <- full$hour_reg + full$time_diff
full
Output, in case we do not have that country in the lookup table, we will get NA:
  country_id hour_reg time_diff hour
1         BR       15        -5   10
2         FI       20         1   21
3         MX       12        -7    5
4         RE        7        NA   NA
5         SE        5         0    5
6         SP        3        NA   NA
7         UK       11        -1   10
If we would like to keep only the rows without NA:
full[complete.cases(full), ]
To replace NAs with zeros:
full[is.na(full)] <- 0
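One caveat worth noting: the sum can leave the 0-23 range (e.g. BR at 1 CET gives -4), so a modulo wrap keeps the result on the clock; a sketch on the merged table above:
# wrap hours below 0 or above 23 back onto the 24-hour clock
full$hour <- (full$hour_reg + full$time_diff) %% 24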