Losing time value in datetime stamp when importing data from Access into R

I am importing data with a DateTime stamp from Access to R and continue to 'lose' my time value. I had a similar issue a while back (posted right here) and had to convert the times to a number before importing. While this was not too difficult, it is a step I would like to avoid. This post is also helpful and suggests the reason might be the large number of records. I am currently trying to import over 110k records.
As an FYI this post is very helpful for info on dealing with times in R, but did not provide a specific solution for this issue.
My data in Access (2013) looks like this.
As you can see I have a UTC and local time, both of which have the date and time in the same field.
I used the following code to read in the table and look at the head.
library(RODBC)
DataConnect <- odbcConnect("MstrMUP")
Temp <- sqlFetch(DataConnect, "TempData_3Nov2014")
head(Temp)
IndID UTCDateTime LocalDateTime Temp
1 MTG_030_A 2013-02-08 2013-02-08 25
2 MTG_030_A 2013-02-08 2013-02-08 26
3 MTG_030_A 2013-02-08 2013-02-08 31
4 MTG_030_A 2013-02-08 2013-02-08 29
5 MTG_030_A 2013-02-09 2013-02-08 39
6 MTG_030_A 2013-02-09 2013-02-08 44
As you can see, the time portion of the DateTime stamp is missing, and I cannot seem to locate it using str or as.numeric, both of which suggest the time value is not stored (at least that is how I read it).
> str(Temp)
'data.frame': 110382 obs. of 4 variables:
$ IndID : Factor w/ 17 levels "BHS_034_A","BHS_035_A",..: 13 13 13 13 13 13 13 13 13 13 ...
$ UTCDateTime : POSIXct, format: "2013-02-08" "2013-02-08" ...
$ LocalDateTime: POSIXct, format: "2013-02-08" "2013-02-08" ...
$ Temp : int 25 26 31 29 39 44 42 49 42 38 ...
> head(as.numeric(Temp$LocalDateTime))
[1] 1360306800 1360306800 1360306800 1360306800 1360306800 1360306800
Because all numeric values are the same, they must all be the same date, and do not include time. Correct...?
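A quick sanity check of that reading (a one-line sketch): converting the epoch value back to a datetime shows it is exactly midnight local time.
# 1360306800 s after 1970-01-01 00:00 UTC is 2013-02-08 00:00:00 MST,
# so the time-of-day component really is zero
as.POSIXct(1360306800, origin = "1970-01-01", tz = "MST")
[1] "2013-02-08 MST"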
The Question:
Is this an R issue or an Access issue? Any suggestions on how to import 110k rows of data from Access into R without losing the time portion of a DateTime stamp would be appreciated.
I am sure there is a better method than my earlier work-around.
Oh, I almost forgot: I am running the "Sock it to Me" version of R (3.1.1).
EDIT/ADDITION: In response to @Richard Scriven's thoughts on unclass.
Unfortunately, no, there is no sec, min, or hour value. All are 0.
> temp <- Temp[1:5,]
> unclass(as.POSIXlt(temp$UTCDateTime))
$sec
[1] 0 0 0 0 0
$min
[1] 0 0 0 0 0
$hour
[1] 0 0 0 0 0
$mday
[1] 8 8 8 8 9
$mon
[1] 1 1 1 1 1
$year
[1] 113 113 113 113 113
$wday
[1] 5 5 5 5 6
$yday
[1] 38 38 38 38 39
$isdst
[1] 0 0 0 0 0
$zone
[1] "MST" "MST" "MST" "MST" "MST"
$gmtoff
[1] -25200 -25200 -25200 -25200 -25200
attr(,"tzone")
[1] "" "MST" "MDT"
Thanks in advance.
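For reference, one workaround that avoids pre-converting times to numbers (a sketch, untested against this database; the DSN and table names are from the question, and the local time zone is an assumption based on the MST/MDT output above) is to have Access serialize the timestamps to text, so the ODBC layer has nothing to truncate, and then parse them in R:
library(RODBC)

DataConnect <- odbcConnect("MstrMUP")

# Access's Format() renders the full timestamp as text ('nn' = minutes)
qry <- "SELECT IndID,
               Format(UTCDateTime,   'yyyy-mm-dd hh:nn:ss') AS UTCDateTime,
               Format(LocalDateTime, 'yyyy-mm-dd hh:nn:ss') AS LocalDateTime,
               Temp
        FROM TempData_3Nov2014"
Temp <- sqlQuery(DataConnect, qry, as.is = TRUE)  # as.is keeps every column character

# Parse back to the proper types; 'America/Denver' is an assumption (MST/MDT)
Temp$UTCDateTime   <- as.POSIXct(Temp$UTCDateTime,   tz = "UTC")
Temp$LocalDateTime <- as.POSIXct(Temp$LocalDateTime, tz = "America/Denver")
Temp$Temp          <- as.integer(Temp$Temp)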

Related

Merge two data frames in R by the closest earlier value (rounding down, not up)

I have two data frames:
df_bestquotes
df_transactions
df_transactions:
day time vol price buy ask bid
1 43688,08 100 195,8 1 195,8 195,74
1 56357,34 20 192,87 1 192,87 192,86
1 57576,14 14 192,48 -1 192,48 192,46
2 50468,29 3 193,83 1 193,86 193,77
2 56107,54 11 194,17 -1 194,2 194,16
7 42549,66 100 188,81 -1 188,85 188,78
7 42724,38 200 188,62 -1 188,66 188,61
7 48924,66 5 189,59 -1 189,62 189,59
8 48950,14 52 187,66 -1 187,7 187,66
9 36242,86 89 186,61 1 186,62 186,56
9 53910,46 1 189,81 -1 189,87 189,81
10 47041,94 15 187,87 -1 187,88 187,86
13 34380,73 87 187,29 -1 187,42 187,27
13 36037,18 100 188,94 1 188,95 188,94
14 46644,64 100 189,29 -1 189,34 189,29
14 57571,12 52 190,03 1 190,03 190
15 36418,71 45 192,07 1 192,07 192,04
15 37223,77 100 191,09 -1 191,07 191,06
17 37245,59 100 186,45 -1 186,47 186,45
23 34200,39 50 189,29 -1 189,29 189,27
24 40294,73 60 193,52 -1 193,54 193,5
29 52813,68 5 202,99 -1 203,01 202,99
29 55279,13 93 203,97 -1 203,98 203,9
30 51356,91 68 204,41 -1 204,45 204,4
30 53530,24 40 204,14 -1 204,18 204,14
df_bestquotes:
day time best_ask best_bid
1 51384,613 31,78 31,75
1 56593,74 31,6 31,55
3 40568,217 31,36 31,32
7 39169,237 31,34 31,28
8 44715,713 31,2 31,17
8 53730,707 31,24 31,19
8 55851,75 31,17 31,14
10 49376,267 31,06 30,99
16 48610,483 30,75 30,66
16 57360,917 30,66 30,64
17 53130,717 30,39 30,32
20 46353,133 30,72 30,63
23 46429,67 29,7 29,64
24 37627,727 29,81 29,63
24 46354,647 29,92 29,77
24 53863,69 30,04 29,93
24 53889,923 30,03 29,95
24 59047,223 29,99 29,2
28 39086,407 30,87 30,83
28 41828,703 30,87 30,8
28 50489,367 30,99 30,87
29 54264,467 30,97 30,85
30 34365,95 31,21 30,99
30 39844,357 31,06 31
30 57550,523 31,18 31,15
For each record of df_transactions, given its day and time, I need to find the best_ask and best_bid quoted immediately before that moment, and add this information to df_transactions.
df_joined: df_transactions + df_bestquotes
day time vol price buy ask bid best_ask best_bid
1 43688,08 100 195,8 1 195,8 195,74
1 56357,34 20 192,87 1 192,87 192,86
1 57576,14 14 192,48 -1 192,48 192,46
2 50468,29 3 193,83 1 193,86 193,77
2 56107,54 11 194,17 -1 194,2 194,16
7 42549,66 100 188,81 -1 188,85 188,78
7 42724,38 200 188,62 -1 188,66 188,61
7 48924,66 5 189,59 -1 189,62 189,59
8 48950,14 52 187,66 -1 187,7 187,66
9 36242,86 89 186,61 1 186,62 186,56
9 53910,46 1 189,81 -1 189,87 189,81
10 47041,94 15 187,87 -1 187,88 187,86
13 34380,73 87 187,29 -1 187,42 187,27
13 36037,18 100 188,94 1 188,95 188,94
14 46644,64 100 189,29 -1 189,34 189,29
14 57571,12 52 190,03 1 190,03 190
15 36418,71 45 192,07 1 192,07 192,04
15 37223,77 100 191,09 -1 191,07 191,06
17 37245,59 100 186,45 -1 186,47 186,45
23 34200,39 50 189,29 -1 189,29 189,27
24 40294,73 60 193,52 -1 193,54 193,5
29 52813,68 5 202,99 -1 203,01 202,99
29 55279,13 93 203,97 -1 203,98 203,9
30 51356,91 68 204,41 -1 204,45 204,4
30 53530,24 40 204,14 -1 204,18 204,14
I have tried the following code, but it doesn't work:
library(data.table)
df_joined = df_bestquotes[df_transactions, on="time", roll = "nearest"]
Here are the real files with many more records; the ones above are an example of only 25 records.
df_transactions_original
df_bestquotes_original
And my code in R:
matching.R
Any suggestions on how to get it? Thanks a lot, guys.
The attempt you made uses data.table syntax, but you don't attach data.table. Have you run library(data.table) first?
I think it should rather be:
df_joined = df_bestquotes[df_transactions, on=.(day, time), roll = TRUE]
But I cannot test without the objects. Does it work? Note that roll="nearest" doesn't give you the previous best quotes but the nearest ones.
EDIT: Thanks for the objects. I checked, and this works for me:
library(data.table)
dfb <- fread("df_bestquotes.csv", dec = ",")
dft <- fread("df_transactions.csv", dec = ",")
dfb[, c("day2", "time2") := .(day, time)]  # duplicated to keep track of the best-quote days
joinedDf <- dfb[dft, on = .(day, time), roll = +Inf]
It puts NA when there are no best quotes for the day. If you want to roll across days, I suggest you create a unique measure of time. I don't know exactly what time is; assuming its units are seconds:
dfb[, uniqueTime := day + time/(60*60*24)]
dft[, uniqueTime := day + time/(60*60*24)]
joinedDf <- dfb[dft, on = .(uniqueTime), roll = +Inf]
This works even if time is not in seconds; only the ranking matters in this case.
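A tiny self-contained check of the roll direction (made-up numbers, not the files above): with roll = +Inf, each transaction picks up the last quote at or before its own time.
library(data.table)

dfb <- data.table(day = 1, time = c(100, 200), best_ask = c(31.78, 31.60))
dft <- data.table(day = 1, time = 150)

dfb[dft, on = .(day, time), roll = +Inf]
#    day time best_ask
# 1:   1  150    31.78   # i.e. the quote from time 100, rolled forward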
Good morning @samuelallain, yes, I have used library(data.table) before.
I've edited it into the main post.
I have tried your solution and RStudio returns the following error:
library(data.table)
df_joined = df_bestquotes[df_transactions, on=.("day", "time"), roll = TRUE]
Error in [.data.frame(df_bestquotes, df_transactions, on = .(day, time), :
unused arguments (on = .("day", "time"), roll = TRUE)
Thank you.
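The error comes from [.data.frame, which suggests that df_bestquotes is still a plain data.frame at that point, so the data.table arguments on and roll are not understood; the on clause should also use unquoted column names. A minimal sketch of the fix, assuming the objects from the question:
library(data.table)

# Convert in place so the data.table join method is dispatched
setDT(df_bestquotes)
setDT(df_transactions)

df_joined <- df_bestquotes[df_transactions, on = .(day, time), roll = +Inf]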

Observations becoming NA when ordering levels of factors in R with ordered()

Hi, I have a longitudinal data frame p that contains 5 variables and looks like this:
> head(p)
date.1 County.x providers beds price
1 Jan/2011 essex 258 5545 251593.4
2 Jan/2011 greater manchester 108 3259 152987.7
3 Jan/2011 kent 301 7191 231985.7
4 Jan/2011 tyne and wear 103 2649 143196.6
5 Jan/2011 west midlands 262 6819 149323.9
6 Jan/2012 essex 2 27 231398.5
The structure of my variables is the following:
'data.frame': 259 obs. of 5 variables:
$ date.1 : Factor w/ 66 levels "Apr/2011","Apr/2012",..: 23 23 23 23 23 24 24 24 25 25 ...
$ County.x : Factor w/ 73 levels "avon","bedfordshire",..: 22 24 32 65 67 22 32 67 22 32 ...
$ providers: int 258 108 301 103 262 2 9 2 1 1 ...
$ beds : int 5545 3259 7191 2649 6819 27 185 24 70 13 ...
$ price : num 251593 152988 231986 143197 149324 ...
I want to order date.1 chronologically. Prior to applying ordered(), this variable does not contain any NA observations.
> summary(is.na(p$date.1))
Mode FALSE NA's
logical 259 0
However, once I apply my function for ordering the levels corresponding to date.1:
p$date.1 = with(p, ordered(date.1, levels = c("Jun/2010", "Jul/2010",
"Aug/2010", "Sep/2010", "Oct/2010", "Nov/2010", "Dec/2010", "Jan/2011", "Feb/2011",
"Mar/2011","Apr/2011", "May/2011", "Jun/2011", "Jul/2011", "Aug/2011", "Sep/2011",
"Oct/2011", "Nov/2011", "Dec/2011" ,"Jan/2012", "Feb/2012" ,"Mar/2012" ,"Apr/2012",
"May/2012", "Jun/2012", "Jul/2012", "Aug/2012", "Sep/2012", "Oct/2012", "Nov/2012",
"Dec/2012", "Jan/2013", "Feb/2013", "Mar/2013", "Apr/2013", "May/2013",
"Jun/2013", "Jul/2013", "Aug/2013", "Sep/2013", "Oct/2013", "Nov/2013",
"Dec/2013", "Jan/2014",
"Feb/2014", "Mar/2014", "Apr/2014", "May/2014", "Jun/2014", "Jul/2014" ,"Aug/2014",
"Sep/2014", "Oct/2014", "Nov/2014", "Dec/2014", "Jan/2015", "Feb/2015", "Mar/2015",
"Apr/2015","May/2015", "Jun/2015" ,"Jul/2015" ,"Aug/2015", "Sep/2015", "Oct/2015",
"Nov/2015")))
It seems I lose some observations.
> summary(is.na(p$date.1))
Mode FALSE TRUE NA's
logical 250 9 0
Has anyone come across this problem when using ordered()? Or, alternatively, is there any other possible solution to order my observations chronologically?
It is possible that some of your p$date.1 values don't match any of the levels. Try this ord.mon as the levels.
ord.mon <- do.call(paste, c(expand.grid(month.abb, 2010:2015), sep = "/"))
Then, you can try this to see if there's any mismatch between the two.
p$date.1 %in% ord.mon
Lastly, you can also sort the data frame after transforming the date.1 column into Date (note that you have to prepend an actual day of the month beforehand):
p <- p[order(as.Date(paste0("01/", p$date.1), "%d/%b/%Y")), ]
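Putting the pieces together (a sketch, assuming the data frame p from the question):
# Full chronological level set; Jun/2010 .. Nov/2015 lies inside 2010:2015
ord.mon <- do.call(paste, c(expand.grid(month.abb, 2010:2015), sep = "/"))

# Values absent from the levels would silently become NA, so check first
setdiff(levels(p$date.1), ord.mon)  # should be character(0)

p$date.1 <- ordered(p$date.1, levels = ord.mon)
anyNA(p$date.1)                     # should now be FALSE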

R - Data Frame is a list of columns?

Question
Is a data frame in R a list of columns (a list being, in my understanding, a sequence of objects)?
What was the design decision behind making a data frame a column-oriented (not row-oriented) structure?
Any reference to a related design document or article on the data structure design would be appreciated.
I am just used to row-as-a-unit/record and would like to know why it is column-oriented. Or, if I have misunderstood something, kindly point it out.
Background
I had thought a data frame was a sequence of rows, each such as (Ozone, Solar.R, Wind, Temp, Month, Day).
> c ## data frame created from read.csv()
Ozone Solar.R Wind Temp Month Day
1 41 190 7.4 67 5 1
2 36 118 8.0 72 5 2
3 12 149 12.6 74 5 3
4 18 313 11.5 62 5 4
7 23 299 8.6 65 5 7
8 19 99 13.8 59 5 8
> typeof(c)
[1] "list"
However, when lapply() is applied to c to show each list element, each element turned out to be a column.
> lapply(c, function(arg){ return(arg) })
$Ozone
[1] 41 36 12 18 23 19
$Solar.R
[1] 190 118 149 313 299 99
$Wind
[1] 7.4 8.0 12.6 11.5 8.6 13.8
$Temp
[1] 67 72 74 62 65 59
$Month
[1] 5 5 5 5 5 5
$Day
[1] 1 2 3 4 7 8
Whereas what I had expected was:
[1] 41 190 7.4 67 5 1
[1] 36 118 8.0 72 5 2
…
1) Is a data frame in R a list of columns?
Yes.
df <- data.frame(a=c("the", "quick"), b=c("brown", "fox"), c=1:2)
is.list(df) # -> TRUE
attr(df, "names") # -> [1] "a" "b" "c"
df[[1]][2] # -> "quick"
2) What is the design decision in R to have made a data frame a column-oriented (not row-oriented) structure?
A data.frame is a list of column vectors.
is.atomic(df[[1]]) # -> TRUE
mode(df[[1]]) # -> [1] "character"
mode(df[[3]]) # -> [1] "numeric"
Vectors can only store one kind of object. A "row-oriented" data.frame would demand that rows be stored as (heterogeneous) lists instead. Now imagine what the performance of an operation like
df[[1]][20000]
would be in such a list-of-rows data frame, keeping in mind that extracting a whole column would then require visiting all 20000 row objects instead of reading one contiguous atomic vector.
3) Any reference to related design document or article of data structure design would be appreciated.
http://adv-r.had.co.nz/Data-structures.html#data-frames
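A short illustration of the consequences (a sketch built on the same df as above):
df <- data.frame(a = c("the", "quick"), b = c("brown", "fox"), c = 1:2,
                 stringsAsFactors = FALSE)

length(df)         # 3: one list element per column
sapply(df, class)  # each column is a homogeneous vector
df[1, ]            # a "row" is a one-row data frame (still a list), not a vector
unlist(df[1, ])    # forcing a row into a vector coerces everything to character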

R time series data manipulation with different data lengths: extracting variables

I need some suggestions on how to better structure my problem.
I am starting from many CSV files of results from a parametric study (time series data). I want to analyze the influence of some parameters on a variable. The idea is to extract selected variables from the table of results for each id of the parametric study, and to create a data.frame per variable so I can easily make some plots and analyses.
The problem is that some parameters change the time step of the parametric study, so some CSV files are much longer. One variable, for example, is temperature: is it possible to keep the differing time steps and evaluate Delta T while varying one parameter? Can plyr do that? Or do I have to resample part of my results, losing part of the information?
I have got to this point so far:
head(data, 5)
names Date.Time Tout.dry.bulb RHout TsupIn TsupOut QconvIn[Wm2]
1 G_0-T_0-W_0-P1_0-P2_0 2005-01-01 00:03:00 0 50 23 15.84257 -1.090683e-14
2 G_0-T_0-W_0-P1_0-P2_0 2005-01-01 00:06:00 0 50 23 16.66988 0.000000e+00
3 G_0-T_0-W_0-P1_0-P2_0 2005-01-01 00:09:00 0 50 23 13.83446 1.090683e-14
4 G_0-T_0-W_0-P1_0-P2_0 2005-01-01 00:12:00 0 50 23 14.34774 2.181366e-14
5 G_0-T_0-W_0-P1_0-P2_0 2005-01-01 00:15:00 0 50 23 12.59164 2.181366e-14
QconvOut[Wm2] Hvout[Wm2K] Qradout[Wm2] MeanRadTin MeanAirTin MeanOperTin
1 0.0000 17.76 -5.428583e-08 23 23 23
2 -281.3640 17.76 -1.151613e-07 23 23 23
3 -296.0570 17.76 -1.018871e-07 23 23 23
4 -245.7001 17.76 -1.027338e-07 23 23 23
5 -254.8158 17.76 -9.458750e-08 23 23 23
> str(data)
'data.frame': 1858080 obs. of 13 variables:
$ names : Factor w/ 35 levels "G_0-T_0-W_0-P1_0-P2_0",..: 1 1 1 1 1 1 1 1 1 1 ...
$ Date.Time : POSIXct, format: "2005-01-01 00:03:00" "2005-01-01 00:06:00" "2005-01-01 00:09:00" ...
$ Tout.dry.bulb: num 0 0 0 0 0 0 0 0 0 0 ...
$ RHout : num 50 50 50 50 50 50 50 50 50 50 ...
$ TsupIn : num 23 23 23 23 23 23 23 23 23 23 ...
$ TsupOut : num 15.8 16.7 13.8 14.3 12.6 ...
$ QconvIn[Wm2] : num -1.09e-14 0.00 1.09e-14 2.18e-14 2.18e-14 ...
$ QconvOut[Wm2]: num 0 -281 -296 -246 -255 ...
$ Hvout[Wm2K] : num 17.8 17.8 17.8 17.8 17.8 ...
$ Qradout[Wm2] : num -5.43e-08 -1.15e-07 -1.02e-07 -1.03e-07 -9.46e-08 ...
$ MeanRadTin : num 23 23 23 23 23 23 23 23 23 23 ...
$ MeanAirTin : num 23 23 23 23 23 23 23 23 23 23 ...
$ MeanOperTin : num 23 23 23 23 23 23 23 23 23 23 ...
names(DF)
[1] "G_0-T_0-W_0-P1_0-P2_0" "G_0-T_0-W_0-P1_0-P2_1" "G_0-T_0-W_0-P1_0-P2_2"
[4] "G_0-T_0-W_0-P1_0-P2_3" "G_0-T_0-W_0-P1_0-P2_4" "G_0-T_0-W_0-P1_0-P2_5"
[7] "G_0-T_0-W_0-P1_0-P2_6" "G_0-T_0-W_0-P1_1-P2_0" "G_0-T_0-W_0-P1_1-P2_1"
[10] "G_0-T_0-W_0-P1_1-P2_2" "G_0-T_0-W_0-P1_1-P2_3" "G_0-T_0-W_0-P1_1-P2_4"
[13] "G_0-T_0-W_0-P1_1-P2_5" "G_0-T_0-W_0-P1_1-P2_6" "G_0-T_0-W_0-P1_2-P2_0"
[16] "G_0-T_0-W_0-P1_2-P2_1" "G_0-T_0-W_0-P1_2-P2_2" "G_0-T_0-W_0-P1_2-P2_3"
[19] "G_0-T_0-W_0-P1_2-P2_4" "G_0-T_0-W_0-P1_2-P2_5" "G_0-T_0-W_0-P1_2-P2_6"
[22] "G_0-T_0-W_0-P1_3-P2_0" "G_0-T_0-W_0-P1_3-P2_1" "G_0-T_0-W_0-P1_3-P2_2"
[25] "G_0-T_0-W_0-P1_3-P2_3" "G_0-T_0-W_0-P1_3-P2_4" "G_0-T_0-W_0-P1_3-P2_5"
[28] "G_0-T_0-W_0-P1_3-P2_6" "G_0-T_0-W_0-P1_4-P2_0" "G_0-T_0-W_0-P1_4-P2_1"
[31] "G_0-T_0-W_0-P1_4-P2_2" "G_0-T_0-W_0-P1_4-P2_3" "G_0-T_0-W_0-P1_4-P2_4"
[34] "G_0-T_0-W_0-P1_4-P2_5" "G_0-T_0-W_0-P1_4-P2_6"
From P1_4-P2_0 to P1_4-P2_6 the length is 113760 obs instead of 37920, because the time step changes from 3 min to 1 min.
I'd like to have a separate data frame for each variable, containing Date.Time and the variable's value for each of the names, one per column.
How can I do it?
Thanks for any suggestion.
I strongly suggest using a data structure that is appropriate for working with time series. In this case, the zoo package would work well. Load each CSV file into a zoo object, using your Date.Time column to define the index (timestamps) of the data. You can use the zoo() function to create those objects, for example.
Then use the merge function of zoo to combine the objects. It will find observations with the same timestamp and put them into one row. With merge, you can specify all=TRUE to get the union of all timestamps; or you can specify all=FALSE to get the intersection of the timestamps. For the union (all=TRUE), missing observations will be NA.
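A minimal sketch of that merge behavior with two toy series at different time steps (made-up data, not the study's files):
library(zoo)

# A 3-minute and a 1-minute series starting at the same instant
t3 <- as.POSIXct("2005-01-01 00:03:00", tz = "UTC") + 180 * (0:3)
t1 <- as.POSIXct("2005-01-01 00:03:00", tz = "UTC") +  60 * (0:5)
z3 <- zoo(rnorm(4), t3)
z1 <- zoo(rnorm(6), t1)

merge(z3, z1, all = TRUE)   # union of timestamps; unmatched slots become NA
merge(z3, z1, all = FALSE)  # intersection: only the shared timestamps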
The read.zoo function could be difficult to use for reading your data. I suggest replacing your call to read.zoo with something like this:
table <- read.csv(filepath, header=TRUE, stringsAsFactors=FALSE)
dateStrings <- paste("2005/", table$Date.Time, sep="")
dates <- as.POSIXct(dateStrings)
dat <- zoo(table[,-1], dates)
(I assume that Date.Time is the first column in your file. That's why I wrote table[,-1].)
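To get the per-variable layout asked for (Date.Time plus one column per value of names), one hedged sketch under the same assumptions, using TsupOut as the example variable: split the long data frame by names, build one zoo series per run, and merge them.
library(zoo)

# Assumes timestamps are unique within each run
perRun  <- split(data[, c("Date.Time", "TsupOut")], data$names)
zoos    <- lapply(perRun, function(d) zoo(d$TsupOut, d$Date.Time))

# Wide object: rows = union of timestamps, columns = runs (NA where absent)
TsupOut <- do.call(merge, c(zoos, all = TRUE))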

R sorts a vector of its own accord

df.sorted <- c("binned_walker1_1.grd", "binned_walker1_2.grd", "binned_walker1_3.grd",
"binned_walker1_4.grd", "binned_walker1_5.grd", "binned_walker1_6.grd",
"binned_walker2_1.grd", "binned_walker2_2.grd", "binned_walker3_1.grd",
"binned_walker3_2.grd", "binned_walker3_3.grd", "binned_walker3_4.grd",
"binned_walker3_5.grd", "binned_walker4_1.grd", "binned_walker4_2.grd",
"binned_walker4_3.grd", "binned_walker4_4.grd", "binned_walker4_5.grd",
"binned_walker5_1.grd", "binned_walker5_2.grd", "binned_walker5_3.grd",
"binned_walker5_4.grd", "binned_walker5_5.grd", "binned_walker5_6.grd",
"binned_walker6_1.grd", "binned_walker7_1.grd", "binned_walker7_2.grd",
"binned_walker7_3.grd", "binned_walker7_4.grd", "binned_walker7_5.grd",
"binned_walker8_1.grd", "binned_walker8_2.grd", "binned_walker9_1.grd",
"binned_walker9_2.grd", "binned_walker9_3.grd", "binned_walker9_4.grd",
"binned_walker10_1.grd", "binned_walker10_2.grd", "binned_walker10_3.grd")
One would expect the order of this vector to be 1:length(df.sorted), but that appears not to be the case. It looks like R internally sorts the vector according to its own logic but tries really hard to display it the way it was created (as seen in the output).
order(df.sorted)
[1] 37 38 39 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22
[26] 23 24 25 26 27 28 29 30 31 32 33 34 35 36
Is there a way to "reset" the ordering to 1:length(df.sorted)? That way, ordering, and the output of the vector would be in sync.
Use the mixedsort (or mixedorder) function from the gtools package:
require(gtools)
mixedorder(df.sorted)
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
[28] 28 29 30 31 32 33 34 35 36 37 38 39
Construct it as an ordered factor:
> df.new <- ordered(df.sorted,levels=df.sorted)
> order(df.new)
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ...
EDIT: After @DWin's comment, I want to add that it is not even necessary to make it an ordered factor; a plain factor is enough if you give the right order of levels:
> df.new2 <- factor(df.sorted,levels=df.sorted)
> order(df.new2)
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ...
The difference will be noticeable when you use those factors in a regression analysis, where they can be treated differently. The advantage of ordered factors is that they let you use comparison operators such as < and >. This sometimes makes life a lot easier.
> df.new2[5] < df.new2[10]
[1] NA
Warning message:
In Ops.factor(df.new2[5], df.new2[10]) : < not meaningful for factors
> df.new[5] < df.new[10]
[1] TRUE
Isn't this simply the same thing you get with all lexicographic sorts (as e.g. ls on directories), where walker10_foo sorts ahead of walker1_foo?
The easiest way around it, in my book, is to use a consistent number of digits, i.e. I would change to binned_walker01_1.grd and so on, inserting a 0 in the one-digit counts.
In response to @DWin's comment on Dirk's answer: the data are always putty in your hands. "This is R. There is no if. Only how." -- Simon Blomberg
You can add 0 like so:
df.sorted <- gsub("(walker)([[:digit:]]{1}_)", "\\10\\2", df.sorted)
If you needed to pad to three digits (adding 00), you would do it like this:
df.sorted <- gsub("(walker)([[:digit:]]{1}_)", "\\10\\2", df.sorted)
df.sorted <- gsub("(walker)([[:digit:]]{2}_)", "\\10\\2", df.sorted)
...and so on.
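An equivalent one-pass alternative (a sketch, not from the answers above): extract the walker number, re-emit it zero-padded with sprintf, and plain order() then agrees with the creation order.
# Pull out the run number, zero-pad it to two digits, and rebuild the name
num       <- as.integer(sub("^binned_walker([0-9]+)_.*$", "\\1", df.sorted))
rest      <- sub("^binned_walker[0-9]+_", "", df.sorted)
df.padded <- sprintf("binned_walker%02d_%s", num, rest)

order(df.padded)  # 1 2 3 ... 39, matching the creation order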
