Decimal hours in R (excluding today's date)

For a sample dataframe:
light <- structure(list(daylight.hours = structure(c(62L, 22L, 60L, 58L,
34L, 37L), .Label = c("07:12:05", "07:14:41", "07:18:24", "07:28:59",
"07:31:07", "07:45:51", "07:48:08", "07:51:29", "07:52:06", "07:58:18",
"08:01:16", "08:07:25", "08:10:08", "08:18:16", "08:23:33", "08:27:03",
"08:30:36", "08:34:13", "08:41:35", "08:46:01", "08:53:52", "08:54:17",
"09:31:16", "09:35:29", "09:39:44", "10:27:19", "10:31:45", "10:36:12",
"11:53:41", "12:11:39", "12:16:10", "12:20:23", "12:34:10", "14:18:26",
"14:22:41", "14:26:55", "14:35:21", "14:39:49", "14:44:00", "14:48:09",
"14:54:29", "14:59:08", "15:03:18", "15:11:01", "15:15:38", "15:15:52",
"15:19:09", "15:58:22", "16:07:10", "16:08:33", "16:24:12", "16:27:14",
"16:42:57", "16:55:32", "16:57:52", "17:00:06", "17:02:15", "17:03:49",
"17:04:17", "17:05:24", "17:06:14", "17:06:53", "17:08:05", "17:09:38",
"17:11:04", "17:12:24", "17:13:26", "17:13:47", "17:14:22", "17:14:32",
"17:14:42", "17:14:44", "17:15:39", "17:15:40", "17:16:22", "17:16:51",
"17:17:55"), class = "factor"), school.id = c(4L, 4L, 4L, 4L,
14L, 14L)), .Names = c("daylight.hours", "school.id"), row.names = c(NA,
6L), class = "data.frame")
I want to create another variable called d.daylight that converts the daylight hours variable to a decimal (e.g. 18:30:00 would read 18.5).
When I use the following, it automatically adds today's date, which is not what I am after (everything is under 24 hours).
light$d.daylight <- as.POSIXlt(light$daylight.hours, format="%H:%M:%S")
Could anyone advise me how to rectify this?

The times function from package chron is useful if you need to deal with times (without dates).
library(chron)
light$d.daylight <- as.numeric(times(light$daylight.hours)) * 24
# daylight.hours school.id d.daylight
#1 17:06:53 4 17.114722
#2 08:54:17 4 8.904722
#3 17:05:24 4 17.090000
#4 17:03:49 4 17.063611
#5 14:18:26 14 14.307222
#6 14:35:21 14 14.589167
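If you would rather avoid an extra package, the same conversion can be done in base R by splitting the HH:MM:SS strings yourself. A minimal sketch, assuming every value is a well-formed "HH:MM:SS" string or factor:

```r
# Split "HH:MM:SS" into numeric parts and weight them into decimal hours
hms_to_decimal <- function(x) {
  parts <- matrix(as.numeric(unlist(strsplit(as.character(x), ":"))),
                  ncol = 3, byrow = TRUE)
  parts[, 1] + parts[, 2] / 60 + parts[, 3] / 3600
}

hms_to_decimal(c("18:30:00", "08:54:17"))
# [1] 18.500000  8.904722
```

Applied to the sample data, light$d.daylight <- hms_to_decimal(light$daylight.hours) gives the same result as the chron approach.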

Related

How can I combine rows of data when their character values are equal? (R)

I have a dataset that was recorded by observation (each observation has its own row of data). I am looking to combine/condense these rows by the plant they were found on, currently a character variable. All other columns are numerical values.
EX:
This is the raw data
|Sci_Name|Honeybee_count|Other_bee_Obsevrved|Stem_count|
|---|---|---|---|
|Zizia aurea|1|5|10|
|Asclepias viridiflora|15|1|3|
|Viola unknown|0|0|4|
|Zizia aurea|0|2|6|
|Zizia aurea|3|6|3|
|Asclepias viridiflora|8|2|17|
and I want:
|Sci_Name|Honeybee_count|Other_bee_Obsevrved|Stem_count|
|---|---|---|---|
|Zizia aurea|4|13|19|
|Asclepias viridiflora|23|3|20|
|Viola unknown|0|0|4|
I am currently pulling this data from a CSV already in table form. I have been attempting to create a new table/data frame with one entry for each plant species and blanks/0s for each other variable, which I could then cbind together. This, however, has been clunky at best, and I am having trouble figuring out how to have each row check itself. I am open to any approach, let me know what you think!
Thanks :D
We can use the formula method of aggregate from base R. On the rhs of the ~, specify the grouping variable; on the lhs, use . to denote the rest of the variables. Specify FUN as sum and it will do the column-wise sum by group:
aggregate(. ~ Sci_Name, df1, sum)
-output
Sci_Name Honeybee_count Other_bee_Obsevrved Stem_count
1 Asclepias viridiflora 23 3 20
2 Viola unknown 0 0 4
3 Zizia aurea 4 13 19
data
df1 <- structure(list(Sci_Name = c("Zizia aurea", "Asclepias viridiflora",
"Viola unknown", "Zizia aurea", "Zizia aurea", "Asclepias viridiflora"
), Honeybee_count = c(1L, 15L, 0L, 0L, 3L, 8L), Other_bee_Obsevrved = c(5L,
1L, 0L, 2L, 6L, 2L), Stem_count = c(10L, 3L, 4L, 6L, 3L, 17L)),
class = "data.frame", row.names = c(NA,
-6L))
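If you prefer the tidyverse, an equivalent of the aggregate call (assuming the dplyr package is installed; df1 as defined above) would be:

```r
library(dplyr)

df1 %>%
  group_by(Sci_Name) %>%
  summarise(across(where(is.numeric), sum))
```

As with aggregate, each numeric column is summed within each Sci_Name group.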

How to efficiently select a random sample of variables from a set of variables in a dataframe

I would appreciate any help to randomly select a subset of var.w_X
containing 5 out of 10 var.w_X variables from my sample data sampleDT, while keeping all the other variables that do not start with var.w_.
Below is the sample data sampleDT which contains, among other variables (those to be kept altogether), X variables starting with var.w_ in their names (those from which to draw the random sample).
In the current example, X = 10, so that var.w_ includes var.w_1 to var.w_10, and I want to draw a random sample of 5 out of these 10. However, in my actual data X > 1,000,000, and I might want to draw a sample of 7,500 var.w_ variables out of those.
Therefore, efficiency is paramount in any solution, since I recently experienced some performance issues with mutate_at for which I still don't have an explanation.
Importantly, the other variables to keep (those that do not start with var.w_) are not guaranteed to stay in any pre-specified order, as they might be located before and/or between and/or after the var.w_ variables, for example. So solutions that rely on order of columns will not work.
#sample data
sampleDT<-structure(list(n = c(62L, 96L, 17L, 41L, 212L, 143L, 143L, 143L,
73L, 73L), r = c(3L, 1L, 0L, 2L, 170L, 21L, 0L, 33L, 62L, 17L
), p = c(0.0483870967741935, 0.0104166666666667, 0, 0.0487804878048781,
0.80188679245283, 0.146853146853147, 0, 0.230769230769231, 0.849315068493151,
0.232876712328767), var.w_8 = c(1.94254385942857, 1.18801169942857,
3.16131123942857, 3.16131123942857, 1.13482609242857, 1.13042157942857,
2.13042157942857, 1.13042157942857, 1.12335579942857, 1.12335579942857
), var.w_9 = c(1.942365288, 1.187833128, 3.161132668, 3.161132668,
1.134647521, 1.130243008, 2.130243008, 1.130243008, 1.123177228,
1.123177228), var.w_10 = c(1.94222639911111, 1.18769423911111,
3.16099377911111, 3.16099377911111, 1.13450863211111, 1.13010411911111,
2.13010411911111, 1.13010411911111, 1.12303833911111, 1.12303833911111
), group = c(1L, 1L, 0L, 1L, 0L, 1L, 1L, 0L,
0L, 0L), treat = c(0L, 0L, 0L, 0L, 0L, 1L, 1L, 1L, 1L, 1L), c1 = c(1.941115288,
1.186583128, 1.159882668, 1.159882668, 1.133397521, 1.128993008,
1.128993008, 1.128993008, 1.121927228, 1.121927228), var.w_6 = c(1.939115288, 1.184583128,
3.157882668, 3.157882668, 1.131397521, 1.126993008, 2.126993008,
1.126993008, 1.119927228, 1.119927228), var.w_7 = c(1.94278195466667,
1.18824979466667, 3.16154933466667, 3.16154933466667, 1.13506418766667,
1.13065967466667, 2.13065967466667, 1.13065967466667, 1.12359389466667,
1.12359389466667), c2 = c(0.1438,
0.237, 0.2774, 0.2774, 0.2093, 0.1206, 0.1707, 0.0699, 0.1351,
0.1206), var.w_1 = c(1.941115288, 1.186583128, 3.159882668, 3.159882668,
1.133397521, 1.128993008, 2.128993008, 1.128993008, 1.121927228,
1.121927228), var.w_2 = c(1.931115288, 1.176583128, 3.149882668,
3.149882668, 1.123397521, 1.118993008, 2.118993008, 1.118993008,
1.111927228, 1.111927228), var.w_3 = c(1.946115288, 1.191583128,
3.164882668, 3.164882668, 1.138397521, 1.133993008, 2.133993008,
1.133993008, 1.126927228, 1.126927228), var.w_4 = c(1.93778195466667,
1.18324979466667, 3.15654933466667, 3.15654933466667, 1.13006418766667,
1.12565967466667, 2.12565967466667, 1.12565967466667, 1.11859389466667,
1.11859389466667), var.w_5 = c(1.943615288, 1.189083128, 3.162382668,
3.162382668, 1.135897521, 1.131493008, 2.131493008, 1.131493008,
1.124427228, 1.124427228)), class = "data.frame", row.names = c(NA, -10L))
#my attempt
# based on the comment by @akrun - this does not keep the other variables as specified above
myvars <- sample(grep("var\\.w_", names(sampleDT), value = TRUE), 5)
sampleDT_test <- sampleDT[myvars]
Thanks in advance for any help
Apologies, had to step into a meeting for a little bit. So, I think you could adapt akrun's solution and keep the first columns for the sample dataframe. Let me know how this scales on the full dataframe. Also, thanks for clarifying further.
> # Subsetting the variable names not matching your pattern using grepl
> names(sampleDT)[!grepl("var\\.w_", names(sampleDT))]
[1] "n" "r" "p" "group" "treat" "c1" "c2"
>
> # Combine that with akrun's solution
> myvars <- c(names(sampleDT)[!grepl("var\\.w_", names(sampleDT))],
+ sample(grep("var\\.w_", names(sampleDT), value = TRUE), 5))
> head(sampleDT[myvars])
n r p group treat c1 c2 var.w_6 var.w_1 var.w_4 var.w_3 var.w_8
1 62 3 0.04838710 1 0 1.941115 0.1438 1.939115 1.941115 1.937782 1.946115 1.942544
2 96 1 0.01041667 1 0 1.186583 0.2370 1.184583 1.186583 1.183250 1.191583 1.188012
3 17 0 0.00000000 0 0 1.159883 0.2774 3.157883 3.159883 3.156549 3.164883 3.161311
4 41 2 0.04878049 1 0 1.159883 0.2774 3.157883 3.159883 3.156549 3.164883 3.161311
5 212 170 0.80188679 0 0 1.133398 0.2093 1.131398 1.133398 1.130064 1.138398 1.134826
6 143 21 0.14685315 1 1 1.128993 0.1206 1.126993 1.128993 1.125660 1.133993 1.130422
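For the full-size data (X > 1,000,000 columns), it may be slightly cheaper to work with column indices instead of matching the names twice. A sketch under the same assumptions (sampleDT as defined in the question):

```r
# Positions of the var.w_ columns, computed once
w_idx <- grep("^var\\.w_", names(sampleDT))

# All non-var.w_ columns, plus 5 randomly sampled var.w_ columns
keep <- c(setdiff(seq_along(sampleDT), w_idx), sample(w_idx, 5))
sampleDT_sub <- sampleDT[keep]
```

Because setdiff preserves the order of seq_along(sampleDT), the non-var.w_ columns come out first regardless of where they sat in the original data.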

Distance between points in R: distCosine() for a table?

I have a list of positions and I would like to know the distances between the closest points. I tried to use distCosine() but there is an issue. Here is what I did:
My data, sorted by lat:
structure(list(lat = c(53.56478, 53.919724, 54.109047, 54.109047,
54.36612, 55.48143, 56.2335, 56.682796, 56.93616, 57.804092,
58.82089, 59.297623, 59.335075, 59.907795, 60.125046, 60.274445,
60.289204, 60.386665, 60.591167, 64.68329), long = c(14.585611,
14.286517, 13.807847, 13.807847, 10.997632, 18.182697, 16.454927,
16.564703, 18.221214, 23.258204, 17.84381, 18.172949, 18.126884,
23.217615, 20.65724, 26.44062, 27.189545, 19.847534, 28.5585,
24.534185)), .Names = c("lat", "long"), row.names = c(2L, 3L,
6L, 11L, 1L, 17L, 15L, 20L, 13L, 19L, 7L, 14L, 4L, 5L, 10L, 12L,
18L, 9L, 8L, 16L), class = "data.frame")
I tried to use distCosine() following another discussion on Stack Overflow, to include in a new column the distance from the closest lat (this is why I sorted by lat):
data$a<-outer(seq(nrow(data)),
seq(nrow(data)),
Vectorize(function(i, j) distCosine(data[1,], data[2,]))
)
The result does not work; this is not the distance for each point. Is there an easier way to use distCosine() for my request?
I think you just have to replace distCosine(data[1,], data[2,]) by distCosine(data[i,c("long","lat")], data[j,c("long","lat")]):
library(geosphere) # distCosine lives here
data <- head(data, 5) # smaller example
data$a <- outer(seq(nrow(data)),
                seq(nrow(data)),
                Vectorize(
                  function(i, j) distCosine(data[i, c("long","lat")], data[j, c("long","lat")])
                ))
Result:
> data
lat long a.1 a.2 a.3 a.4 a.5
2 53.56478 14.58561 0.00 44146.92 79251.87 79251.87 251291.54
3 53.91972 14.28652 44146.92 0.00 37741.81 37741.81 220118.16
6 54.10905 13.80785 79251.87 37741.81 0.00 0.00 185040.01
11 54.10905 13.80785 79251.87 37741.81 0.00 0.00 185040.01
1 54.36612 10.99763 251291.54 220118.16 185040.01 185040.01 0.00
>
Got it with another function, distHaversine:
data <- data[c("long","lat")]
t <- distHaversine(p1 = data[-nrow(data),],
                   p2 = data[-1,])
a <- 0
final <- c(a, t)
data$dist <- final
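If the goal is really the distance from each point to its nearest neighbour (rather than to the previous point in latitude order), a full distance matrix makes that explicit. A sketch assuming the geosphere package; note that geosphere expects coordinates in (long, lat) order:

```r
library(geosphere)

m <- distm(data[, c("long", "lat")], fun = distHaversine)
diag(m) <- NA                    # ignore each point's distance to itself
data$nearest <- apply(m, 1, min, na.rm = TRUE)
```

Duplicated coordinates (rows 6 and 11 here) will of course report a nearest distance of 0.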

How to grep to find and replace the first character in a row

I've got a data frame called newdataOrder, as follows:
Data
newdataOrder <-structure(list(X1 = c(1L, 5L, 5L, 5L, 7L, 8L, 8L, 8L, 9L, 10L
), X46425202 = c(184717073L, 561584L, 50107903L, 50107903L, 156680451L,
7156823L, 38227281L, 101279027L, 222268L, 109092539L), X46624292 = c(186846060L,
43795937L, 180611420L, 180611420L, 158620885L, 7328299L, 38404631L,
101431772L, 38427295L, 133471230L)), class = "data.frame", row.names = c(NA,
-10L))
1 46425202 46624292
1 184717073 186846060
5 561584 43795937
5 50107903 180611420
5 50107903 180611420
7 156680451 158620885
8 7156823 7328299
8 38227281 38404631
8 101279027 101431772
9 222268 38427295
10 109092539 133471230
I want to insert 'per' before the first digit in the first column. To try to do this I did:
newdataOrder <- grep("/^","per",newdataOrder[1])
but alas no joy. I've tried double and triple backslashing the caret, but still no joy. Can anyone help?
rawr is right; you just need to assign that to the dataframe column you want to replace, so the complete code would be
newdataOrder[,1] <- paste('per', newdataOrder[,1])
then if you call newdataOrder, this column will be prepended with "per", but you should note that this column is now not numeric.
You might want to use sub for replacements, because grep is for searching only:
newdataOrder[,1] = sub("^", "per", newdataOrder[,1])
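One detail on the paste approach: paste() inserts a space separator by default, giving "per 1". If you want "per1" with no separator and no regex at all, paste0 does it (newdataOrder as defined in the question):

```r
newdataOrder[, 1] <- paste0("per", newdataOrder[, 1])
head(newdataOrder[, 1], 3)
# [1] "per1" "per5" "per5"
```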

How can I convert a dataframe with a factor column to an xts object?

I have a CSV file, and when I use this command
SOLK<-read.table('Book1.csv',header=TRUE,sep=';')
I get this output
> SOLK
Time Close Volume
1 10:27:03,6 0,99 1000
2 10:32:58,4 0,98 100
3 10:34:16,9 0,98 600
4 10:35:46,0 0,97 500
5 10:35:50,6 0,96 50
6 10:35:50,6 0,96 1000
7 10:36:10,3 0,95 40
8 10:36:10,3 0,95 100
9 10:36:10,4 0,95 500
10 10:36:10,4 0,95 100
. . . .
. . . .
. . . .
285 17:09:44,0 0,96 404
Here is the result of dput(SOLK[1:10,]):
> dput(SOLK[1:10,])
structure(list(Time = structure(c(1L, 2L, 3L, 4L, 5L, 5L, 6L,
6L, 7L, 7L), .Label = c("10:27:03,6", "10:32:58,4", "10:34:16,9",
"10:35:46,0", "10:35:50,6", "10:36:10,3", "10:36:10,4", "10:36:30,8",
"10:37:23,3", "10:37:38,2", "10:37:39,3", "10:37:45,9", "10:39:07,5",
"10:39:07,6", "10:39:46,6", "10:41:21,8", "10:43:20,6", "10:43:36,4",
"10:43:48,8", "10:43:48,9", "10:43:54,6", "10:44:01,5", "10:44:08,4",
"10:45:47,2", "10:46:16,7", "10:47:03,6", "10:47:48,6", "10:47:55,0",
"10:48:09,9", "10:48:30,6", "10:49:20,6", "10:50:31,9", "10:50:34,6",
"10:50:38,1", "10:51:02,8", "10:51:11,5", "10:55:57,7", "10:57:57,2",
"10:59:06,9", "10:59:33,5", "11:00:31,0", "11:00:31,1", "11:04:46,4",
"11:04:53,4", "11:04:54,6", "11:04:56,1", "11:04:58,9", "11:05:02,0",
"11:05:02,6", "11:05:24,7", "11:05:56,7", "11:06:15,8", "11:13:24,1",
"11:13:24,2", "11:13:32,1", "11:13:36,2", "11:13:37,2", "11:13:44,5",
"11:13:46,8", "11:14:12,7", "11:14:19,4", "11:14:19,8", "11:14:21,2",
"11:14:38,7", "11:14:44,0", "11:14:44,5", "11:15:10,5", "11:15:10,6",
"11:15:12,9", "11:15:16,6", "11:15:23,3", "11:15:31,4", "11:15:36,4",
"11:15:37,4", "11:15:49,5", "11:16:01,4", "11:16:06,0", "11:17:56,2",
"11:19:08,1", "11:20:17,2", "11:26:39,4", "11:26:53,2", "11:27:39,5",
"11:28:33,0", "11:30:42,3", "11:31:00,7", "11:33:44,2", "11:39:56,1",
"11:40:07,3", "11:41:02,1", "11:41:30,1", "11:45:07,0", "11:45:26,6",
"11:49:50,8", "11:59:58,1", "12:03:49,9", "12:04:12,6", "12:06:05,8",
"12:06:49,2", "12:07:56,0", "12:09:37,7", "12:14:25,5", "12:14:32,1",
"12:15:42,1", "12:15:55,2", "12:16:36,9", "12:16:44,2", "12:18:00,3",
"12:18:12,8", "12:28:17,8", "12:28:17,9", "12:28:23,7", "12:28:51,1",
"12:36:33,2", "12:37:45,0", "12:39:22,2", "12:40:19,5", "12:42:22,1",
"12:58:46,3", "13:06:05,8", "13:06:05,9", "13:07:17,6", "13:07:17,7",
"13:09:01,3", "13:09:01,4", "13:09:11,3", "13:09:31,0", "13:10:07,8",
"13:35:43,8", "13:38:27,7", "14:11:16,0", "14:17:31,5", "14:26:13,9",
"14:36:11,8", "14:38:43,7", "14:38:47,8", "14:38:51,8", "14:48:26,7",
"14:52:07,4", "14:52:13,8", "15:09:24,7", "15:10:25,8", "15:29:12,1",
"15:31:55,9", "15:34:04,1", "15:44:10,8", "15:45:07,1", "15:57:04,9",
"15:57:13,9", "16:16:27,9", "16:21:41,7", "16:36:01,5", "16:36:13,2",
"16:46:10,5", "16:46:10,6", "16:47:37,3", "16:50:52,4", "16:50:52,5",
"16:51:44,5", "16:55:11,5", "16:56:21,8", "16:56:37,5", "16:57:37,9",
"16:58:18,6", "16:58:44,5", "17:00:39,1", "17:01:50,7", "17:03:13,2",
"17:03:28,3", "17:03:46,7", "17:03:47,0", "17:04:30,4", "17:08:41,8",
"17:09:44,0"), class = "factor"), Close = structure(c(8L, 7L,
7L, 6L, 5L, 5L, 4L, 4L, 4L, 4L), .Label = c("0,92", "0,93", "0,94",
"0,95", "0,96", "0,97", "0,98", "0,99"), class = "factor"), Volume = c(1000L,
100L, 600L, 500L, 50L, 1000L, 40L, 100L, 500L, 100L)), .Names = c("Time",
"Close", "Volume"), row.names = c(NA, 10L), class = "data.frame")
The first column includes the time stamp of every transaction during a stock's exchange daily session. I would like to convert the Close and Volume columns to an xts object ordered by the Time column.
UPDATE: From your edits, it appears you imported your data using two different commands. It also appears you should be using read.csv2. I've updated my answer with Lines that (I assume) look more like your original CSV (I have to guess because you don't say what the file looks like). The rest of the answer doesn't change.
You have to add a date to your times because xts stores all index values internally as POSIXct (I just used today's date).
I had to convert the "," decimal notation to the "." convention (using gsub), but that may be locale-dependent and you may not need to. paste today's date with the (possibly converted) time and then convert it to POSIXct to create an index suitable for xts.
I've also formatted the index so you can see the fractional seconds.
Lines <- "Time;Close;Volume
10:27:03,6;0,99;1000
10:32:58,4;0,98;100
10:34:16,9;0,98;600
10:35:46,0;0,97;500
10:35:50,6;0,96;50
10:35:50,6;0,96;1000
10:36:10,3;0,95;40
10:36:10,3;0,95;100
10:36:10,4;0,95;500
10:36:10,4;0,95;100"
SOLK <- read.csv2(con <- textConnection(Lines))
close(con)
library(xts)
solk <- xts(SOLK[, c("Close","Volume")],
            as.POSIXct(paste("2011-09-02", gsub(",", ".", SOLK[, 1]))))
indexFormat(solk) <- "%Y-%m-%d %H:%M:%OS6"
solk
# Close Volume
# 2011-09-02 10:27:03.599999 0.99 1000
# 2011-09-02 10:32:58.400000 0.98 100
# 2011-09-02 10:34:16.900000 0.98 600
# 2011-09-02 10:35:46.000000 0.97 500
# 2011-09-02 10:35:50.599999 0.96 50
# 2011-09-02 10:35:50.599999 0.96 1000
# 2011-09-02 10:36:10.299999 0.95 40
# 2011-09-02 10:36:10.299999 0.95 100
# 2011-09-02 10:36:10.400000 0.95 500
# 2011-09-02 10:36:10.400000 0.95 100
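A side note on the fractional seconds in that output: base R only parses them when the format string uses %OS, and only prints them when options(digits.secs) is set:

```r
# %OS parses fractional seconds; digits.secs controls whether they print
options(digits.secs = 1)
as.POSIXct("2011-09-02 10:27:03.6", format = "%Y-%m-%d %H:%M:%OS")
```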
That's an odd structure. Translating it to dput syntax:
SOLK <- structure(list(structure(c(1L, 2L, 3L, 4L, 5L, 5L, 6L, 6L, 7L,
7L), .Label = c("10:27:03,6", "10:32:58,4", "10:34:16,9", "10:35:46,0",
"10:35:50,6", "10:36:10,3", "10:36:10,4"), class = "factor"),
Close = c(0.99, 0.98, 0.98, 0.97, 0.96, 0.96, 0.95, 0.95,
0.95, 0.95), Volume = c(1000L, 100L, 600L, 500L, 50L, 1000L,
40L, 100L, 500L, 100L)), .Names = c("", "Close", "Volume"
), class = "data.frame", row.names = c("1", "2", "3", "4", "5",
"6", "7", "8", "9", "10"))
I'm assuming the comma in the timestamp is a decimal separator.
library("chron")
time.idx <- times(gsub(",",".",as.character(SOLK[[1]])))
Unfortunately, it seems xts won't take this as a valid order.by; so a date (today, for lack of a better choice) must be included to make xts happy.
xts(SOLK[[2]], order.by=chron(Sys.Date(), time.idx))
