Dealing with date formats in zoo in R

I have a CSV data file containing stock prices over the period Jan 1, 2015 to Sep 26, 2017.
I use the following code to import the data as a zoo object:
sensexzoo1<- read.zoo(file = "/home/bidyut/Downloads/SENSEX.csv",
format="%d-%B-%Y", header=T, sep=",")
It produces the following error:
Error in read.zoo(file = "/home/bidyut/Downloads/SENSEX.csv", format =
"%d-%B-%Y", : index has 679 bad entries at data rows: 1 2 3 4 5 6
7 8 9 10 ...
What is wrong with this? Please suggest a fix.

The problem is the mismatch between the header and the data. The header line has 5 fields and the remaining lines of the file have 6 fields:
head(count.fields("SENSEX.csv", sep = ","))
## [1] 5 6 6 6 6 6
When that happens, read.zoo assumes that the first field of the data holds the row names, so by default the next field (which in fact contains the Open data) is taken to be the time index.
We can address this in several alternative ways:
1) The easiest way to fix this is to add a field called Volume, say, to the header so that the header looks like this:
Date,Open,High,Low,Close,Volume
2) If you have many files of this format so that it is not feasible to modify them we can read the data in without the headers and then add them on in a second pass. The [, -5] drops the column of NAs and the [-1] on the second line drops the Date header.
z <- read.zoo("SENSEX.csv", format="%d-%B-%Y", sep = ",", skip = 1)[, -5]
names(z) <- unlist(read.table("SENSEX.csv", sep = ",", nrow = 1))[-1]
giving:
> head(z)
Open High Low Close
2015-01-01 27485.77 27545.61 27395.34 27507.54
2015-01-02 27521.28 27937.47 27519.26 27887.90
2015-01-05 27978.43 28064.49 27786.85 27842.32
2015-01-06 27694.23 27698.93 26937.06 26987.46
2015-01-07 26983.43 27051.60 26776.12 26908.82
2015-01-08 27178.77 27316.41 27101.94 27274.71
3) A third approach is to read the file in as text, use R to append ",Volume" to the first line and then read the text with read.zoo:
Lines <- readLines("SENSEX.csv")
Lines[1] <- paste0(Lines[1], ",Volume")
z <- read.zoo(text = Lines, header = TRUE, sep = ",", format="%d-%B-%Y")
Note: The first few lines of SENSEX.csv are shown below to make this self-contained (not dependent on the link in the question which could disappear in the future):
Date,Open,High,Low,Close
1-January-2015,27485.77,27545.61,27395.34,27507.54,
2-January-2015,27521.28,27937.47,27519.26,27887.90,
5-January-2015,27978.43,28064.49,27786.85,27842.32,
6-January-2015,27694.23,27698.93,26937.06,26987.46,
7-January-2015,26983.43,27051.60,26776.12,26908.82,
8-January-2015,27178.77,27316.41,27101.94,27274.71,
9-January-2015,27404.19,27507.67,27119.63,27458.38,
12-January-2015,27523.86,27620.66,27323.74,27585.27,
13-January-2015,27611.56,27670.19,27324.58,27425.73,
14-January-2015,27432.14,27512.80,27203.25,27346.82,
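As an aside, a quick way to confirm that the format string itself matches the index column is to parse a single entry with base R's as.Date(). This is a small sketch; note that %B matches locale-dependent month names, so LC_TIME is pinned to the C locale (English month names) first:

```r
# %B parses full month names according to the current locale, so pin
# LC_TIME to "C" (English month names) before testing the format string.
Sys.setlocale("LC_TIME", "C")

d <- as.Date("1-January-2015", format = "%d-%B-%Y")
print(d)  # "2015-01-01"
```

If this returns NA, the format string (not the file structure) is the problem.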

Related

Apply succinct function over subsets of columns of data frames in a list

I have a list (28 items) of dataframes (12 columns, 8 rows) named "n.l.df".
Statistics need to be applied row-wise on columns 1:3, 4:6, 7:9, 10:12, separately, within each dataframe. I am iterating through the list, calculating stats by doing the following:
library(tidyverse)
avgs <- n.l.df
avgs <- lapply(avgs, function(x) {
x[1,1] <-mean(as.numeric(x[1,1:3]))
x[2,1] <-mean(as.numeric(x[2,1:3]))
x[3,1] <-mean(as.numeric(x[3,1:3]))
x[4,1] <-mean(as.numeric(x[4,1:3]))
x[5,1] <-mean(as.numeric(x[5,1:3]))
x[6,1] <-mean(as.numeric(x[6,1:3]))
x[7,1] <-mean(as.numeric(x[7,1:3]))
x[8,1] <-mean(as.numeric(x[8,1:3]))
x[1,4] <-mean(as.numeric(x[1,4:6]))
x[2,4] <-mean(as.numeric(x[2,4:6]))
x[3,4] <-mean(as.numeric(x[3,4:6]))
x[4,4] <-mean(as.numeric(x[4,4:6]))
x[5,4] <-mean(as.numeric(x[5,4:6]))
x[6,4] <-mean(as.numeric(x[6,4:6]))
x[7,4] <-mean(as.numeric(x[7,4:6]))
x[8,4] <-mean(as.numeric(x[8,4:6]))
x[1,7] <-mean(as.numeric(x[1,7:9]))
x[2,7] <-mean(as.numeric(x[2,7:9]))
x[3,7] <-mean(as.numeric(x[3,7:9]))
x[4,7] <-mean(as.numeric(x[4,7:9]))
x[5,7] <-mean(as.numeric(x[5,7:9]))
x[6,7] <-mean(as.numeric(x[6,7:9]))
x[7,7] <-mean(as.numeric(x[7,7:9]))
x[8,7] <-mean(as.numeric(x[8,7:9]))
x[1,10] <-mean(as.numeric(x[1,10:12]))
x[2,10] <-mean(as.numeric(x[2,10:12]))
x[3,10] <-mean(as.numeric(x[3,10:12]))
x[4,10] <-mean(as.numeric(x[4,10:12]))
x[5,10] <-mean(as.numeric(x[5,10:12]))
x[6,10] <-mean(as.numeric(x[6,10:12]))
x[7,10] <-mean(as.numeric(x[7,10:12]))
x[8,10] <-mean(as.numeric(x[8,10:12]))
return(x)
})
This works nicely, and I can strip out the unnecessary values in columns 2, 3, 5, 6, 8, 9, 11, and 12 when needed. I like that I do not have to gather the data frames into long form; keeping them as a list of data frames is preferable.
Clearly, this way is too repetitive. I think there must be a way to do a nested lapply/apply, but it is beyond my level. How can I simplify and shorten this code?
Thanks.
library(tidyverse)
# For reproducibility
set.seed(100)
# list of 28 random data frames
df_list <- rerun(28, data.frame(replicate(12,sample(1:100,8))))
# Use map to iterate over the list, using rowMeans and select to get the means of the chosen columns.
map(df_list, ~mutate(., rm_1_3 = rowMeans(select(., 1:3)),
rm_4_6 = rowMeans(select(., 4:6)),
rm_7_9 = rowMeans(select(., 7:9)),
rm_10_12 = rowMeans(select(., 10:12))))
[[1]]
X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 rm_1_3 rm_4_6 rm_7_9 rm_10_12
1 31 55 21 43 35 34 21 13 45 58 46 31 35.66667 37.33333 26.33333 45.00000
2 26 17 36 17 95 86 31 23 36 96 60 73 26.33333 66.00000 30.00000 76.33333
3 55 62 99 76 69 77 33 59 100 65 91 89 72.00000 74.00000 64.00000 81.66667
4 6 86 67 86 87 81 20 21 44 61 96 21 53.00000 84.66667 28.33333 59.33333
5 45 27 52 53 18 58 23 45 24 83 4 35 41.33333 43.00000 30.66667 40.66667
6 46 38 68 27 60 47 27 62 66 74 55 43 50.66667 44.66667 51.66667 57.33333
7 77 72 51 46 94 74 56 91 39 79 69 86 66.66667 71.33333 62.00000 78.00000
8 35 63 70 87 13 83 24 63 31 9 24 37 56.00000 61.00000 39.33333 23.33333
This will give you a list of 28 data frames, with 4 columns of statistics added to each. If you just want the means, substitute transmute for mutate.
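If you would rather avoid tidyverse entirely, the same computation can be sketched in base R with a loop over the group starting columns (a sketch; it assumes the groups are always columns 1:3, 4:6, 7:9, and 10:12, and it overwrites the first column of each group as the original code does):

```r
set.seed(100)
# 28 random data frames with 12 columns and 8 rows, as in the example above
df_list <- lapply(1:28, function(i) data.frame(replicate(12, sample(1:100, 8))))

group_starts <- c(1, 4, 7, 10)
avgs <- lapply(df_list, function(x) {
  # overwrite the first column of each 3-column group with its row means
  for (s in group_starts) {
    x[, s] <- rowMeans(x[, s:(s + 2)])
  }
  x
})
```

The loop over four starting columns replaces all 32 hand-written assignments in the question.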

R sqlQuery function (in the RODBC package) regards a character variable as numeric

> my_query <- paste("select * from", query_table, "where Arrived_Date_Time >=", arrived_earliest_date, "and Arrived_Date_Time < ", arrived_latest_date)
> dfDataIn <- sqlQuery(NSSP, my_query, stringsAsFactors=FALSE)
> odbcCloseAll()
> table(dfDataIn$Discharge_Disposition)
1 2 3 4 5 6 7 8 9 20 21
64059 336 1522 32 306 1166 2343 1 35423 312 36
30 41 43 50 51 61 62 63 64 65 66
26 18 295 133 200 5 270 76 3 1121 811
70 100
249 24
Actually dfDataIn$Discharge_Disposition is a character variable, and, most importantly, most of the 1 values here are supposed to be "01" in the database; only a minority are truly "1" (similarly for 2-9).
Is there any way to read the data in the right format?
You could try as.is = TRUE.
dfDataIn <- sqlQuery(NSSP, my_query, as.is = TRUE)
This will bring the data in as-is from the data source, without type conversion.
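The same silent conversion is easy to reproduce with base R's read.csv(), which shows exactly what as.is = TRUE prevents (a small self-contained sketch, not RODBC-specific; the column values are hypothetical):

```r
csv <- "Discharge_Disposition\n01\n1\n20\n"

# Default type inference turns "01" into the number 1 ...
auto  <- read.csv(text = csv)
# ... while forcing character columns preserves the leading zero.
as_is <- read.csv(text = csv, colClasses = "character")
```

In `auto` the "01" and "1" rows become indistinguishable, which is exactly the problem described in the question.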

Mean and SD in R

Maybe it is a very easy question. This is my data.frame:
> read.table("text.txt")
V1 V2
1 26 22516
2 28 17129
3 30 38470
4 32 12920
5 34 30835
6 36 36244
7 38 24482
8 40 67482
9 42 23121
10 44 51643
11 46 61064
12 48 37678
13 50 98817
14 52 31741
15 54 74672
16 56 85648
17 58 53813
18 60 135534
19 62 46621
20 64 89266
21 66 99818
22 68 60071
23 70 168558
24 72 67059
25 74 194730
26 76 278473
27 78 217860
It means that I have 22516 sequences of length 26, 17129 sequences of length 28, etc. I would like to know the mean sequence length and its standard deviation. I know how to do it by creating a vector with 26 repeated 22516 times, and so on, and then computing the mean and SD. However, I think there is an easier method. Any ideas?
Thanks.
For the mean: sum(V1 * V2) / sum(V2)
For the SD (population version): sqrt(sum((V1 - sum(V1 * V2) / sum(V2))^2 * V2) / sum(V2))
I do not find mean(rep(V1,V2)) # 61.902 and sd(rep(V1,V2)) # 14.23891 that complex, but alternatively you might try:
weighted.mean(V1,V2) # 61.902
# recipe from http://www.ltcconline.net/greenl/courses/201/descstat/meansdgrouped.htm
sqrt((sum((V1^2)*V2)-(sum(V1*V2)^2)/sum(V2))/(sum(V2)-1)) # 14.23891
Step1: Set up data:
dat.df <- read.table(text="id V1 V2
1 26 22516
2 28 17129
3 30 38470
4 32 12920
5 34 30835
6 36 36244
7 38 24482
8 40 67482
9 42 23121
10 44 51643
11 46 61064
12 48 37678
13 50 98817
14 52 31741
15 54 74672
16 56 85648
17 58 53813
18 60 135534
19 62 46621
20 64 89266
21 66 99818
22 68 60071
23 70 168558
24 72 67059
25 74 194730
26 76 278473
27 78 217860",header=T)
Step2: Convert to data.table (only for simplicity and laziness in typing)
library(data.table)
dat <- data.table(dat.df)
Step3: Set up new columns with products, and use them to find mean
dat[,pr:=V1*V2]
dat[,v1sq:=as.numeric(V1*V1*V2)]
dat.Mean <- sum(dat$pr)/sum(dat$V2)
dat.SD <- sqrt( (sum(dat$v1sq)/sum(dat$V2)) - dat.Mean^2)
Hope this helps!!
MEAN = sum(V1 * V2) / sum(V2)
SD = sqrt(sum(V1 * V1 * V2) / sum(V2) - MEAN^2)
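For completeness, the grouped-data formulas can be checked against the expand-with-rep() approach on a small made-up table (hypothetical values, not the asker's data):

```r
V1 <- c(26, 28, 30)  # hypothetical sequence lengths
V2 <- c(3, 2, 5)     # hypothetical counts

m  <- sum(V1 * V2) / sum(V2)   # weighted mean, same as weighted.mean(V1, V2)
s  <- sd(rep(V1, V2))          # sample SD via the expanded vector
# grouped-data sample SD, as in the recipe above
s2 <- sqrt((sum(V1^2 * V2) - sum(V1 * V2)^2 / sum(V2)) / (sum(V2) - 1))
```

Here m equals weighted.mean(V1, V2) and s equals s2, confirming the two routes agree.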

Overlay two differently formatted qplots in ggplot2

I have two scatterplots, based on different but related data, created using qplot() from ggplot2. (Learning ggplot hasn't been a priority because qplot has been sufficient for my needs up to now). What I want to do is superimpose/overlay the two charts so that the x,y data for each is plotted in the same plot space. The complication is that I want each plot to retain its formatting/aesthetics.
The data in question are row and column scores from correspondence analysis (corresp() from MASS), so the number of data rows (i.e. samples or taxa) differs between the two datasets. I can plot the two score sets together easily, either by combining the two datasets or, even easier, just by using the biplot() function.
However, I have been using qplot to get the plots looking exactly as I need them; with samples plotted as colour-coded symbols and taxa as labels:
PlotSample <- qplot(DataCorresp$rscore[,1], DataCorresp$rscore[,2],
colour=factor(DataAll$ColourCode)) +
scale_colour_manual(values = c("black","darkgoldenrod2",
"deepskyblue2","deeppink2"))
and
PlotTaxa <- qplot(DataCorresp$cscore[,1], DataCorresp$cscore[,2],
label=colnames(DataCorresp), size=10, geom="text")
Can anyone suggest a way by which either
the two plots (PlotSample and PlotTaxa) can be superimposed atop of each other,
the two datasets (DataCorresp$rscore and DataCorresp$cscore) can be plotted together but formatted in their different ways, or
another function (e.g. biplot()) that could be used to achieve my aim.
Example workflow using an extremely simplified and made-up dataset:
> require(MASS)
> require(ggplot2)
> alldata<-read.csv("Fake data.csv",header=T,row.name=1)
> selectdata<-alldata[,2:10]
> alldata
Period Species.1 Species.2 Species.3 Species.4 Species.5 Species.6
Sample-1 Early 50 87 97 12 60 49
Sample-2 Early 41 90 36 52 36 27
Sample-3 Early 87 56 82 45 56 13
Sample-4 Early 37 47 78 29 53 34
Sample-5 Early 58 70 34 35 8 21
Sample-6 Early 94 82 48 16 27 26
Sample-7 Early 91 69 50 57 24 13
Sample-8 Early 63 38 86 20 28 11
Sample-9 Middle 4 19 55 99 86 38
Sample-10 Middle 29 25 10 93 37 54
Sample-11 Middle 48 12 59 73 39 92
Sample-12 Middle 31 6 34 81 39 54
Sample-13 Middle 29 40 26 52 34 84
Sample-14 Middle 1 46 15 97 67 41
Sample-15 Late 43 47 30 18 60 23
Sample-16 Late 45 10 49 2 2 45
Sample-17 Late 14 8 51 36 58 51
Sample-18 Late 41 51 32 47 23 43
Sample-19 Late 43 17 6 54 4 12
Sample-20 Late 20 25 1 29 35 2
Species.7 Species.8 Species.9
Sample-1 41 39 57
Sample-2 59 4 45
Sample-3 10 56 5
Sample-4 59 30 39
Sample-5 9 29 57
Sample-6 29 24 35
Sample-7 22 4 42
Sample-8 31 19 40
Sample-9 17 7 57
Sample-10 6 9 29
Sample-11 34 20 0
Sample-12 56 41 59
Sample-13 6 31 13
Sample-14 25 12 28
Sample-15 60 75 84
Sample-16 32 69 34
Sample-17 48 53 56
Sample-18 80 86 46
Sample-19 50 70 82
Sample-20 57 84 70
> selectca<-corresp(selectdata,nf=5)
> biplot(selectca,cex=c(0.6,0.6))
> PlotSample <- qplot(selectca$rscore[,1], selectca$rscore[,2], colour=factor(alldata$Period) )
> PlotTaxa<-qplot(selectca$cscore[,1], selectca$cscore[,2], label=colnames(selectdata), size=10, geom="text")
The biplot shows both the sample and species scores in one plot; PlotSample shows the colour-coded sample scores only, and PlotTaxa shows the species labels only. (The original image links, hosted on tinypic, are no longer available.)
Essentially I want to create the biplot but with the samples colour-coded as they are in PlotSample.
Have a look at Gavin Simpson's ggvegan package!
require(vegan)
require(ggvegan)
# some data
data(dune)
# CA
mod <- cca(dune)
# plot
autoplot(mod, geom = 'text')
For finer control (or if you want to stick with corresp()), you may also want to take a look at the code of the two involved functions: fortify.cca (which wraps the data in the cca objects into a usable format for ggplot) and autoplot.cca (which creates the plot).
If you want to do it from scratch, you'll have to wrap both sets of scores (sites and species) into one data.frame (see how fortify.cca does this and extract the relevant values from the corresp() object) and use this to build the plot.
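That from-scratch combination can be sketched in base R; the score matrices below are hypothetical stand-ins for selectca$rscore and selectca$cscore, and the ggplot2 call is only indicated in a comment:

```r
# Hypothetical stand-ins for the corresp() row and column scores
rscore <- matrix(rnorm(8), ncol = 2,
                 dimnames = list(paste0("Sample-", 1:4), NULL))
cscore <- matrix(rnorm(6), ncol = 2,
                 dimnames = list(paste0("Species.", 1:3), NULL))

# One data frame with a 'type' column, so each layer can be styled separately
scores <- rbind(
  data.frame(x = rscore[, 1], y = rscore[, 2],
             label = rownames(rscore), type = "sample"),
  data.frame(x = cscore[, 1], y = cscore[, 2],
             label = rownames(cscore), type = "species")
)

# ggplot(scores, aes(x, y)) could then add geom_point() for the rows with
# type == "sample" (coloured by Period) and geom_text(aes(label = label))
# for the rows with type == "species", giving a styled biplot in one panel.
```

The 'type' column is what lets each score set keep its own aesthetics within a single plot.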

In R: Indexing vectors by boolean comparison of a value in range: index==c(min : max)

In R, let's say we have a vector
area = c(rep(c(26:30), 5), rep(c(500:504), 5), rep(c(550:554), 5), rep(c(76:80), 5)) and another vector yield = c(1:100).
Now, say I want to index like so:
> yield[area==27]
[1] 2 7 12 17 22
> yield[area==501]
[1] 27 32 37 42 47
No problem, right? But weird things start happening when I try to index by using c(A, B) (and even weirder things when I try c(min:max)...):
> yield[area==c(27,501)]
[1] 7 17 32 42
What I'm expecting is of course the combined set of matches from both of the examples above, not just some strange subset of them. This works when I use the | (OR) operator:
> yield[area==27 | area==501]
[1] 2 7 12 17 22 27 32 37 42 47
But what if I'm working with a range? Say I want index it by the range c(27:503)? In my real example there are a lot more data points and ranges, so it makes more sense, please don't suggest I do it by hand, which would essentially mean:
yield[area==27 | area==28 | area==29 | ... | area==303 | ... | area==500 | area==501]
There must be a better way...
You want to use %in%. Also notice that c(27:503) and 27:503 yield the same object.
> yield[area %in% 27:503]
[1] 2 3 4 5 7 8 9 10 12 13 14 15 17
[14] 18 19 20 22 23 24 25 26 27 28 29 31 32
[27] 33 34 36 37 38 39 41 42 43 44 46 47 48
[40] 49 76 77 78 79 80 81 82 83 84 85 86 87
[53] 88 89 90 91 92 93 94 95 96 97 98 99 100
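The "weird combination" in the question comes from vector recycling: == recycles the shorter right-hand side, so area == c(27, 501) compares element 1 against 27, element 2 against 501, element 3 against 27, and so on, rather than testing membership. A toy sketch of the difference (hypothetical small vector):

```r
area2 <- c(27, 501, 501, 27)

# '==' recycles c(27, 501) position by position:
area2 == c(27, 501)    # TRUE TRUE FALSE FALSE

# '%in%' tests membership, which is what the question needs:
area2 %in% c(27, 501)  # TRUE TRUE TRUE TRUE
```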
Why not use subset?
subset(yield, area > 26 & area < 504) ## for indexes
subset(area, area > 26 & area < 504) ## for values
