Suppose I have generated 100 values from -100 to 100 and taken the cumulative sum of those values.
set.seed(123)
x <- -100:100
z <- sample(x, size = 100, replace = TRUE)
cumsum(z)
and I got
[1] 58 136 49 143 212 161 178 120 33 50 102 91 81 177 167 251 242 278 276 247 172 78 147
[24] 183 246 223 203 145 147 163 138 180 111 119 25 61 129 102 24 78 165 117 151 103 157 222
[47] 155 123 94 69 31 71 67 57 109 46 -34 -94 -20 -31 -72 -157 -142 -149 -244 -145 -160 -175 -237
[70] -179 -162 -213 -280 -377 -465 -497 -471 -419 -468 -547 -559 -500 -576 -642 -575 -564 -635 -596 -538 -518 -509 -452
[93] -489 -448 -350 -384 -334 -313 -335 -351
Now I would like to find the first value that is greater than 200 or lower than -200.
If I do it by hand, I can see that the 5th value (212) is greater than 200.
However, is there an R command to find the first time that the cumulative sum is greater than 200 or lower than -200?
Thank you very much.
A quick hack way to do this might be:
library(dplyr)  # for if_else()
cs <- data.frame(cs = cumsum(z))
cs$lv <- if_else(abs(cs$cs) > 200, TRUE, FALSE)
min(which(cs$lv))
The min(which(...)) solutions provided by others don't give a convenient answer when none of the values meet the condition. For example:
set.seed(123)
x <- -100:100
z <- sample(x, size = 100, replace = TRUE)
min(which(abs(cumsum(z)) > 200))
#> [1] 5
min(which(abs(cumsum(z)) > 1000)) # None meet this condition
#> Warning in min(which(abs(cumsum(z)) > 1000)): no non-missing arguments to min;
#> returning Inf
#> [1] Inf
A better way is given in the R help page for which.max:
match(TRUE, abs(cumsum(z)) > 200)
#> [1] 5
match(TRUE, abs(cumsum(z)) > 1000)
#> [1] NA
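For completeness, base R's Position() behaves the same way, returning NA when no element satisfies the predicate:
Position(function(v) abs(v) > 200, cumsum(z))
#> [1] 5
Position(function(v) abs(v) > 1000, cumsum(z))
#> [1] NA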
I want to install the DESeq2 package so that I can step through it with the debugger.
The source code for this package is available through GitHub, but it's not clear to me how to install the package so that I can step through its R code in the debugger.
Is there a way to do this?
BTW, I tried the approaches proposed in this earlier thread, but I get nowhere:
> trace(DESeq2::plotPCA, browser, at=1)
> devnull <- DESeq2::plotPCA(rld, intgroup = "q", returnData = TRUE)
Tracing DESeq2::plotPCA(rld, intgroup = "q", returnData = TRUE) step 1
Called from: eval(expr, envir, enclos)
Browse[1]> n
debug: `{`
Browse[2]> n
debug: standardGeneric("plotPCA")
Browse[2]> n
>
(I.e., after the last n above, I'm back at the top-level prompt.)
If I type DESeq2::plotPCA at the top-level prompt, all I get is
> DESeq2::plotPCA
nonstandardGenericFunction for "plotPCA" defined from package "BiocGenerics"
function (object, ...)
{
standardGeneric("plotPCA")
}
<environment: 0x26bee20>
Methods may be defined for arguments: object
Use showMethods("plotPCA") for currently available ones.
I also tried just sourcing the source file where DESeq2::plotPCA is defined, but this fails with
Error in setMethod("plotDispEsts", signature(object = "DESeqDataSet"), :
no existing definition for function ‘plotDispEsts’
So clearly one needs to do some setup before sourcing this file. This realization is what led to this post.
Use debug() with the signature= argument, e.g.,
> showMethods("plotPCA")
Function: plotPCA (package BiocGenerics)
object="DESeqTransform"
> debug(plotPCA, signature="DESeqTransform")
Tracing specified method for function "plotPCA" in environment
<namespace:BiocGenerics>
No need for a special installation, just BiocInstaller::biocLite("DESeq2").
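When you are finished stepping through, the method-level breakpoint can be removed the same way:
undebug(plotPCA, signature = "DESeqTransform")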
Martin Morgan's answer captures the essence of the solution, but there are several gotchas that can be very confusing for new R users. These gotchas stem from R's unusual form of object orientation, which is especially confusing for those coming from a C/C++ or Python background.
Following the DESeq2 vignette with my own data set:
$ cat synth.dat
sample g0 g1 g2 g3 g4 g5 g6 g7 g8 g9
samp0 132 192 19 133 247 297 110 104 93 103
samp1 173 152 23 139 245 307 83 77 76 123
samp2 179 129 18 130 208 244 89 138 71 142
samp3 178 145 22 157 323 277 79 93 102 97
samp4 250 208 8 101 202 257 142 140 76 113
samp5 221 157 12 79 261 341 140 94 56 123
samp6 139 220 15 125 282 261 124 154 117 118
samp7 213 121 16 115 377 322 117 154 57 81
samp8 234 152 11 103 281 321 76 160 71 139
samp9 254 120 13 134 323 207 122 122 82 91
samp10 159 207 17 143 385 217 126 113 106 89
samp11 214 136 14 90 364 365 149 102 93 111
samp12 180 159 15 136 226 309 72 111 69 113
samp13 151 137 17 122 229 297 131 108 112 70
samp14 254 151 8 118 254 222 138 114 66 89
samp15 275 121 13 105 238 408 122 156 57 72
samp16 204 134 8 111 352 332 89 134 73 90
samp17 265 144 11 144 211 281 134 98 71 114
samp18 212 111 14 138 321 391 84 112 88 96
samp19 155 164 12 119 174 380 129 106 66 86
$ cat synth_design_matrix.txt
samp0 group0
samp1 group0
samp2 group0
samp3 group0
samp4 group0
samp5 group0
samp6 group0
samp7 group0
samp8 group0
samp9 group0
samp10 group1
samp11 group1
samp12 group1
samp13 group1
samp14 group1
samp15 group1
samp16 group1
samp17 group1
samp18 group1
samp19 group1
> library("DESeq2")
> dat <- read.table(file="synth.dat", header=TRUE, stringsAsFactors=FALSE, row.names=1)
> groups <- read.table(file="synth_design_matrix.txt", header=FALSE, stringsAsFactors=TRUE, row.names=1)
> colnames(groups) <- c("condition")
> datM <- t(as.matrix(dat))
> dds <- DESeqDataSetFromMatrix(countData = datM, colData = groups, design = ~condition)
> dds$condition <-relevel(dds$condition, ref="group0")
> vsd <- vst(dds, blind=FALSE, nsub=10)
-- note: fitType='parametric', but the dispersion trend was not well captured by the
function: y = a/x + b, and a local regression fit was automatically substituted.
specify fitType='local' or 'mean' to avoid this message next time.
Now specify the trace point and step through.
> trace(what="plotPCA", tracer=browser, at=1, signature=c("DESeqTransform"))
[1] "plotPCA"
> plotPCA(vsd)
Tracing function ".local" in package "DESeq2"
Tracing .local(object, ...) step 1
Called from: eval(expr, p)
Browse[1]> n
debug: `{`
Browse[2]> n
debug at /tmp/RtmpaiGvIe/R.INSTALL5ef336529904/DESeq2/R/plots.R#184: rv <- rowVars(assay(object))
Browse[2]> n
debug at /tmp/RtmpaiGvIe/R.INSTALL5ef336529904/DESeq2/R/plots.R#187: select <- order(rv, decreasing = TRUE)[seq_len(min(ntop, length(rv)))]
Browse[2]> n
debug at /tmp/RtmpaiGvIe/R.INSTALL5ef336529904/DESeq2/R/plots.R#190: pca <- prcomp(t(assay(object)[select, ]))
Browse[2]> c
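When you are done, clear the trace point so that plotPCA runs normally again:
untrace(what = "plotPCA", signature = c("DESeqTransform"))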
Now here are the gotchas:
You can only directly set trace points on functions/classes that are exported by the package; in the source, these carry a roxygen #' @export tag. See Hilary Parker's blog for concise details. In this example, we end up stepping through the plotPCA method and into the plotPCA.DESeqTransform function, which is not visible in the DESeq2 namespace.
If there is more than one argument in the method's signature, you need to specify them all with R's c().
E.g., if the method prototype is:
setMethod("spin", signature(object = "star", value = "numeric"), function(object, value) { ... })
the trace point would be:
trace(what = "spin", tracer = browser, at = 1, signature = c(object = "star", value = "numeric"))
Beware of replacement methods. If you don't understand them, they can be very confusing when debugging a package like DESeq2 (which has several). See here and here for more details.
Familiarize yourself with S4 and S3 methods and R's object orientation. This will make it easier to understand what is happening within the package.
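If S4 is new to you, here is a minimal self-contained sketch (all class and function names are invented for illustration) of the generic/method structure that trace() and debug() dispatch on:
# A generic plus one method; the signature is what trace()/debug() match on.
setClass("Star", slots = c(mass = "numeric"))
setGeneric("spin", function(object, value) standardGeneric("spin"))
setMethod("spin", signature(object = "Star", value = "numeric"),
          function(object, value) object@mass * value)
s <- new("Star", mass = 2)
spin(s, 3)           # dispatches to the (Star, numeric) method
# [1] 6
showMethods("spin")  # lists the signatures you can pass to trace()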
With these tools, you should be able to debug any R package downloaded from CRAN or Bioconductor without any special installation steps.
First of all, sorry for any mistakes in my post; I'm new to this site.
I'm getting started with R and I'm trying to do some analysis with time series data.
So, I have a time series at hand and have already loaded it into R.
I can also plot this time series and add labels to the axes and so on. So far so good.
My problem: when I plot the time series, R sets the range of values on the y-axis to roughly the interval [0, 170].
This is strange, since the time series contains the daily EUR/USD exchange rates for this year, meaning the values lie in a range of about 1.05 to 1.2.
The relative values are correct: if the plot shows a maximum around day 40, the corresponding value in the data set is indeed a maximum, but it is around 1.4, not 170.
I hope my problem is clear.
I would like to have the y-axis on a scale from 1 to 1.2, for example.
The ylim = c(1, 1.2) argument will scale the axis to that range but not the values; it just ignores them.
Does anyone know how to adjust that?
I'd really appreciate it.
Thank you very much in advance.
Thanks a lot for the input so far.
The "critical code" is the following:
> FRB <- read.csv("FRB_H10.csv", header=TRUE, sep=",")
> attach(FRB)
> str(FRB)
'data.frame': 212 obs. of 2 variables:
$ Date: Factor w/ 212 levels "2015-01-01","2015-01-02",..: 1 2 3 4 5 6 7 8 9 10 ...
$ Rate: Factor w/ 180 levels "1.0524","1.0575",..: 180 179 177 178 174 173 175 176 171 172 ...
> plot.ts(Rate)
The result of this last plot is the one shown above.
Changing the variable to numeric yields this:
> as.numeric(Rate)
[1] 180 179 177 178 174 173 175 176 171 172 170 166 180 167 169 160 123 128 150 140 132 128 138 165
[25] 161 163 136 134 134 129 159 158 180 156 140 155 151 142 131 148 104 100 96 104 65 53 27 24
[49] 13 3 8 1 2 7 10 9 21 42 36 50 39 33 23 15 19 29 51 54 26 23 11 6
[73] 4 12 5 16 20 18 17 14 22 30 34 49 92 89 98 83 92 141 125 110 81 109 151 149
[97] 162 143 85 69 77 61 180 30 32 38 52 37 78 127 120 73 105 126 131 106 122 119 107 112
[121] 157 137 152 96 93 99 87 94 86 70 71 180 67 43 66 58 84 57 55 47 35 25 26 41
[145] 31 48 48 75 63 59 38 60 46 44 28 40 45 52 62 101 82 74 68 60 64 102 144 168
[169] 159 154 108 91 98 118 111 72 76 180 95 90 117 139 131 116 130 133 145 103 79 88 115 97
[193] 106 113 89 102 121 102 119 114 124 148 180 153 164 161 147 135 146 141 80 56
So, it remains unchanged. This is very strange. The data excerpt shows that "Rate" takes on values between 1.1 and 1.5 approximately, so really not the values that are shown above. :/
The data set can be found under this link:
https://www.dropbox.com/s/ndxstdl1aae5glt/FRB_H10.csv?dl=0
It should be alright. I got it from the data base from the Federal Reserve System, so quite a decent source.
(Had to remove the link to the data excerpt because my reputation only allows for 2 links to be posted at a time. But the entire data set should be even better, I guess.)
@BlankUsername
Thanks very much for the link. I got it working now using this code:
FRB <- read.csv("FRB_H10.csv", header=TRUE, sep=",")
> attach(FRB)
> as.numeric(paste(Rate))
[1] NA 1.2015 1.1918 1.1936 1.1820 1.1811 1.1830 1.1832 1.1779 1.1806 1.1598 1.1517 NA
[14] 1.1559 1.1584 1.1414 1.1279 1.1290 1.1370 1.1342 1.1308 1.1290 1.1337 1.1462 1.1418 1.1432
[27] 1.1330 1.1316 1.1316 1.1300 1.1410 1.1408 NA 1.1395 1.1342 1.1392 1.1372 1.1346 1.1307
[40] 1.1363 1.1212 1.1197 1.1190 1.1212 1.1070 1.1006 1.0855 1.0846 1.0707 1.0576 1.0615 1.0524
[53] 1.0575 1.0605 1.0643 1.0621 1.0792 1.0928 1.0908 1.0986 1.0919 1.0891 1.0818 1.0741 1.0768
[66] 1.0874 1.0990 1.1008 1.0850 1.0818 1.0671 1.0598 1.0582 1.0672 1.0596 1.0742 1.0780 1.0763
[79] 1.0758 1.0729 1.0803 1.0876 1.0892 1.0979 1.1174 1.1162 1.1194 1.1145 1.1174 1.1345 1.1283
[92] 1.1241 1.1142 1.1240 1.1372 1.1368 1.1428 1.1354 1.1151 1.1079 1.1126 1.1033 NA 1.0876
[105] 1.0888 1.0914 1.0994 1.0913 1.1130 1.1285 1.1271 1.1108 1.1232 1.1284 1.1307 1.1236 1.1278
[118] 1.1266 1.1238 1.1244 1.1404 1.1335 1.1378 1.1190 1.1178 1.1196 1.1156 1.1180 1.1154 1.1084
[131] 1.1090 NA 1.1076 1.0952 1.1072 1.1025 1.1150 1.1020 1.1015 1.0965 1.0898 1.0848 1.0850
[144] 1.0927 1.0884 1.0976 1.0976 1.1112 1.1055 1.1026 1.0914 1.1028 1.0962 1.0953 1.0868 1.0922
[157] 1.0958 1.0994 1.1042 1.1198 1.1144 1.1110 1.1078 1.1028 1.1061 1.1200 1.1356 1.1580 1.1410
[170] 1.1390 1.1239 1.1172 1.1194 1.1263 1.1242 1.1104 1.1117 NA 1.1182 1.1165 1.1262 1.1338
[183] 1.1307 1.1260 1.1304 1.1312 1.1358 1.1204 1.1133 1.1160 1.1252 1.1192 1.1236 1.1246 1.1162
[196] 1.1200 1.1276 1.1200 1.1266 1.1249 1.1282 1.1363 NA 1.1382 1.1437 1.1418 1.1360 1.1320
[209] 1.1359 1.1345 1.1140 1.1016
Warning message:
NAs introduced by coercion
> Rate <- cbind(paste(Rate))
> plot(Rate)
Warning message:
In xy.coords(x, y, xlabel, ylabel, log) : NAs introduced by coercion
> plot.ts(Rate, ylab="EUR/USD")
Despite the warning message, I get the output shown below, just as I intended to plot it.
Nevertheless, I do not really understand why it works the way it does, why I have to use the paste() command, and what it does exactly. I get the basic idea of what the classes do, but I am very new to this whole world of R.
One thing I have already come to realize is that R is a powerful program, and yet confusing if you are a beginner. :D
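To answer the paste() question: read.csv() imported Rate as a factor, most likely because the file contains non-numeric placeholder entries (which is also why NAs were introduced by coercion), and as.numeric() applied to a factor returns the internal level codes rather than the printed labels. paste() converts the factor to character first, just as as.character() would. A minimal sketch:
f <- factor(c("1.10", "1.25", "1.10"))
as.numeric(f)                # level codes: 1 2 1
as.numeric(as.character(f))  # the actual values: 1.10 1.25 1.10
as.numeric(levels(f))[f]     # same result, parsing each level only once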
I have a factor variable representing histogram bins, with values '660-664', ..., '740-744', '745-749', ...
How can I map the factor variable to its bin midpoint, e.g. mapping '660-664' to 662?
Basically, what I'm looking for is the inverse of the cut function.
You can make use of the plot = FALSE argument from hist to extract the breaks, then use that to get your midpoints:
set.seed(1)
x <- sample(300, 30)
x
# [1] 80 112 171 270 60 266 278 194 184 18 296 52 198 111 221 142
# [17] 204 281 108 219 262 290 182 35 74 107 4 105 237 93
temp <- hist(x, plot = FALSE)$breaks
temp
# [1] 0 50 100 150 200 250 300
rowMeans(cbind(head(temp, -1),
tail(temp, -1)))
# [1] 25 75 125 175 225 275
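Incidentally, hist() also returns the midpoints directly in its mids component, so the rowMeans() step can be skipped:
hist(x, plot = FALSE)$mids
# [1] 25 75 125 175 225 275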
Update: Calculating the mean from a character string of ranges
Judging by your comments, you might be looking for something more like this:
myVec <- c("735-739", "715-719", "690-694", "695-699", "695-699",
"670-674", "720-724", "705-709", "685-689")
myVec
# [1] "735-739" "715-719" "690-694" "695-699" "695-699" "670-674"
# [7] "720-724" "705-709" "685-689"
sapply(strsplit(myVec, "-"), function(x) mean(as.numeric(x)))
# [1] 737 717 692 697 697 672 722 707 687
I am trying to take the following data and use it to create a table that breaks the information down by state.
Here's the data:
> head(mydf2, 10)
lead_id buyer_account_id amount state
1 52055267 62 300 CA
2 52055267 64 264 CA
3 52055305 64 152 CA
4 52057682 62 75 NJ
5 52060519 62 750 OR
6 52060519 64 574 OR
15 52065951 64 152 TN
17 52066749 62 600 CO
18 52062751 64 167 OR
20 52071186 64 925 MN
I've already subset the states that I'm interested in, so I have just the data I need:
mydf2 = subset(mydf, state %in% c("NV","AL","OR","CO","TN","SC","MN","NJ","KY","CA"))
Here's an idea of what I'm looking for:
State Amount Count
NV 1 50
NV 2 35
NV 3 20
NV 4 15
AL 1 10
AL 2 6
AL 3 4
AL 4 1
...
For each state, I'm trying to find a count for each amount "level." I don't necessarily need to group the amount variable, but keep in mind that the amounts are not just 1, 2, 3, etc.:
> mydf$amount
[1] 300 264 152 75 750 574 113 152 750 152 675 489 188 263 152 152 600 167 34 925 375 156 675 152 488 204 152 152
[29] 600 489 488 75 152 152 489 222 563 215 452 152 152 75 100 113 152 150 152 150 152 452 150 152 152 225 600 620
[57] 113 152 150 152 152 152 152 152 152 152 640 236 152 480 152 152 200 152 560 152 240 222 152 152 120 257 152 400
Is there an elegant solution for this in R, or will I be stuck using Excel (yuck!)?
Here's my understanding of what you're trying to do:
Start with a simple data.frame with 26 states and amounts only ranging from 1 to 50 (which is much more restrictive than what you have in your example, where the range is much higher).
set.seed(1)
mydf <- data.frame(
state = sample(letters, 500, replace = TRUE),
amount = sample(1:50, 500, replace = TRUE)
)
head(mydf)
# state amount
# 1 g 28
# 2 j 35
# 3 o 33
# 4 x 34
# 5 f 24
# 6 x 49
Here's some straightforward tabulation. I've also removed any instances where frequency equals zero, and I've reordered the output by state.
temp1 <- data.frame(table(mydf$state, mydf$amount))
temp1 <- temp1[!temp1$Freq == 0, ]
head(temp1[order(temp1$Var1), ])
# Var1 Var2 Freq
# 79 a 4 1
# 157 a 7 2
# 391 a 16 1
# 417 a 17 1
# 521 a 21 1
# 1041 a 41 1
dim(temp1) # How many rows/cols
# [1] 410 3
Here's a little bit different tabulation. We are tabulating after grouping the "amount" values. Here, I've manually specified the breaks, but you could just as easily let R decide what it thinks is best.
temp2 <- data.frame(table(mydf$state,
cut(mydf$amount,
breaks = c(0, 12.5, 25, 37.5, 50),
include.lowest = TRUE)))
temp2 <- temp2[!temp2$Freq == 0, ]
head(temp2[order(temp2$Var1), ])
# Var1 Var2 Freq
# 1 a [0,12.5] 3
# 27 a (12.5,25] 3
# 79 a (37.5,50] 3
# 2 b [0,12.5] 2
# 28 b (12.5,25] 6
# 54 b (25,37.5] 5
dim(temp2)
# [1] 103 3
I am not sure if I understand correctly (you have two data.frames mydf and mydf2). I'll assume your data is in mydf. Using aggregate:
mydf$count <- 1:nrow(mydf)
aggregate(data = mydf, count ~ amount + state, length)
Is this what you are looking for?
Note: here count is a variable created just so that the third column of the output is named count directly.
Alternatives with ddply from plyr:
# no need to create a variable called count
ddply(mydf, .(state, amount), summarise, count=length(lead_id))
Here one could use any column that exists in the data instead of lead_id. Even state:
ddply(mydf, .(state, amount), summarise, count=length(state))
Or equivalently without using summarise:
ddply(mydf, .(state, amount), function(x) c(count=nrow(x)))
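For completeness, the same tabulation is a one-liner with dplyr's count() (assuming the same mydf as above); the count column in the output is named n:
library(dplyr)
count(mydf, state, amount)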
My objective is to extract the drift coefficient from a random walk with drift forecast applied to the historical data below. Specifically, I want to fit the random walk with drift model first to the first year of data only, then cumulatively through to the last year, recording the drift coefficient each time (into a list, if that is appropriate). To be clear, each new random walk forecast includes all the previous years.
The data is a series of 241 consumption levels, and I am trying to see how the drift coefficient changes as I move iteratively from n = 1 to n = 241.
The random walk with drift model is Y[t] = c + Y[t-1] + Z[t], where Z[t] is a normal error and c is the coefficient I am looking for. My current attempt uses a for loop and extracts the c coefficient from the rwf() function in the forecast package.
To extract the coefficient, I do:
rwf(x, h = 1, drift = TRUE)$model[[1]]
which extracts the drift coefficient.
The problem is that my attempts at subsetting the data within the rwf call have failed, and from trial and error and research I don't believe rwf() supports a subset argument the way lm() does, so my attempts at looping the function have also failed.
An example of such code is
for (i in 1:5){print((rwf(x[1:i], h = 1, drift = TRUE))$model[[1]])}
which gives me the following error
Error in lm.fit(x, y, offset = offset, singular.ok = singular.ok, ...) :
0 (non-NA) cases
In addition: Warning message:
In is.na(rows) : is.na() applied to non-(list or vector) of type 'NULL'
Any help would be much appreciated.
I read SO a lot for help but this is my first time asking a question.
The data is as follows
PCE
1 1306.7
2 1309.6
3 1335.3
4 1341.8
5 1389.2
6 1405.7
7 1414.2
8 1411.0
9 1401.6
10 1406.7
11 1425.0
12 1444.4
13 1474.7
14 1507.8
15 1536.6
16 1555.6
17 1575.2
18 1577.8
19 1583.0
20 1586.6
21 1608.4
22 1619.5
23 1622.4
24 1635.3
25 1636.1
26 1613.9
27 1627.1
28 1653.8
29 1675.6
30 1706.7
31 1732.9
32 1751.0
33 1752.9
34 1769.7
35 1792.1
36 1785.0
37 1787.4
38 1786.9
39 1813.4
40 1822.2
41 1858.7
42 1878.5
43 1901.6
44 1917.0
45 1944.2
46 1957.3
47 1976.0
48 2002.9
49 2019.6
50 2059.5
51 2095.8
52 2134.3
53 2140.2
54 2187.8
55 2212.0
56 2250.0
57 2313.2
58 2347.4
59 2353.5
60 2380.4
61 2390.3
62 2404.2
63 2437.0
64 2449.5
65 2464.6
66 2523.4
67 2562.1
68 2610.3
69 2622.3
70 2651.7
71 2668.6
72 2681.5
73 2702.9
74 2719.5
75 2731.9
76 2755.9
77 2748.4
78 2800.9
79 2826.6
80 2849.1
81 2896.5
82 2935.2
83 2991.2
84 3037.4
85 3108.6
86 3165.5
87 3163.9
88 3175.3
89 3166.0
90 3138.3
91 3149.2
92 3162.2
93 3115.8
94 3142.0
95 3194.4
96 3239.9
97 3274.2
98 3339.6
99 3370.3
100 3405.9
101 3450.3
102 3489.7
103 3509.0
104 3542.5
105 3595.9
106 3616.9
107 3694.2
108 3709.7
109 3739.6
110 3758.5
111 3756.3
112 3793.2
113 3803.3
114 3796.7
115 3710.5
116 3750.3
117 3800.3
118 3821.1
119 3821.1
120 3836.6
121 3807.6
122 3832.2
123 3845.9
124 3875.4
125 3946.1
126 3984.8
127 4063.9
128 4135.7
129 4201.3
130 4237.3
131 4297.9
132 4331.1
133 4388.1
134 4462.5
135 4503.2
136 4588.7
137 4598.8
138 4637.2
139 4686.6
140 4768.5
141 4797.2
142 4789.9
143 4854.0
144 4908.2
145 4920.0
146 5002.2
147 5038.5
148 5078.3
149 5138.1
150 5156.9
151 5180.0
152 5233.7
153 5259.3
154 5300.9
155 5318.4
156 5338.6
157 5297.0
158 5282.0
159 5322.2
160 5342.6
161 5340.2
162 5432.0
163 5464.2
164 5524.6
165 5592.0
166 5614.7
167 5668.6
168 5730.1
169 5781.1
170 5845.5
171 5888.8
172 5936.0
173 5994.6
174 6001.6
175 6050.8
176 6104.9
177 6147.8
178 6204.0
179 6274.2
180 6311.8
181 6363.2
182 6427.3
183 6453.3
184 6563.0
185 6638.1
186 6704.1
187 6819.5
188 6909.9
189 7015.9
190 7085.1
191 7196.6
192 7283.1
193 7385.8
194 7497.8
195 7568.3
196 7642.4
197 7710.0
198 7740.8
199 7770.0
200 7804.2
201 7926.4
202 7953.7
203 7994.1
204 8048.3
205 8076.9
206 8117.7
207 8198.1
208 8308.5
209 8353.7
210 8427.6
211 8465.1
212 8539.1
213 8631.3
214 8700.1
215 8786.2
216 8852.9
217 8874.9
218 8965.8
219 9019.8
220 9073.9
221 9158.3
222 9209.2
223 9244.5
224 9285.2
225 9312.6
226 9289.1
227 9285.8
228 9196.0
229 9076.0
230 9040.9
231 8998.5
232 9050.3
233 9060.2
234 9121.2
235 9186.9
236 9247.1
237 9328.4
238 9376.7
239 9392.7
240 9433.5
241 9482.1
You need at least two points to fit your model. Here's how I'd approach the problem after reading your data into a data.frame named x:
library(forecast)
drifts <- sapply(2:nrow(x), function(zz) rwf(x[1:zz,], drift = TRUE)$model$drift)
I'm not sure if this is what you were expecting or not, but here's a plot of your drift values:
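It can be reproduced with, for example:
plot(drifts, type = "l", xlab = "number of observations used",
     ylab = "drift coefficient")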