Changing cell values from a raster in R

I have raster cell values with 5 digits, but I need to get rid of the first one; for instance, if the cell value is "31345" I need to make it "1345".
I am trying to use the calc() function from the raster package to do that by subtracting different numbers based on the raster cell value (since they are all numbers), like this:
correct.grid <- calc(grid, fun=function(x){ifelse(x < 40000, x-30000,
                                          ifelse(x > 40000 & < 50000, x-40000,
                                          ifelse(x > 50000 & < 60000, x-50000,
                                          ifelse(x > 60000, x-60000, 0))))})
I guess this is probably a terrible approach to the problem (I am not really good at programming); still, I ran into an error, I am guessing because of how I am using ">" and "<" inside the function. Any ideas on how to make these ifelses work, or maybe a smarter approach to the problem?
This is a piece of the unique values in my data if it helps:
> unique(grid)
[1] 30057 30084 30207 30230 30235 30237 30280 30283 30311 30319 30320 30326 30350 30351 30352 30360
[17] 30384 30396 30415 30420 30447 30449 30452 30456 30478 30481 30497 30507 30522 30560 30562 30605
[33] 30606 30612 30638 30639 30645 30654 30657 30658 30662 30665 30678 30682 30701 30707 30714 30736
[49] 30740 30743 30749 30750 30823 30824 30841 30852 30862 30892 30896 30898 30915 30920 30928 30934
[65] 30956 30962 30978 30986 30998 31021 31022 31031 31042 31053 31055 31081 31085 31092 31097 31099
[81] 31114 31115 31122 31126 31129 31130 31131 31141 31157 31168 31171 40019 40026 40075 40197 40217
[97] 50342 50360 50379 50496 50720 50725 50732 50766 50798 50837 51073 51092 51397 53096 53110 53117
[113] 53118 53120 53124 60003 60005 60041 60485 60516 60655 60661 60825 61039 61174 61185 61187 61210
[129] 61221 61224 61227 61259 61287 61289 61290 61295

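A note on the error first: in R, each side of & must be a complete comparison, so x > 40000 & < 50000 has to be written as x > 40000 & x < 50000. A sketch of the corrected call (untested; it assumes, as in your sample values, that everything lies between 30000 and 70000):
correct.grid <- calc(grid, fun = function(x) {
  ifelse(x < 40000, x - 30000,
  ifelse(x >= 40000 & x < 50000, x - 40000,
  ifelse(x >= 50000 & x < 60000, x - 50000,
  ifelse(x >= 60000, x - 60000, 0))))
})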
If you just want to remove the leftmost digit of each value, how about this:
First, let's load a raster object to work with:
library(raster)
# Load a raster object to work with
grid = system.file("external/test.grd", package="raster")
grid = raster(grid)
# Set up values to be whole numbers
values(grid) = round(values(grid)*100)
Now, remove the leftmost digit from every value in the raster:
values(grid) = as.numeric(substr(values(grid), 2, nchar(values(grid))))
Note that a value with one or more zeros after the leftmost digit will get shortened by more than one digit. For example, 60661 will become 661 and 30001 will become 1.
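Since the numeric value is what matters (0661 and 661 are the same number), a purely arithmetic alternative avoids the string round-trip. A sketch, assuming the 5-digit values from the question:
# Drop the leading (ten-thousands) digit: 31345 %% 10000 == 1345
values(grid) = values(grid) %% 10000
# If the digit count varied, the leftmost digit could be stripped with
# values(grid) = values(grid) %% 10^floor(log10(values(grid)))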

R: Number precision, how to prevent rounding?

In R, I have the following vector of numbers:
numbers <- c(0.0193738397702257, 0.0206218006695066, 0.021931558829559,
0.023301378178208, 0.024728095594751, 0.0262069239112787, 0.0277310799996657,
0.0292913948762414, 0.0308758879014822, 0.0324693108459748, 0.0340526658271053,
0.03560271425176, 0.0370915716288017, 0.0384863653635563, 0.0397490272396821,
0.0408363289939899, 0.0417002577578561, 0.0422890917131629, 0.0425479537267193,
0.0424213884467212, 0.0418571402964338, 0.0408094991140723, 0.039243951482081,
0.0371450856007627, 0.0345208537496488, 0.0314091884865658, 0.0278854381969885,
0.0240607638577763, 0.0200808932436969, 0.0161193801903312, 0.0123615428382314,
0.00920410652651576, 0.00628125319205829, 0.0038816517651031,
0.00214210795679701, 0.00103919307280354, 0.000435532895812429,
0.000154730641092234, 4.56593150728962e-05, 1.09540661898799e-05,
2.08952167815574e-06, 3.10045314287095e-07, 3.51923218134997e-08,
3.02121734299694e-09, 1.95269500257237e-10, 9.54697530552714e-12,
3.5914029230041e-13, 1.07379981978647e-14, 2.68543048763588e-16,
6.03891613157815e-18, 1.33875697089866e-19, 3.73885699170518e-21,
1.30142752487978e-22, 5.58607581840324e-24, 2.92551478380617e-25,
1.85002124085815e-26, 1.39826890505611e-27, 1.25058972437096e-28,
1.31082961467944e-29, 1.59522437605631e-30, 2.23371981458205e-31,
3.5678974253211e-32, 6.44735482309705e-33, 1.30771083084868e-33,
2.95492180915218e-34, 7.3857554006177e-35, 2.02831084124162e-35,
6.08139499028838e-36, 1.97878175996974e-36, 6.94814886769478e-37,
2.61888070029751e-37, 1.05433608968287e-37, 4.51270543356897e-38,
2.04454840598946e-38, 9.76544451781597e-39, 4.90105271869773e-39,
2.5743371658684e-39, 1.41165292292001e-39, 8.06250933233367e-40,
4.78746160076622e-40, 2.94835809615626e-40, 1.87667170875529e-40,
1.22833908072915e-40, 8.21091993733535e-41, 5.53869254991177e-41,
3.74485710867631e-41, 2.52485401054841e-41, 1.69027430542613e-41,
1.12176290106797e-41, 7.38294520887852e-42, 4.8381070000246e-42,
3.20123319815522e-42, 2.16493953538386e-42, 1.50891804884267e-42,
1.09057070511506e-42, 8.1903023226717e-43, 6.3480235351625e-43,
5.13533594742621e-43, 4.25591269645348e-43, 3.57422485839717e-43,
3.0293235331048e-43, 2.58514651313175e-43, 2.21952686649801e-43,
1.91634521841049e-43, 1.66319240529025e-43, 1.45043336371471e-43,
1.27052593975384e-43, 1.11752052211757e-43, 9.86689196888877e-44,
8.74248543892126e-44)
I use cumsum to get the cumulative sum. Due to R's numerical precision, many of the numbers towards the end of the vector are now equivalent to 1 (even though technically they're not exactly = 1, just very close to it).
So then when I try to recover my original numbers by using diff(cumulative), I get a lot of 0s instead of a very small number. How can I prevent R from "rounding"?
cumulative <- cumsum(numbers)
diff(cumulative)
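The underlying issue is double-precision arithmetic rather than R "rounding" anything: a double carries about 16 significant digits, so once the cumulative sum is close to 1, any increment smaller than roughly 1e-16 is absorbed. You can check the threshold directly:
.Machine$double.eps  # ~2.22e-16: the smallest eps with 1 + eps != 1
1 + 1e-17 == 1       # TRUE; an increment this small is lost entirely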
I think the Rmpfr package does what you want:
library(Rmpfr)
x <- mpfr(numbers,200) # set arbitrary precision that's greater than R default
cumulative <- cumsum(x)
diff(cumulative)
Here's the top and bottom of the output:
> diff(cumulative)
109 'mpfr' numbers of precision 200 bits
[1] 0.02062180066950659862445860426305443979799747467041015625
[2] 0.021931558829559001655429284483034280128777027130126953125
[3] 0.02330137817820800150148130569505156017839908599853515625
[4] 0.0247280955947510004688805196337852976284921169281005859375
...
[107] 1.117520522117570086014450710640040701536080790307716261438975e-43
[108] 9.866891968888769759087690539062888824928577731689952701181586e-44
[109] 8.742485438921260418707338389502002282130643811990663213422948e-44
You can adjust the precision as you like by changing the second argument to mpfr.
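One caveat to add: converting the results back to ordinary doubles reapplies the usual 53-bit rounding, so keep them as mpfr objects while the extra digits still matter. Rmpfr's asNumeric() does the conversion:
# Collapses the 200-bit numbers back to doubles; the tail entries
# survive here because each difference is itself representable --
# it was only the cumulative sum near 1 that lost them.
recovered <- asNumeric(diff(cumulative))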
You might want to try out the package Rmpfr.

define a variable in a for-loop in R

I have a sequence of numbers ranging from 65 down to 60, decreasing by 1/12 at each step:
[1] 65.00000 64.91667 64.83333 64.75000 64.66667 64.58333 64.50000 64.41667 64.33333 64.25000 64.16667
[12] 64.08333 64.00000 63.91667 63.83333 63.75000 63.66667 63.58333 63.50000 63.41667 63.33333 63.25000
[23] 63.16667 63.08333 63.00000 62.91667 62.83333 62.75000 62.66667 62.58333 62.50000 62.41667 62.33333
[34] 62.25000 62.16667 62.08333 62.00000 61.91667 61.83333 61.75000 61.66667 61.58333 61.50000 61.41667
[45] 61.33333 61.25000 61.16667 61.08333 61.00000 60.91667 60.83333 60.75000 60.66667 60.58333 60.50000
[56] 60.41667 60.33333 60.25000 60.16667 60.08333 60.00000
For every step above we will compute a function.
So a for-loop is created with the following code:
a <- seq(65, 60, (-1/12))
v <- a*0
w <- a*0
mu10<-0.0067
mu12<-0.5
mu20<-1.03*mu10
mu21<-3
r<-log(1+0.01)
b1<-(-1500)
b2<-25000*0.15*12
c10<-45000*10
c20<-45000*10
h<-1/12
v[1] <- h*(b1+mu10*c10+mu12)
w[1] <- h*(b2+mu20*c20+mu21)
for (t in 2:length(a)){
  v[t] <- v[t-1]+h*(-r*v[t-1]+b1+mu10*(c10-v[t-1])+mu12*(w[t-1]-v[t-1]))
  w[t] <- w[t-1]+h*(-r*w[t-1]+b2+mu20*(c20-w[t-1])+mu21*(v[t-1]-w[t-1]))
}
What we wanted to do here was to calculate the value of the function at age 60 by computing it backward in time; so, simply, to compute v[60] and w[60].
I have no problem with this code; it does exactly what I want it to do.
The problem is the variable mu10, which I defined above as mu10 <- 0.0067.
It should however be defined as: mu10(t) = alpha + beta*exp(gamma * t)
where t should run over the whole sequence 65.00000, 64.91667, 64.83333, 64.75000, ..., 60.00000.
I want to replace my constant mu10 with mu10(t), the formula defined here; mu10(t) shall then replace mu10 in the for-loop.
I want them to look like this:
v[t] <- v[t-1]+h*(-r*v[t-1]+b1+mu10[t]*(c10-v[t-1])+mu12*(w[t-1]-v[t-1]))
The t on the left-hand side should be the same as the t in mu10[t]; perhaps obvious, but I just wanted to make that clear.
I have tried a few things, but it feels like none of them are right. I have defined the parameters as:
gamma<-0.044
alpha<-(-0.0073)
beta<-0.0009
I simply need help calculating mu10(t) in a smooth manner so that it can be included in the for-loop.
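A minimal sketch of one way to do this, assuming the t in mu10(t) refers to the age a[t]: precompute the rates as a vector over the ages, then index that vector inside the loop.
# Parameters repeated from the question
gamma <- 0.044
alpha <- -0.0073
beta  <- 0.0009

a <- seq(65, 60, -1/12)

# One rate per age step: mu10[i] corresponds to age a[i]
mu10 <- alpha + beta * exp(gamma * a)

# In the loop, the scalar mu10 becomes an indexed lookup, e.g. with the
# rate taken at the start of each step:
# v[t] <- v[t-1] + h*(-r*v[t-1] + b1 + mu10[t-1]*(c10-v[t-1]) + mu12*(w[t-1]-v[t-1]))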

Find two closest values in a vector in R

I tried to find the two values in the following vector which are closest to 10. The expected values are 10.12099196 and 10.63054170. Your input would be appreciated.
[1] 0.98799517 1.09055728 1.20383713 1.32927166 1.46857509 1.62380423 1.79743107 1.99241551 2.21226576 2.46106916 2.74346924 3.06455219 3.42958354 3.84350238 4.31005838
[16] 4.83051356 5.40199462 6.01590035 6.65715769 7.30532785 7.93823621 8.53773241 9.09570538 9.61755743 10.12099196 10.63018180 11.16783243 11.74870531 12.37719092 13.04922392
[31] 13.75661322 14.49087793 15.24414627 16.00601247 16.75709565 17.46236358 18.06882072 18.51050094 18.71908344 18.63563523 18.22123225 17.46709279 16.40246292 15.09417699 13.63404124
[46] 12.11854915 10.63054170 9.22947285 7.95056000 6.80923943 5.80717982 4.93764782 4.18947450 3.54966795 3.00499094 2.54283599 2.15165780 1.82114213 1.54222565 1.30703661
[61] 1.10879707 0.94170986 0.80084308 0.68201911 0.58171175 0.49695298 0.42525021 0.36451350 0.31299262 0.26922281 0.23197860 0.20023468 0.17313291 0.14995459 0.13009730
[76] 0.11305559 0.09840485 0.08578789 0.07490387 0.06549894 0.05735864
Another alternative could be allowing the user to control for the "tolerance" in order to set what "closeness" is, this can be done by using a simple function:
close <- function(x, value, tol=NULL){
  if(!is.null(tol)){
    x[abs(x-value) <= tol]
  } else {
    x[order(abs(x-value))]
  }
}
Here x is a vector of values, value is the reference value for closeness, and tol is a numeric tolerance: if it is NULL, the function returns all values of x ordered by closeness to value; otherwise it returns just the values within tol of value.
> close(x, value=10, tol=.7)
[1] 9.617557 10.120992 10.630182 10.630542
> close(x, value=10)
[1] 10.12099196 9.61755743 10.63018180 10.63054170 9.22947285 9.09570538 11.16783243
[8] 8.53773241 11.74870531 7.95056000 7.93823621 12.11854915 12.37719092 7.30532785
[15] 13.04922392 6.80923943 6.65715769 13.63404124 13.75661322 6.01590035 5.80717982
[22] 14.49087793 5.40199462 4.93764782 15.09417699 4.83051356 15.24414627 4.31005838
[29] 4.18947450 16.00601247 3.84350238 16.40246292 3.54966795 3.42958354 16.75709565
[36] 3.06455219 3.00499094 2.74346924 2.54283599 17.46236358 17.46709279 2.46106916
[43] 2.21226576 2.15165780 1.99241551 18.06882072 1.82114213 1.79743107 18.22123225
[50] 1.62380423 1.54222565 18.51050094 1.46857509 18.63563523 1.32927166 1.30703661
[57] 18.71908344 1.20383713 1.10879707 1.09055728 0.98799517 0.94170986 0.80084308
[64] 0.68201911 0.58171175 0.49695298 0.42525021 0.36451350 0.31299262 0.26922281
[71] 0.23197860 0.20023468 0.17313291 0.14995459 0.13009730 0.11305559 0.09840485
[78] 0.08578789 0.07490387 0.06549894 0.05735864
In the first example I defined "closeness" as a difference of at most 0.7 between value and each element of x. In the second example the function close returns a vector in which the first elements are the closest to value and the last are the farthest from it.
Since my solution does not provide an easy (practical) way to find tol, as @Arun pointed out, one way to find the closest values would be setting tol=NULL and asking for the exact number of close values, as in:
> close(x, value=10)[1:3]
[1] 10.120992 9.617557 10.630182
This shows the three values in x closest to 10.
I can't think of a way without using sort. However, you can speed it up by using partial sort.
x[abs(x-10) %in% sort(abs(x-10), partial=1:2)[1:2]]
# [1]  9.617557 10.120992
In case the same values are present more than once, you'll get all of them here. So, you can either wrap this with unique or you can use match instead as follows:
x[match(sort(abs(x-10), partial=1:2)[1:2], abs(x-10))]
# [1] 10.120992 9.617557
dput output:
dput(x)
c(0.98799517, 1.09055728, 1.20383713, 1.32927166, 1.46857509,
1.62380423, 1.79743107, 1.99241551, 2.21226576, 2.46106916, 2.74346924,
3.06455219, 3.42958354, 3.84350238, 4.31005838, 4.83051356, 5.40199462,
6.01590035, 6.65715769, 7.30532785, 7.93823621, 8.53773241, 9.09570538,
9.61755743, 10.12099196, 10.6301818, 11.16783243, 11.74870531,
12.37719092, 13.04922392, 13.75661322, 14.49087793, 15.24414627,
16.00601247, 16.75709565, 17.46236358, 18.06882072, 18.51050094,
18.71908344, 18.63563523, 18.22123225, 17.46709279, 16.40246292,
15.09417699, 13.63404124, 12.11854915, 10.6305417, 9.22947285,
7.95056, 6.80923943, 5.80717982, 4.93764782, 4.1894745, 3.54966795,
3.00499094, 2.54283599, 2.1516578, 1.82114213, 1.54222565, 1.30703661,
1.10879707, 0.94170986, 0.80084308, 0.68201911, 0.58171175, 0.49695298,
0.42525021, 0.3645135, 0.31299262, 0.26922281, 0.2319786, 0.20023468,
0.17313291, 0.14995459, 0.1300973, 0.11305559, 0.09840485, 0.08578789,
0.07490387, 0.06549894, 0.05735864)
I'm not sure your question is clear, so here's another approach. To find the value closest to your first desired value, 10.12099196, subtract that from the vector, take the absolute value, and then find the index of the closest element. Explicitly:
delx <- abs( 10.12099196 - x)
min.index <- which.min(delx) #returns index of first minimum if there are duplicates
x[min.index] #gets you the value itself
Apologies if this was not the intent of your question.
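Since the two expected values (10.12099196 and 10.63054170) are the nearest elements on either side of the peak, another reading of the question is that one value is wanted per crossing of 10. A sketch under that assumption, with x as in the dput output above:
# Indices i where the series crosses 10 between x[i] and x[i+1]
cross <- which(diff(sign(x - 10)) != 0)
# From each crossing pair, keep the endpoint closer to 10
sapply(cross, function(i) {
  pair <- x[c(i, i + 1)]
  pair[which.min(abs(pair - 10))]
})
# [1] 10.12099 10.63054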

Scale a series between two points

How do I scale a series such that the first number in the series is 0 and the last number is 1? I looked into 'approx' and 'scale', but they do not achieve this objective.
# generate series from exponential distr
s = sort(rexp(100))
# scale/interpolate 's' such that it starts at 0 and ends at 1?
# approx(s)
# scale(s)
The scales package has a function that will do this for you: rescale.
library("scales")
rescale(s)
By default, this scales the given range of s onto 0 to 1, but either or both of those can be adjusted. For example, if you wanted it scaled from 0 to 10,
rescale(s, to=c(0,10))
or if you wanted the largest value of s scaled to 1, but 0 (instead of the smallest value of s) scaled to 0, you could use
rescale(s, from=c(0, max(s)))
It's straightforward to create a small function to do this using basic arithmetic:
s = sort(rexp(100))
range01 <- function(x){(x-min(x))/(max(x)-min(x))}
range01(s)
[1] 0.000000000 0.003338782 0.007572326 0.012192201 0.016055006 0.017161145
[7] 0.019949532 0.023839810 0.024421602 0.027197168 0.029889484 0.033039408
[13] 0.033783376 0.038051265 0.045183382 0.049560233 0.056941611 0.057552543
[19] 0.062674982 0.066001242 0.066420884 0.067689067 0.069247825 0.069432174
[25] 0.070136067 0.076340460 0.078709590 0.080393512 0.085591881 0.087540132
[31] 0.090517295 0.091026499 0.091251213 0.099218526 0.103236344 0.105724733
[37] 0.107495340 0.113332392 0.116103438 0.124050331 0.125596034 0.126599323
[43] 0.127154661 0.133392300 0.134258532 0.138253452 0.141933433 0.146748798
[49] 0.147490227 0.149960293 0.153126478 0.154275371 0.167701855 0.170160948
[55] 0.180313542 0.181834891 0.182554291 0.189188137 0.193807559 0.195903010
[61] 0.208902645 0.211308713 0.232942314 0.236135220 0.251950116 0.260816843
[67] 0.284090255 0.284150541 0.288498370 0.295515143 0.299408623 0.301264703
[73] 0.306817872 0.307853369 0.324882091 0.353241217 0.366800517 0.389474449
[79] 0.398838576 0.404266315 0.408936260 0.409198619 0.415165553 0.433960390
[85] 0.440690262 0.458692639 0.464027428 0.474214070 0.517224262 0.538532221
[91] 0.544911543 0.559945121 0.585390414 0.647030109 0.694095422 0.708385079
[97] 0.736486707 0.787250428 0.870874773 1.000000000
Alternatively:
scale(s, center=min(s), scale=diff(range(s)))
(untested)
This has the feature that it attaches the original centering and scaling factors to the output as attributes, so they can be retrieved and used to un-scale the data later (if desired). It has the oddity that it always returns the result as a (columnwise) matrix, even if it was passed a vector; you can use drop(scale(...)) if you want a vector instead of a matrix (this usually doesn't matter but the matrix format can occasionally cause trouble downstream ... in my experience more often with tibbles/in tidyverse, although I haven't stopped to examine exactly what's going wrong in these cases).
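A brief illustration of both points, using the attribute names that base R's scale() attaches:
z <- scale(s, center = min(s), scale = diff(range(s)))
drop(z)                    # a plain vector instead of a 1-column matrix
attr(z, "scaled:center")   # == min(s); usable later to un-scale
attr(z, "scaled:scale")    # == diff(range(s))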
This should do it:
reshape::rescaler.default(s, type = "range")
EDIT
I was curious about the performance of the two methods
> system.time(replicate(100, range01(s)))
user system elapsed
0.56 0.12 0.69
> system.time(replicate(100, reshape::rescaler.default(s, type = "range")))
user system elapsed
0.53 0.18 0.70
Extracting the raw code from reshape::rescaler.default:
range02 <- function(x) {
  (x - min(x, na.rm=TRUE)) / diff(range(x, na.rm=TRUE))
}
> system.time(replicate(100, range02(s)))
user system elapsed
0.56 0.12 0.68
This yields a similar result.
You can also make use of the caret package, which provides the preProcess function; it is as simple as this:
library(caret)
preProcValues <- preProcess(yourData, method = "range")
dataScaled <- predict(preProcValues, yourData)
More details are in the package help.
I created the following function in R:
ReScale <- function(x, first, last){ (last-first)/(max(x)-min(x)) * (x-min(x)) + first }
Here, first is the start point and last is the end point.
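For example, mapping s onto [0, 10] (the same result as rescale(s, to = c(0, 10)) above):
ReScale(s, 0, 10)   # smallest value maps to 0, largest to 10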
