How do I scale a series such that the first number in the series is 0 and the last number is 1? I looked into approx and scale, but they do not achieve this objective.
# generate series from exponential distr
s = sort(rexp(100))
# scale/interpolate 's' such that it starts at 0 and ends at 1?
# approx(s)
# scale(s)
The scales package has a function that will do this for you: rescale.
library("scales")
rescale(s)
By default, this scales the given range of s onto 0 to 1, but either or both of those can be adjusted. For example, if you wanted it scaled from 0 to 10,
rescale(s, to=c(0,10))
or if you wanted the largest value of s scaled to 1, but 0 (instead of the smallest value of s) scaled to 0, you could use
rescale(s, from=c(0, max(s)))
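As a quick sanity check (my own minimal sketch, not part of the answer), the endpoints of the output land exactly on the target range:

library(scales)
s <- sort(rexp(100))
range(rescale(s))                 # 0 1
range(rescale(s, to = c(0, 10)))  # 0 10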
It's straightforward to create a small function to do this using basic arithmetic:
s = sort(rexp(100))
range01 <- function(x){(x-min(x))/(max(x)-min(x))}
range01(s)
[1] 0.000000000 0.003338782 0.007572326 0.012192201 0.016055006 0.017161145
[7] 0.019949532 0.023839810 0.024421602 0.027197168 0.029889484 0.033039408
[13] 0.033783376 0.038051265 0.045183382 0.049560233 0.056941611 0.057552543
[19] 0.062674982 0.066001242 0.066420884 0.067689067 0.069247825 0.069432174
[25] 0.070136067 0.076340460 0.078709590 0.080393512 0.085591881 0.087540132
[31] 0.090517295 0.091026499 0.091251213 0.099218526 0.103236344 0.105724733
[37] 0.107495340 0.113332392 0.116103438 0.124050331 0.125596034 0.126599323
[43] 0.127154661 0.133392300 0.134258532 0.138253452 0.141933433 0.146748798
[49] 0.147490227 0.149960293 0.153126478 0.154275371 0.167701855 0.170160948
[55] 0.180313542 0.181834891 0.182554291 0.189188137 0.193807559 0.195903010
[61] 0.208902645 0.211308713 0.232942314 0.236135220 0.251950116 0.260816843
[67] 0.284090255 0.284150541 0.288498370 0.295515143 0.299408623 0.301264703
[73] 0.306817872 0.307853369 0.324882091 0.353241217 0.366800517 0.389474449
[79] 0.398838576 0.404266315 0.408936260 0.409198619 0.415165553 0.433960390
[85] 0.440690262 0.458692639 0.464027428 0.474214070 0.517224262 0.538532221
[91] 0.544911543 0.559945121 0.585390414 0.647030109 0.694095422 0.708385079
[97] 0.736486707 0.787250428 0.870874773 1.000000000
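If your data may contain NAs or be constant, a slightly more defensive variant is worth having (a sketch; the name range01_safe and the choice to return zeros for constant input are mine, not from the answer above):

range01_safe <- function(x) {
  rng <- range(x, na.rm = TRUE)
  # constant input would divide by zero; returning all zeros is an arbitrary choice
  if (diff(rng) == 0) return(rep(0, length(x)))
  (x - rng[1]) / diff(rng)
}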
Alternatively:
scale(x, center = min(x), scale = diff(range(x)))
(untested)
This has the feature that it attaches the original centering and scaling factors to the output as attributes, so they can be retrieved and used to un-scale the data later (if desired). It has the oddity that it always returns the result as a (columnwise) matrix, even if it was passed a vector; use drop(scale(...)) if you want a vector instead of a matrix. This usually doesn't matter, but the matrix format can occasionally cause trouble downstream, in my experience more often with tibbles/in the tidyverse, although I haven't stopped to examine exactly what's going wrong in these cases.
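A minimal sketch of how this would be used and later un-scaled via the attributes:

s <- sort(rexp(100))
z <- scale(s, center = min(s), scale = diff(range(s)))
s01 <- drop(z)                   # plain vector instead of a 1-column matrix
ctr <- attr(z, "scaled:center")  # min(s)
scl <- attr(z, "scaled:scale")   # diff(range(s))
s_back <- s01 * scl + ctr        # un-scales: recovers the original s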
This should do it:
reshape::rescaler.default(s, type = "range")
EDIT
I was curious about the performance of the two methods:
> system.time(replicate(100, range01(s)))
user system elapsed
0.56 0.12 0.69
> system.time(replicate(100, reshape::rescaler.default(s, type = "range")))
user system elapsed
0.53 0.18 0.70
Extracting the raw code from reshape::rescaler.default:
range02 <- function(x) {
(x - min(x, na.rm=TRUE)) / diff(range(x, na.rm=TRUE))
}
> system.time(replicate(100, range02(s)))
user system elapsed
0.56 0.12 0.68
This yields a similar result.
You can also make use of the caret package, which provides the preProcess function. It's as simple as this:
preProcValues <- preProcess(yourData, method = "range")
dataScaled <- predict(preProcValues, yourData)
More details are in the package help.
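For instance (a minimal sketch; yourData above is a placeholder, replaced here by a made-up data frame):

library(caret)
yourData <- data.frame(a = rexp(100), b = rnorm(100))
preProcValues <- preProcess(yourData, method = "range")
dataScaled <- predict(preProcValues, yourData)
sapply(dataScaled, range)  # each column now spans 0 to 1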
I created the following function in R:
ReScale <- function(x,first,last){(last-first)/(max(x)-min(x))*(x-min(x))+first}
Here, first is the start point and last is the end point.
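For example (a quick sketch of how you would call it):

s <- sort(rexp(100))
ReScale(s, 0, 1)   # same as range01(s) above
ReScale(s, 5, 10)  # smallest value maps to 5, largest to 10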
I generated a series of 10,000 random numbers through:
rand_x = rf(10000, 3, 5)
Now I want to produce another series that contains the variances at each point, i.e. the column should look like this:
[variance(first two numbers)]
[variance(first three numbers)]
[variance(first four numbers)]
[variance(first five numbers)]
...
[variance of 10,000 numbers]
I have written the code as:
c ( var(rand_x[1:1]) : var(rand_x[1:10000])
but I am only getting 157 elements in the column rather than 10,000. Can someone point out what I am doing wrong here?
An option is to loop over the indices from 2 to 10000 with sapply, extract the elements of 'rand_x' from position 1 to the looped index, apply var, and return a vector of the variances:
out <- sapply(2:10000, function(i) var(rand_x[1:i]))
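As a quick sanity check (my own sketch): the first element should equal the variance of the first two numbers, and there is one entry per index from 2 to 10000:

all.equal(out[1], var(rand_x[1:2]))  # TRUE
length(out)                          # 9999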
Your code creates a sequence incrementing by one, with the variance of the first two elements as the start value and the variance of the whole vector as the limit:
var(rand_x[1:2]):var(rand_x[1:n])
# [1] 0.9026262 1.9026262 2.9026262
## compare:
.9026262:3.33433
# [1] 0.9026262 1.9026262 2.9026262
What you want is to loop over the vector indices, using seq_along, to get the variances of sequences growing by one. To see what needs to be done, I first show a (rather slow) for loop:
vars <- numeric() ## initialize numeric vector
for (i in seq_along(rand_x)) {
vars[i] <- var(rand_x[1:i])
}
vars
# [1] NA 0.9026262 1.4786540 1.2771584 1.7877717 1.6095619
# [7] 1.4483273 1.5653797 1.8121144 1.6192175 1.4821020 3.5005254
# [13] 3.3771453 3.1723564 2.9464537 2.7620001 2.7086317 2.5757641
# [19] 2.4330738 2.4073546 2.4242747 2.3149455 2.3192964 2.2544765
# [25] 3.1333738 3.0343781 3.0354998 2.9230927 2.8226541 2.7258979
# [31] 2.6775278 2.6651541 2.5995346 3.1333880 3.0487177 3.0392603
# [37] 3.0483917 4.0446074 4.0463367 4.0465158 3.9473870 3.8537925
# [43] 3.8461463 3.7848464 3.7505158 3.7048694 3.6953796 3.6605357
# [49] 3.6720684 3.6580296
The first element has to be NA because the variance of one element is not defined (division by zero).
However, the for loop is slow. Since R is vectorized, we would rather use a function from the *apply family, e.g. vapply, which is much faster. In vapply we supply the template numeric(1) (or just 0) because the result of each iteration is of length one.
vars <- vapply(seq_along(rand_x), function(i) var(rand_x[1:i]), numeric(1))
vars
# [1] NA 0.9026262 1.4786540 1.2771584 1.7877717 1.6095619
# [7] 1.4483273 1.5653797 1.8121144 1.6192175 1.4821020 3.5005254
# [13] 3.3771453 3.1723564 2.9464537 2.7620001 2.7086317 2.5757641
# [19] 2.4330738 2.4073546 2.4242747 2.3149455 2.3192964 2.2544765
# [25] 3.1333738 3.0343781 3.0354998 2.9230927 2.8226541 2.7258979
# [31] 2.6775278 2.6651541 2.5995346 3.1333880 3.0487177 3.0392603
# [37] 3.0483917 4.0446074 4.0463367 4.0465158 3.9473870 3.8537925
# [43] 3.8461463 3.7848464 3.7505158 3.7048694 3.6953796 3.6605357
# [49] 3.6720684 3.6580296
Data:
n <- 50
set.seed(42)
rand_x <- rf(n, 3, 5)
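As a side note (my own sketch, not part of the answer above): for large n, recomputing var over growing prefixes costs O(n^2). The textbook one-pass variance formula gives an O(n) vectorized alternative via cumulative sums, though it can lose precision for badly scaled data, and the first element comes out NaN rather than NA:

idx <- seq_along(rand_x)
# var(x[1:i]) = (sum(x^2) - sum(x)^2 / i) / (i - 1), computed for all i at once
vars2 <- (cumsum(rand_x^2) - cumsum(rand_x)^2 / idx) / (idx - 1)
all.equal(vars[-1], vars2[-1])  # TRUE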
It may seem a silly question, but I have searched online and still did not find a sufficient reply.
My question is: suppose we have a matrix M and we use the scale() function on it. How can we extract the center and scale of each column with a line of code? (I know we can see the centers and scales, but my matrix has lots of columns, so it is cumbersome to do it manually.)
Any ideas? Many thanks!
You are looking for the attributes function:
set.seed(1)
mat = matrix(rnorm(1000),,10) # Suppose you have 10 columns
s = scale(mat) # scale your data
attributes(s)  # This gives you the means and the standard deviations:
$`dim`
[1] 100 10
$`scaled:center`
[1] 0.1088873669 -0.0378080766 0.0296735350 0.0516018586 -0.0391342406 -0.0445193567 -0.1995797418
[8] 0.0002549694 0.0100772648 0.0040650015
$`scaled:scale`
[1] 0.8981994 0.9578791 1.0342655 0.9916751 1.1696122 0.9661804 1.0808358 1.0973012 1.0883612 1.0548091
These values can also be obtained as:
colMeans(mat)
[1] 0.1088873669 -0.0378080766 0.0296735350 0.0516018586 -0.0391342406 -0.0445193567 -0.1995797418
[8] 0.0002549694 0.0100772648 0.0040650015
sqrt(diag(var(mat)))
[1] 0.8981994 0.9578791 1.0342655 0.9916751 1.1696122 0.9661804 1.0808358 1.0973012 1.0883612 1.0548091
attributes returns a list that you can subset the way you want. Alternatively, you can extract a single attribute directly with attr:
attr(s,"scaled:center")
[1] 0.1088873669 -0.0378080766 0.0296735350 0.0516018586 -0.0391342406 -0.0445193567 -0.1995797418
[8] 0.0002549694 0.0100772648 0.0040650015
attr(s,"scaled:scale")
[1] 0.8981994 0.9578791 1.0342655 0.9916751 1.1696122 0.9661804 1.0808358 1.0973012 1.0883612 1.0548091
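These attributes also let you un-scale the data later (a minimal sketch, using the mat and s defined above):

orig <- sweep(sweep(s, 2, attr(s, "scaled:scale"), "*"),
              2, attr(s, "scaled:center"), "+")
all.equal(orig, mat, check.attributes = FALSE)  # TRUE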
I have raster cell values with 5 digits, but I need to get rid of the first one; for instance, if the cell value is "31345" I need to make it "1345".
I am trying to use the calc() function from the raster package to do that by subtracting different numbers based on the raster cell value (since they are all numbers), like this:
correct.grid <- calc(grid, fun=function(x){ifelse(x < 40000, x-30000,
ifelse(x > 40000 & < 50000, x-40000,
ifelse(x > 50000 & < 60000, x-50000,
ifelse(x > 60000, x-60000, 0)))))})
I guess this is probably a terrible approach to the problem (I am not really good at programming); still, I ran into an error, I'm guessing because of how I am using ">" and "<" inside the function. Any ideas on how to make these ifelse calls work, or maybe a smarter approach to the problem?
Here is a sample of the unique values in my data, if it helps:
> unique(grid)
[1] 30057 30084 30207 30230 30235 30237 30280 30283 30311 30319 30320 30326 30350 30351 30352 30360
[17] 30384 30396 30415 30420 30447 30449 30452 30456 30478 30481 30497 30507 30522 30560 30562 30605
[33] 30606 30612 30638 30639 30645 30654 30657 30658 30662 30665 30678 30682 30701 30707 30714 30736
[49] 30740 30743 30749 30750 30823 30824 30841 30852 30862 30892 30896 30898 30915 30920 30928 30934
[65] 30956 30962 30978 30986 30998 31021 31022 31031 31042 31053 31055 31081 31085 31092 31097 31099
[81] 31114 31115 31122 31126 31129 31130 31131 31141 31157 31168 31171 40019 40026 40075 40197 40217
[97] 50342 50360 50379 50496 50720 50725 50732 50766 50798 50837 51073 51092 51397 53096 53110 53117
[113] 53118 53120 53124 60003 60005 60041 60485 60516 60655 60661 60825 61039 61174 61185 61187 61210
[129] 61221 61224 61227 61259 61287 61289 61290 61295
If you just want to remove the leftmost digit of each value, how about this:
First, let's load a raster object to work with:
library(raster)
# Load a raster object to work with
grid = system.file("external/test.grd", package="raster")
grid = raster(grid)
# Set up values to be whole numbers
values(grid) = round(values(grid)*100)
Now, remove the leftmost digit from every value in the raster:
values(grid) = as.numeric(substr(values(grid), 2, nchar(values(grid))))
Note that a value with one or more zeros after the leftmost digit will get shortened by more than one digit. For example, 60661 will become 661 and 30001 will become 1.
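As an aside (my own sketch, not part of the answer above): since the question's values are numeric and all have exactly 5 digits, dropping the leading digit is equivalent to taking the value modulo 10000, which avoids the round trip through character strings. Applied to the asker's original raster:

correct.grid <- calc(grid, fun = function(x) x %% 10000)  # 31345 -> 1345, 60661 -> 661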
I tried to find the two values in the following vector that are closest to 10. The expected values are 10.12099196 and 10.63054170. Your inputs would be appreciated.
[1] 0.98799517 1.09055728 1.20383713 1.32927166 1.46857509 1.62380423 1.79743107 1.99241551 2.21226576 2.46106916 2.74346924 3.06455219 3.42958354 3.84350238 4.31005838
[16] 4.83051356 5.40199462 6.01590035 6.65715769 7.30532785 7.93823621 8.53773241 9.09570538 9.61755743 10.12099196 10.63018180 11.16783243 11.74870531 12.37719092 13.04922392
[31] 13.75661322 14.49087793 15.24414627 16.00601247 16.75709565 17.46236358 18.06882072 18.51050094 18.71908344 18.63563523 18.22123225 17.46709279 16.40246292 15.09417699 13.63404124
[46] 12.11854915 10.63054170 9.22947285 7.95056000 6.80923943 5.80717982 4.93764782 4.18947450 3.54966795 3.00499094 2.54283599 2.15165780 1.82114213 1.54222565 1.30703661
[61] 1.10879707 0.94170986 0.80084308 0.68201911 0.58171175 0.49695298 0.42525021 0.36451350 0.31299262 0.26922281 0.23197860 0.20023468 0.17313291 0.14995459 0.13009730
[76] 0.11305559 0.09840485 0.08578789 0.07490387 0.06549894 0.05735864
Another alternative could be allowing the user to control for the "tolerance" in order to set what "closeness" is, this can be done by using a simple function:
close <- function(x, value, tol=NULL){
  if(!is.null(tol)){
    x[abs(x - value) <= tol]
  } else {
    x[order(abs(x - value))]
  }
}
Here x is a vector of values, value is the reference value for closeness, and tol is a numeric tolerance: if tol is NULL, the function returns all the values ordered by their closeness to value; otherwise it returns just the values within tol of value.
> close(x, value=10, tol=.7)
[1] 9.617557 10.120992 10.630182 10.630542
> close(x, value=10)
[1] 10.12099196 9.61755743 10.63018180 10.63054170 9.22947285 9.09570538 11.16783243
[8] 8.53773241 11.74870531 7.95056000 7.93823621 12.11854915 12.37719092 7.30532785
[15] 13.04922392 6.80923943 6.65715769 13.63404124 13.75661322 6.01590035 5.80717982
[22] 14.49087793 5.40199462 4.93764782 15.09417699 4.83051356 15.24414627 4.31005838
[29] 4.18947450 16.00601247 3.84350238 16.40246292 3.54966795 3.42958354 16.75709565
[36] 3.06455219 3.00499094 2.74346924 2.54283599 17.46236358 17.46709279 2.46106916
[43] 2.21226576 2.15165780 1.99241551 18.06882072 1.82114213 1.79743107 18.22123225
[50] 1.62380423 1.54222565 18.51050094 1.46857509 18.63563523 1.32927166 1.30703661
[57] 18.71908344 1.20383713 1.10879707 1.09055728 0.98799517 0.94170986 0.80084308
[64] 0.68201911 0.58171175 0.49695298 0.42525021 0.36451350 0.31299262 0.26922281
[71] 0.23197860 0.20023468 0.17313291 0.14995459 0.13009730 0.11305559 0.09840485
[78] 0.08578789 0.07490387 0.06549894 0.05735864
In the first example I defined "closeness" as a difference of at most 0.7 between value and each element in x. In the second example the function close returns a vector in which the first elements are the closest to the value given in value and the last are the farthest from it.
Since my solution does not provide an easy (practical) way to find tol, as @Arun pointed out, one way to find the closest values would be setting tol=NULL and asking for the exact number of close values, as in:
> close(x, value=10)[1:3]
[1] 10.120992 9.617557 10.630182
This shows the three values in x closest to 10.
I can't think of a way without using sort. However, you can speed it up by using a partial sort.
x[abs(x-10) %in% sort(abs(x-10), partial=1:2)[1:2]]
# [1] 9.617557 10.120992
In case the same values are present more than once, you'll get all of them here. So you can either wrap this in unique or use match instead, as follows:
x[match(sort(abs(x-10), partial=1:2)[1:2], abs(x-10))]
# [1] 10.120992 9.617557
dput output:
dput(x)
c(0.98799517, 1.09055728, 1.20383713, 1.32927166, 1.46857509,
1.62380423, 1.79743107, 1.99241551, 2.21226576, 2.46106916, 2.74346924,
3.06455219, 3.42958354, 3.84350238, 4.31005838, 4.83051356, 5.40199462,
6.01590035, 6.65715769, 7.30532785, 7.93823621, 8.53773241, 9.09570538,
9.61755743, 10.12099196, 10.6301818, 11.16783243, 11.74870531,
12.37719092, 13.04922392, 13.75661322, 14.49087793, 15.24414627,
16.00601247, 16.75709565, 17.46236358, 18.06882072, 18.51050094,
18.71908344, 18.63563523, 18.22123225, 17.46709279, 16.40246292,
15.09417699, 13.63404124, 12.11854915, 10.6305417, 9.22947285,
7.95056, 6.80923943, 5.80717982, 4.93764782, 4.1894745, 3.54966795,
3.00499094, 2.54283599, 2.1516578, 1.82114213, 1.54222565, 1.30703661,
1.10879707, 0.94170986, 0.80084308, 0.68201911, 0.58171175, 0.49695298,
0.42525021, 0.3645135, 0.31299262, 0.26922281, 0.2319786, 0.20023468,
0.17313291, 0.14995459, 0.1300973, 0.11305559, 0.09840485, 0.08578789,
0.07490387, 0.06549894, 0.05735864)
I'm not sure your question is clear, so here's another approach. To find the value closest to your first desired value, 10.12099196, subtract it from the vector, take the absolute value, and then find the index of the smallest element. Explicitly:
delx <- abs(10.12099196 - x)
min.index <- which.min(delx) #returns index of first minimum if there are duplicates
x[min.index] #gets you the value itself
Apologies if this was not the intent of your question.
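If you do want both values at once, the same idea extends to the k nearest elements (a small sketch, using k = 2 and target 10; the output matches the partial-sort answer above):

k <- 2
x[order(abs(x - 10))[1:k]]
# [1] 10.120992  9.617557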