I'm not sure whether this is a meaningful question, but I don't understand how (and whether) it is possible to combine a list of ppp objects into a single ppp object. For example:
library(spatstat)
#> Loading required package: spatstat.data
#> Loading required package: nlme
#> Loading required package: rpart
#>
#> spatstat 1.62-2 (nickname: 'Shape-shifting lizard')
#> For an introduction to spatstat, type 'beginner'
ppp1 <- ppp(runif(20), runif(20), c(0,1), c(0,1))
ppp2 <- ppp(runif(20), runif(20), c(0,1), c(0,1))
do.call("rbind", list(ppp1, ppp2))
#> window n x y markformat
#> [1,] List,4 20 Numeric,20 Numeric,20 "none"
#> [2,] List,4 20 Numeric,20 Numeric,20 "none"
do.call("ppp", list(ppp1, ppp2))
#> Error in ppp(structure(list(window = structure(list(type = "rectangle", : is.numeric(x) is not TRUE
Created on 2020-01-30 by the reprex package (v0.3.0)
I think the result should be a ppp object created by rbinding the coordinates and marks, with the window being the union of the owin objects. Is that a reasonable idea? Is it already coded or documented somewhere?
It was a stupid question; it's all documented here: https://rdrr.io/cran/spatstat/man/superimpose.html.
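For reference, a minimal sketch with superimpose() (assuming the ppp1 and ppp2 defined above and spatstat loaded):
# superimpose() combines the patterns into a single ppp object;
# its W argument controls how the windows are combined
combined <- superimpose(ppp1, ppp2)
npoints(combined)
#> [1] 40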
Using testthat, I want to check if two plotly objects are the same.
Reproducible data:
library(plotly)
p1 <- plot_ly(economics, x = ~pop)
p2 <- plot_ly(economics, x = ~pop)
I am searching for code equivalent to testthat::expect_equivalent in priority (it should be similar for the testthat::expect_equal and testthat::expect_identical functions).
testthat::expect_equivalent(p1, p2)
Error: p1 not equivalent to p2.
Component “x”: Component “visdat”: Component 1: Component “id”: 1 string mismatch
Component “x”: Component “visdat”: Component 1: Component “p”: Component “cur_data”: 1 string mismatch
Component “x”: Component “cur_data”: 1 string mismatch
I have found a way to test for equality of the mismatching components.
I overwrite the components that are effectively equal (visdat here) but contain random ids in the plotly object, then test for equality.
testthat::expect_equal(
  object = p2$x$visdat[[1]](),
  expected = p1$x$visdat[[1]]()
)
p2$x$visdat[1] <- p1$x$visdat[1]
names(p2$x$visdat) <- names(p1$x$visdat)
p2$x$cur_data <- p1$x$cur_data
testthat::expect_equivalent(p1, p2)
The testthat code of the plotly package does not help: https://rdrr.io/cran/plotly/src/tests/testthat/test-plotly.R
Is there a more straightforward way to check if two plotly objects are equivalent?
Thanks a lot for any help
I find it hard to keep up with what is current and what is deprecated in testthat, but I believe currently testthat::expect_equal will call waldo::compare to compare the two objects.
The waldo::compare function will make use of a compare_proxy method on the object to remove unimportant differences before making the comparison. So what you need to do is find or write a compare_proxy.plotly method. It only sees one object at a time, so you can't transfer names from one object to the other, but you can substitute standard names or remove the names completely.
For example, the proxy below works on your example. A more extensive example might need other modifications:
library(waldo)
library(plotly)
#> Loading required package: ggplot2
#>
#> Attaching package: 'plotly'
#> The following object is masked from 'package:ggplot2':
#>
#> last_plot
#> The following object is masked from 'package:stats':
#>
#> filter
#> The following object is masked from 'package:graphics':
#>
#> layout
p1 <- plot_ly(economics, x = ~pop)
p2 <- plot_ly(economics, x = ~pop)
compare_proxy.plotly <- function(x, path = "x") {
  # standardize the randomly generated name of the visdat component
  names(x$x$visdat) <- "proxy"
  e <- environment(x$x$visdat$proxy)
  # Maybe we should follow the recursion, but not now.
  e$p <- NULL
  # replace the random ids stored in the closure and in the plot object
  e$id <- "proxy"
  x$x$cur_data <- "proxy"
  names(x$x$attrs) <- "proxy"
  list(object = x, path = paste0("compare_proxy(", path, ")"))
}
waldo::compare(p1, p2)
#> ✔ No differences
testthat::local_edition(3)
testthat::expect_equal(p1, p2)
# Use a different but identical dataset
economics2 <- economics
p3 <- plot_ly(economics2, x = ~pop)
testthat::expect_equal(p1, p3)
# Use a slightly different dataset
economics2$pce[1] <- 0
p4 <- plot_ly(economics2, x = ~pop)
testthat::expect_equal(p1, p4)
#> Error: `p1` (`actual`) not equal to `p4` (`expected`).
#>
#> environment(compare_proxy(actual)$x$visdat$proxy)$data vs environment(compare_proxy(expected)$x$visdat$proxy)$data
#> pce
#> - environment(compare_proxy(actual)$x$visdat$proxy)$data[1, ] 506.7
#> + environment(compare_proxy(expected)$x$visdat$proxy)$data[1, ] 0.0
#> environment(compare_proxy(actual)$x$visdat$proxy)$data[2, ] 509.8
#> environment(compare_proxy(actual)$x$visdat$proxy)$data[3, ] 515.6
#> environment(compare_proxy(actual)$x$visdat$proxy)$data[4, ] 512.2
#>
#> environment(compare_proxy(actual)$x$visdat$proxy)$data$pce[1:4] vs environment(compare_proxy(expected)$x$visdat$proxy)$data$pce[1:4]
#> - 507
#> + 0
#> 510
#> 516
#> 512
#>
#> environment(compare_proxy(actual)$x$visdat$proxy)$plotlyVisDat vs environment(compare_proxy(expected)$x$visdat$proxy)$plotlyVisDat
#> pce
#> - environment(compare_proxy(actual)$x$visdat$proxy)$plotlyVisDat[1, ] 506.7
#> + environment(compare_proxy(expected)$x$visdat$proxy)$plotlyVisDat[1, ] 0.0
#> environment(compare_proxy(actual)$x$visdat$proxy)$plotlyVisDat[2, ] 509.8
#> environment(compare_proxy(actual)$x$visdat$proxy)$plotlyVisDat[3, ] 515.6
#> environment(compare_proxy(actual)$x$visdat$proxy)$plotlyVisDat[4, ] 512.2
#>
#> environment(compare_proxy(actual)$x$visdat$proxy)$plotlyVisDat$pce[1:4] vs environment(compare_proxy(expected)$x$visdat$proxy)$plotlyVisDat$pce[1:4]
#> - 507
#> + 0
#> 510
#> 516
#> 512
Created on 2023-01-25 with reprex v2.0.2
I would like to change the resolution of a raster. For example, let's take
this Landsat 7 image at ~30 m resolution.
library(terra)
#> terra 1.5.21
f <- system.file("tif/L7_ETMs.tif", package = "stars")
r <- rast(f)
# 30m x 30m resolution
res(r)
#> [1] 28.5 28.5
plot(r, 1)
I can use aggregate() with an integer factor such as:
# 10 * 28.5
r2 <- aggregate(r, fact = 10)
res(r2)
#> [1] 285 285
plot(r2, 1)
My question is: how can I specify an exact resolution? For example, I would
like to have a pixel resolution of 1.234 km (1234 m).
fact <- 1234 / 28.5
fact
#> [1] 43.29825
r3 <- aggregate(r, fact = fact)
res(r3)
#> [1] 1225.5 1225.5
plot(r3, 1)
The documentation says that fact should be an integer, so here fact is
floored to 43.
res(aggregate(r, 43))
#> [1] 1225.5 1225.5
Any ways to have an exact resolution of 1234 m?
Created on 2022-04-28 by the reprex package (v2.0.1)
I came up with this solution which seems to give me what I need.
library(terra)
#> terra 1.5.21
f <- system.file("tif/L7_ETMs.tif", package = "stars")
r <- rast(f)
plot(r, 1)
r2 <- r
res(r2) <- 1234
r2 <- resample(r, r2)
plot(r2, 1)
res(r2)
#> [1] 1234 1234
Created on 2022-04-28 by the reprex package (v2.0.1)
I also propose (as described in the terra vignette) that you first aggregate the raster as close as possible to the target resolution and then resample. Resampling can be done, e.g., using a template raster to guarantee the correct crs, dimensions, etc.
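A minimal sketch of that aggregate-then-resample approach (an added illustration reusing the Landsat file from the question; the round() factor and template construction are assumptions, not quoted from the vignette):
library(terra)
f <- system.file("tif/L7_ETMs.tif", package = "stars")
r <- rast(f)
# aggregate by the nearest integer factor to get close to the target resolution
r_agg <- aggregate(r, fact = round(1234 / res(r)[1]))
# template raster with the exact target resolution, same extent and crs
template <- rast(extent = ext(r), resolution = 1234, crs = crs(r))
# resample the aggregated raster onto the template
r_exact <- resample(r_agg, template)
res(r_exact)  # should now be exactly 1234 x 1234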
Preamble
I've looked through other questions (1, 2, 3) describing the use and function of set.seed() and .Random.seed and can't find this particular issue documented so here it is as a question:
Initial Observation
When I inspect the .Random.seed vectors generated as a result of set.seed(1) and set.seed(2), I find that the first two elements are always the same (10403 & 624) while the rest do not appear to be. See the example below.
My questions
Is that expected?
Why does it happen?
Will this have any untoward consequences for any random simulation I
might do based on it?
Reproducible Example
f <- function(s1, s2) {
  set.seed(s1)
  r1 <- .Random.seed
  set.seed(s2)
  r2 <- .Random.seed
  print(r1[1:3])
  print(r2[1:3])
  plot(r1, r2)
}
f(1, 2)
#> [1] 10403 624 -169270483
#> [1] 10403 624 -1619336578
Created on 2022-01-04 by the reprex package (v2.0.1)
Note that the first two elements of each .Random.seed are identical but the remainder is not. You can see in the scatterplot that it's just a random cloud as expected.
Expanding helpful comments from @r2evans and @Dave2e into an answer.
1) .Random.seed[1]
From ?.Random.seed, it says:
".Random.seed is an integer vector whose first element codes the
kind of RNG and normal generator. The lowest two decimal digits are in
0:(k-1) where k is the number of available RNGs. The hundreds
represent the type of normal generator (starting at 0), and the ten
thousands represent the type of discrete uniform sampler."
Therefore the first value doesn't change unless one changes the generator method (RNGkind).
Here is a small demonstration of this for each of the available RNGkinds:
library(tidyverse)
# available RNGkind options
kinds <- c(
"Wichmann-Hill",
"Marsaglia-Multicarry",
"Super-Duper",
"Mersenne-Twister",
"Knuth-TAOCP-2002",
"Knuth-TAOCP",
"L'Ecuyer-CMRG"
)
# test over multiple seeds
seeds <- c(1:3)
f <- function(kind, seed) {
  # set seed with simulation parameters
  set.seed(seed = seed, kind = kind)
  # check value of first element in .Random.seed
  return(.Random.seed[1])
}
# run on simulated conditions and compare value over different seeds
expand_grid(kind = kinds, seed = seeds) %>%
pmap(f) %>%
unlist() %>%
matrix(
ncol = length(seeds),
byrow = T,
dimnames = list(kinds, paste0("seed_", seeds))
)
#> seed_1 seed_2 seed_3
#> Wichmann-Hill 10400 10400 10400
#> Marsaglia-Multicarry 10401 10401 10401
#> Super-Duper 10402 10402 10402
#> Mersenne-Twister 10403 10403 10403
#> Knuth-TAOCP-2002 10406 10406 10406
#> Knuth-TAOCP 10404 10404 10404
#> L'Ecuyer-CMRG 10407 10407 10407
Created on 2022-01-06 by the reprex package (v2.0.1)
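As a quick decoding sketch (an added illustration following the encoding quoted from ?.Random.seed above), the default value 10403 seen in the question breaks down as follows:
code <- 10403
code %% 100             # 3 -> RNG kind (0-based): Mersenne-Twister
(code %/% 100) %% 100   # 4 -> normal generator: Inversion
code %/% 10000          # 1 -> discrete uniform sampler: Rejection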
2) .Random.seed[2]
At least for the default "Mersenne-Twister" method, .Random.seed[2] is an index that indicates the current position in the random set. From the docs:
The ‘seed’ is a 624-dimensional set of 32-bit integers plus a current
position in that set.
This is updated when random processes using the seed are executed. However, for the other methods the documentation doesn't mention anything like this, and there doesn't appear to be a clear trend in the same way.
See below for an example of changes in .Random.seed[2] over iterative random process after set.seed().
library(tidyverse)
# available RNGkind options
kinds <- c(
"Wichmann-Hill",
"Marsaglia-Multicarry",
"Super-Duper",
"Mersenne-Twister",
"Knuth-TAOCP-2002",
"Knuth-TAOCP",
"L'Ecuyer-CMRG"
)
# create function to run random process and report .Random.seed[2]
t <- function(n = 1) {
  p <- .Random.seed[2]
  runif(n)
  p
}
# create function to set seed and iterate a random process
f2 <- function(kind, seed = 1, n = 5) {
  set.seed(seed = seed, kind = kind)
  replicate(n, t())
}
# set simulation parameters
trials <- 5
seeds <- 1:2
x <- expand_grid(kind = kinds, seed = seeds, n = trials)
# evaluate and report
x %>%
pmap_dfc(f2) %>%
mutate(n = paste0("trial_", 1:trials)) %>%
pivot_longer(-n, names_to = "row") %>%
pivot_wider(names_from = "n") %>%
select(-row) %>%
bind_cols(x[,1:2], .)
#> # A tibble: 14 x 7
#> kind seed trial_1 trial_2 trial_3 trial_4 trial_5
#> <chr> <int> <int> <int> <int> <int> <int>
#> 1 Wichmann-Hill 1 23415 8457 23504 2.37e4 2.28e4
#> 2 Wichmann-Hill 2 21758 27800 1567 2.58e4 2.37e4
#> 3 Marsaglia-Multicarry 1 1280795612 945095059 14912928 1.34e9 2.23e8
#> 4 Marsaglia-Multicarry 2 -897583247 -1953114152 2042794797 1.39e9 3.71e8
#> 5 Super-Duper 1 1280795612 -1162609806 -1499951595 5.51e8 6.35e8
#> 6 Super-Duper 2 -897583247 224551822 -624310 -2.23e8 8.91e8
#> 7 Mersenne-Twister 1 624 1 2 3 4
#> 8 Mersenne-Twister 2 624 1 2 3 4
#> 9 Knuth-TAOCP-2002 1 166645457 504833754 504833754 5.05e8 5.05e8
#> 10 Knuth-TAOCP-2002 2 967462395 252695483 252695483 2.53e8 2.53e8
#> 11 Knuth-TAOCP 1 1050415712 999978161 999978161 1.00e9 1.00e9
#> 12 Knuth-TAOCP 2 204052929 776729829 776729829 7.77e8 7.77e8
#> 13 L'Ecuyer-CMRG 1 1280795612 -169270483 -442010614 4.71e8 1.80e9
#> 14 L'Ecuyer-CMRG 2 -897583247 -1619336578 -714750745 2.10e9 -9.89e8
Created on 2022-01-06 by the reprex package (v2.0.1)
Here you can see that, for the Mersenne-Twister method, .Random.seed[2] wraps from its maximum of 624 back to 1 and then increases by the size of each random draw, and that this is the same for set.seed(1) and set.seed(2). However, the same trend is not seen in the other methods. To illustrate this, see that runif(1) increments .Random.seed[2] by 1 while runif(2) increments it by 2:
# create function to run random process and report .Random.seed[2]
t <- function(n = 1) {
  p <- .Random.seed[2]
  runif(n)
  p
}
set.seed(1, kind = "Mersenne-Twister")
replicate(9, t(1))
#> [1] 624 1 2 3 4 5 6 7 8
set.seed(1, kind = "Mersenne-Twister")
replicate(5, t(2))
#> [1] 624 2 4 6 8
Created on 2022-01-06 by the reprex package (v2.0.1)
3) Sequential Randoms
Because the index or state of .Random.seed (apparently for all the RNG methods) advances according to the size of the 'random draw' (the number of random values generated from the .Random.seed), it is possible to generate the same series of random numbers from the same seed in different-sized increments. Furthermore, as long as you run the same random process at the same point in the sequence after setting the same seed, it seems that you will get the same result. Observe the following example:
# draw 3 at once
set.seed(1, kind = "Mersenne-Twister")
sample(100, 3, T)
#> [1] 68 39 1
# repeat single draw 3 times
set.seed(1, kind = "Mersenne-Twister")
sample(100, 1)
#> [1] 68
sample(100, 1)
#> [1] 39
sample(100, 1)
#> [1] 1
# draw 1, do something else, draw 1 again
set.seed(1, kind = "Mersenne-Twister")
sample(100, 1)
#> [1] 68
runif(1)
#> [1] 0.5728534
sample(100, 1)
#> [1] 1
Created on 2022-01-06 by the reprex package (v2.0.1)
4) Correlated Randoms
As we saw above, two random processes run at the same point after setting the same seed are expected to give the same result. However, even when you place constraints on how similar the results can be (e.g. by changing the mean of rnorm(), or even by using different functions), it seems that the results are still perfectly correlated within their respective ranges.
# same function with different constraints
set.seed(1, kind = "Mersenne-Twister")
a <- runif(50, 0, 1)
set.seed(1, kind = "Mersenne-Twister")
b <- runif(50, 10, 100)
plot(a, b)
# different functions
set.seed(1, kind = "Mersenne-Twister")
d <- rnorm(50)
set.seed(1, kind = "Mersenne-Twister")
e <- rlnorm(50)
plot(d, e)
Created on 2022-01-06 by the reprex package (v2.0.1)
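As a quick numeric check (an added note, reusing a, b, d, and e from the code above): b is a linear rescaling of the same underlying uniform draws as a, and rlnorm() applies a monotone transform of the same normal draws as d, so:
cor(a, b)                       # exactly 1 (linear relationship)
cor(d, e, method = "spearman")  # exactly 1 (monotone, hence rank-correlated)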
I am writing some unit tests for an R package using testthat. I would like to compare two objects where not all the details need to match, but they must maintain equivalence with respect to a set of functions of interest.
For a simple example, I want to use something like
library(testthat)
x <- 1:4
y <- matrix(4:1, nrow=2)
test_that("objects behave similarly", {
expect_equal_applied(x, y, .fn=list(sum, prod))
## which would be shorthand for:
## expect_equal(sum(x), sum(y))
## expect_equal(prod(x), prod(y))
})
In practice, x and y might be S3 objects, not simply base data structures.
Obviously, this is simple to implement, but I'd prefer something idiomatic if it already exists. So, the question is: does testthat implement an expectation function like this?
Searching through the API, nothing struck me as fitting this description, but it seems like a natural pattern. Or maybe there is a reason, which I'm overlooking, why such a pattern is objectionable.
Looking at the documentation, {testthat} currently (third edition) has no function like expect_equal_applied. But, as you already mention, we can construct such a function easily:
library(testthat)
x <- 1:4
y <- matrix(4:1, nrow=2)
expect_equal_applied <- function(object, expected, fns) {
  fns <- purrr::map(fns, rlang::as_function)
  purrr::map(fns, ~ expect_equal(.x(object), .x(expected)))
}
test_that("objects behave similarly", {
  expect_equal_applied(x, y, fns = list(sum, prod))
})
#> Test passed
x <- 1:3
test_that("objects behave similarly", {
expect_equal_applied(x, y, fns = list(sum, prod))
})
#> -- Failure (<text>:19:3): objects behave similarly -----------------------------
#> .x(object) not equal to .x(expected).
#> 1/1 mismatches
#> [1] 6 - 10 == -4
#> Backtrace:
#> 1. global::expect_equal_applied(x, y, fns = list(sum, prod))
#> 2. purrr::map(fns, ~expect_equal(.x(object), .x(expected)))
#> 3. .f(.x[[i]], ...)
#> 4. testthat::expect_equal(.x(object), .x(expected))
#>
#> -- Failure (<text>:19:3): objects behave similarly -----------------------------
#> .x(object) not equal to .x(expected).
#> 1/1 mismatches
#> [1] 6 - 24 == -18
#> Backtrace:
#> 1. global::expect_equal_applied(x, y, fns = list(sum, prod))
#> 2. purrr::map(fns, ~expect_equal(.x(object), .x(expected)))
#> 3. .f(.x[[i]], ...)
#> 4. testthat::expect_equal(.x(object), .x(expected))
Created on 2021-09-17 by the reprex package (v2.0.1)
Regarding why such a function seems to be missing from {testthat}: I think it isn't really necessary, given that we can construct it easily with lapply or map.
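For example, a base-R version of the same idea might look like this (a quick sketch, not part of the testthat API):
expect_equal_applied <- function(object, expected, fns) {
  # apply each summary function to both objects and compare the results
  invisible(lapply(fns, function(f) testthat::expect_equal(f(object), f(expected))))
}
testthat::test_that("objects behave similarly", {
  expect_equal_applied(1:4, matrix(4:1, nrow = 2), fns = list(sum, prod))
})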
I have a script using the R package 'concaveman', but due to issues on the Ubuntu platform that I need to run the code on, I cannot install this package (I have spent three days trying to solve it). So I am looking for an alternative.
I have a random set of points, ranging from 3 to 1000s of points. I want to draw a convex hull/polygon around the outermost points (the step after would be to rasterize it). I have been trying to do this by converting the points to a raster and then using rasterToPolygons, but on rare occasions points fall in the same raster cell, leaving only two unique points. concaveman would turn this into a linear polygon (which is what I want, but without using concaveman). Here is the input data that would be problematic:
x <- structure(list(x = c(166.867, 166.867, 167.117, 166.8667), y = c(-20.6333,
-20.633, -20.833, -20.6333)), row.names = c(NA, -4L), class = c("tbl_df",
"tbl", "data.frame"))
This is what I tried (with the error I get):
library(sp)
library(raster)
SP_pt <- SpatialPoints(x, proj4string = CRS("+proj=longlat +ellps=WGS84 +towgs84=0,0,0,0,0,0,0 +no_defs"))
gridded(SP_pt) <- T
SP_pt_R <- raster(SP_pt)
SP_poly <- rasterToPolygons(SP_pt_R, dissolve = T)
suggested tolerance minimum: 0.333333
Error in points2grid(points, tolerance, round) :
dimension 1 : coordinate intervals are not constant
You can use chull in base R:
sp::Polygon(x[c(chull(x), chull(x)[1]), ])
#> An object of class "Polygon"
#> Slot "labpt":
#> [1] 166.95023 -20.69977
#>
#> Slot "area":
#> [1] 6.75e-05
#>
#> Slot "hole":
#> [1] FALSE
#>
#> Slot "ringDir":
#> [1] 1
#>
#> Slot "coords":
#> x y
#> [1,] 167.1170 -20.8330
#> [2,] 166.8667 -20.6333
#> [3,] 166.8670 -20.6330
#> [4,] 167.1170 -20.8330
Or if you want to use the sf package:
sf::st_polygon(list(as.matrix(x[c(chull(x), chull(x)[1]),])))
#> POLYGON ((167.117 -20.833, 166.8667 -20.6333, 166.867 -20.633, 167.117 -20.833))
You can use dismo::convHull and then use predict or rasterize
library(dismo)
xy <- cbind(x=c(1,1,2,2), y=c(3,2,1,2))
# must be matrix or data.frame, not a tbl
ch <- convHull(xy)
plot(ch)
# predict
r <- raster(xmn=0, xmx=5, ymn=0, ymx=5, res=.25)
p <- predict(ch, r)
# Or rasterize
sp <- polygons(ch)
x <- rasterize(sp, r)
For faster rasterization you can use terra
library(terra)
v <- vect(sp)
rr <- rast(r)
y <- rasterize(v, rr)
To convert the sp object to sf (with the sf package loaded):
library(sf)
sf_poly <- as(sp, "sf")
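As a final sketch (an added combination of the chull and terra approaches above, not from the original answers; x is assumed to be the original data frame from the question, and the 0.01-degree resolution is arbitrary), the base-R hull can be rasterized directly with terra:
library(terra)
hull <- as.matrix(x[c(chull(x), chull(x)[1]), ])       # closed ring of hull vertices
v <- vect(hull, type = "polygons", crs = "EPSG:4326")  # hull as a SpatVector polygon
template <- rast(extent = ext(v), resolution = 0.01, crs = "EPSG:4326")
r_hull <- rasterize(v, template)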