Calculating proportions of land use per grid cell in R

I want to perform a binomial GLM with my data, which is based on the proportions of land use type per used or unused cell on a map, with used being where an owl was heard and unused being where no owl was heard. I now have two polygon layers, one containing all the used areas and their land types and one containing all the unused areas and their land types.
I am unsure how to perform this analysis in R. I probably first need to put a grid over the layers to divide the map into cells that are either used or unused. Then I need to calculate the proportion of each land use type per grid cell.
Are there any packages I could use for this? How would you go about this?
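One possible approach (a minimal sketch, not a definitive recipe) uses the sf package to build the grid and intersect it with the polygons, and dplyr to compute the proportions. The file names and the land_type column are placeholders for your own layers:

library(sf)
library(dplyr)

# Hypothetical file and attribute names -- substitute your own
used   <- st_read("used_areas.shp")
unused <- st_read("unused_areas.shp")
used$status   <- "used"
unused$status <- "unused"
polys <- rbind(used[, c("land_type", "status")],
               unused[, c("land_type", "status")])

# Overlay a regular grid; use a projected CRS so cell size and area are in metres
grid <- st_make_grid(polys, cellsize = 1000)   # 1 km cells, adjust as needed
grid <- st_sf(cell_id = seq_along(grid), geometry = grid)

# Intersect grid cells with the polygons and compute the area of each piece
pieces <- st_intersection(grid, polys)
pieces$area <- as.numeric(st_area(pieces))

# Proportion of each land-use type within each grid cell
props <- pieces |>
  st_drop_geometry() |>
  group_by(cell_id, status, land_type) |>
  summarise(area = sum(area), .groups = "drop_last") |>
  mutate(prop = area / sum(area)) |>
  ungroup()

From props you can pivot the proportions to one column per land-use type (e.g. with tidyr::pivot_wider()) and then fit the binomial GLM, e.g. glm(status == "used" ~ ..., family = binomial).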

Related

Calculate landscape metrics over other raster column in R (landscapemetrics package)

In R, using the landscapemetrics package, I want to calculate landscape metrics for a raster file that contains different vegetation types. When I import the raster file into R using the stack function, the file contains one layer with multiple levels.
Subsequently, when I run a function to calculate a landscape metric, or plot the raster, it works with the "Value" level/column. Instead, I want it to calculate the metric over the "Vegetation_Type" level/column directly, but I do not know how to do this. Currently, when I calculate for example the amount of core area for each vegetation type, the result is a table that presents "class = 1-7" with the specific core area of each class, rather than "Vegetation_Type = Hummock". I want the numbers 1-7 in the "class" column to be substituted by the vegetation types (e.g. Hummock, N, K etc.). Does anyone know how to do this?
Thank you so much in advance, and sorry for the poorly structured post. I am still new here and do not really know how best to structure my questions!
Sincerely,
Jasper
The landscapemetrics package will always use the numeric ID of each class to make sure the output is type stable, i.e., always identical regardless of the input.
But since the output is simply a tibble, you should be able to just join the information using e.g. dplyr::left_join().
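For example (a minimal sketch; the lookup table and the metric chosen are assumptions -- substitute your own raster object, class IDs, and vegetation names):

library(landscapemetrics)
library(dplyr)

# Lookup table mapping numeric class IDs to vegetation types.
# The names below are placeholders based on the question (Hummock, N, K, ...)
veg_lookup <- tibble(
  class = 1:7,
  Vegetation_Type = c("Hummock", "N", "K", "Type4", "Type5", "Type6", "Type7")
)

# Calculate a class-level metric (here total core area) and attach the
# vegetation type names to the numeric 'class' column of the output tibble
result <- lsm_c_tca(my_raster) |>   # 'my_raster' is your imported raster
  left_join(veg_lookup, by = "class")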

Is it possible to change the exams result to a different scale?

We created an exam with a total of 29 points. However, we would like to present the HTML to students using a 0 to 20 scale (the Points row of the HTML produced by the nops_eval function).
The corresponding points for each exercise (summing to a total of 29) should continue to appear in the Evaluation grid.
Keeping the points and evaluation for the exercises as they were and just changing the scaling of the sum of the points is not possible. If you want to do so for the HTML reports, I guess it is easiest to read the individual .html files into R and replace the points with what you want to put there. readLines(), gsub(), and writeLines() should be useful here; see the sketch below.
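A rough sketch of that idea (the output location, file layout, and regular expression are all assumptions -- inspect one of your .html reports first and adapt them):

scale_factor <- 20 / 29
files <- Sys.glob("nops_eval/*.html")      # hypothetical output location
for (f in files) {
  html <- readLines(f)
  i <- grep("Points", html)[1] + 1         # line holding the total points, assumed
  old <- gsub("[^0-9.]", "", html[i])      # extract the number, e.g. "23"
  new <- round(as.numeric(old) * scale_factor, 1)
  html[i] <- gsub(old, as.character(new), html[i], fixed = TRUE)
  writeLines(html, f)
}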
However, what is possible is to change the points associated with each exercise in the entire evaluation. To do so you can use
nops_eval(..., points = ...)
where points must be a vector of the same length as the number of exercises (22 in your case). This overrules the number of points that was previously set within the exercises (via expoints) or in exams2nops(..., points = ...).
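For instance (a hypothetical call: the file names are placeholders, and the points vector must have one entry per exercise -- 22 here, keeping the original total of 29):

nops_eval(
  register  = "registration.csv",
  solutions = "nops_solutions.rds",
  scans     = "nops_scan_20240101.zip",
  points    = c(rep(1, 15), rep(2, 7))     # 15 x 1 + 7 x 2 = 29 points
)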

Select multiple points on scatterplot, save selection to new table

I have a very large data set (~250,000 records) that I have used to create a linear model. I have plotted predicted vs. actual values.
I tried to use identify() to select the two clusters of values near the center of the graph and coord() to identify them. There are a few problems here: 1) there are many, many more points in those clusters than I can click on and identify individually, and 2) I need to know ALL of them, select all of them somehow without selecting any others, and subset my data to just those points.
This model was created using a satellite image paired with ancillary spatial data. Each entry in the table corresponds to a particular point on the map. I need to identify where these two clusters are located on the map. My data frame includes the FID (which I can use to link back to the map), the original predictor, the response, and my predicted values.
I appreciate any help!
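One way to do this (a hedged sketch; the data frame df and its predicted/actual columns are placeholders for your own names) is to skip identify() and instead capture a bounding box around each cluster, then subset every point inside it:

# Click two opposite corners of a cluster on the open plot
corners <- locator(2)
in_cluster <- df$predicted >= min(corners$x) & df$predicted <= max(corners$x) &
              df$actual    >= min(corners$y) & df$actual    <= max(corners$y)
# This keeps ALL points inside the box, including the FID column for the map link
cluster_pts <- df[in_cluster, ]
write.csv(cluster_pts, "cluster_points.csv", row.names = FALSE)

This assumes predicted values are on the x-axis and actual values on the y-axis; swap them if your plot is the other way around.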

Tableau map shapes overlapped

I am trying to render some geographic data onto the map in Tableau. However, some data points are located at the same place, so the shape images of those data points overlap. By clicking on a shape, you can only get the top one.
How can we distinguish the overlapping data points in Tableau? I know we can manually exclude the top data point to see the one beneath, but is there another way, for example, a drop-down list in the right-click menu to select among the overlapped data points?
Thank you!
There are a couple of ways to deal with this issue.
Some choices you can try are:
Add some transparency to the marks by editing the color shelf properties. That way at least you get a visual indication when there are multiple marks stacked on top of each other. This approach can be considered a poor man's heat map if you have many points in different areas as the denser/darker sections will have more marks. (But that just affects the appearance and doesn't help you select and view details for marks that are covered by others)
Add some small pseudo-random jitter to each coordinate using calculated fields. This will be easier when Tableau supports a rand() function, but in the meantime you can get creative using other fields and math functions to add a little jitter. The goal is to shift locations just enough that the marks don't stack exactly, but not enough to matter for precision; it depends on the scale.
Make a grid-style heat map where the color indicates the number of data points in each grid cell. To do this, you'll need to create calculated fields that bin together nearby latitudes and longitudes. Say, round each coordinate to a certain number of decimal places, or use the hex bin functions in Tableau. Those calculated fields will need to have a geographic role and be treated as continuous dimensions.
Define your visualization to display one mark for each unique location, and then use color or size to indicate the number of data points at that location, as opposed to a mark for each individual data point.

Chernoff faces extended in R

The aplpack library offers the possibility to plot beautiful Chernoff faces with its faces() function. symbols() and TeachingDemos also offer the possibility to plot variations of these faces. But none of them allows plotting more than 15 dimensions (symbols() allows two more dimensions for colours, but they are defined in an inconvenient way, so that some faces turn out to be completely black, hiding other parts of the face). Is there a way in R (perhaps with another library) to plot more dimensions, e.g. by adding a body with limbs of different lengths or by using colours to visualise some of the dimensions? Or maybe I've overlooked something and the colours in aplpack can be mapped to variables as well?
The TeachingDemos package also has the ms.face function, which works with the my.symbols function to create a scatterplot with a Chernoff face as the symbol. This gives the original 15 values of the face, plus an x-coordinate and a y-coordinate; with my.symbols you can also specify a color (for the overall face, not individual features) and an overall size based on variables. That gives 19 dimensions; you could also vary the line width and style, but that would probably distort the plot more than help.
With that many dimensions I would probably go more for the star plots (symbols function) with the variables ordered based on a clustering procedure, or use some type of dimension reduction tool (principal components, grand tour, etc.)
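As an illustration of that star-plot route (a sketch using base R's symbols() with mtcars as stand-in data; the clustering step simply orders the variables so similar ones sit next to each other on the star):

# Rescale every variable to [0, 1] so all rays are comparable
dat <- scale(mtcars,
             center = sapply(mtcars, min),
             scale  = sapply(mtcars, function(x) diff(range(x))))

# Order the variables by similarity (hierarchical clustering of the columns)
ord <- hclust(dist(t(dat)))$order
stars_mat <- dat[, ord]

# One star per observation, positioned by two further dimensions (wt, mpg)
symbols(mtcars$wt, mtcars$mpg, stars = stars_mat,
        inches = 0.3, xlab = "wt", ylab = "mpg")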
