I would like to count the number of lines that fall within polygons.
The questions I would like to answer are:
How many lines are within each polygon (both fully enveloped AND intersecting)?
How long is each line within each polygon, OR how long are all the lines combined within each polygon?
I am using QGIS 2.18.12 (I do not know how to write code).
First, calculate all line lengths (e.g. with the Field Calculator expression $length).
Second, use the Select By Location process (intersect etc.).
Last, run the Statistics By Categories process.
We created an exam with a total of 29 points. However, we would like to present the HTML reports to students using a 0 to 20 scale (the Points row of the HTML, produced with the nops_eval function):
The corresponding points of each exercise (summing to a total of 29) should continue appearing in the Evaluation grid, e.g.:
Keeping the points and evaluation for the exercises as they are and just rescaling the sum of the points is not possible. If you want to do so for the HTML reports, I guess it is easiest to read the individual .html files into R and replace the points with what you want to put there. readLines(), gsub(), and writeLines() should be useful here.
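A minimal sketch of that post-processing, assuming the reports sit in a directory of .html files and that the total appears as the literal string "29" in the Points row (both assumptions; the pattern would need checking against a real report):

```r
# Rescale the displayed total from 29 to 20 in each HTML report
# (hypothetical directory name; refine the gsub() pattern for real reports)
files <- list.files("nops_eval", pattern = "\\.html$", full.names = TRUE)
for (f in files) {
  html <- readLines(f)
  html <- gsub("29", "20", html, fixed = TRUE)  # crude; matches every "29"
  writeLines(html, f)
}
```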
What is possible, however, is to change the points associated with each exercise in the entire evaluation. To do so you can use
nops_eval(..., points = ...)
where points must be a vector of the same length as the number of exercises (22 in your case). This overrules the number of points that was previously set within the exercises (via expoints) or in exams2nops(..., points = ...).
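For example (a sketch; the file names are hypothetical, and the vector must have one entry per exercise):

```r
# Hypothetical: 22 exercises whose points should sum to 20
my_points <- rep(20/22, 22)
nops_eval(register = "Exam-1.csv", solutions = "Exam-1.rds",
          scans = "nops_scan.zip", points = my_points)
```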
I have two texts that I convert to bag of words. One bag of words for text 1, one bag of words for text 2.
I am trying to find a way to plot both those documents' words together to understand how much they are different.
One way I was thinking of is to have two barplots, one over the other, to see for which words (word counts) they agree and for which they differ.
I was able to produce a simple bar plot by following the guide here:
http://www.sthda.com/english/wiki/text-mining-and-word-cloud-fundamentals-in-r-5-simple-steps-you-should-know
(see the last plot)
but I now have two bar plots that I cannot directly compare.
I was thinking, for example, of putting the words together on the same plot:
either as two histograms one over the other, or as some 2D clustering showing areas where the two documents differ as well as their overlapping areas.
Which package and procedure would you suggest for comparing two such bags of words?
Thanks
Alex
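One way to put both bags of words on a single plot is a grouped barplot over the union of words. A base-R sketch, assuming freq1 and freq2 are the named count vectors produced as in the sthda guide (the counts below are made up):

```r
# Hypothetical word counts for the two texts
freq1 <- c(apple = 5, banana = 3, cherry = 1)
freq2 <- c(apple = 2, banana = 4, date = 6)

# Align both vectors on the union of words; missing words count as 0
words <- union(names(freq1), names(freq2))
m <- rbind(text1 = freq1[words], text2 = freq2[words])
m[is.na(m)] <- 0
colnames(m) <- words

# Side-by-side bars make agreement and differences visible per word
barplot(m, beside = TRUE, legend.text = TRUE, las = 2,
        main = "Word counts in both documents")
```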
I'm trying to do some GIS work using R. Specifically, I have a SpatialPointsDataFrame (called 'points') and a SpatialLinesDataFrame (called 'lines'). I want to know the closest line to each point. I do this:
# make a new field to hold the line ID
points@data$nearest_line <- as.character('')
# Loop through data. For each point, get ID of nearest line and store it
# (gDistance comes from the rgeos package)
for (i in 1:nrow(points)) {
  points@data[i, "nearest_line"] <-
    lines[which.min(gDistance(points[i, ], lines, byid = TRUE)), ]@data$line_id
}
This works fine. My issue is the size of my data. I've 4.5m points, and about 100,000 lines. It's been running for about a day so far, and has only done 200,000 of the 4.5m points (despite a fairly powerful computer).
Is there something I can do to speed this up? For example if I was doing this in PostGIS I would add a spatial index, but this doesn't seem to be an option in R.
Or maybe I'm approaching this totally wrong?
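One option (a sketch, not tested against your data) is to convert both layers to sf objects and use st_nearest_feature(), which uses a spatial index internally instead of computing every point-to-line distance in a loop:

```r
library(sf)
# Assumes 'points' and 'lines' are the sp objects from the question,
# and that the lines layer has a 'line_id' attribute as above
points_sf <- st_as_sf(points)
lines_sf  <- st_as_sf(lines)
# One index per point: the row of the nearest line
idx <- st_nearest_feature(points_sf, lines_sf)
points_sf$nearest_line <- lines_sf$line_id[idx]
```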
I have analysed tree core images through the raster package in an attempt to perform image analysis. In the image:
http://dx.doi.org/10.6084/m9.figshare.1555854
You can see the measured "vessels" (black and numbered) and also annual lines (red) which have been drawn using the locator function and represent each year of growth of the tree core.
By generating a list of the maximum y coordinates of each annual line I have been able to sort the vessels into years for this image, which is what I am looking for. However, it has occurred to me that in reality things can get a little more difficult, as seen in the next image:
http://figshare.com/articles/Complicated/1555855
The approach above will not work on this image because vessels from each year overrun, so using the maximum y coordinates will not return the correct result.
So can anyone suggest another approach that may overcome this limitation? I have thought about using SpatialPolygons but am not sure this will achieve what I am looking for.
If you are creating the lines by clicking on the plot, you can use the raster function drawLine or, for polygons, drawPoly. You could then rasterize the polygons and mask them with the original image to get the vessels grouped by polygon (year).
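A sketch of that rasterize-and-mask step, using a stand-in raster instead of the real core image (in an interactive session you would click out the polygon with drawPoly() rather than build it programmatically):

```r
library(raster)
# Stand-in for the tree-core image (default extent 0..1 in x and y)
img <- raster(matrix(runif(100), 10, 10))

# Interactively you would run: yr <- drawPoly()
# Here a small year-boundary polygon is built by hand instead:
yr <- spPolygons(matrix(c(0, 0, 1, 1,      # x coordinates
                          0, 0.5, 0.5, 0), # y coordinates
                        ncol = 2))

yr_r <- rasterize(yr, img)          # polygon -> raster of the year's area
vessels_by_year <- mask(img, yr_r)  # keep only cells inside this year
```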
I am trying to render some geographic data onto a map in Tableau. However, some data points are located at the same point, so their shape images overlap. By clicking on a shape, you can only get the top one.
How can we distinguish overlapped data points in Tableau? I know we can manually exclude the top data point to see another, but is there any other way, for example a drop-down list in the right-click menu to select among the overlapped data points?
Thank you!
There are a couple of ways to deal with this issue.
Some choices you can try are:
Add some transparency to the marks by editing the color shelf properties. That way you at least get a visual indication when multiple marks are stacked on top of each other. This approach can be considered a poor man's heat map if you have many points in different areas, as the denser/darker sections will have more marks. (But that only affects the appearance and doesn't help you select and view details for marks that are covered by others.)
Add some small pseudo-random jitter to each coordinate using calculated fields. This will be easier when Tableau supports a rand() function, but in the meantime you can get creative using other fields and math functions to add a little jitter. The goal is to shift locations just enough that they don't stack exactly, but not enough to matter for precision. It depends on the scale.
Make a grid-style heat map where the color indicates the number of data points in each grid cell. To do this, you'll need to create calculated fields that bin together nearby latitudes and longitudes, for example by rounding each latitude to a certain number of decimal places, or by using the hex bin functions in Tableau. Those calculated fields will need to have a geographic role and be treated as continuous dimensions.
Define your visualization to display one mark for each unique location, and then use color or size to indicate the number of data points at that location, as opposed to a mark for each individual data point.
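The jitter in the second option can be sketched as a calculated field like this ([Record ID] is a hypothetical unique integer field; the divisor sets the jitter scale and needs tuning to your map's zoom level):

```
// Jittered Latitude (Tableau calculated field)
// Derives a small pseudo-random offset from an ID field
[Latitude] + (([Record ID] % 10) - 5) / 10000
```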