Mapping species occurrences in R

I'm quite a beginner at R, so I'm struggling with what I've found on Google about how to plot species occurrence data points in R (I know how in QGIS, but my supervisors want R) and then fill in 10 km or 1 km grid squares where the species has occurred. The photo shows what I mean, but it was produced in DMap.
The main issue I have is that my CSV file of records only has alphanumeric Ordnance Survey grid references. Can R plot with these, or do they need to be converted to eastings/northings or even decimal latitude/longitude? And if so, how?
Any help is greatly appreciated!

Ordnance Survey's National Grid reference (NGR) format, in contrast to plain eastings and northings (in metres), splits Great Britain into lettered grid squares and then defines locations within each lettered square. You probably know this already, but there is some explanation in A beginners guide to finding grid references.
For your main issue, the article Converting (British) National Grid references may be a good first step.
I've quoted some info from it below (credit to mikerspencer and Claudia Vitolo).
There’s no need to write a script from scratch to convert grid references, someone has done it already! There is some legwork to do in getting your NGR coordinates in a format ready for the conversion. The script that follows does just that, taking a csv file as your start and end point.
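The blog post linked above does the conversion with the rnrfa package; below is a minimal sketch of that first step, assuming your CSV has a column called gridref holding references like "SD311124" (the file and column names are just placeholders):

library(rnrfa)

# Read the records (hypothetical file and column names)
records <- read.csv("species_records.csv", stringsAsFactors = FALSE)

# Alphanumeric OS grid references -> eastings/northings (British National Grid, metres)
bng <- osg_parse(records$gridref)
records$easting  <- bng$easting
records$northing <- bng$northing

# ...or straight to decimal longitude/latitude
ll <- osg_parse(records$gridref, coord_system = "WGS84")
records$lon <- ll$lon
records$lat <- ll$lat

write.csv(records, "species_records_converted.csv", row.names = FALSE)

Once you have eastings and northings in metres, the 10 km (or 1 km) square a record falls in is simply its coordinates rounded down to the nearest 10,000 m (or 1,000 m), which is what you need for filling in occupied grid squares as in the DMap example.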

Related

Is it possible to map species occurrence points in R when you only have OS 6-figure grid refs?

I have a relatively large (>4,000 records) dataset of species records that needs to be mapped in R; however, the only spatial information with them is 6-figure Ordnance Survey National Grid reference (NGR) grid references (e.g. SD311124; see also A beginners guide to finding grid references). The file format is CSV.
How can I get R to plot the points with this information?
Is there a better way to do it within R rather than bulk converting and adding lat/long coords to the spreadsheet before loading it into R?
I know how to do this in QGIS, but my supervisors would like it in R instead!
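If you would rather not bulk-convert the spreadsheet beforehand, one option is to do the conversion inside R and build an sf point layer directly; a minimal sketch, again assuming the rnrfa package and using made-up example grid references:

library(rnrfa)
library(sf)

# A couple of hypothetical records with 6-figure grid references
records <- data.frame(gridref = c("SD311124", "SD305118"))

# Grid refs -> eastings/northings
en <- osg_parse(records$gridref)
records$easting  <- en$easting
records$northing <- en$northing

# Build an sf point layer in the British National Grid CRS (EPSG:27700) and plot it
pts <- st_as_sf(records, coords = c("easting", "northing"), crs = 27700)
plot(st_geometry(pts), pch = 16)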

Is there a way to calculate the area of a building in OSMNX?

Let me start off by stating that I'm new to programming in Python and working with OSMNX.
I have the building footprints saved in a GeoDataFrame as:
gdf = ox.geometries_from_polygon(P, tags={'building':True}) # Geo dataframe for buildings
If I open this GeoDataFrame, I see the polygons that are used for the buildings (among many other things). My question is: how can I add another column that holds the area of each building, calculated from its polygon?
Help is much appreciated :) !

How to calculate area of polygons from a large shapefile

Summary:
I'm trying to calculate the area of a large number of polygons in R. I've read a few posts about how I might do this (Example #1 & Example #2), but the problem I'm having is that my shapefile is too large (1.7 GB) to import. Given I can't import the file, I can't calculate the area of the polygons.
Extended Explanation:
I'm actually trying to calculate the area of properties in Victoria, Australia. The polygons represent these properties. I downloaded the simplified models 1 and 2 of VicMaps from Spatial Datamart for all of Victoria.
However, given the size of the shapefiles, I had to narrow my search to just one local government area (LGA) and calculated the polygon areas (just for testing). The shapefile was 15.5 MB.
library(raster)
# Read the property polygons for one LGA (Ballarat)
x <- shapefile("D:/Downloads/SDM616230/ll_gda94/shape/lga_polygon/ballarat/VMPROP/PROPERTY_PRIMARY_APPROVED.shp")
crs(x)  # check the coordinate reference system
x$area_sqkm <- area(x) / 1000000  # polygon areas, converted from m^2 to km^2
This worked, but it's not a practical solution to my problem given there are many LGAs in Victoria and I plan to eventually follow the same process for Queensland and NSW.
However, trying to load a larger shapefile doesn't work and results in the error "Error: memory exhausted (limit reached?)".
I've tried using readShapePoly, readOGR, st_read and read_sf to get the large shapefile into R, but they don't work. I think the file is just too large. I tried using a select query within read_sf in an effort to reduce the size of the file I was reading, but that didn't work either. I've read online that I should split the shapefile into just the data I need to reduce the size, but I have no idea how to do that.
Hope you can help.
Obviously the file is too big to load into memory in one go. I think the options then are either:
1) Split the file into smaller ones and process them one by one (see the sketch below). See
https://gis.stackexchange.com/questions/195508/split-a-shapefile-into-smaller-files-on-linux-command-line
2) Use some DBMS or data warehouse to do it; they handle such batching automatically.
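If you go with option 1, here is a minimal sketch of the loop-and-combine approach using sf, assuming you have already split the data into one shapefile per LGA (the folder path is made up):

library(sf)

# Hypothetical folder containing one shapefile per LGA
lga_files <- list.files("D:/Downloads/vicmap_split", pattern = "\\.shp$", full.names = TRUE)

areas <- lapply(lga_files, function(f) {
  x <- st_read(f, quiet = TRUE)                  # read one LGA at a time
  x$area_sqkm <- as.numeric(st_area(x)) / 1e6    # st_area() returns square metres
  st_drop_geometry(x)                            # keep only the attributes to save memory
})

# One attribute table with an area column for every property
result <- do.call(rbind, areas)

st_read() also accepts a query argument (OGR SQL), which can be another way of pulling just a subset of rows or columns out of a large file without loading the whole thing.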

How to plot the data I read from a file in R?

Our instructor assigned us to write an R script. We don't have any study material or source for the code the instructor typed in class, so I'm trying to get help from articles on the internet, but I still couldn't find a guide for what I need. Please don't get me wrong, I'm not asking someone to do my homework; I'm just looking for some tips or guide links that can help me. When I search on Google, not all of the results are related to this, and they usually don't help me or are too complicated. The assignment is:
Read data from a .txt file. (I researched and learned how to read data, but my problem is that I don't know what kind of data I should put in the text file to make it plottable by average, standard deviation, histogram, etc.)
On the first screen, plot the data, plot the average and plot the standard deviation as a line.
On the second screen, plot a line from corner to corner and sort the values on it.
On the third screen, plot the data as a histogram and plot the distribution function on it.
On the fourth screen, plot the anomaly and the anomaly line = 0, then plot the values that are higher than the anomaly line with a different pch than the ones that are lower.
Finally, get a PNG of the 4 screens (I found how to do this).
Thanks.
Which type of data?
You should use metric data, for example the height or age of people.
For example, let's assume you have a data frame yourDataframe:
height
160
155
176
153
185
On the first screen, plot the data
You can use R's standard plotting function here: plot(yourDataframe$height, type = "l")
plot the average and plot the standard deviation
There are already functions for those things (for example mean(yourDataframe$height) and sd(yourDataframe$height)). Just ask Google.
You can add those values to your line chart, for example with abline(h = mean(yourDataframe$height)).
I think after you've done this you will be able to solve the rest of your assignment by yourself. R has quite a big community and you will find everything you need by googling. I guess this is how most people learn R.
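As a minimal sketch of the first screen (the column name height and the values come from the example above; in the assignment the data would instead come from read.table() on your .txt file):

# Example data frame from the answer above
yourDataframe <- data.frame(height = c(160, 155, 176, 153, 185))

m <- mean(yourDataframe$height)   # average
s <- sd(yourDataframe$height)     # standard deviation

plot(yourDataframe$height, type = "l", xlab = "Observation", ylab = "Height")
abline(h = m, col = "blue")                          # mean as a horizontal line
abline(h = c(m - s, m + s), col = "red", lty = 2)    # +/- one standard deviation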

Using R for extracting data from a colour image

I have a scanned map from which I would like to extract the data in the form of long/lat and the corresponding value. Can anyone please tell me how I can extract the data from the map? Are there any packages in R that would enable me to extract data from the scanned map? Unfortunately, I cannot find the person who made this map.
Thank you very much for your time and help.
Take a look at OCR. I doubt you'll find anything for R, since R is primarily a statistical programming language.
You're better off with something like OpenCV.
Once you find the appropriate OCR package, you will need to identify the x and y positions of your characters which you can then use to classify them as being on the x or y axis of your map.
This is not trivial, but good luck
Try this:
Read in the image file using the raster package
Use the locator() function to click on all the lat-long intersection points.
Use the locator data plus the lat-long data to create a table of lat-long to raster x-y coordinates
Fit a radial (x,y) -> (r,theta) transformation to the data. You'll be assuming the projected latitude lines are circular, which, from some overlaying I tried earlier, they seem to be very close to but not exactly.
To sample from your image at a lat-long point, invert the transformation.
The next hard problem is trying to get from an image sample to the value of the thing being mapped. Maybe take a 5x5 grid of pixels and average, leaving out any gray pixels. It's even harder than that because some of the colours look like they are made by combining pixels of two different colours to make a new shade. Is this the best image you have?
I'm wondering what top-secret information has been blanked out from the top left corner. If it did say what the projection was, that would help enormously.
Note you may be able to do a lot of the process online with mapwarper:
http://mapwarper.net
but I'm not sure if it can handle your map's projection.
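A minimal sketch of the first few steps above using the raster package (the file name, number of clicks and lat-long values are all made up, and a simple linear fit stands in for the radial transformation suggested above):

library(raster)

# Step 1: read and display the scanned map (hypothetical file name)
img <- brick("scanned_map.png")
plotRGB(img)

# Step 2: click on six lat-long intersection points in the plot window
clicks <- locator(6)   # returns the image x/y position of each click

# Step 3: pair the clicked positions with the known lat-long values,
# entered by hand in the same order you clicked (values here are made up)
control <- data.frame(
  img_x = clicks$x,
  img_y = clicks$y,
  lon   = c(-2.0, -1.5, -1.0, -2.0, -1.5, -1.0),
  lat   = c(54.0, 54.0, 54.0, 53.5, 53.5, 53.5)
)

# Step 4 (simplified): an affine approximation of lat-long -> image coordinates;
# the answer above recommends a radial (x,y) -> (r,theta) fit for this projection
fit_x <- lm(img_x ~ lon + lat, data = control)
fit_y <- lm(img_y ~ lon + lat, data = control)

# Step 5: to sample the image at a lat-long point, predict its image
# coordinates and extract the pixel values there
pt <- data.frame(lon = -1.75, lat = 53.75)
extract(img, cbind(predict(fit_x, pt), predict(fit_y, pt)))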
