The unit of area in the R package {UScensus2010}

I am using the {UScensus2010} package in R and trying to get the area of each county. I found the areaPoly() function in the package. Does anyone know the unit of the area? Is it square miles?
Thank you.

Assuming you are using US Census data, this is from the explanation of the dataset that UScensus2010 links to:
Land area measurement in square meters. The accuracy of the area
measurement is limited by the inaccuracy inherent in the mapping of
the various boundary features in the Census Bureau’s geographic
database. Land area includes areas classified as intermittent water,
swamps, and glaciers, which appear on census maps and in the Census
Bureau’s geographic database as hydrographic features. Square miles
can be derived by dividing square meters by 2,589,988. See Appendix A,
“Geographic Terms and Concepts,” for definition of this field.
http://www.census.gov/prod/cen2010/doc/sf1.pdf
If you are still unsure, pick your home county and check it against the area that Wikipedia or the official county website claims.
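A minimal sketch of that check, assuming areaPoly() reports square meters as the documentation quoted above suggests; 'county_polys' is a placeholder for a county-level spatial object you have already loaded from one of the UScensus2010 state packages:
library(UScensus2010)
area_m2  <- areaPoly(county_polys)   # assumed to be square meters
area_mi2 <- area_m2 / 2589988        # square meters -> square miles
area_mi2                             # compare with the figure Wikipedia lists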

Related

Plot coastline and calculate distances (marmap and ggmap)

I am working on a research project in marine ecology, using R, and I would like to create a map of a small, precise part of the French Mediterranean coast. On this map I would like to add the different fish collection sites in order to calculate the distances between them, taking into account the topology of the coast (the sites being very close to the coast). I have used the marmap package for this; however, due to the size of the map I want to create, the resolution is very poor and the map is unworkable.
data <- getNOAA.bathy(lon1 = 2.97, lon2 = 3.53, lat1 = 41.9, lat2 = 42.3, resolution = 1)
I would like to know if there is an alternative, such as using the ggmap package to get a map with a good resolution, then importing the GPS points of the sites and calculating the distances between them using marmap. Are the two packages compatible?
Do you have any other ideas?
I'd recommend using leaflet for mapping and geosphere to find the Haversine ("as the crow flies") distance between points.
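A minimal sketch of that idea, with made-up coordinates standing in for your real collection sites:
library(geosphere)
# Two placeholder sites, given as (longitude, latitude) in decimal degrees
site_a <- c(3.10, 42.05)
site_b <- c(3.25, 42.10)
# distHaversine() returns the great-circle ("as the crow flies") distance in meters
distHaversine(site_a, site_b)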

find out which sampling points are in the same geographical rectangle and extract this information

this is my first time asking a question here. I hope I manage to formulate it precisely enough.
I'm a marine biologist working with biological data sampled at different sites in the North Sea and the English Channel. My data consist of the longitude and latitude of every sampling site, as well as a name/number for each sampling site, arranged in columns.
The sampling area is divided into statistical rectangles according to the CRS grid, each measuring one degree of longitude by 0.5 degrees of latitude. I want to know which sampling sites are in the same statistical rectangle and to extract this information as an additional column in my dataset.
I tried to use the code provided here: https://gis.stackexchange.com/questions/210092/plotting-square-grids-on-a-map-and-extracting-each-grid-information-using-r
and to adapt it to my purpose, but I did not succeed. Basically I am stuck on creating a grid, as a SpatialGrid object, that represents the world map grid zoomed in to my study region with the cell size described above.
Can someone help me with this, or does anyone have a different idea of how to approach my objective?
Thank you very much and have a nice day!
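In case it helps frame the problem, here is a rough sketch of one way to bin points into 1° x 0.5° rectangles without building a SpatialGrid at all; the data frame and its values are placeholders, not the actual sampling data:
sites <- data.frame(
  site = c("A", "B", "C"),
  lon  = c(2.3, 2.7, 2.4),
  lat  = c(51.2, 51.2, 51.6)
)
# Label each site with the lower-left corner of its 1 x 0.5 degree rectangle;
# sites sharing a label fall in the same statistical rectangle.
sites$rectangle <- paste(floor(sites$lon),
                         floor(sites$lat / 0.5) * 0.5,
                         sep = "_")
sites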

How to simulate river level rise in R

I need to make a simulation to see what areas would be affected if the sea level rises by X meters. Could anyone give me tips on where to start? I've searched for tools embedded in the Google Maps API but didn't find any workaround.
The idea is to create a function such as this:
isAffected <- function(coordinate, metersRised) {
  # return TRUE if the location is affected, FALSE otherwise
}
Thanks in advance!
My first reaction is that I can't see any quick, straightforward solution with off-the-shelf R libraries/data sets on top of which to build a function like that. My second is to wonder whether you'd like to model it yourself, rely on already-developed products, or do something in the middle. The most rigorous approach would be applying a hydrodynamic model; the other bookend is sampling someone else's grid of anticipated results.
Just for context: for river levels affected by sea level rise near the coast, you may want to consider variable river stages if they vary quite a bit. If the rivers are running high due to recent storms or snowmelt events, that will worsen the flooding caused by sea level rise alone. So maybe you could assume a limited number of river heights (say rainy season high, dry season low). Tides complicate things too, as do storms and storm surge, which is essentially an above-average ocean height due to temporarily very low pressure. An example worst-case scenario with those three components is: how much of a city or regional coastline, say New Orleans or the Australian coast, would be flooded during a storm surge, at high tide, with the local river very full from spring snowmelt, and with 5 feet of extra sea level added? So there is a lot of data to consider, e.g. you may want some sort of x,y,z data for those river height assumptions. Lots of cities have inundation maps where you can get those river stage elevations. The bigger the sea level rise assumption, the less the rivers might matter; e.g., a huge sea level rise scenario could easily inundate the whole city as it is today, no matter how high the river is, with the mouth of the river moving miles inland.
Simplifying things, I'd say the most important data will be the digital elevation model (DEM), probably a raster file of x,y,z coordinates, with z being the key piece: the elevation of a pixel at every x,y location above some datum. Higher-resolution DEMs will give much more detailed and realistic inundation. Processed LiDAR data is maybe ideal - very high resolution data that someone else has produced - whereas raw LiDAR data is a burden. There's at least some here for New Zealand - http://opentopo.sdsc.edu/datasets - but I'm not sure of good warehouses for data outside the US.
A basic workflow might be: decide what hydraulic components you'll consider and how many scenarios. E.g., you'll ignore tides by using an average sea level, have just two sea level rise scenarios, and assume the river is always at __ feet, or maybe __ ft and __ ft. Download or build the DEM, and then add your river heights to it (not trivial, but searching GIS Stack Exchange is a good start). That gives you a reference baseline elevation to combine sea water with. An assumption of sea level rise, say 10 feet, is incorporated into another DEM; one approach is raster-math centric, subtracting one from the other, and the result will show the newly inundated areas. Once you've done the raster math, you could have a binary x,y grid of flooded or not flooded, and apply that final x,y search function: is x,y 1 or 0? But by far the trickiest part is everything before that. There are maybe more straightforward or simplified approaches, but the system is so dynamic that the sky is the limit for how complicated your model can be. Here's more information on the river component, which might help you visualize the river starting points to which you'll add your sea water scenario(s): https://www.usgs.gov/mission-areas/water-resources/science/flood-inundation-mapping-science?qt-science_center_objects=0#qt-science_center_objects
The raster library might be a good start; it will read in downloaded raster/grid files, like .tif, and also perform the raster math you'd need, i.e. adding/subtracting same-size rasters. Or, forgetting all this processing, maybe you could just read in pre-processed rasters of such scenarios done by others, then run your search on them. There are probably a good number for certain sea level rises, but it gets much trickier if you want to assume both sea level and river elevation scenarios.
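As a rough sketch of that raster-math idea (the file name and the 10-meter rise are placeholder assumptions, and the DEM is assumed to hold elevations in meters above current mean sea level):
library(raster)
dem  <- raster("dem.tif")   # placeholder path to a downloaded DEM
rise <- 10                  # assumed sea level rise scenario, in meters
flooded <- dem <= rise      # binary raster: 1 = inundated, 0 = dry
# The asker's function could then just look the point up in that raster
isAffected <- function(coordinate, floodedRaster) {
  # coordinate: c(x, y) in the same coordinate system as the raster
  extract(floodedRaster, matrix(coordinate, ncol = 2)) == 1
}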

Feature engineering of X,Y coordinates in neighborhoods of San Francisco

I am participating in a starter Kaggle competition (Crimes in San Francisco) in which I want to predict the category of a crime using a bunch of predictor variables, including the X and Y coordinates of the crime. As I doubt the predictive power of the raw coordinates, I want to transform these variables into something more relevant to the crime category.
So I am thinking that knowing the neighbourhood of San Francisco in which the crime took place would be more informative than the actual coordinates of the crime. I can find the neighbourhoods online, but of course I can't easily use the borders of each neighbourhood to classify the corresponding crime, because their shapes are not rectangular or anything like that.
Does anyone have any idea about how I could solve this one?
Thanks guys
Well, that's interesting, AntoniosK, and it's getting close to what I want to accomplish. The problem is that the information "south-east and 2 km from city center" can lead to more than one neighbourhood.
I still think that partitioning the city into neighbourhoods is valuable, because the socio-economic and structural differences between them (there is a reason why the neighbourhoods of each city are separated as such, right?) can lead to a higher probability for one crime category and a lower one for another.
That said, your idea made me think of using the south-east etc. mapping and then using the angle of the segment (point to city center) with the x axis to map the point to the appropriate neighbourhood. I am on it right now. Thanks.
After some time on the problem, I found that the procedure I want to perform is called "reverse geocoding". It also turns out that there are some APIs that solve this. The best, in my opinion, is the revgeocode() function contained in the ggmap package (Google's edition). This one, though, has a daily query limit (2,500 queries) unless you pay for extra.
The one that I turned to, though, is the geonames package and its GNneighbourhood() function, which turns coordinates into neighbourhoods. It is free, though I have experienced some errors (keep in mind that this one only covers US and Canadian cities).
revgeocode function - ggmap package
GNneighbourhood function - geonames package
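A minimal sketch of the geonames route; it needs a free geonames.org username, and "my_user" and the coordinates below are placeholders:
library(geonames)
options(geonamesUsername = "my_user")   # placeholder account name
# GNneighbourhood() takes latitude then longitude and returns neighbourhood
# details for US and Canadian cities
GNneighbourhood(lat = 37.7749, lng = -122.4194)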

Divide a city into regions in Google Maps

I am trying to divide a certain city into several blocks, each representing North, North-West, North-East, South... and so on. I just need the coordinates of the region boundaries (e.g. North is between X and Y latitude and between Z and T longitude), so that I can check in my app whether a point belongs to one region or another. The regions should not depend on a certain zoom level's boundaries, and they don't need to be the same size (maybe the North part of a city is a little bit larger than the South one).
Any idea how can I "draw" these region boundaries? Thank you!
For boundary data, you would have to do a search; it depends on the city and country. In the US, many municipalities provide this data directly through a city or county web site. Generally it will be in a GIS data format such as a shapefile. You have a number of different options for working programmatically with GIS data formats; I recommend the GDAL libraries, particularly ogr2ogr. Once you've got the boundary data, you can draw it on the map using polyline overlays or create raster images of the data, say using gdal_rasterize. Or you can convert the data to KML using ogr2ogr, upload it to Google Fusion Tables via Google Docs, and overlay it using a FusionTablesLayer.
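If you end up doing the point-in-region test in R rather than inside the app itself, a rough sketch using the sp/rgdal wrappers around those same GDAL/OGR libraries might look like this; "regions" is a placeholder for a boundary shapefile downloaded from the city's GIS site, and the point is arbitrary:
library(rgdal)   # R bindings to the GDAL/OGR libraries mentioned above
library(sp)
regions <- readOGR(dsn = ".", layer = "regions")   # placeholder boundary shapefile
pt <- SpatialPoints(cbind(-122.42, 37.77),          # placeholder point (lon, lat)
                    proj4string = CRS(proj4string(regions)))
over(pt, regions)   # attributes of the region containing the point, or NA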
