How to simulate river level rise in R

I need to make a simulation to see what areas would be affected if the sea level rises by X meters. Could anyone give me tips on where to start? I've searched for tools embedded in the Google Maps API but didn't find anything suitable.
The idea is to create a function such as this:
isAffected <- function(coordinate, metersRised)
  # returns TRUE if the location is affected, FALSE otherwise
Thanks in advance!

My first reaction is that I can't see a quick, straightforward solution built from off-the-shelf R libraries and data sets on top of which to write a function like that. My second is to wonder whether you'd like to model it yourself, rely on already-developed products, or something in between. The most rigorous approach would be applying a hydrodynamic model; the other bookend is sampling someone else's grid of anticipated results.
Just for context: for river levels affected by sea level rise near the coast, you may want to consider variable river stages if they vary quite a bit. If the rivers are running high due to recent storms or snowmelt, that will worsen the flooding caused by sea level rise alone. So you could assume a limited number of river heights (say rainy season = high, dry season = low). Tides complicate things too, as do storms and storm surge - temporarily above-average ocean heights caused by very low atmospheric pressure.
An example worst-case scenario combining those three components: how much of a city or regional coastline (say New Orleans, or a stretch of the Australian coast) would be flooded during a storm surge, at high tide, with the local river very full from spring snowmelt, and 5 feet of extra sea level added? So there is a lot of data to consider - e.g., you may want some sort of x,y,z data for those river height assumptions. Many cities have inundation maps from which you can get river stage elevations. The bigger the sea level rise assumption, the less the rivers might matter: a huge sea level rise scenario could easily inundate the whole city as it is today, no matter how high the river is, with the mouth of the river moving miles inland.
Simplifying things, I'd say the most important data will be the digital elevation model (DEM), probably a raster file of x,y,z coordinates, with z being the key piece: the elevation of the pixel at each x,y location above some datum. Higher-resolution DEMs will give much more detailed and realistic inundation. Processed LiDAR data is perhaps ideal - very high resolution data that someone else has already produced (raw LiDAR data is a burden). There's at least some for New Zealand here - http://opentopo.sdsc.edu/datasets - but I'm not sure of good warehouses for data outside the US.
A basic workflow might be: decide which hydraulic components you'll consider and how many scenarios - e.g., ignore tides by using an average sea level, use just two sea level rise scenarios, and assume the river is always at __ feet (or maybe __ ft and __ ft). Download or build the DEM, then add your river heights to it (not trivial, but searching GIS Stack Exchange is a good start). That gives you a baseline reference elevation to combine the sea water with. A sea level rise assumption, say 10 feet, is then incorporated into another DEM; one raster-math-centric approach is to subtract one from the other, and the result will show the newly inundated areas. Once you've done the raster math, you could have a binary x,y grid - either flooded or not flooded - against which to apply that final search function: is x,y 1 or 0? By far the trickiest part is everything before that. There may be more straightforward or simplified approaches, but the system is so dynamic that the sky is the limit for how complicated your model can be. Here's more information on the river component, which might help you visualize the river starting points to which you'll add your sea water scenario(s): https://www.usgs.gov/mission-areas/water-resources/science/flood-inundation-mapping-science?qt-science_center_objects=0#qt-science_center_objects
The raster library might be a good start: it will read in downloaded raster/grid files such as .tif, and can also perform the raster math you'd need - adding/subtracting same-size rasters. Or, skipping all that processing, maybe you could just read in pre-processed rasters of such scenarios produced by others, then run your search on them. There are probably a good number available for specific sea level rises; it just gets much trickier if you want to assume both sea level and river elevation scenarios.
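To make the raster-math step concrete, here's a minimal sketch using the raster package, assuming a pre-built DEM GeoTIFF (dem.tif is a hypothetical file) whose values are elevations in meters above current mean sea level, and ignoring tides, surge, and river stage entirely:

# A minimal sketch, not a full model: one DEM, one rise scenario
library(raster)

dem <- raster("dem.tif")            # hypothetical DEM, meters above sea level

metersRised <- 10
flooded <- dem <= metersRised       # logical raster: 1 = inundated, 0 = dry

isAffected <- function(coordinate, metersRised) {
  # coordinate = c(x, y) in the same coordinate system as the DEM
  z <- extract(dem, matrix(coordinate, ncol = 2))
  !is.na(z) && z <= metersRised
}

isAffected(c(172.64, -43.53), 10)   # made-up test point

extract() looks up the original DEM directly, so the flooded raster is only needed if you want to map or count the inundated cells.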

Related

Find out which sampling points are in the same geographical rectangle and extract this information

This is my first time asking a question here; I hope I manage to formulate it precisely enough.
I'm a marine biologist working with biological data sampled at different sites in the North Sea and the English Channel. My data consist of the longitude and latitude of every sampling site, as well as a name/number for each site, arranged in columns.
The sampling area is divided into statistical rectangles according to the CRS grid, each measuring one degree of longitude by 0.5 degrees of latitude. I want to know which sampling sites fall in the same statistical rectangle and to record this information as an additional column in my dataset.
I tried to use the code provided here: https://gis.stackexchange.com/questions/210092/plotting-square-grids-on-a-map-and-extracting-each-grid-information-using-r
and to adapt it to my purpose, but I have not succeeded. Basically, I am stuck at creating a SpatialGrid object that represents the world map grid zoomed in to my study region, with the cell size described above.
Can someone help me with this, or does anyone have a different idea for how to approach my objective?
Thank you very much and have a nice day!
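For what it's worth, here's a minimal base-R sketch of one way to do the binning, assuming columns named lon and lat and a grid aligned to whole degrees of longitude and half degrees of latitude (official rectangle codes have their own naming scheme, which this simple label ignores):

# Made-up example data; replace with your own columns
sites <- data.frame(
  site = c("A", "B", "C"),
  lon  = c(2.3, 2.7, -1.2),
  lat  = c(54.1, 54.3, 50.8)
)

# Label each site by the lower-left corner of its rectangle
sites$rect <- paste0(
  floor(sites$lon),                # 1-degree longitude bins
  "_",
  floor(sites$lat / 0.5) * 0.5     # 0.5-degree latitude bins
)

# Sites sharing a "rect" label are in the same statistical rectangle
split(sites$site, sites$rect)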

Compare Survey Results Across Regions

I have results from a survey of nurse practitioners asking to what degree (Likert scale, values from 1-5) they feel certain barriers prevent them from adequate practice (e.g. time constraints, location restrictions, etc.). They were also asked to indicate where in the state they practice (fill in a bubble). I was wondering if there is a way to code a picture of a U.S. state (say Texas) and superimpose the survey results onto the map by region?
For example: say one nurse indicated a 1 for feeling time-constrained, and she was from the southern region of Texas. Then I would like to show that, out of a sample of say 100, the 1% who responded with a value of 1 came from the Southern region, and have that appear on a map of Texas. Does that make sense?
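One low-tech possibility, sketched below with ggplot2 and the maps package: draw the state outline and print each region's percentage at a rough label position. The region names, label coordinates, and percentages here are all made up for illustration.

library(ggplot2)
library(maps)

tx <- map_data("state", region = "texas")   # Texas outline polygon

# Hypothetical aggregated results: % answering "1" per region
results <- data.frame(
  region = c("South", "North", "East", "West"),
  lon    = c(-98.5, -97.3, -94.8, -102.0),  # rough label positions
  lat    = c( 27.5,  33.0,  31.5,  31.5),
  pct    = c(1, 12, 40, 25)
)

ggplot(tx, aes(long, lat, group = group)) +
  geom_polygon(fill = "grey90", colour = "grey40") +
  geom_text(data = results,
            aes(lon, lat, label = paste0(region, ": ", pct, "%")),
            inherit.aes = FALSE) +
  coord_quickmap()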

Feature engineering of X,Y coordinates in neighborhoods of San Francisco

I am participating in a starter Kaggle competition (Crimes in San Francisco) in which I want to predict the category of a crime using a bunch of predictor variables, including the X and Y coordinates of the crime. As I doubt the predictive power of the raw coordinates, I want to transform these variables into something more relevant to the crime category.
So I am thinking that the neighborhood of San Francisco in which the crime took place would be more informative than the actual coordinates. I can find the neighborhoods online, but of course I can't simply use the borders of each neighborhood to classify the corresponding crime, because their shapes are not rectangular or anything like that.
Does anyone have any idea about how I could solve this one?
Thanks guys
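For the record, the irregular shapes themselves aren't actually an obstacle: a point-in-polygon overlay handles arbitrary polygons. A minimal sketch with sp and rgdal, assuming you've downloaded a neighborhoods shapefile (the file name and its "name" attribute column are hypothetical):

library(sp)
library(rgdal)

# Hypothetical shapefile of San Francisco neighborhood polygons (WGS84)
hoods <- readOGR(dsn = ".", layer = "sf_neighborhoods")

crimes <- data.frame(X = -122.42, Y = 37.77)   # crime coordinates
pts <- SpatialPoints(crimes[, c("X", "Y")],
                     proj4string = CRS(proj4string(hoods)))

# over() returns, for each point, the attributes of the polygon it
# falls in; irregular polygon shapes are not a problem here
crimes$neighborhood <- over(pts, hoods)$name   # "name" is hypothetical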
Well, that's interesting, AntoniosK, and it's getting close to what I want to accomplish. The problem is that information like "south-east and 2 km from city center" can map to more than one neighborhood.
I still think the partition of the city into neighborhoods is valuable, because the socio-economic and structural differences between them (there is a reason why the neighborhoods of each city are divided as they are, right?) can mean a higher probability for one category of crime and a lower one for another.
That said, your idea got me thinking of using the south-east etc. mapping and then using the angle between the segment (point to city center) and the x axis to map the point to the appropriate neighborhood. I am on it right now. Thanks.
After some time on the problem, I found that the procedure I want to perform is called "reverse geocoding". It also turns out that there are some APIs for this. The best, in my opinion, is the revgeocode() function in the ggmap package (which uses Google's API). It has a daily query limit (2,500 queries) unless you pay for more.
The one I turned to instead is the geonames package and its GNneighbourhood function, which turns coordinates into a neighbourhood. It is free, though I have experienced some errors (keep in mind that this one works only for US and Canada cities). A minimal sketch follows the links below.
revgeocode function - ggmap package
GNneighbourhood function - geonames package
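A minimal sketch of the geonames route; it needs a free GeoNames username registered at geonames.org, and the exact fields returned may vary:

library(geonames)
options(geonamesUsername = "your_username")   # placeholder: register first

# GNneighbourhood takes latitude then longitude; US/Canada cities only
GNneighbourhood(lat = 37.77, lng = -122.42)$name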

The unit of area in R package {UScensus2010}

I am using the {UScensus2010} package in R and trying to get the area of each county. I found the areaPoly() function in the package. Does anyone know the unit of the area it returns? Is it square miles?
Thank you.
Assuming you are using US Census data, this is from the explanation of the dataset that UScensus2010 links to:
Land area measurement in square meters. The accuracy of the area measurement is limited by the inaccuracy inherent in the mapping of the various boundary features in the Census Bureau's geographic database. Land area includes areas classified as intermittent water, swamps, and glaciers, which appear on census maps and in the Census Bureau's geographic database as hydrographic features. Square miles can be derived by dividing square meters by 2,589,988. See Appendix A, "Geographic Terms and Concepts," for the definition of this field.
http://www.census.gov/prod/cen2010/doc/sf1.pdf
If you are still unsure, pick your home county and check it against the area that Wikipedia or the official county website claims.
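A sketch of that sanity check, assuming areaPoly() reports square meters as the documentation above implies (my.county stands in for whatever county object you extracted with UScensus2010):

area_m2  <- areaPoly(my.county)   # hypothetical county polygon object
area_mi2 <- area_m2 / 2589988     # square meters -> square miles
area_mi2                          # compare to the published figure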

Approaches for spatial geodesic latitude longitude clustering in R -- Follow-Up

These are follow-ups to the question and answer in Approaches for spatial geodesic latitude longitude clustering in R, concerning geodesic or great-circle distances.
I would like to better understand:
Question #1: If all the lat/long values are within the same city, is it necessary to use either fossil or distHaversine(...) to first calculate great-circle distances?
Or, within a single city, is it OK to run clustering on the lat/long values themselves?
Question #2: jlhoward suggests that:
It's worth noting that these methods require that all points must go into some cluster. If you just ask which points are close together, and allow that some cities don't go into any cluster, you get very different results.
In my case I would like to just ask "which points are close together", without forcing every point into a cluster. How can I do this?
Question #3: To include one or two factor variables in the clustering (in addition to lat/long), is it as easy as including those factor variables in the data frame on which the clustering is run?
Please confirm.
Thanks!
"within a single city, is it OK to run clustering on the lat/long values themselves ?"
Yes, as long as your city is on the equator, where a degree of longitude is the same distance as a degree of latitude.
I'm standing very close to the North Pole. One degree of longitude is 1/360 of the circumference of the circle running around the pole through my position. Someone ten degrees east of me might be only ten feet away, while someone one degree south of me is miles away. A clustering algorithm based on raw lat/long would think the guy miles away was closer to me than the guy I can wave to ten degrees to my east.
The solution for small areas, to save having to compute great-circle or ellipsoid distances, is to project to a coordinate system that is near enough to Cartesian that you can use Pythagoras' theorem for distance without too much error. Typically you would use a UTM zone transform, which is essentially a transverse Mercator coordinate system whose central meridian runs through your study area.
The spTransform function in the sp and rgdal packages will sort this out for you, as in the sketch below.
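A minimal sketch that also addresses question #2: project to UTM with spTransform, then cluster with DBSCAN, which leaves isolated points as noise instead of forcing them into a cluster. The UTM zone (10, roughly the US west coast) and the 500 m eps are assumptions to adjust for your data:

library(sp)
library(rgdal)
library(dbscan)

# Made-up points: two close together in San Francisco, one far away
df  <- data.frame(long = c(-122.410, -122.412, -121.000),
                  lat  = c(  37.770,   37.771,   38.500))
pts <- SpatialPoints(df, proj4string = CRS("+proj=longlat +datum=WGS84"))

# Project to UTM so distances are in meters
utm <- spTransform(pts, CRS("+proj=utm +zone=10 +datum=WGS84 +units=m"))

# eps is in meters now: points within 500 m, at least 2 per cluster
cl <- dbscan(coordinates(utm), eps = 500, minPts = 2)
cl$cluster    # 0 = noise, i.e. not forced into any cluster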
