Multiple nonlinear regression in R (# of restaurants vs # of people in region)

I am trying to find the relationship between the number of people who come to a certain region and the number of accommodations, shops, restaurants, and leisure places in that region. I know the total number of people who visit a certain region, but I don't know whether they visit for accommodation, to shop, etc.
So I have plotted the number of restaurants, etc., in each region against the number of people in that region. Here is the graph, and here is some of the data I'm trying to analyze.
The general shape of these points is a parabola rotated 90 degrees. I am not very familiar with R and cannot figure out how to fit such an equation, or whether it is even possible.
My goal is to get a coefficient for each parameter (i.e. accommodation, restaurants, etc.) so I can conclude something like "if we add 10 restaurants, an increase of x people should come to the region."
Here is a snippet of code I've tried, without success:
linez <- nls(People ~ sqrt(Accommodation / a), data = fourth, start = c(a = 1), trace = TRUE)
lines(s, predict(linez, list(Accommodation = s)), col = "red")  # s: a grid of Accommodation values, defined elsewhere
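For reference, a multi-predictor fit might look like the sketch below. Only People and Accommodation appear in the snippet above; the Shops, Restaurants, and Leisure columns of `fourth` are assumed here.
# Sketch only: square-root terms mirror the sideways-parabola shape of the scatter plots.
# Shops, Restaurants, and Leisure are assumed column names.
fit <- nls(People ~ b1 * sqrt(Accommodation) + b2 * sqrt(Shops) +
                    b3 * sqrt(Restaurants) + b4 * sqrt(Leisure),
           data = fourth, start = list(b1 = 1, b2 = 1, b3 = 1, b4 = 1))
summary(fit)
# Fitted curve for Accommodation, holding the other predictors at their medians
grid <- data.frame(Accommodation = seq(min(fourth$Accommodation),
                                       max(fourth$Accommodation), length.out = 100),
                   Shops = median(fourth$Shops),
                   Restaurants = median(fourth$Restaurants),
                   Leisure = median(fourth$Leisure))
lines(grid$Accommodation, predict(fit, newdata = grid), col = "red")
Since this model is linear in its coefficients, lm(People ~ sqrt(Accommodation) + sqrt(Shops) + sqrt(Restaurants) + sqrt(Leisure), data = fourth) would fit the same surface and gives standard errors that are easier to interpret.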

Related

What are the names of the graphs used in the NAPLAN? Anyone know how to plot them?

In Australia we have a test for students called NAPLAN.
The results are provided in a sort of band graph mixed with a box-and-whisker plot.
Does anyone know what they are called?
They are good because they show the total range, where the student falls in the band, what the national average is, and what the student's class average is.
Essentially four data points on one graph.

Normalizing species count data in ArcGIS Pro

I have presence points of a certain species all over the United States. I completed a spatial join between the US and said points. However, I am unsure of how to normalize the data. There is a "percent of total," but I am unsure if this is the appropriate option. Or is it as simple as just normalizing by the counts themselves?
It depends on what comparison you're trying to make with the normalized data.
If you want to look at the occurrence of that species by state, you could do a spatial join on a US States layer, then calculate a new field where the value is the species count for each state divided by the total area of the state. That would give you the normalized 'count per square mile' (or whatever unit you want).
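If you happen to do this step in R rather than ArcGIS Pro, the same count-per-area idea is only a few lines with the sf package; a sketch, where the layer names `states` and `points` are hypothetical:
library(sf)
# states: polygon layer of US states; points: species presence points (same CRS)
states$count    <- lengths(st_intersects(states, points))  # presences per state
states$area_km2 <- as.numeric(st_area(states)) / 1e6       # state area in km^2
states$per_km2  <- states$count / states$area_km2          # normalized count per km^2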

Using st_join for a spatial join using largest intersection

I am using the sf package to work with spatial data in R. At this stage, I want to make a spatial join so that the tax lots in my area of study inherit the attributes of the floodplain on which they are located. For example, tax lots may be located in a floodplain categorized as X, VE, A, A0, or V (these are codes that relate to the intensity of flooding in each area).
To do this, I tested the sf function st_join, which by default relies on st_intersects to determine the spatial join for each feature of my tax lots.
However, I am trying to figure out what criteria the function uses when a tax lot intersects two different floodplain areas (as in the photo below, in which several lots intersect both an A floodplain and an AE floodplain). Does it take the value of the zone that covers the largest share of the lot, or is it simply whichever zone appears first in the data frame?
Note that I am not interested in partitioning the intersecting lots so that each is divided according to the areas intersecting one or another floodplain zone.
Photo of tax lots intersecting with more than one floodplain category
By default, st_join(x, y, join = st_intersects) duplicates all features in x that intersect more than one feature from y.
If you set the argument largest = TRUE, st_join() returns the x features augmented with the fields of the y feature that has the largest overlap with each feature of x.
See https://r-spatial.github.io/sf/reference/st_join.html and https://github.com/r-spatial/sf/issues/578 for more details.
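A minimal sketch, assuming `taxlots` and `floodplains` are sf objects in the same CRS:
library(sf)
# Default behaviour: one output row per intersecting floodplain, so lots that
# straddle a zone boundary are duplicated
lots_all <- st_join(taxlots, floodplains, join = st_intersects)
# largest = TRUE: each lot keeps only the attributes of the floodplain polygon
# that covers the largest share of it
lots_largest <- st_join(taxlots, floodplains, largest = TRUE)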

Options to allow heavily-weighted points on a map to overwhelm other points with low weights

What are some good kriging/interpolation ideas/options that will allow heavily-weighted points to bleed over lightly-weighted points on a plotted R map?
The state of Connecticut has eight counties. I found the centroids and want to plot the poverty rates of each of these eight counties. Three of the counties are very populated (about 1 million people each) and the other five are sparsely populated (about 100,000 people each). Since the three densely-populated counties hold more than 90% of the total state population, I would like them to completely "overwhelm" the map and influence points across the county borders.
The Krig function in the R fields package has a lot of parameters, as well as covariance functions that can be called, but I'm not sure where to start.
Here is reproducible code to quickly produce a hard-bordered map and then three differently-weighted maps. Hopefully I can just make changes to this code, but perhaps it requires something more complex, like the geoRglm package? Two of the three weighted maps look almost identical, despite one being weighted 10x as heavily as the other.
https://raw.githubusercontent.com/davidbrae/swmap/master/20141001%20how%20to%20modify%20the%20Krig%20function%20so%20a%20huge%20weight%20overwhelms%20nearby%20points.R
Thanks!!
Edit: here's a picture example of the behavior I want:
Disclaimer: I am not an expert on kriging. Kriging is complex and takes a good understanding of the underlying data, the method, and the purpose to achieve a correct result. You may wish to get input from whuber on the GIS Stack Exchange (or contact him through his website, http://www.quantdec.com/quals/quals.htm) or another expert you know.
That said, if you just want to achieve the visual effect you requested and are not using this for some sort of statistical analysis, I think there are some relatively simple solutions.
EDIT:
As you commented, although the suggestions below to use the theta and smoothness arguments do even out the prediction surface, they apply equally to all measurements and thus do not extend the "sphere of influence" of more densely populated counties relative to less densely populated ones. After further consideration, I think there are two ways to achieve this: by altering the covariance function to depend on population density, or by using weights, as you have. Your weighting approach, as I wrote below, alters the error term of the kriging function; that is, it inversely scales the nugget variance.
As you can see in the semivariogram image, the nugget is essentially the y-intercept, i.e. the error between measurements at the same location. Weights affect the nugget variance (sigma2) as sigma2/weight, so greater weights mean less error at small-scale distances. This does not, however, change the shape of the semivariance function or have much effect on the range or sill.
I think the best solution would be to have your covariance function depend on population. However, I'm not sure how to accomplish that, and I don't see any arguments to Krig that would do so. I tried defining my own covariance function as in the Krig examples, but only got errors.
Sorry I couldn't help more!
Another great resource for understanding kriging is: http://www.epa.gov/airtrends/specialstudies/dsisurfaces.pdf
As I said in my comment, the sill and nugget values, as well as the range of the semivariogram, are things you can alter to affect the smoothing. By specifying weights in the call to Krig, you are altering the variance of the measurement errors. That is, in normal use, weights are expected to be proportional to the accuracy of the measurement values, so higher weights essentially represent more accurate measurements. This isn't actually true of your data, but it may be giving you the effect you desire.
To alter the way your data are interpolated, you can adjust two (of many) parameters in the simple Krig call you are using: theta and smoothness. theta adjusts the semivariance range, meaning that measured points farther away contribute more to the estimates as you increase theta. Your data range is
range <- data.frame(lon = range(ct.data$lon), lat = range(ct.data$lat))
range[2, ] - range[1, ]
#        lon       lat
# 2 1.383717 0.6300484
so, your measurement points vary by ~1.4 degrees lon and ~0.6 degrees lat. Thus, you can play with specifying your theta value in that range to see how that affects your result. In general, a larger theta leads to more smoothing since you are drawing from more values for each prediction.
# Weighted kriging with a Matern covariance and theta = 0.8
Krig.output.wt <- Krig(cbind(ct.data$lon, ct.data$lat), ct.data$county.poverty.rate,
                       weights = c(size, 1, 1, 1, 1, size, size, 1),
                       Covariance = "Matern", theta = 0.8)
# Interpolate onto the raster grid, mask to the ct.map outline, and plot
r <- interpolate(ras, Krig.output.wt)
r <- mask(r, ct.map)
plot(r, col = colRamp(100), axes = FALSE, legend = FALSE)
title(main = "Theta = 0.8", outer = FALSE)
points(cbind(ct.data$lon, ct.data$lat))
text(ct.data$lon, ct.data$lat - 0.05, ct.data$NAME, cex = 0.5)
Gives:
Krig.output.wt <- Krig(cbind(ct.data$lon, ct.data$lat), ct.data$county.poverty.rate,
                       weights = c(size, 1, 1, 1, 1, size, size, 1),
                       Covariance = "Matern", theta = 1.6)
r <- interpolate(ras, Krig.output.wt)
r <- mask(r, ct.map)
plot(r, col = colRamp(100), axes = FALSE, legend = FALSE)
title(main = "Theta = 1.6", outer = FALSE)
points(cbind(ct.data$lon, ct.data$lat))
text(ct.data$lon, ct.data$lat - 0.05, ct.data$NAME, cex = 0.5)
Gives:
Adding the smoothness argument will change the order of the function used to smooth your predictions. The default is 0.5, leading to a second-order polynomial.
Krig.output.wt <- Krig(cbind(ct.data$lon, ct.data$lat), ct.data$county.poverty.rate,
                       weights = c(size, 1, 1, 1, 1, size, size, 1),
                       Covariance = "Matern", smoothness = 0.6)
r <- interpolate(ras, Krig.output.wt)
r <- mask(r, ct.map)
plot(r, col = colRamp(100), axes = FALSE, legend = FALSE)
title(main = "Theta unspecified; Smoothness = 0.6", outer = FALSE)
points(cbind(ct.data$lon, ct.data$lat))
text(ct.data$lon, ct.data$lat - 0.05, ct.data$NAME, cex = 0.5)
Gives:
This should give you a start and some options, but you should look at the manual for fields; it is well written and explains the arguments thoroughly.
Also, if this is in any way quantitative, I would highly recommend talking to someone with significant spatial-statistics know-how!
Kriging is not what you want. (It is a statistical method for accurate--not distorted!--interpolation of data. It requires preliminary analysis of the data--of which you do not have anywhere near enough for this purpose--and cannot accomplish the desired map distortion.)
The example and the references to "bleed over" suggest considering an anamorph or area cartogram. This is a map which will expand and shrink the areas of the county polygons so that they reflect their relative population while retaining their shapes. The link (to the SE GIS site) explains and illustrates this idea. Although its answers are less than satisfying, a search of that site will reveal some effective solutions.
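For instance, a contiguous area cartogram can be built in R with the cartogram package. A sketch, assuming `ct_counties` is an sf polygon layer with (hypothetical) population and poverty_rate columns:
library(sf)
library(cartogram)
# cartogram_cont() needs a projected CRS; EPSG 5070 is CONUS Albers
ct_proj <- st_transform(ct_counties, 5070)
# Distort county areas in proportion to population while keeping rough shapes
ct_cart <- cartogram_cont(ct_proj, "population")
# Map the poverty rate on the distorted geometry
plot(ct_cart["poverty_rate"])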
Lots of interesting comments and leads above.
I took a look at the Harvard dialect survey to get a sense of what you are trying to do. I must say, really cool maps. And before I start on what I came up with: I've looked at your work on survey analysis before and have learned quite a few tricks. Thanks.
My first take was that if you want to do spatial smoothing by way of kernel density estimation, then you need to be thinking in terms of point process models. I'm sure there are other ways, but that's where I went.
So what I do below is grab a very generic US map and convert it into something I can use as a sampling window. Then I create a random sample of points within that region; just pretend those are your centroids. After that I attach random values to the points and plot it all up.
I just wanted to test this conceptually, which is why I didn't go through the extra steps of grabbing CBSAs, and sorry for not projecting, but I think these are the fundamentals. Also, the smoothing in the dialect study appears to be done over the whole country; that is, the author is not stratifying the smoothing procedure within polygons, so I just added state borders at the end.
code:
library(sp)
library(spatstat)
library(RColorBrewer)
library(maps)
library(maptools)
# grab the US map from the R maps package
usMap <- map("usa")
usIds <- usMap$names
# convert to SpatialPolygons so this can be used as a window below
usMapPoly <- map2SpatialPolygons(usMap, IDs = usIds)
# just select the mainland US, no islands
usMapPoly <- usMapPoly[names(usMapPoly) == "main", ]
# create a random sample of points to smooth over within the map
pts <- spsample(usMapPoly, n = 250, type = 'random')
# quick check of the map and sampling locations
plot(usMapPoly)
points(pts)
# create values associated with the points; be sure to play around with
# these after you get the map, it's fun
vals <- rnorm(250, 100, 25)
valWeights <- vals / sum(vals)
ptsCords <- data.frame(pts@coords)
# create the window for the point pattern object (ppp) created below
usWindow <- as.owin(usMapPoly)
# create the spatial point pattern object
usPPP <- ppp(ptsCords$x, ptsCords$y, marks = vals, window = usWindow)
# create a colour ramp
col <- colorRampPalette(brewer.pal(9, "Reds"))(20)
# the plots; here is where the Gaussian kernel density estimation magic happens
# if you want a continuous legend on one of the sides, drop ribbon=FALSE,
# and be sure to play around with sigma
plot(Smooth(usPPP, sigma = 3, weights = valWeights), col = col, main = NA, ribbon = FALSE)
map("state", add = TRUE, fill = FALSE)
Example with no weights:
Example with my trivial weights:
There is obviously a lot of work between this and your goal of making this type of map reproducible at various levels of spatial aggregation and sample data, but good luck; it seems like a cool project.
P.S. Initially I did not use any weighting, but I suppose you could provide weights directly to the Smooth function. Two example maps above.

How to infer a relation/correlation between two spatial points using R

I am quite new to the area of spatial statistics, but I'm very interested. For learning and demo purposes, I've created three datasets.
Dataset - Persons: This describes individuals at a certain location with a few variables. Please note that the persons are located in the provided cities. A short explanation:
POINT_X: X-coordinate of city.
POINT_Y: Y-coordinate of city.
city: The name of the city, where they live.
ill: "1" states that they are ill. For learning purposes, all persons are ill.
job: If they have a job or not. "1" means: they have one, "0" means they haven't got one.
disnw: The distance to the nearest waterpoint.
wID: not relevant.
Dataset - City: This describes a number of cities including some variables. A short explanation of these:
city: The name of the city.
population: The population of the city.
POINT_X: X-coordinate of city.
POINT_Y: Y-coordinate of city.
ill: Number of ill persons in the city.
notill: Number of healthy persons in the city.
disnw: The distance (in km) to the nearest waterfeature.
wID: not relevant
rate_ill: The rate of ill persons in the city.
rate_notill: The rate of healthy persons in the city.
Dataset - Waterfeatures: This is a collection of spatial points describing water features. Please note that the villages are at the same locations as the persons.
POINT_X: X-coordinate of a waterfeature.
POINT_Y: Y-coordinate of a waterfeature.
Geographic overview of the setting (red: persons, blue: water features, yellow: cities)
Now I want to check the hypothesis that cities which are nearer to water features (i.e. where the variable disnw is lower) have a higher number of ill persons. So, is there a correlation between the number/rate of ill persons and the proximity to water features? I know that the datasets are possibly not representative or suitable for my hypothesis, but for now that shouldn't matter.
I've already looked at some functions and packages, but I'm very unsure about a suitable method. Methods which might be useful (at least from my point of view): semivariogram, variogram, Ripley's K function, G-function, correlation coefficient.
To give you a better overview, I've created example datasets. You can find these here:
persons = read.csv("http://pastebin.com/raw.php?i=3aMGi9Ax", header = TRUE, stringsAsFactors=FALSE)
city = read.csv("http://pastebin.com/raw.php?i=Lk3KXLQT", header = TRUE, stringsAsFactors=FALSE)
water = read.csv("http://pastebin.com/raw.php?i=hQRvMZwE", header = TRUE, stringsAsFactors=FALSE)
It would be awesome to get some input from your side. Maybe you have a tip on how to perform this kind of analysis.
Thanks in advance!
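For instance, a minimal non-spatial first pass would be to correlate disnw with rate_ill at the city level; a sketch using the example data above (only base R's cor.test is assumed here):
city <- read.csv("http://pastebin.com/raw.php?i=Lk3KXLQT", header = TRUE, stringsAsFactors = FALSE)
# Spearman rank correlation between distance to the nearest water feature and
# the rate of ill persons per city (monotonic association, no normality assumed)
cor.test(city$disnw, city$rate_ill, method = "spearman")
# Quick visual check
plot(city$disnw, city$rate_ill,
     xlab = "Distance to nearest water feature (km)",
     ylab = "Rate of ill persons")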
