I am trying to divide my wind direction data into four equal frequency bins, without having a fixed break at 0° / 360°.
I am aware of the equal_freq() function from the funModeling package, but this function does not take the circular nature of the wind data into account and calculates the breaks by starting at 0° and ending at 360°.
Is there a way to calculate dynamic equal-frequency breaks that can span the wrap-around between 0° and 360°?
Here is a minimal reproducible example:
library(funModeling)
wind_dirs <- runif(n = 2000, min = 0, max = 360) # create a uniform wind direction distribution
equal_freq(wind_dirs, 4) # this has a fixed break at 0° / 360°
I tried the circular package in R, but there is no function for equal frequency binning.
I also considered manually defining a breakpoint, e.g. at the most prevailing direction, but this creates essentially the same problem as the fixed break at 0°.
Any ideas are greatly appreciated.
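One base-R sketch (an assumption on my part, not a canned package function): since any four-bin partition of a circle must have breaks somewhere, rotate the data so that the single unavoidable "origin" break lands opposite the circular mean of the directions, bin on the shifted scale, and the bins are then free to straddle 0°/360° on the original scale:

```r
set.seed(1)
wind_dirs <- runif(n = 2000, min = 0, max = 360)

# circular mean of the directions, in degrees
rad <- wind_dirs * pi / 180
circ_mean <- (atan2(mean(sin(rad)), mean(cos(rad))) * 180 / pi) %% 360

# place the cut origin 180 degrees away from the circular mean
origin <- (circ_mean + 180) %% 360
shifted <- (wind_dirs - origin) %% 360

# equal-frequency breaks on the shifted scale (base-R stand-in for equal_freq)
breaks <- quantile(shifted, probs = seq(0, 1, length.out = 5))
bins <- cut(shifted, breaks = breaks, include.lowest = TRUE)
table(bins)  # roughly 500 observations per bin
```

To recover the break angles on the original compass scale, add `origin` back: `(breaks + origin) %% 360`.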
I have a situation in my game. I am experimenting with terrain generation.
I have a bunch of peaks whose positions and elevations I know.
I have a point which is surrounded by all these peaks, and I know its position.
I would like to calculate the elevation of this point, based on how close/far it is to each of these peaks and on the elevation of each peak.
Example:
Peak 1 is at (0,0), with an elevation of 500
Peak 2 is at (100,100), with an elevation of 1000
Peak 3 is at (0,100), with an elevation of 750
If my point is at (99,99), I want the elevation of this point to be as close to 1000 as possible.
What is the name of this problem?
If you already have a solution to this, that too will be much appreciated.
Note: in addition, it would be helpful if the formula/equation also allowed me to generate negative elevations. For example, a point midway between all the peaks could well be under sea level. Any formula I can mentally come up with gives only positive results; I assume some kind of 'slope' must be considered to allow this.
One equation I have thought of so far is:
P1.height * (S - d1)/S + P2.height * (S - d2)/S + ... + Pn.height * (S - dn)/S,
where S is the sum of the distances to all peaks and di is the distance from the point to peak Pi.
Thank you.
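This is usually called scattered-data interpolation, and the specific weighted-average scheme you are circling around is inverse distance weighting (Shepard's method). A minimal sketch, with your three example peaks (note that IDW on its own never goes below the lowest peak height, so negative elevations would need a baseline offset or extra negative-elevation control points):

```r
# Inverse distance weighting: each peak contributes its height weighted by
# 1 / distance^2, so nearby peaks dominate the estimate.
idw_height <- function(px, py, peaks) {
  d <- sqrt((peaks$x - px)^2 + (peaks$y - py)^2)
  if (any(d == 0)) return(peaks$z[d == 0][1])  # query point sits exactly on a peak
  w <- 1 / d^2
  sum(w * peaks$z) / sum(w)
}

peaks <- data.frame(x = c(0, 100, 0), y = c(0, 100, 100), z = c(500, 1000, 750))
idw_height(99, 99, peaks)  # very close to 1000, dominated by Peak 2
```

The exponent 2 is a tuning knob: larger exponents make the surface hug the nearest peak more tightly.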
To draw the peaks your game needs to convert the coordinates of the peaks to screen coordinates.
Such a calculation is usually done by multiplying a matrix by the vector containing the coordinates (in Java AWT such a matrix is called an AffineTransform).
What you need is the inverse of that matrix so that you can apply it to your screen coordinates.
So the solution is:
get the matrix that is used for rendering the terrain
calculate the inverse matrix
apply it to your screen coordinates
It may even be more efficient not to derive the inverse from the original matrix, but to build it directly from the parameters (origin, scale factors and rotation angle) that were used to construct the original matrix; the same parameters define the inverse matrix.
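The steps above can be sketched with a 3x3 homogeneous transform matrix; the function below is a hypothetical world-to-screen transform (scale, rotation, translation are assumed parameters), and solve() produces the inverse that maps screen coordinates back to world coordinates:

```r
# Build a 2D world-to-screen transform as a 3x3 homogeneous matrix.
world_to_screen <- function(scale, angle, tx, ty) {
  co <- cos(angle); si <- sin(angle)
  matrix(c(scale * co, -scale * si, tx,
           scale * si,  scale * co, ty,
           0,           0,          1),
         nrow = 3, byrow = TRUE)
}

M     <- world_to_screen(scale = 2, angle = pi / 6, tx = 400, ty = 300)
M_inv <- solve(M)                 # the inverse (screen-to-world) transform

screen_pt <- M %*% c(10, 20, 1)   # world point (10, 20) -> screen coordinates
M_inv %*% screen_pt               # back to world: (10, 20, 1) up to rounding
```

The same round trip works in any language; in Java AWT it is `AffineTransform.createInverse()`.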
I am using a multibeam echosounder to create a raster stack in R with layers all in the same resolution, which I then convert to a data frame so I can create additive models to describe the distribution of fish around bathymetry variables (depth, aspect, slope, roughness etc.).
The issue I have is that I would like to keep my response variable (fish school volume) fine and my predictor variables (bathymetry) coarse, so that I have, say, 1 x 1 m cells representing the distribution of fish schools and 10 x 10 m cells representing bathymetry (the coarse cell is divisible by the fine cell with no remainder).
I can easily create these rasters individually, but relating them is the problem. As each coarser cell contains 10 x 10 = 100 finer cells, I am not sure how to program this in R so that the values end up in the right location relative to an x and a y column (for cell addresses). I realise that each coarse cell value would need to be repeated 100 times in the data frame.
Any advice would be greatly appreciated! Thanks!
I cannot find the following information in the reference literature [1]:
1) How does adaptive.density() (package spatstat) handle duplicated spatial points? I have points duplicated at exactly the same position because I am combining measurements from different years, and I expect the density estimate to be higher in those areas, but I am not sure about it.
2) Is the default value of f in adaptive.density() f = 0 or f = 1?
My guess is that it is f = 0, so that it does an adaptive estimate by setting the intensity estimate at every location equal to the average intensity (number of points divided by window area).
Thank you for your time and input!
The default value of f is 0.1 as you can see from the "Usage" section in the help file.
The function subsamples the point pattern with this selection probability and uses the resulting pattern to generate a Dirichlet tessellation (duplicated points are ignored here). The remaining fraction of points (1 - f) is used to estimate the intensity: the number of points in each tile of the tessellation divided by the corresponding tile area (here duplicated points count fully towards the total in the tile).
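A minimal sketch of the call, using a simulated pattern in which every point is duplicated (as with your repeated years); comparing the resulting image against the single-year pattern is a direct way to check whether the duplicates raise the estimate:

```r
library(spatstat)

set.seed(42)
X  <- rpoispp(100)                  # random point pattern on the unit square
X2 <- superimpose(X, X)             # every point duplicated, as with repeated years
D  <- adaptive.density(X2, f = 0.1) # f = 0.1 is the documented default
plot(D)
```

`D` is a pixel image (class "im") of the estimated intensity, so `D[list(x = 0.5, y = 0.5)]` queries it at a location.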
I have created a 3D plot (a surface) using wireframe function. I wonder if there is any functions by which I can calculate the volume under the surface in a 3D plot?
Here is a sample of my data plus the wireframe syntax I used to create my 3D (surface) plot:
x1<-c(13,27,41,55,69,83,97,111,125,139)
x2<-c(27,55,83,111,139,166,194,222,250,278)
x3<-c(41,83,125,166,208,250,292,333,375,417)
x4<-c(55,111,166,222,278,333,389,445,500,556)
x5<-c(69,139,208,278,347,417,487,556,626,695)
x6<-c(83,166,250,333,417,500,584,667,751,834)
x7<-c(97,194,292,389,487,584,681,779,876,974)
x8<-c(111,222,333,445,556,667,779,890,1001,1113)
x9<-c(125,250,375,500,626,751,876,1001,1127,1252)
x10<-c(139,278,417,556,695,834,974,1113,1252,1391)
df<-data.frame(x1,x2,x3,x4,x5,x6,x7,x8,x9,x10)
df.matrix<-as.matrix(df)
library(lattice)  # wireframe() lives in lattice
wireframe(df.matrix,
          aspect = c(61/87, 0.4),
          scales = list(arrows = FALSE, cex = 0.5, tick.number = 10,
                        z = list(arrows = TRUE)),
          ylim = c(1, 10),
          xlab = expression(phi1), ylab = "Percentile", zlab = "Loss",
          main = "Random Classifier",
          light.source = c(10, 10, 10),
          drape = TRUE,
          col.regions = rainbow(100, s = 1, v = 1, start = 0,
                                end = max(1, 100 - 1)/100, alpha = 1),
          screen = list(z = -60, x = -60))
Note: my real data is a 100 x 100 matrix.
Thanks
The data you are feeding to wireframe is a grid of values. Hence one estimate of the volume of whatever underlying surface this is approximating is the sum of the grid values multiplied by the grid cell areas. This is just like adding up the heights of histogram bars to get the number of values in your histogram.
The problem I see with doing this on your data is that the cell areas are going to be in odd units: percentiles on one axis, phi (with unknown units) on the other, so your volume will have units of loss times percentile times phi.
This isn't a problem if you want to compare volumes of similar things on exactly the same grid, but if you have surfaces on different grids (different values of phi, or different percentiles) then you need to be careful.
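The cell-sum estimate above is a one-liner; the sketch below uses `outer(1:10, 1:10)` as a stand-in for your `df.matrix`, and the grid spacings are assumed to be 1 in each axis's own units:

```r
z <- outer(1:10, 1:10)  # stand-in for df.matrix from the question
dx <- 1; dy <- 1        # assumed grid spacing along each axis

# each value is treated as the height of a block of base area dx * dy
vol_cells <- sum(z) * dx * dy
vol_cells  # 3025 for this stand-in matrix
```

Swap in your real matrix and spacings, and remember the result carries the mixed units discussed above.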
Now, noting that wireframe doesn't draw like a 3D histogram would (looking like square tower blocks), this gives us another way to estimate the volume. Your 10x10 matrix is plotted as 9x9 squares. Divide each of those squares into two triangles and compute the volume of the resulting 162 truncated triangular prisms (prisms with vertical sides and a sloping top). The volume of each is the triangle's base area times the mean of the heights at its three corners.
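That prism sum can be sketched directly (again assuming unit grid spacing unless you pass dx/dy):

```r
# truncated-prism estimate: split each grid square into two triangles; each
# prism's volume is the triangle's base area times the mean of its three
# corner heights
prism_volume <- function(z, dx = 1, dy = 1) {
  nr <- nrow(z); nc <- ncol(z)
  tri_area <- dx * dy / 2
  vol <- 0
  for (i in 1:(nr - 1)) {
    for (j in 1:(nc - 1)) {
      z00 <- z[i, j];     z10 <- z[i + 1, j]
      z01 <- z[i, j + 1]; z11 <- z[i + 1, j + 1]
      vol <- vol + tri_area * mean(c(z00, z10, z01))  # lower triangle
      vol <- vol + tri_area * mean(c(z10, z01, z11))  # upper triangle
    }
  }
  vol
}

prism_volume(outer(1:10, 1:10))  # stand-in for df.matrix
```

As a sanity check, a flat surface of height 1 on a 3x3 grid covers 2x2 unit squares, so `prism_volume(matrix(1, 3, 3))` gives 4.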
I thought maybe this would be in the raster package, but it isn't. There's code for computing the surface area but not the volume! I'm sure the raster maintainer would be happy to have some code for this!
If the points are arbitrary (i.e. they don't follow a smooth function), it seems like you're looking for the volume of the convex hull (minimum surface) surrounding these points. One package that can help you calculate this is alphashape3d.
You'll need a 3-column matrix of the coordinates to form the right type of object for the calculation, but it seems rather straightforward.
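A sketch with random points in the unit cube; the specific alpha value here is an assumption (a sufficiently large alpha makes the alpha-shape approach the convex hull):

```r
library(alphashape3d)

set.seed(1)
pts <- matrix(runif(300), ncol = 3)  # 100 random points in the unit cube
ash <- ashape3d(pts, alpha = 10)     # large alpha ~ convex hull (assumption)
volume_ashape3d(ash)                 # volume enclosed by the alpha-shape
```

For your surface data the x, y, z columns would come from the grid coordinates and the matrix values.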