Accessing the wind speed at each turbine - floris

I am looking for a way to access the wind speed at each of my turbines. Even though, as far as I can tell, FLORIS takes only one wind speed as input, there should be a way to see the wind speeds at the waked turbines, since those have to be calculated anyway to arrive at the overall wind farm power. Hence, I went to the documentation, and what I could find was that floris.simulation.farm has a getter that should return a list of the wind speeds over the wind farm. This can be achieved by:
wind_speed = floris.farm.wind_speed()
However, when I try to follow the instructions, I get only one integer, which is the wind speed that was set. So, is it possible to get the value of the wind speed at each turbine?

With v1.1.4 of FLORIS, the code:
wind_speed = floris.farm.wind_speed()
returns the wind speed of the wind farm which, as you state, is the same as the wind speed that was set either in the input file or in code. This is because wind_speed is a property of the farm class that is meant to return the farm-level wind speed (see the source code for the definition of the property).
Get Turbine Velocities Directly From FLORIS Object
In order to get the wind speed at individual turbines, you can use:
turbine_wind_speeds = [turb.average_velocity for turb in floris.farm.turbines]
which will return a list containing the average velocity of each turbine.
Get Turbine Coordinates Directly From FLORIS Object
The velocities are returned in the order that the turbine locations were specified. To know the specific turbine that a velocity is associated with, you can get the turbine coordinates from the turbine map:
turbine_coords = [(coord.x1, coord.x2) for coord in floris.farm.turbine_map.coords]
The first velocity returned in turbine_wind_speeds is the velocity at the first set of turbine coordinates in turbine_coords, and so on.
Example Script Showing Methods
A full script showing the usage of these methods and their output is given below. It uses the example input file included in the FLORIS examples folder and the FlorisInterface class, which has several other helper methods for interacting with FLORIS objects.
Example Python Script
# Import the tools module of FLORIS
import floris.tools as wfct
# Initialize the FLORIS interface 'fi'
fi = wfct.floris_utilities.FlorisInterface("example_input.json")
# Calculate wake
fi.calculate_wake()
# Retrieve and print the individual turbine velocities
turbine_wind_speeds = [turb.average_velocity for turb in fi.floris.farm.turbines]
print('turbine_wind_speeds: ', turbine_wind_speeds)
# Retrieve and print the turbine coordinates
turbine_coords = [(coord.x1, coord.x2) for coord in fi.floris.farm.turbine_map.coords]
print('turbine_coords: ', turbine_coords)
Example Output
turbine_wind_speeds: [7.973632994592287, 5.572642539922095, 7.973632994592287, 5.572642539922095]
turbine_coords: [(0.0, 0.0), (800.0, 0.0), (0.0, 630.0), (800.0, 630.0)]
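Since the two lists share the same ordering, they can also be paired up directly. A minimal follow-up sketch using the variables from the script above:
# Pair each turbine's (x, y) coordinates with its average velocity
for coord, speed in zip(turbine_coords, turbine_wind_speeds):
    print('turbine at {}: {:.2f} m/s'.format(coord, speed))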

Related

Defining temporal resolution in Comsol post-processing

I have a time dependent heat conduction simulation and need to plot the average temperature of some area over time. However, the exported table data apparently uses only a few data points and interpolates in between.
More specifically, I have some block of material (aluminum) that is heated periodically at some surface. I am now interested in temperature peaks at exactly this surface over time. I have defined the heating function, the surface, and have calculated the average temperature of the surface under observation over time. However, when I plot the exported data
the temperature data is really, REALLY coarse. The heating data however is very fine. Comsol seems to interpolate between very few points. Calculating with a finer temporal resolution won't fix it.
How can I tell Comsol to evaluate the temperature at every step?
OK, I found the answer:
https://www.comsol.com/support/knowledgebase/1254
Turns out the timesteps the solver chooses are completely separate from the ones the user defines for the simulation. This honestly makes me question the usefulness of defining the timesteps in the first place; it really seems to be just an additional hoop that keeps people dependent on support...
Solution:
Set the maximum time step in Solution/Time Dependent Solver to an acceptably small value.

How to set the position update interval with HERE SDK?

For a GPS app, I would like to set the interval for the position updates, or simply know the frequency that is used. How can I do this with the HERE SDK?
I ask this question because I would like to manage some operations in the OnPositionChangedListener function, so they will depend on the interval. Maybe I'll need to use a timer in order to have more control.
In comparison, UWP apps manage that with the properties Geolocator.ReportInterval and Geolocator.DesiredAccuracy.
The wait() method can be used to set or specify the time interval at which coordinates are captured, i.e. how dense the position fixes are.
public abstract static class NavigationManager.GpsSignalListener extends java.lang.Object
The accuracy of the GPS coordinates depends on many factors, with the result that the computed latitude and longitude can be from 0.5 meters to up to 40 meters away from the actual position. It is often impossible to determine a GPS position inside tunnels, inside buildings or in urban canyons. GPS receivers cannot provide a maximum deviation and thus, for example, are unable to indicate that the calculated location lies within 3.5 meters of the actual location with 95% probability. GPS receivers only provide an HDOP/VDOP/PDOP... value (horizontal/vertical dilution of precision) which indicates that the computed coordinates of a location cannot be more accurate than this value considering the number and/or position of satellites and the mathematical algorithms. This minimum error cannot be used to estimate the maximum error.
GPS heading and speed are computed by the receiver device based on the last several sets of GPS coordinates. The accuracy of the calculation depends on the actual speed of the vehicle and becomes unreliable if the vehicle speed drops below ~10 km/h. At low speeds, even the positional accuracy declines considerably, resulting in large random point clouds in some situations.
Please refer to the link below for more details:
https://developer.here.com/documentation/android-premium/api_reference_java/index.html
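If the SDK does not expose a report interval directly, one practical option, as the question already suggests, is to rate-limit the work done in the position callback yourself. A minimal, language-agnostic sketch of that idea (written in Python for brevity; on_position_changed and handle_position are hypothetical stand-ins for the logic that would live in the Java OnPositionChangedListener):
import time

MIN_INTERVAL_S = 5.0   # assumed value: only process a new position every 5 seconds
_last_handled = 0.0

def on_position_changed(position):
    # Stand-in for the SDK's position callback.
    global _last_handled
    now = time.monotonic()
    if now - _last_handled < MIN_INTERVAL_S:
        return  # ignore updates that arrive faster than the chosen interval
    _last_handled = now
    handle_position(position)

def handle_position(position):
    # Hypothetical: the expensive work you actually want to throttle.
    print('processing position:', position)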

Game Maker Get Point On Path

I'm combining A* pathfinding with a steering AI so I can make the movement look more smooth and natural. To do this, I'm calculating the path from the enemy to the player and using checkpoints on the path for the steering AI to move to. However, from what I have seen, the only way to get the x and y values of a certain point on a path is to use path_get_point_x(path, n) to get the x coordinate of the nth point of the path. But, from what I've seen, the number of points in a path is far too small for me to accurately move the enemy around obstacles. Sometimes the enemy goes through obstacles to get to the next point even though the path traces around the obstacle. I noticed there is a variable called path_position that is a number from 0-1 representing how far into the path you are (1 being finished). Is there a way to use that to predict where the enemy will be at position 0.3 if it's at position 0.25, for example?
Most of the time objects will take the quickest path, or the path of least resistance. Check precise collision detection for the tiles that border the pathways and see if that helps keep objects out of collision objects. As for predicting where they will be, you can multiply the speed of the object by the frames per second.
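The interpolation the question asks about can also be done by hand: read the path's points out with path_get_point_x/y, find the segment that contains a given 0-1 position, and interpolate linearly along it (GameMaker also documents path_get_x/path_get_y for a 0-1 position, which may do this directly). A rough sketch of the idea, shown in Python rather than GML:
import math

def point_on_path(points, t):
    # points: list of (x, y) tuples, e.g. read with path_get_point_x/y.
    # t: fraction of the total path length, 0-1. Assumes straight segments
    # and an open (non-closed) path.
    seg_lengths = [math.dist(points[i], points[i + 1])
                   for i in range(len(points) - 1)]
    target = max(0.0, min(1.0, t)) * sum(seg_lengths)
    for (x0, y0), (x1, y1), length in zip(points, points[1:], seg_lengths):
        if length > 0 and target <= length:
            f = target / length
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
        target -= length
    return points[-1]

# e.g. where will the enemy be at path_position 0.3?
print(point_on_path([(0, 0), (100, 0), (100, 100)], 0.3))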

Organized point cloud from stereo

I am working with disparity maps (1024 x 768) obtained via stereo, and I am able to get point clouds with XYZRGB pcl::Points. However, not all pixels of the disparity map have valid depth, hence there will never be 1024 x 768 = 786432 XYZRGB points. Fortunately, I am able to save the point clouds unorganized (i.e. height = 1). Unfortunately, some normal estimation methods etc. are tailored to organized point clouds. How can I create organized point clouds from this?
I believe that this is not possible.
First of all, an unorganized point cloud (PC) is just a list of points in arbitrary order written to a file.
An organized PC, on the other hand, carries information about the order in which the original points were obtained by the depth camera, along with some other information. This information is stored in what we can call a grid.
Once you destroy this grid by omitting some points, there is no algorithm that can put it back together as it originally was.
You can use other methods provided by PCL that do not take an organized point cloud as an argument. The result will be the same as if you had used an organized point cloud, only a little bit slower (depending on the size of your input cloud).
I assume that you do have the calibration parameters that are necessary to transform the image points and their depth into 3D points, right?
In this case, you simply create a point cloud with the same 2D layout (width x height) as the disparity map and do the following for each pixel of the disparity map:
If the point is valid:
set the corresponding point in the point cloud to the 3D point
else:
set the corresponding point in the cloud to NaN (i.e. a 3D point with NaN as coordinates)
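A minimal sketch of that idea in Python/NumPy (in PCL you would instead set the cloud's width, height and is_dense = false and write pcl::PointXYZRGB values; fx, fy, cx, cy and baseline below are assumed calibration parameters):
import numpy as np

H, W = 768, 1024                         # disparity map size
disparity = np.random.rand(H, W)         # stand-in for the real disparity map
disparity[disparity < 0.2] = 0           # pretend some pixels have no valid depth

fx, fy, cx, cy, baseline = 700.0, 700.0, W / 2, H / 2, 0.12   # assumed calibration

# Organized "cloud": one 3-vector per pixel, NaN wherever the disparity is invalid.
cloud = np.full((H, W, 3), np.nan, dtype=np.float32)
valid = disparity > 0
z = np.where(valid, fx * baseline / np.where(valid, disparity, 1.0), np.nan)
us, vs = np.meshgrid(np.arange(W), np.arange(H))
cloud[..., 0] = (us - cx) * z / fx
cloud[..., 1] = (vs - cy) * z / fy
cloud[..., 2] = z
# cloud keeps the full 1024 x 768 grid structure; invalid pixels stay as NaN
# points, which is what organized-cloud consumers such as normal estimation expect.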

Finding a density peak / cluster centrum in 2D grid / point process

I have a dataset with minute-by-minute GPS coordinates recorded by a person's cellphone, i.e. the dataset has 1440 rows with LON/LAT values. Based on the data I would like a point estimate (lon/lat value) of where the participant's home is. Let's assume that home is the single location where they spend most of their time in a given 24 h interval. Furthermore, the GPS sensor has quite high accuracy most of the time; however, sometimes it is completely off, resulting in gigantic outliers.
I think the best way to go about this is to treat it as a point process and use 2D density estimation to find the peak. Is there a native way to do this in R? I looked into kde2d (MASS) but this didn't really seem to do the trick. kde2d creates a 25x25 grid of the data range with density values. However, in my data the person can easily travel 100 miles or more per day, so these grid cells are generally far too coarse. I could narrow them down and use a much larger grid, but I am sure there must be a better way to get a point estimate.
There are "time spent" functions in the trip package (I'm the author). You can create objects from the track data that understand the underlying track process over time, and simply process the points assuming straight line segments between fixes. If "home" is where the largest value pixel is, i.e. when you break up all the segments based on the time duration and sum them into cells, then it's easy to find it. A "time spent" grid from the tripGrid function is a SpatialGridDataFrame with the standard sp package classes, and a trip object can be composed of one or many tracks.
Using rgdal you can easily transform coordinates to an appropriate map projection if lon/lat is not appropriate for your extent, but it makes no difference to the grid/time-spent calculation of line segments.
There is a simple speedfilter to remove fixes that imply movement that is too fast, but that is very simplistic and can introduce new problems, in general updating or filtering tracks for unlikely movement can be very complicated. (In my experience a basic time spent gridding gets you as good an estimate as many sophisticated models that just open up new complications). The filter works with Cartesian or long/lat coordinates, using tools in sp to calculate distances (long/lat is reliable, whereas a poor map projection choice can introduce problems - over short distances like humans on land it's probably no big deal).
(The function tripGrid calculates the exact components of the straight line segments using pixellate.psp, but that detail is hidden in the implementation).
In terms of data preparation, trip is strict about a sensible sequence of times and will prevent you from creating an object if the data have duplicates, are out of order, etc. There is an example of reading data from a text file in ?trip, and a very simple example with (really) dummy data is:
library(trip)
d <- data.frame(x = 1:10, y = rnorm(10), tms = Sys.time() + 1:10, id = gl(1, 5))
coordinates(d) <- ~x+y
tr <- trip(d, c("tms", "id"))
g <- tripGrid(tr)
pt <- coordinates(g)[which.max(g$z), ]
image(g, col = c("transparent", heat.colors(16)))
lines(tr, col = "black")
points(pt[1], pt[2], pch = "+", cex = 2)
That dummy track has no overlapping regions, but it shows that finding the max point in "time spent" is simple enough.
How about using the location that minimises the sum of squared distances to all the events? This might be close to the supremum of any kernel smoothing if my brain is working right.
If your data comprise two clusters (home and work) then I think the location will be in the biggest cluster rather than between them. It's not the same as the simple mean of the x and y coordinates.
For an uncertainty on that, jitter your data by whatever your positional uncertainty is (would be great if you had that value from the GPS, otherwise guess - 50 metres?) and recompute. Do that 100 times, do a kernel smoothing of those locations and find the 95% contour.
Not rigorous, and I need to experiment with this minimum distance/kernel supremum thing...
In response to spacedman - I am pretty sure least squares won't work. Least squares is best known for bowing to the demands of outliers, without much weighting to things that are "nearby". This is the opposite of what is desired.
The bisquare estimator would probably work better, in my opinion - but I have never used it. I think it also requires some tuning.
It's more or less like a least squares estimator up to a certain distance from 0, and then the weighting is constant beyond that. So once a point becomes an outlier, its penalty is constant. We don't want outliers to weigh more and more as we move away from them; we would rather weight them constantly, and let the optimization focus on better fitting the things in the vicinity of the cluster.
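For what it's worth, a small sketch of that idea (a Tukey bisquare M-estimate of the 2D location, minimised numerically), shown here in Python with SciPy rather than R; the tuning constant c and the toy data are assumptions:
import numpy as np
from scipy.optimize import minimize

def bisquare_loss(center, points, c=100.0):
    # Tukey bisquare loss of the distances from `center` to all points.
    # Residuals beyond c contribute a constant penalty, so far-away outliers
    # stop pulling on the estimate.
    r = np.linalg.norm(points - center, axis=1)
    rho = np.full_like(r, c**2 / 6.0)                 # constant penalty beyond c
    inside = r < c
    rho[inside] = (c**2 / 6.0) * (1 - (1 - (r[inside] / c) ** 2) ** 3)
    return rho.sum()

# toy data: a tight "home" cluster plus a handful of gigantic outliers
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal([0, 0], 20, size=(200, 2)),       # home cluster
                 rng.normal([5000, 5000], 50, size=(10, 2))])  # outliers
res = minimize(bisquare_loss, x0=np.median(pts, axis=0), args=(pts,))
print('robust location estimate:', res.x)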
