I have a bunch of data points that depict the frequency (no unit) at different times (seconds). I have several of these for different parts of a cell. I need to be able to compute the derivatives of each point in each column for all the data using R. Is this possible? Can someone point me in the right direction?
Please let me know if I am not being specific enough.
Thanks
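A minimal numerical-differentiation sketch, with made-up data values (the question asks for R, where the analogous per-column tools would be diff(), or splinefun() for a smooth derivative; the idea is the same):

    import numpy as np

    # Made-up data: times in seconds and one frequency column.
    time = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    freq = np.array([10.0, 12.0, 15.0, 14.0, 13.0])

    # np.gradient uses central differences in the interior and one-sided
    # differences at the endpoints, giving one derivative per data point.
    dfreq_dt = np.gradient(freq, time)
    print(dfreq_dt)

Applying this to each frequency column against the shared time column gives a derivative for every point in every column.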
What I'm currently doing is:
1. Train a GNN and see which graphs are labelled wrongly compared to the ground truth.
2. Use a GNN-explainer model to help explain which minimal sub-graph is responsible for the mislabeling, by checking the wrongly labelled instances.
3. Use graph_edit_distance from networkx to see how much these graphs differ from one another.
4. See if I can find clusters that help explain why the GNN might label some graphs wrongly.
Does this seem reasonable?
How would I go about step 4? Would I use something like sklearn_extra.cluster.KMedoids?
All help is appreciated!
"Use graph_edit_distance from networkx to see how much these graphs differ from one another."
I'm guessing this gives you a single number for any pair of graphs.
The question is: in what direction does this number point? How many dimensions (directions) are there? Suppose two graphs have the same distance from a third: does this mean that the two graphs are close together, forming a cluster at a distance from the third graph?
If you have answers to the questions in the previous paragraph, then the KMeans algorithm can find clusters in as many dimensions as you have. It is fast and easy to code and usually gives satisfactory results. https://en.wikipedia.org/wiki/K-means_clustering
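If all you actually have is the pairwise graph_edit_distance matrix (rather than feature vectors), then KMedoids with metric="precomputed", as the question suggests, is a natural fit, since it works directly on a distance matrix while KMeans needs coordinates. A minimal sketch, with stand-in graphs and a hypothetical cluster count:

    import networkx as nx
    import numpy as np
    from sklearn_extra.cluster import KMedoids

    # Hypothetical stand-ins for the mislabelled (sub)graphs.
    graphs = [nx.path_graph(3), nx.path_graph(4),
              nx.cycle_graph(3), nx.cycle_graph(4)]

    # Pairwise graph edit distances (symmetric, zero diagonal).
    # Note: exact GED is expensive; for larger graphs consider the
    # timeout= argument or nx.optimize_graph_edit_distance.
    n = len(graphs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = nx.graph_edit_distance(graphs[i], graphs[j])

    # Cluster directly on the precomputed distance matrix.
    labels = KMedoids(n_clusters=2, metric="precomputed",
                      random_state=0).fit_predict(D)
    print(labels)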
I would like to analyse movement data from a semi-captive animal population. We record their location every 5 minutes using a location code which corresponds to a map of the reserve we have made ourselves. Each grid square represents 100 square meters and has a letter and a number to identify it, e.g. H5 or L6 (letters correspond to columns, numbers to rows).

I would like to analyse differences in space use between three different periods of time, to answer questions such as: do the animals move around more in certain periods, or is their space use more restricted in others? Can someone give me any indication of how to go about this? I have looked into spatial analysis in RStudio but haven't come across anything that doesn't use official maps or location co-ordinates. I've not done this type of analysis before, so any help would be greatly appreciated! Thanks so much.
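One simple way in, sketched below with made-up codes, is to turn each grid code into numeric (column, row) indices and then compare per-period summaries such as the number of unique cells visited or the positional spread; since each square is 100 square meters, the indices can be multiplied by 10 to convert to meters:

    import numpy as np

    # Made-up location codes: letter = column, number = row.
    codes = ["H5", "H6", "L6", "H5", "I5"]

    def code_to_xy(code):
        # "H5" -> (7, 5): letters map to 0-based column indices.
        col = ord(code[0].upper()) - ord("A")
        row = int(code[1:])
        return col, row

    xy = np.array([code_to_xy(c) for c in codes])

    # Two crude space-use measures for one period:
    n_unique_cells = len({tuple(p) for p in xy})  # coverage (cells visited)
    spread = xy.std(axis=0)                       # positional spread
    print(n_unique_cells, spread)

Computing these per period and comparing across the three periods would answer the "more restricted vs. more mobile" question at a first pass.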
I am trying to get my code to calculate the distance between turning point locations on the two graphs I have plotted.
The code needs to compare the location of the first turning point on graph 1 to the location of the first turning point on graph 2. This step is then repeated on the remaining turning points.
The reason I am doing this is that I am trying to measure graph similarity after simplification. I will attach the code as well as the data files in case anyone has anything to add.
Thanks.
This should do the job. It calculates the distance between two points in Cartesian space:
    import numpy as np

    # Euclidean distance between two points with .x and .y attributes
    np.sqrt((pointA.x - pointB.x) ** 2 + (pointA.y - pointB.y) ** 2)
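And a hedged sketch of the pairing step described in the question, assuming the turning points have already been extracted into equal-length arrays in matching order (the coordinates here are made up):

    import numpy as np

    # Made-up turning points (x, y) for each graph, in matching order.
    turns1 = np.array([(1.0, 2.0), (3.0, 5.0), (6.0, 4.0)])
    turns2 = np.array([(1.2, 2.1), (2.9, 5.3), (6.4, 3.8)])

    # Euclidean distance between the i-th turning points of the two graphs.
    distances = np.linalg.norm(turns1 - turns2, axis=1)
    print(distances)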
I'm currently working towards a 3D model of this, but I thought I would start with 2D. Basically, I have a grid of longitude and latitude with NO2 concentrations across it. What I want to produce, at least for now, is a total amount of Nitrogen Dioxide between two points. Like so:
[figure: 2DGrid]
Basically, these two points are at different lats and lons and, as I stated, I want to find the amount of something between them. The tricky thing is that the model data I'm working with is gridded, so I need to account for the amount of something along a line at the lats and lons where that line cuts through the grid.
Another approach, and maybe a better one for my purposes, could be visualized like this: [figure: 3DGrid]
Ultimately, I'd like to be able to create a program (in any language, honestly) that could find the amount of "something" between two points in a 3D grid. If you would like specifics: the bottom altitude is the surface, and the top grid is the top of the atmosphere. The bottom point is a measurement device looking at the sun at a certain time of day (and therefore having a certain zenith and azimuth angle). I want to find the NO2 between that measurement device and the "top of the atmosphere", which in my grid is just the top altitude level (of which there are 25).
I'm rather new to coding, Stack Exchange, and even the subject matter I'm working with, so the sparse code I've made might create more clutter than it resolves; I thought it better to simply ask the question and see what methods/code you might suggest.
Hopefully my question is beneficial!
Best,
Taylor
To traverse all touched cells, you can use the Amanatides-Woo algorithm. It is suitable for both the 2D and the 3D case.
Implementation clues
To account for the quantity used from every cell, you can apply some model. For example, calculate the path length inside the cell (as the difference between the entry and exit coordinates) and divide by a normalizing factor to get the cell weight (for example, CellSize*Sqrt(3), the diagonal length, in the 3D case).
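A minimal 2D sketch of this idea, assuming a uniform grid with both endpoints inside it; it steps from cell to cell in the Amanatides-Woo manner and accumulates (path length inside cell) * (cell value). The conc array and points are hypothetical:

    import numpy as np

    def line_integral_2d(conc, cell_size, p0, p1):
        # Accumulate (path length inside each cell) * (cell value) along the
        # segment p0 -> p1, stepping cell to cell as in Amanatides & Woo.
        # Assumes conc[ix, iy] is the value of the cell whose lower-left
        # corner is at (ix * cell_size, iy * cell_size), and that both
        # endpoints lie inside the grid.
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        d = p1 - p0
        length = np.hypot(*d)
        cell = np.floor(p0 / cell_size).astype(int)
        step = np.where(d > 0, 1, -1)
        next_boundary = (cell + (step > 0)) * cell_size
        with np.errstate(divide="ignore", invalid="ignore"):
            # Parametric distance (0..1 along the segment) to the next grid
            # line on each axis, and the distance between crossings per axis.
            t_max = np.where(d != 0, (next_boundary - p0) / d, np.inf)
            t_delta = np.where(d != 0, cell_size / np.abs(d), np.inf)
        total, t = 0.0, 0.0
        while t < 1.0:
            axis = int(np.argmin(t_max))      # which boundary is hit first
            t_exit = min(t_max[axis], 1.0)
            total += (t_exit - t) * length * conc[cell[0], cell[1]]
            t = t_exit
            t_max[axis] += t_delta[axis]
            cell[axis] += step[axis]
        return total

    # Made-up 3x2 grid of NO2 values on unit cells:
    conc = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    print(line_integral_2d(conc, 1.0, (0.5, 0.5), (2.5, 1.5)))

The 3D case is the same loop with three axes, and the slanted line from the instrument to the top altitude level plays the role of p0 -> p1.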
Sorry for posting this on a programming site, but there may be many programmers here who are experts in geometry and 3D geometry, so please allow it.
I have been given best-fitted planes along with the original point data. I want to model a pyramid from this data, as the data represent a pyramid. My approach to this modelling is:
1. Finding the intersection lines (e.g. AB, CD, etc.) for each pair of adjacent planes
2. Then finding the pyramid top (T) from the previously found lines; as these lines don't pass through a single point, T is taken as the point closest to all of them
3. Intersecting the available side planes with a desired horizontal plane to get the base
In the figure, black triangles are the original best-fitted triangles; red and blue triangles are model triangles.

I want to show that the points fit the pyramid model better than they fit the given best-fitted planes. (Assume the original planes are updated as shown.)
Actually, step 2 is done using a weighted least-squares process. Each intersection line is assigned a weight, proportional to the angle between the normal vectors of the corresponding planes. In this step, I tried to find the point closest to all the intersection lines, i.e. point T. Because of the weights, line positions might change under the influence of a high-weight line; that means the original planes could change a little. So I want to show that these new plane positions fit the original point data better than the original planes did.
Any idea how to show this? I am thinking of using RMSE and showing the before and after RMSE. But then I think I should use a weighted RMSE, since all the planes referring to the point T are influenced, so I should treat this as a global case rather than looking at individual planes. But I can't figure out a way to show this. Or maybe I should use some other measure.

So I am confused and have no idea how to show this. Please help!
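For the before/after comparison, one possible sketch of a weighted point-to-plane RMSE: each plane is written as n . x = d with a unit normal, the RMSE is computed per plane from its fitted points, and per-plane weights (hypothetical here, merely echoing the idea of weighting by reliability) combine them into one global number:

    import numpy as np

    def plane_rmse(points, normal, offset):
        # RMSE of point-to-plane distances for {x : normal . x = offset},
        # assuming a unit-length normal.
        r = points @ normal - offset
        return np.sqrt(np.mean(r ** 2))

    def global_rmse(point_sets, normals, offsets, weights):
        # Weighted global RMSE over all side planes; the weights are
        # hypothetical (e.g. proportional to how reliable each plane is).
        mse = [plane_rmse(P, n, c) ** 2
               for P, n, c in zip(point_sets, normals, offsets)]
        return np.sqrt(np.average(mse, weights=weights))

    # Made-up example: three points near the plane z = 1.
    P = np.array([[0.0, 0.0, 1.02], [1.0, 0.0, 0.99], [0.0, 1.0, 1.01]])
    print(plane_rmse(P, np.array([0.0, 0.0, 1.0]), 1.0))

Computing the global value once with the original planes and once with the updated planes then gives the before/after comparison.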
If you are given the best-fit planes, why not intersect the three of them to get a single unambiguous T, then determine the lines AT, BT, and CT?
This is not a rhetorical question, by the way. Your actual question seems to be for reassurance that your procedure yields "well-fitted" results, but you have not explained or described what kind of fit you're looking for!
Unfortunately, without this information, your question cannot be answered as asked. If you describe your goals, we may be able to help you achieve them -- or, if you have not yet articulated them for yourself, that exercise may be enough to let you answer your own question...
That said, I will mention that the only difference between the planes you started with and the planes your procedure ends up with should be due to floating point error. This is because, geometrically speaking, all three lines should intersect at the same point as the planes that generated them.
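To make that concrete: writing plane i as n_i . x = d_i, the common point T is the solution of a 3x3 linear system. A minimal sketch with hypothetical plane coefficients:

    import numpy as np

    # Hypothetical side planes n_i . x = d_i (rows of N are the normals).
    N = np.array([[ 1.0,  0.0, 1.0],
                  [ 0.0,  1.0, 1.0],
                  [-1.0, -1.0, 1.0]])
    d = np.array([1.0, 1.0, 1.0])

    # The apex T is the single point lying on all three planes
    # (provided the normals are linearly independent).
    T = np.linalg.solve(N, d)
    print(T)

The lines AT, BT, and CT then follow directly from T and the base corners.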