Function to re-orient faces in Julia? - julia

I tried to find a Julia implementation for re-orienting the faces of a mesh. Does one exist?
Geometries from external sources do not necessarily have their faces oriented consistently (even when a consistent orientation is possible). When visualized in GLVisualize.jl, coloring the vertices of inconsistently oriented faces does not come out nicely.
Here is example Julia code using GLVisualize, with a linked screenshot of the inconsistent mesh:
using GLVisualize, GeometryTypes, GLWindow, GLAbstraction, Colors
vertices = [
    0.0   0.0   0.0
    10.0  0.0   0.0
    10.0  20.0  0.0
    0.0   20.0  0.0
    0.0   0.0   5.0
    10.0  0.0   5.0
    10.0  20.0  5.0
    0.0   20.0  5.0
]
faces = [
    7 6 5   # 5 6 7 for consistent orientation
    5 7 8
    1 4 3
    1 3 2
    1 2 6
    1 6 5
    2 3 7
    2 7 6
    3 4 8
    3 8 7
    4 1 5
    4 5 8
]
v = [ Point{3,Float32}( vertices[i, 1:3] ) for i in 1:size(vertices, 1) ]
f = [ Face{3,UInt32}( faces[i, 3:-1:1] ) for i in 1:size(faces, 1) ]
mesh = GLNormalAttributeMesh(
    vertices = v, faces = f,
    attributes = RGBA{Float32}[ RGBA{Float32}(0.0, 1.0, 0.0, 1.0) ],
    attribute_id = zeros( length(v) )
)
window = glscreen()
_view( visualize( mesh ), window )
renderloop( window )
[Screenshot: the brick viewed from the top, with the inconsistent mesh]
I could write a brute-force algorithm, but it would be slow, so it seems better to ask first.
A recent article suggests the problem can be solved in O(N) time; see https://arxiv.org/pdf/1512.02137.pdf
Note: one should be careful not to look at the source of mesh re-orientation implementations in software with incompatible licenses.
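For reference, a minimal sketch of the usual propagation idea behind such algorithms: pick a seed face, walk the face-adjacency graph, and flip any neighbouring face whose shared edge runs in the same direction as in the already-oriented face (in a consistently oriented mesh, a shared edge appears in opposite directions in its two faces). The sketch below is in R rather than Julia, the helper name reorient_faces is hypothetical, it assumes a single connected triangle mesh, and it is a plain queue-based walk, not the O(N) construction from the linked paper.
# Minimal sketch: propagate a consistent winding across a connected
# triangle mesh. faces: n x 3 integer matrix, one triangle per row.
reorient_faces <- function(faces) {
    n <- nrow(faces)
    # Directed edges of a face (v1,v2,v3): (v1,v2), (v2,v3), (v3,v1).
    directed_edges <- function(f) rbind(f[c(1, 2)], f[c(2, 3)], f[c(3, 1)])
    edge_key <- function(e) paste(min(e), max(e), sep = "-")  # undirected key
    # Map each undirected edge to the row indices of the faces using it.
    edge_faces <- list()
    for (i in seq_len(n)) {
        de <- directed_edges(faces[i, ])
        for (k in 1:3) {
            key <- edge_key(de[k, ])
            edge_faces[[key]] <- c(edge_faces[[key]], i)
        }
    }
    visited <- rep(FALSE, n)
    visited[1] <- TRUE   # face 1 is the seed orientation
    queue <- 1L
    while (length(queue) > 0) {
        i <- queue[1]
        queue <- queue[-1]
        de_i <- directed_edges(faces[i, ])
        for (k in 1:3) {
            for (j in edge_faces[[edge_key(de_i[k, ])]]) {
                if (visited[j]) next
                de_j <- directed_edges(faces[j, ])
                # Consistent winding means the shared edge runs in opposite
                # directions in the two faces; same direction => flip face j.
                if (any(de_j[, 1] == de_i[k, 1] & de_j[, 2] == de_i[k, 2])) {
                    faces[j, ] <- rev(faces[j, ])
                }
                visited[j] <- TRUE
                queue <- c(queue, j)
            }
        }
    }
    faces
}
Note that the global orientation follows the seed face, so the whole mesh may come out inward-facing; a common final step is to compute the signed volume and reverse every face if it is negative.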

Related

What does note_ind:ncol(dataset) mean in R?

I have this line of code, but I don't know what it means, especially the note_ind part.
apply(mydat[,-c(1,2,3,note_ind:ncol(dataset))],c(1,2),as.numeric)
The notation x:y creates a numeric sequence in which each element is the previous element incremented by 1. It is shorthand for seq(x, y, by = 1). It is most commonly used for integer sequences, but it works on doubles as well.
1:10
[1] 1 2 3 4 5 6 7 8 9 10
1.1:10.1
[1] 1.1 2.1 3.1 4.1 5.1 6.1 7.1 8.1 9.1 10.1
1.5:10.2 # stops at 9.5 because the next value, 10.5, would exceed 10.2; seq() behaves the same way
[1] 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5
Presumably note_ind is an integer value defined elsewhere in your code. ncol(dataset) is the number of columns, so note_ind:ncol(dataset) generates a sequence between those two values, incrementing by 1 for each element.
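For illustration (the values here are assumed, not taken from your code), if note_ind were 5 and dataset had 8 columns, the negative index would drop columns 1 through 3 and 5 through 8, keeping only column 4:
note_ind <- 5                                     # assumed value
dataset <- as.data.frame(matrix(1:16, nrow = 2))  # 8 columns, for illustration
note_ind:ncol(dataset)                            # 5 6 7 8
dataset[, -c(1, 2, 3, note_ind:ncol(dataset))]    # only column 4 is kept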

How to find a polygon with given x and y coordinates

I have data like this: a coordinate system of (x, y) pairs. The question is how to find the (x, y) points such that, if we connect them, all the other points lie inside the resulting polygon, and the polygon must be convex.
x_axis y_axis id
8 14.5 1
1.1 1.1 2
3.5 3.4 3
4.5 0.4 4
13.5 7.8 5
11.5 15.2 6
2.8 8.3 7
0.3 5.4 8
1.5 3.8 9
8 8 10
8.3 8.1 11
2 10 12
5 14 13
... ... ...
A naive solution is to take the rectangle that contains all the points:
select min(ap.x_axis) as x,
       min(ap.y_axis) as y,
       'lower_left' as corner   -- "desc" is a reserved word, so use another alias
from all_points ap
union all
select min(ap.x_axis) as x,
       max(ap.y_axis) as y,
       'upper_left' as corner
from all_points ap
union all
select max(ap.x_axis) as x,
       max(ap.y_axis) as y,
       'upper_right' as corner
from all_points ap
union all
select max(ap.x_axis) as x,
       min(ap.y_axis) as y,
       'lower_right' as corner
from all_points ap
It is a working solution for the problem as you posted it.
If you want the smallest such polygon, that is called the 'convex hull'. One algorithm for finding it is the 'Graham scan', and it is described on Wikipedia.
We won't code it for you.
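If you can pull the points into R, though, there is no need to hand-code it: base R ships chull(), which returns the row indices of the convex-hull vertices in order. A minimal sketch on the sample data (column names taken from the question):
pts <- data.frame(
    x_axis = c(8, 1.1, 3.5, 4.5, 13.5, 11.5, 2.8, 0.3, 1.5, 8, 8.3, 2, 5),
    y_axis = c(14.5, 1.1, 3.4, 0.4, 7.8, 15.2, 8.3, 5.4, 3.8, 8, 8.1, 10, 14)
)
hull <- chull(pts$x_axis, pts$y_axis)  # indices of the hull corners, in order
pts[hull, ]                            # the convex polygon's corner points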

Choose the right analysis

I have a dataset in R that contains 2 groups, good and bad. The group good contains users that have a long lifetime, and the users in bad have a short lifetime.
So good contains game_id and game_played. For example, good$game_id==1 (game 1) has been played good$game_played==12.5 hours.
I want to investigate whether there is a difference between good and bad, and see which game_ids make the difference between good and bad.
I have 20 game_ids, so I don't need Principal Component Analysis to reduce the number of game_ids. How should one set up an analysis to see whether some game_ids make the difference between good and bad?
So in R we get an output like this for good:
game_id game_played
6 18.3
14 2.1
4 0.6
1 1.0
2 1.4
3 0.1
5 0.4
7 1.2
8 1.2
9 3.1
10 1.7
11 11.6
12 0.2
13 5.4
15 4.3
16 12.4
17 8.2
18 7.0
19 3.4
20 4.6
where game_id is the name of the game and game_played is the number of hours the game has been played in data good. For bad we have a similar output with different values.
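One simple starting point, sketched under the assumption that good and bad are data frames each holding one aggregate game_played value per game_id (as in the output above), is to pair the two tables by game_id, test the paired differences, and rank the games by the size of the gap:
# assumed layout: data frames good and bad, columns game_id and game_played
merged <- merge(good, bad, by = "game_id", suffixes = c("_good", "_bad"))
merged$diff <- merged$game_played_good - merged$game_played_bad

# paired test across the 20 games: is there an overall difference?
wilcox.test(merged$game_played_good, merged$game_played_bad, paired = TRUE)

# which games drive the difference: rank by the absolute per-game gap
merged[order(-abs(merged$diff)), c("game_id", "diff")]
With per-user rather than per-game data, a two-sample test per game_id (with a multiple-testing correction) or a logistic regression of group membership on the 20 play-time variables would be more informative.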

Segmenting a data frame by row based on previous rows' values

I have a data frame in R that contains 2 columns named x and y (coordinates). The data frame represents a journey, with each line representing the position at the next point in time.
x y seconds
1 0.0 0.0 0
2 -5.8 -8.5 1
3 -11.6 -18.2 2
4 -16.9 -30.1 3
5 -22.8 -40.8 4
6 -29.0 -51.6 5
I need to break the journey up into segments where each segment starts once the distance from the start of the previous segment crosses a certain threshold (e.g. 200).
I have recently switched from using SAS to R, and this is the first time I've come across anything I can do easily in SAS but can't even think of the way to approach the problem in R.
I've posted the SAS code I would use below to do the same job. It creates a new column called segment.
%let cutoff=200;
data segments;
    set journey;
    retain segment distance x_start y_start;
    if _n_=1 then do;
        x_start=x;
        y_start=y;
        segment=1;
        distance=0;
    end;
    distance + sqrt((x-x_start)**2+(y-y_start)**2);
    if distance>&cutoff then do;
        x_start=x;
        y_start=y;
        segment+1;
        distance=0;
    end;
    keep x y seconds segment;
run;
Edit: Example output
If the cutoff were 200 then an example of required output would look something like...
x y seconds segment
1 0.0 0.0 0 1
2 40.0 30.0 1 1
3 80.0 60.0 2 1
4 120.0 90.0 3 1
5 160.0 120.0 4 2
6 120.0 150.0 5 2
7 80.0 180.0 6 2
8 40.0 210.0 7 2
9 0.0 240.0 8 3
If your data set is dd, something like
cutoff <- 200
origin <- dd[1, c("x", "y")]
cur.seg <- 1
dd$segment <- NA
for (i in 1:nrow(dd)) {
    dist <- sqrt(sum((dd[i, c("x", "y")] - origin)^2))
    if (dist > cutoff) {
        cur.seg <- cur.seg + 1
        origin <- dd[i, c("x", "y")]
    }
    dd$segment[i] <- cur.seg
}
should work. There are some refinements: it might be more efficient to compute the distances from the current origin to all rows at once and then use which(dist > cutoff)[1] to jump to the first row that goes beyond the cutoff, and it would be interesting to come up with a completely vectorized solution, but this should be OK. How big is your data set?
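A sketch of that which()-based refinement (same straight-line-distance-from-origin semantics as the loop above, so the outer loop runs once per segment rather than once per row; the variable names are illustrative):
cutoff <- 200
dd$segment <- NA
start <- 1                                    # first row of the current segment
seg <- 1
while (start <= nrow(dd)) {
    origin <- dd[start, c("x", "y")]
    d <- sqrt((dd$x - origin$x)^2 + (dd$y - origin$y)^2)
    # first later row whose distance from the segment origin exceeds the cutoff
    nxt <- which(d > cutoff & seq_len(nrow(dd)) > start)[1]
    end <- if (is.na(nxt)) nrow(dd) else nxt - 1
    dd$segment[start:end] <- seg              # rows up to the crossing keep this segment
    start <- end + 1                          # the crossing row starts the next segment
    seg <- seg + 1
}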

Computing a "rightmost" moving average?

I would like to compute a moving average (ma) over some time series data, but I want the window of order n to be anchored at the right-hand end of the series, so that the last ma value is the ma of the last n values of my series. The desired function rightmost_ma would produce this output:
data <- seq(1,10)
> data
[1] 1 2 3 4 5 6 7 8 9 10
rightmost_ma(data, n=2)
NA 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5
I was reviewing the different ma possibilities, e.g. in the forecast package, and could not find one that covers this use case. Note that the critical requirement for me is to have valid non-NA ma values for the last elements of the series; in other words, I want my ma to produce valid results without "looking into the future".
Take a look at the rollmean function from the zoo package:
> library(zoo)
> rollmean(zoo(1:10), 2, align ="right", fill=NA)
1 2 3 4 5 6 7 8 9 10
NA 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5
You can also use rollapply:
> rollapply(zoo(1:10), width=2, FUN=mean, align = "right", fill=NA)
1 2 3 4 5 6 7 8 9 10
NA 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5
I think using stats::filter is less complicated, and might have better performance (though zoo is well written).
This:
filter(1:10, c(1,1)/2, sides=1)
gives:
Time Series:
Start = 1
End = 10
Frequency = 1
[1] NA 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5
If you don't want the result to be a ts object, use as.vector on the result.
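For example:
as.vector(stats::filter(1:10, c(1, 1) / 2, sides = 1))
# [1]  NA 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5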
