heat transfer in spherical coordinates - boundary conditions implementation

I want to apply heat transfer (heat conduction and convection) to a hemisphere. It is transient, homogeneous heat transfer in spherical coordinates, with no heat generation. Initially the hemisphere is at Tinitial = 20 degrees (room temperature), and the external environmental temperature is -22 degrees. You can imagine that the hemisphere is a solid material. It is also a non-linear model, because the thermal conductivity changes once the material is frozen, and this changes the temperature profile.
I want to find the temperature profile of this solid over time, until the center temperature reaches -22 degrees.
In this case, temperature depends on three parameters, T(r,theta,t): radius, angle, and time.
1/α * ∂T(r,θ,t)/∂t = 1/r² * ∂/∂r( r² ∂T(r,θ,t)/∂r ) + 1/(r² sinθ) * ∂/∂θ( sinθ ∂T(r,θ,t)/∂θ )
I applied the finite difference method in MATLAB; however, the boundary conditions have issues. There is convection at the surface of the hemisphere and conduction at the inner nodes, and the bottom of the hemisphere is held at a constant temperature, which is the air temperature (-22). You can see the script I am using for the BCs below.
% Temperature at surface of hemisphere solid boundary node
for i=nodes
    for j=1:1:(nodes-1)
        Qcd_ot(i,j) = ((k(i,j)+k(i-1,j))/2)*A(i-1,j)*((Told(i,j)-Told(i-1,j))/dr); % heat conduction out of node
        Qcv(i,j)    = h*(Tair-Told(i,j))*A(i,j);                                   % heat transfer through convection on surface
        Tnew(i,j)   = ((Qcv(i,j)-Qcd_ot(i,j))/(mass(i,j)*cp(i,j))/2)*dt + Told(i,j);
    end
end
% Temperature at inner nodes
for i=2:1:(nodes-1)
    for j=2:1:(nodes-1)
        Qcd_in(i,j)  = ((k(i,j)+k(i+1,j))/2)*A(i,j)  *((2/R)*((Told(i+1,j)-Told(i,j))/(2*dr)) + ((Told(i+1,j)-2*Told(i,j)+Told(i-1,j))/(dr^2)) + ((cot(y)/(R^2))*((Told(i,j+1)-Told(i,j-1))/(2*dy))) + (1/(R^2))*(Told(i,j+1)-2*Told(i,j)+Told(i,j-1))/(dy^2));
        Qcd_out(i,j) = ((k(i,j)+k(i-1,j))/2)*A(i-1,j)*((2/R)*((Told(i,j)-Told(i-1,j))/(2*dr)) + ((Told(i+1,j)-2*Told(i,j)+Told(i-1,j))/(dr^2)) + ((cot(y)/(R^2))*((Told(i,j+1)-Told(i,j-1))/(2*dy))) + (1/(R^2))*(Told(i,j+1)-2*Told(i,j)+Told(i,j-1))/(dy^2));
        Tnew(i,j)    = ((Qcd_in(i,j)-Qcd_out(i,j))/(mass(i,j)*cp(i,j)))*dt + Told(i,j);
    end
end
% Temperature at centerline nodes
for i=2:1:(nodes-1)
    for j=1
        Qcd_line(i,j)    = ((k(i,j)+k(i+1,j))/2)*A(i,j)*(Told(i+1,j)-Told(i,j))/dr;
        Qcd_lineout(i,j) = ((k(i,j)+k(i-1,j))/2)*A(i-1,j)*(Told(i,j)-Told(i-1,j))/dr;
        Tnew(i,j)        = ((Qcd_line(i,j)-Qcd_lineout(i,j))/(mass(i,j)*cp(i,j)))*dt + Told(i,j);
    end
end
% Temperature at bottom point (center) of the hemisphere solid
for i=1
    for j=1:1:(nodes-1)
        Qcd_center(i,j) = ((k(i,j)+k(i+1,j))/2)*A(i,j)*(Told(i+1,j)-Tair)/dr;
        Tnew(i,j)       = (Qcd_center(i,j)/(mass(i,j)*cp(i,j)))*dt + Told(i,j);
    end
end
% Temperature at all bottom points of the hemisphere
Tnew(:,nodes) = -22;
Told = Tnew;
t = t+dt;
The Tnew temperature values grow exponentially once the program runs, and then become NaN. It is supposed to show the cooling and freezing temperature profile of the solid until it reaches Tair. I could not figure out why it behaves like that.
I would like to hear your suggestions for implementing the BCs in this program, or how I should change them for these conditions. Thanks in advance!

Your code is too long to read and understand completely, but it looks like you are using a simple forward Euler scheme, is that correct? If so, try to reduce the time step dt, maybe by a lot, since this method can become numerically unstable if dt is too big. This might slow down the computation (again by a lot), but that is the price you pay for such a simple algorithm. There are alternative methods that do not suffer from this instability, but they are much harder to implement, since you need to solve a system of equations.
I did some thermal simulations using this simple scheme a long time ago. I found that the stability criterion was dt < (dx)^2 * c_p * rho / (6 * k), which should be valid for a simulation on a 3D Cartesian grid, where dx is the spatial step, c_p is the specific heat, rho the density and k the thermal conductivity of the material. I don't know how to convert this to your case with spherical coordinates. The lesson I learned then was to choose small time steps, but more importantly to choose as large a dx as possible: when you reduce dx by a factor of 2, you also need to reduce dt by a factor of 4 to keep things stable. At the same time, for a 3D problem, the number of elements increases by a factor of 8, so the total simulation time scales with 1 / (dx)^5!!!
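For concreteness, here is a minimal sketch of how you could pick dt from that Cartesian criterion before starting the time loop. It is written in Python with placeholder material values (the question does not list c_p, rho, k or dr), so treat the numbers as illustrative only:

# Minimal sketch: pick a forward-Euler time step from the 3D Cartesian
# stability estimate dt < dx^2 * c_p * rho / (6 * k) quoted above.
# All material values below are placeholders, not data from the question.

def stable_dt(dx, cp, rho, k, safety=0.5):
    """Return a time step safely below the Cartesian stability limit."""
    return safety * dx**2 * cp * rho / (6.0 * k)

dx  = 0.001      # spatial step in metres (placeholder)
cp  = 2000.0     # specific heat, J/(kg*K) (placeholder)
rho = 950.0      # density, kg/m^3 (placeholder)
k   = 1.5        # thermal conductivity, W/(m*K) (placeholder)

dt = stable_dt(dx, cp, rho, k)
print(f"use dt <= {dt:.3e} s for this grid spacing")

Since the thermal conductivity changes when the material freezes, the same check can be repeated whenever k or cp changes, keeping the smallest dt.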

Related

Given a set of points with x, y and z coordinates whose bounds are 0 to 1 (inclusive), determine if they're all uniformly distributed (or close to)

I'm trying to determine whether a set of points is uniformly distributed in a 1 x 1 x 1 cube. Each point comes with x, y, and z coordinates that correspond to its location in the cube.
A trivial way I can think of is to flatten the set of points into 2D graphs and check how normally distributed both are, but I do not know whether that is a correct way of doing so.
Does anyone have any other ideas?
I would compute a point density map and then check for anomalies in it:
definitions
let's assume we have N points to test. If the points are uniformly distributed, they should form a "uniform grid" of m*m*m points:
m * m * m = N
m = N^(1/3)
To account for disturbances from the uniform grid and to assess statistics, you need to divide your cube into a grid of cubes where each cube will hold several points (so statistical properties can be computed). Let's assume k>=5 points per grid cube, so:
cubes = m/k
create a 3D array of counters
simply, we need an integer counter for each grid cube, so:
int map[cubes][cubes][cubes];
fill it with zeroes.
process all points p(x,y,z) and update map[][][]
Simply loop through all of your points, compute the grid cube each one belongs to, and increment its counter:
map[(int)(x*(cubes-1))][(int)(y*(cubes-1))][(int)(z*(cubes-1))]++;
compute average count of the map[][][]
a simple average like this will do:
avg=0;
for (xx=0;xx<cubes;xx++)
 for (yy=0;yy<cubes;yy++)
  for (zz=0;zz<cubes;zz++)
   avg+=map[xx][yy][zz];
avg/=cubes*cubes*cubes;
now just compute the mean absolute distance to this average
d=0;
for (xx=0;xx<cubes;xx++)
 for (yy=0;yy<cubes;yy++)
  for (zz=0;zz<cubes;zz++)
   d+=fabs(map[xx][yy][zz]-avg);
d/=cubes*cubes*cubes;
The d will hold a metric telling how far the points are from uniform density, where 0 means a uniform distribution, so just threshold it. d also depends on the number of points, and my intuition tells me d>=k means totally non-uniform, so if you want to make it more robust you can do something like this (the threshold might need tweaking):
d/=k;
if (d<0.25) uniform;
else nonuniform;
As you can see, all of this is O(N) time, so it should be fast enough for you. If it isn't, you can evaluate only every 10th point by skipping points; however, that can be done only if the order of the points is random. If not, you would need to pick N/10 random points instead. The 10 can be any constant, but keep in mind that you still need enough points for the statistics to represent your set, so I would not go below 250 points (but that depends on what exactly you need).
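Putting the steps above together, a compact sketch of the same density-map test could look like this in Python (here points is assumed to be a list of (x, y, z) tuples with coordinates in [0, 1]; the grid-cube count follows cubes = m/k from above, and the threshold is the one suggested above, which may need tweaking):

# Compact Python sketch of the density-map uniformity test described above.
import random

def uniformity_metric(points, k=5):
    n = len(points)
    m = round(n ** (1.0 / 3.0))              # side of the ideal uniform grid (m*m*m = N)
    cubes = max(1, m // k)                   # grid cubes per axis (cubes = m/k as above)
    counts = [[[0] * cubes for _ in range(cubes)] for _ in range(cubes)]
    for x, y, z in points:                   # bin every point into its grid cube
        xi = min(int(x * cubes), cubes - 1)
        yi = min(int(y * cubes), cubes - 1)
        zi = min(int(z * cubes), cubes - 1)
        counts[xi][yi][zi] += 1
    avg = n / cubes ** 3                     # average count per grid cube
    d = 0.0                                  # mean absolute deviation from the average
    for a in range(cubes):
        for b in range(cubes):
            for c in range(cubes):
                d += abs(counts[a][b][c] - avg)
    d /= cubes ** 3
    return d / k                             # ~0 for a uniform grid, larger otherwise

pts = [(random.random(), random.random(), random.random()) for _ in range(1000)]
print(uniformity_metric(pts))                # threshold this value (e.g. 0.25 as above)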
Here are a few of my answers using the density map technique:
Finding holes in 2d point sets?
Location of highest density on a sphere

draw sines at higher freq

I'm trying to plot some sine waves (code example in plain js here).
When freq is "low" (freq = 10 Hz in that case), the plot is quite nice:
The problem is when I increase the freq (try to set var freq = 50 for example):
lots of ripples appear; it becomes distorted and no longer looks good as a plot. If I increase it further it gets even worse (var freq = 8030, for example, is terrible).
When I see those kinds of graphs on professional systems, they are displayed just fine.
How would you improve it? FFT, splines, whatever? Which is the right approach?
I don't really need accuracy (i.e. for waveform analysis or whatever), I just want to plot it nicely (as in Desmos https://www.desmos.com/calculator/eodkjlywjh, for example).
Like Matt said, the problem here is the same as in plotting a discrete-time signal shows amplitude modulation: "At higher frequencies, when the peak falls between two samples, the sampled points can be a lot lower than the peak." This causes the peaks of the waveform to vary, making a kind of ripple visual effect at the top and bottom of the plot.
Increasing the sampling resolution helps. If you can't increase sampling resolution by much, adjust the sampling resolution to the closest integer multiple of the sine's frequency and set the sampling phase so that it samples the waveform peaks. For instance for a 100 Hz wave cos(2 π (100*x + 0.3)), you could sample at 400 Hz at x[n] = n/400 - 0.003. That way there are four samples per period, with x[0], x[4], x[8], ... sampling exactly on the peaks.
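As a rough illustration of that peak-aligned sampling (a Python sketch rather than the original JavaScript, using the 100 Hz / 0.3-cycle example above; samples_per_period is my own name for the integer multiple):

# Sketch of peak-aligned sampling for cos(2*pi*(f*x + phase)).
# Sampling at an integer multiple of f, with the phase offset compensated,
# guarantees that some samples land exactly on the peaks.
import math

f = 100.0                        # signal frequency in Hz (example from the text)
phase = 0.3                      # phase in cycles (example from the text)
samples_per_period = 4
fs = samples_per_period * f      # 400 Hz sampling rate
offset = phase / f               # so that x[0] = -0.003 s falls on a peak

xs = [n / fs - offset for n in range(41)]
ys = [math.cos(2 * math.pi * (f * x + phase)) for x in xs]

print([round(y, 6) for y in ys[::samples_per_period]])  # every 4th sample sits on a peak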
Another thought: plotting with a thicker line can help to smooth over some defects. It looks like the Desmos example that you linked is using something like a 3-pixel line width.

Why do I get two frequency spikes from a simple sin function via FFT in R?

I learned about Fourier transforms in mathematics classes and thought I had understood them. Now I am trying to play around with R (a statistical language) and interpret the results of a discrete FFT in practice. This is what I have done:
x = seq(0,1,by=0.1)
y = sin(2*pi*(x))
calcenergy <- function(x) Im(x) * Im(x) + Re(x) * Re(x)
fy <- fft(y)
plot(x, calcenergy(fy))
and get this plot:
If I understand this right, this represents half of the energy density spectrum. As the transformation is symmetric, I could just mirror all values to the negative values of x to get the full spectrum.
However, what I don't understand is why I am getting two spikes. There is only a single sine frequency in here. Is this an aliasing effect?
Also, I have no clue how to get the frequencies out of this plot. Let's assume the unit of the sine function's argument is seconds; is the peak at 1.0 in the density spectrum 1 Hz then?
Again: I understand the theory behind FFT; the practical application is the problem :).
Thanks for any help!
For a purely real input signal of N points you get a complex output of N points with complex conjugate symmetry about N/2. You can ignore the output points above N/2, since they provide no useful additional information for a real input signal, but if you do plot them you will see the aforementioned symmetry, and for a single sine wave you will see peaks at bins n and N - n. (Note: you can think of the upper N/2 bins as representing negative frequencies.) In summary, for a real input signal of N points, you get N/2 useful complex output bins from the FFT, which represent frequencies from DC (0 Hz) to Nyquist (Fs / 2).
To get frequencies from the result of an FFT you need to know the sample rate of the data that was input to the FFT and the length of the FFT. The center frequency of each bin is the bin index times the sample rate divided by the length of the FFT. Thus you will get frequencies from DC (0 Hz) to Fs/2 at the halfway bin.
The second half of the FFT results are just complex conjugates of the first for real data inputs. The reason is that the imaginary portions of complex conjugates cancel, which is required to represent a summed result with zero imaginary content, e.g. strictly real.
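A short sketch of the bin-frequency mapping and conjugate symmetry described above (NumPy rather than R; the sample rate and length are illustrative choices):

# For a real input of N samples at rate fs, bin k is centred at k*fs/N and
# bin N-k is the complex conjugate of bin k.
import numpy as np

fs = 10.0                          # sample rate in Hz (illustrative)
N = 100                            # FFT length (illustrative)
t = np.arange(N) / fs
y = np.sin(2 * np.pi * 1.0 * t)    # a 1 Hz sine

Y = np.fft.fft(y)
freqs = np.arange(N) * fs / N      # centre frequency of each bin

peak = np.argmax(np.abs(Y[:N // 2]))               # look only at the useful half
print(freqs[peak])                                 # 1.0 (Hz)
print(np.allclose(Y[N - peak], np.conj(Y[peak])))  # mirror bin is the conjugate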

Zip Code Radius Search

I'm wondering if it's possible to find all points by longitude and latitude within X radius of one point?
So, if I provide a latitude/longitude of -76.0000, 38.0000, is it possible to simply find all the possible coordinates within (for example) a 10 mile radius of that?
I know that there's a way to calculate the distance between two points, which is why I'm not clear as to whether this is possible. Because, it seems like you need to know the center coordinates (-76 and 38 in this case) as well as the coordinates of every other point in order to determine whether it falls within the specified radius. Is that right?
@David's strategy is correct, but his implementation is seriously flawed. I suggest that before you perform the calculations you transform your lat/long pair to UTM coordinates and work in distance, not angular, measurements. If you are not familiar with Universal Transverse Mercator, hit Google or Wikipedia.
I reckon that your point (-76,38) is at UTM 37C 472995 (Easting) 1564346 (Northing). So you want to do your calculations of distance from that point. You'll find it easier, working with UTM, to work in metres, so your distance is (if you are using statute miles of 5280 feet) about 16,093 metres.
Incidentally, (-76,38) is well outside the continental US -- does the US Post Office define zip codes for Antarctica?
If you accept that the Earth is a perfect sphere, you can obtain the spatial coordinates of a point by
x = R.cos(Lat).cos(Long)
y = R.cos(Lat).sin(Long)
z = R.sin(Lat)
Now, take two points and compute the angle they form with the center of the Earth (using a dot product):
cos(Phi) = (x'.x" + y'.y" + z'.z") / R²
(the value of R gets simplified).
In your case, the angular distance Phi (in radians) equals D/R, where D is the ground distance and R = 6378.1 km.
A point P" is within the ground distance D of P' when the dot product (divided by R²) is larger than cos(Phi).
CAUTION: all angles must be in radians.
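A small Python sketch of this dot-product test (the helper names are mine, not from the answer; lat/long in degrees, distances in km):

# Spherical membership test: P" is within ground distance D of P' when
# dot(P', P")/R^2 >= cos(D/R), with all angles in radians.
import math

R_EARTH = 6378.1  # km

def to_xyz(lat_deg, lon_deg, r=R_EARTH):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

def within_ground_distance(p1, p2, d_km):
    x1, y1, z1 = to_xyz(*p1)
    x2, y2, z2 = to_xyz(*p2)
    cos_sep = (x1 * x2 + y1 * y2 + z1 * z2) / R_EARTH ** 2
    return cos_sep >= math.cos(d_km / R_EARTH)   # Phi = D / R

# e.g. is (38.1, -76.1) within roughly 10 miles (16.093 km) of (38.0, -76.0)?
print(within_ground_distance((38.0, -76.0), (38.1, -76.1), 16.093))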
Depending on the precision, the set of points within a certain distance may be extremely large or even infinite (and thus impossible to enumerate). In any circle with a positive radius there are infinitely many points, so while it is trivial to determine whether a given point falls within the circle, enumerating all such points is impossible.
If you do set a fixed precision (such as a single digit), you can loop over all possible latitude and longitude combinations and perform the distance test.
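A rough sketch of that fixed-precision idea, reusing the dot-product criterion from the previous answer (the 0.1-degree step and the 1-degree search window are arbitrary choices of mine):

# Enumerate a fixed-precision lat/long grid around the centre and keep the
# grid points that pass the great-circle distance test.
import math

R_EARTH = 6378.1  # km

def cos_separation(lat1, lon1, lat2, lon2):
    la1, lo1, la2, lo2 = map(math.radians, (lat1, lon1, lat2, lon2))
    # dot product of the two unit vectors on the sphere
    return (math.cos(la1) * math.cos(lo1) * math.cos(la2) * math.cos(lo2) +
            math.cos(la1) * math.sin(lo1) * math.cos(la2) * math.sin(lo2) +
            math.sin(la1) * math.sin(la2))

def grid_points_within(lat0, lon0, radius_km, step=0.1, span=1.0):
    threshold = math.cos(radius_km / R_EARTH)
    steps = int(round(2 * span / step))
    pts = []
    for i in range(steps + 1):
        lat = lat0 - span + i * step
        for j in range(steps + 1):
            lon = lon0 - span + j * step
            if cos_separation(lat0, lon0, lat, lon) >= threshold:
                pts.append((round(lat, 1), round(lon, 1)))
    return pts

print(len(grid_points_within(38.0, -76.0, 16.093)))   # ~10-mile radius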
Kevin is correct. There is no reason to calculate every possible coordinate-pair in the radius.
If you start at the centerpoint pC = Point(-76.0000, 38.0000) and are testing to find out if arbitrary point pA = Point(Ax, Ay) is within a 10 mile radius... use the Pythagorean theorem:
xDist = abs( pCx - Ax )
yDist = abs ( pCy - Ay )
r^2 = (xDist)^2 + (yDist)^2
A reasonable approximation is to only query the points where
pAx >= (-76.0000 - 10.0000) && pAx <= (-76.0000 + 10.0000)
pAy >= ( 38.0000 - 10.0000) && pAy <= ( 38.0000 + 10.0000)
then perform the more intensive calculation above.
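A minimal sketch of this two-stage test (like the answer, it works directly in coordinate units, so in practice you would first convert the 10-mile radius into degrees):

# Cheap bounding-box prefilter followed by the Pythagorean check.
import math

def within_radius(center, point, radius):
    cx, cy = center
    ax, ay = point
    # prefilter: reject anything outside the bounding square
    if abs(cx - ax) > radius or abs(cy - ay) > radius:
        return False
    # more intensive check: Pythagorean distance
    return math.hypot(cx - ax, cy - ay) <= radius

pC = (-76.0, 38.0)
print(within_radius(pC, (-75.5, 38.2), 1.0))   # radius given here in degrees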

Collision Detection between Accelerating Spheres

I am writing a physics engine/simulator which incorporates 3D space flight, planetary/stellar gravitation, ship thrust and relativistic effects. So far, it is going very well, however, one thing that I need help with is the math of the collision detection algorithm.
The iterative simulation of movement that I am using is basically as follows:
(Note: 3D Vectors are ALL CAPS.)
For each obj
obj.ACC = Sum(all acceleration influences)
obj.POS = obj.POS + (obj.VEL * dT) + (obj.ACC * dT^2)/2 (*EQ.2*)
obj.VEL = obj.VEL + (obj.ACC * dT)
Next
Where:
obj.ACC is the acceleration vector of the object
obj.POS is the position or location vector of the object
obj.VEL is the velocity vector of the object
obj.Radius is the radius (scalar) of the object
dT is the time delta or increment
What I basically need to do is to find some efficient formula that derives from (EQ.2) above for two objects (obj1, obj2) and tells whether they ever collide, and if so, at what time. I need the exact time both so that I can determine whether it is in this particular time increment (because acceleration will be different at different time increments) and also so that I can locate the exact position (which I know how to do, given the time).
For this engine, I am modelling all objects as spheres, all this formula/algorithm needs to do is to figure out at what points:
(obj1.POS - obj2.POS).Distance = (obj1.Radius + obj2.Radius)
where .Distance is a positive scalar value. (You can also square both sides if this is easier, to avoid the square root function implicit in the .Distance calculation).
(yes, I am aware of many, many other collision detection questions, however, their solutions all seem to be very particular to their engine and assumptions, and none appear to match my conditions: 3D, spheres, and acceleration applied within the simulation increments. Let me know if I am wrong.)
Some Clarifications:
1) It is not sufficient for me to check for Intersection of the two spheres before and after the time increment. In many cases their velocities and position changes will far exceed their radii.
2) RE: efficiency, I do not need help (at this point anyway) with respect to determine likely candidates for collisions, I think that I have that covered.
Another clarification, which seems to be coming up a lot:
3) My equation (EQ.2) of incremental movement is a quadratic equation that applies both Velocity and Acceleration:
obj.POS = obj.POS + (obj.VEL * dT) + (obj.ACC * dT^2)/2
In the physics engines that I have seen (and certainly every game engine that I have ever heard of), only linear equations of incremental movement that apply Velocity alone are used:
obj.POS = obj.POS + (obj.VEL * dT)
This is why I cannot use the commonly published solutions for collision detection found on StackOverflow, on Wikipedia and all over the Web, such as finding the intersection/closest approach of two line segments. My simulation deals with variable accelerations that are fundamental to the results, so what I need is the intersection/closest approach of two parabolic segments.
On the webpage AShelley referred to, the Closest Point of Approach method is developed for the case of two objects moving at constant velocity. However, I believe the same vector-calculus method can be used to derive a result in the case of two objects both moving with constant non-zero acceleration (quadratic time dependence).
In this case, the time derivative of the distance-squared function is 3rd order (cubic) instead of 1st order. Therefore there will be 3 solutions to the Time of Closest Approach, which is not surprising since the path of both objects is curved so multiple intersections are possible. For this application, you would probably want to use the earliest value of t which is within the interval defined by the current simulation step (if such a time exists).
I worked out the derivative equation which should give the times of closest approach:
0 = |D_ACC|^2 * t^3 + 3 * dot(D_ACC, D_VEL) * t^2 + 2 * [ |D_VEL|^2 + dot(D_POS, D_ACC) ] * t + 2 * dot(D_POS, D_VEL)
where:
D_ACC = ob1.ACC-obj2.ACC
D_VEL = ob1.VEL-obj2.VEL (before update)
D_POS = ob1.POS-obj2.POS (also before update)
and dot(A, B) = A.x*B.x + A.y*B.y + A.z*B.z
(Note that the square of the magnitude |A|^2 can be computed using dot(A, A))
To solve this for t, you'll probably need to use formulas like the ones found on Wikipedia.
Of course, this will only give you the moment of closest approach. You will need to test the distance at this moment (using something like Eq. 2). If it is greater than (obj1.Radius + obj2.Radius), it can be disregarded (i.e. no collision). However, if the distance is less, that means the spheres collide before this moment. You could then use an iterative search to test the distance at earlier times. It might also be possible to come up with another (even more complicated) derivation which takes the size into account, or possible to find some other analytic solution, without resorting to iterative solving.
Edit: because of the higher order, some of the solutions to the equation are actually moments of farthest separation. When all 3 solutions are real, the middle one is typically a time of farthest (locally maximal) separation and the other two are local minima; when only 1 solution is real, it is a minimum. You can test analytically whether you're at a min or a max by evaluating the second derivative with respect to time (at the values of t which you found by setting the first derivative to zero):
D''(t) = 3 * |D_ACC|^2 * t^2 + 6 * dot(D_ACC, D_VEL) * t + 2 * [ |D_VEL|^2 + dot(D_POS, D_ACC) ]
If the second derivative evaluates to a positive number, then you know the distance is at a minimum, not a maximum, for the given time t.
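A rough NumPy sketch of that procedure: build the cubic from the dot products above, solve it, and keep only the real roots inside the current step that the second-derivative test identifies as minima (the function name and the imaginary-part tolerance are my own choices):

import numpy as np

def times_of_closest_approach(d_pos, d_vel, d_acc, dt_step):
    d_pos, d_vel, d_acc = map(np.asarray, (d_pos, d_vel, d_acc))
    a3 = np.dot(d_acc, d_acc)                        # |D_ACC|^2
    a2 = 3.0 * np.dot(d_acc, d_vel)
    a1 = 2.0 * (np.dot(d_vel, d_vel) + np.dot(d_pos, d_acc))
    a0 = 2.0 * np.dot(d_pos, d_vel)
    roots = np.roots([a3, a2, a1, a0])               # the cubic given above
    real = roots[np.abs(roots.imag) < 1e-9].real
    minima = []
    for t in real:
        second = 3 * a3 * t**2 + 2 * a2 * t + a1     # D''(t) from the formula above
        if second > 0 and 0.0 <= t <= dt_step:
            minima.append(float(t))
    return sorted(minima)

# pass D_POS, D_VEL, D_ACC (obj1 minus obj2) and the step size dT
print(times_of_closest_approach([1, 0, 0], [-1, 0, 0], [0, 0, 0], 2.0))  # [1.0]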
Draw a line between the start location and end location of each sphere. If the resulting line segments intersect, the spheres definitely collided at some point, and some clever math can find at what time the collision occurred. Also make sure to check whether the minimum distance between the segments (if they don't intersect) is ever less than 2*radius. This would also indicate a collision.
From there you can backstep your delta time to happen exactly at collision so you can correctly calculate the forces.
Have you considered using a physics library which already does this work? Many libraries use far more advanced and more stable (better integrators) systems for solving the systems of equations you're working with. Bullet Physics comes to mind.
The OP asked for the time of collision. A slightly different approach will compute it exactly...
Remember that the position projection equation is:
NEW_POS=POS+VEL*t+(ACC*t^2)/2
If we replace POS with D_POS=POS_A-POS_B, VEL with D_VEL=VEL_A-VEL_B, and ACC=ACC_A-ACC_B for objects A and B we get:
D_NEW_POS=D_POS+D_VEL*t+(D_ACC*t^2)/2
This is the formula for vectored distance between the objects. In order to get the squared scalar distance between them, we can take the square of this equation, which after expansion looks like:
distsq(t) = D_POS^2+2*dot(D_POS,D_VEL)*t + (dot(D_POS, D_ACC)+D_VEL^2)*t^2 + dot(D_VEL,D_ACC)*t^3 + D_ACC^2*t^4/4
In order to find the time where collision occurs, we can set the equation equal to the square of the sum of radii and solve for t:
0 = D_POS^2-(r_A+r_B)^2 + 2*dot(D_POS,D_VEL)*t + (dot(D_POS, D_ACC)+D_VEL^2)*t^2 + dot(D_VEL,D_ACC)*t^3 + D_ACC^2*t^4/4
Now, we can solve for the equation using the quartic formula.
The quartic formula will yield 4 roots, but we are only interested in real roots. If there is a double real root, then the two objects touch edges at exactly one point in time. If there are two real roots, then the objects continuously overlap between root 1 and root 2 (i.e. root 1 is the time when collision starts and root 2 is the time when collision stops). Four real roots means that the objects collide twice, continuously between root pairs 1,2 and 3,4.
In R, I used polyroot() to solve as follows:
# initial positions
POS_A=matrix(c(0,0),2,1)
POS_B=matrix(c(2,0),2,1)
# initial velocities
VEL_A=matrix(c(sqrt(2)/2,sqrt(2)/2),2,1)
VEL_B=matrix(c(-sqrt(2)/2,sqrt(2)/2),2,1)
# acceleration
ACC_A=matrix(c(sqrt(2)/2,sqrt(2)/2),2,1)
ACC_B=matrix(c(0,0),2,1)
# radii
r_A=.25
r_B=.25
# deltas
D_POS=POS_B-POS_A
D_VEL=VEL_B-VEL_A
D_ACC=ACC_B-ACC_A
# quartic coefficients
z=c(t(D_POS)%*%D_POS-(r_A+r_B)^2, 2*t(D_POS)%*%D_VEL, t(D_VEL)%*%D_VEL+t(D_POS)%*%D_ACC, t(D_ACC)%*%D_VEL, .25*(t(D_ACC)%*%D_ACC))
# get roots
roots=polyroot(z)
# In this case there are only two real roots...
root1=as.numeric(roots[1])
root2=as.numeric(roots[2])
# trajectory over time
pos=function(p,v,a,t){
  T=t(matrix(t,length(t),2))
  return(t(matrix(p,2,length(t))+matrix(v,2,length(t))*T+.5*matrix(a,2,length(t))*T*T))
}
# plot A in red and B in blue
t=seq(0,2,by=.1) # from 0 to 2 seconds.
a1=pos(POS_A,VEL_A,ACC_A,t)
a2=pos(POS_B,VEL_B,ACC_B,t)
plot(a1,type='o',col='red')
lines(a2,type='o',col='blue')
# points of a circle with center 'p' and radius 'r'
circle=function(p,r,s=36){
  e=matrix(0,s+1,2)
  for(i in 1:s){
    e[i,1]=cos(2*pi*(1/s)*i)*r+p[1]
    e[i,2]=sin(2*pi*(1/s)*i)*r+p[2]
  }
  e[s+1,]=e[1,]
  return(e)
}
# plot circles with radius r_A and r_B at time of collision start in black
lines(circle(pos(POS_A,VEL_A,ACC_A,root1),r_A))
lines(circle(pos(POS_B,VEL_B,ACC_B,root1),r_B))
# plot circles with radius r_A and r_B at time of collision stop in gray
lines(circle(pos(POS_A,VEL_A,ACC_A,root2),r_A),col='gray')
lines(circle(pos(POS_B,VEL_B,ACC_B,root2),r_B),col='gray')
Object A follows the red trajectory from the lower left to the upper right. Object B follows the blue trajectory from the lower right to the upper left. The two objects collide continuously between time 0.9194381 and time 1.167549. The two black circles just touch, showing the beginning of overlap - and overlap continues in time until the objects reach the location of the gray circles.
Seems like you want the Closest Point of Approach (CPA). If the distance at that point is less than the sum of the radii, you have a collision. There is example code in the link. You can calculate it each frame with the current velocity, and check if the CPA time is less than your tick size. You could even cache the CPA time, and only update it when acceleration is applied to either item.
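For reference, a small Python sketch of a constant-velocity CPA check of the kind this answer refers to (the linked page's example code is in C; this is just an equivalent sketch, with the CPA time clamped to the current tick):

import numpy as np

def cpa_collision(p1, v1, r1, p2, v2, r2, tick):
    """Return (CPA time within [0, tick], True if the spheres touch at that time)."""
    dp = np.asarray(p1, float) - np.asarray(p2, float)
    dv = np.asarray(v1, float) - np.asarray(v2, float)
    dv2 = np.dot(dv, dv)
    t_cpa = 0.0 if dv2 == 0.0 else -np.dot(dp, dv) / dv2   # unconstrained minimum
    t_cpa = min(max(t_cpa, 0.0), tick)                     # clamp to this tick
    dist = np.linalg.norm(dp + dv * t_cpa)
    return t_cpa, dist <= (r1 + r2)

print(cpa_collision([0, 0, 0], [1, 0, 0], 0.5, [3, 0.4, 0], [-1, 0, 0], 0.5, 2.0))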
