What does "v" stand for in ndarray (de)serialize? - multidimensional-array

I am trying to understand the "v" in ndarray (de)serialize:
use ndarray::prelude::*;

pub fn example() {
    let ar2 = arr2(&[[0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.]]);
    let s = serde_json::to_string(&ar2).unwrap();
    dbg!(s);

    let ar1 = arr1(&[[0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.]]);
    let s = serde_json::to_string(&ar1).unwrap();
    dbg!(s);

    let anatoly = String::from("{\"v\":1,\"dim\":[3,3],\"data\":[1.0,1.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0]}");
    let a = serde_json::from_str::<Array2<f64>>(&anatoly).unwrap();
    dbg!(a);
}
Looking at:
https://docs.rs/ndarray/latest/src/ndarray/array_serde.rs.html#91-100
It refers to some kind of ARRAY_FORMAT_VERSION
What is this "version"?
Is it always "1" (for latest versions of the lib)?

Yes, that field represents the version of the serialization format. At the moment the version is always 1, but the format could change in the future. If and when it does, this field can be used to decide how to deserialize the data, regardless of which version produced it. It also gives a single version of ndarray the option of choosing between several formats for the same type, depending on the situation.
As an example, let's say the authors wanted to add an extra safety check that embeds the element type of the array in the serialized data, so it can only be deserialized to the same type:
{"v":2,"type":"f64","dim":[3,3],"data":[1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0]}
However, after that update is introduced, what happens to data that was serialized by the previous version? The version field tells the deserializer which approach to use when reading the data, so compatibility with the older format is maintained.
Of course, that is a purely hypothetical example. As @IvanC pointed out in a comment, the maintainers likely want to implement a packed data format, and the format version is a way of future-proofing for when that time comes.
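To make that concrete, here is a minimal, purely hypothetical sketch (in Python, since the payload is plain JSON) of a consumer that dispatches on the "v" field. The version-2 branch corresponds to the made-up type-tagged format above, and load_array is not part of any real API:

import json
import numpy as np

def load_array(text):
    obj = json.loads(text)
    if obj["v"] == 1:
        # current format: flat, row-major "data" reshaped according to "dim"
        return np.array(obj["data"]).reshape(obj["dim"])
    if obj["v"] == 2:
        # hypothetical future format with an embedded element type
        return np.array(obj["data"], dtype=obj["type"]).reshape(obj["dim"])
    raise ValueError(f"unsupported format version {obj['v']}")

a = load_array('{"v":1,"dim":[3,3],"data":[1.0,1.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0]}')
print(a)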

Related

How to define edge_func parameter in dgl.nn.pytorch.conv.NNConv?

I have a graph with nodes, edges, and edge features. I want to make a model with SAGEConv, but SAGEConv only looks at node features. So I thought of building node features by aggregating the edge features of each node, and I found NNConv, which can do that. The problem is that I'm not sure how to use NNConv, especially what to do with the edge_func that NNConv requires as a parameter.
Let's say I have 3 nodes and 4 edges, each edge has 2 features, and I want each node to have 2 features as well. I defined edge_func to be
edge_func = nn.Sequential(nn.Linear(2, 2), nn.ReLU(), nn.Linear(2, 4))
and edge_feature to be
tensor([[2., 3.],
        [3., 1.],
        [0., 4.],
        [1., 1.]])
and built a DGLGraph with that. However, after computing the layer
NNConv(in_feats=2, out_feats=2, edge_func=edge_func, aggregator_type='max')
in the forward function
feature = self.layer(self.graph, feature, edge_feature)
returns feature (it was originally a tensor of zeros) as
tensor([[-inf, -inf],
        [0., 0.],
        [0., 0.]], grad_fn=<AddBackward0>)
What am I doing wrong?
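For reference, here is a minimal, runnable sketch of the setup as I understand it. The edge connectivity is invented, since the question doesn't state it, so treat it as an illustration of the shapes involved rather than a fix:

import dgl
import torch
import torch.nn as nn
from dgl.nn.pytorch import NNConv

# hypothetical connectivity: 3 nodes, 4 edges
g = dgl.graph((torch.tensor([0, 0, 1, 2]), torch.tensor([1, 2, 2, 0])))

# edge_func must map the 2 edge features to in_feats * out_feats = 4 values per edge
edge_func = nn.Sequential(nn.Linear(2, 2), nn.ReLU(), nn.Linear(2, 4))
conv = NNConv(in_feats=2, out_feats=2, edge_func=edge_func, aggregator_type='max')

node_feats = torch.zeros(3, 2)
edge_feats = torch.tensor([[2., 3.], [3., 1.], [0., 4.], [1., 1.]])
out = conv(g, node_feats, edge_feats)
print(out)
# Note: with aggregator_type='max', a node with no incoming edges can end up with
# -inf entries in some DGL versions, which matches the symptom described above.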

Translate 2-D PDEs and conditions into accurate Fipy codes

[image: governing equation, initial condition, and boundary conditions]
The image shows the governing equation and the initial and boundary conditions. It describes a heat transfer problem between a plate and a fluid.
I don't know how to use FiPy to encode the 2-D problem and the boundary conditions that involve the variables.
Here is my attempt.
from fipy import *
import numpy as np

#constant
Pe = 2400
le_L = 1 / 20000
L_l = 20000
alphas = 1
alphaf = 1
a = 1 / Pe + le_L
b = 1 / Pe + L_l
Bi = 0.4
c = Bi / Pe * L_l

#generate
mesh = Grid2D(dx=1, dy=1)
Ts = CellVariable(mesh=mesh, name='Ts', value=900)
Tf = CellVariable(mesh=mesh, name='Tf', value=300)

#condition
Ts.faceGrad.constrain([0.], mesh.facesLeft)
Ts.faceGrad.constrain([0.], mesh.facesRight)
Ts.faceGrad.constrain([-1. * Bi * (Tf.value - Ts.value)], mesh.facesBottom)
Ts.faceGrad.constrain([0.], mesh.facesTop)
Tf.constrain(300, mesh.facesLeft)
Tf.grad.constrain(0, mesh.facesRight)
a = CellVariable(mesh=mesh, rank=1)
a[:] = 1

#eq
eq1 = TransientTerm(var=Ts) == DiffusionTerm(coeff=[[a, b]], var=Ts)
eq2 = (TransientTerm(var=Tf) == DiffusionTerm(coeff=[[a, 0]], var=Tf)
       - ExponentialConvectionTerm(a, var=Tf) + ImplicitSourceTerm(c, var=Tf)
       - ImplicitSourceTerm(c, var=Ts))
eq = eq1 & eq2

#solve
dt = 0.1
steps = 100
viewer = Viewer(vars=(Ts, Tf), datamax=1000, datamin=0)
for i in range(steps):
    eq.solve(dt=dt)
    viewer.plot()
I find that it fails, and I don't know what goes wrong. I would welcome any help.
BTW, the final result I wish to get looks like this: [image: expected temperature fields]
Many thanks!
[edited to fix general boundary conditions]
The following runs and seems to give results of the nature you're looking for:
from fipy import *
import numpy as np

#constant
Pe = 2400.
le_L = 1. / 20000.
L_l = 20000.
alphasx = alphasy = 1.
alphaf = 1.
Bi = 0.4
c = Bi / Pe * L_l
Dsxx = alphasx
Dsyy = alphasy * L_l**2
Ds = 1. / Pe * le_L * (1. / alphaf) * Variable([[alphasx, 0.],
                                                [0., alphasy * L_l**2]])
Df = Variable([[1. / Pe * le_L, 0],
               [0., 0.]])

#generate
mesh = Grid2D(Lx=1., Ly=1., nx=100, ny=100)
Ts = CellVariable(mesh=mesh, name='Ts', value=900.)
Tf = CellVariable(mesh=mesh, name='Tf', value=900.)

#condition
bottom_mask = (mesh.facesBottom * mesh.faceNormals).divergence
dPR = mesh._cellDistances[mesh.facesBottom.value][0]
Af = mesh._faceAreas[mesh.facesBottom.value][0]
bottom_coeff = bottom_mask * Ds[1, 1] * Af / (1 + dPR)
Tf.constrain(300, mesh.facesLeft)

#eq
eq1 = (TransientTerm(var=Ts) == DiffusionTerm(coeff=Ds, var=Ts)
       + ImplicitSourceTerm(coeff=bottom_coeff * -Bi, var=Tf)
       - ImplicitSourceTerm(coeff=bottom_coeff * -Bi, var=Ts))
eq2 = (TransientTerm(var=Tf) == DiffusionTerm(coeff=Df, var=Tf)
       - ExponentialConvectionTerm(coeff=[[1.], [0]], var=Tf)
       + ImplicitSourceTerm(c, var=Tf)
       - ImplicitSourceTerm(c, var=Ts))
eq = eq1 & eq2

#solve
dt = 0.01
steps = 100
viewer = Viewer(vars=(Ts, Tf), datamax=1000, datamin=0)
for i in range(steps):
    eq.solve(dt=dt)
    viewer.plot()
I changed a number of coefficients to agree with the mathematics you provided.
I fixed the diffusion coefficients to have the shape expected by FiPy for anisotropic diffusion.
I changed lots of ints to floats, because ints don't work well in FiPy.
I provided a domain to solve over (your mesh only had a single cell in it, making spatial variation impossible).
I decreased the time step.
I introduced the best way we know how to deal with general boundary conditions. It's ugly, but I think it's right.
You will probably also want to introduce sweeping to account for the nonlinear dependency between the equations and the boundary conditions.
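As a rough sketch of what that sweeping might look like (this assumes Ts and Tf are created with hasOld=True so their old values can be stored; the residual tolerance and sweep limit are arbitrary choices):

# replace the plain eq.solve() time loop above with a sweep loop per time step
for step in range(steps):
    Ts.updateOld()
    Tf.updateOld()
    res = 1e+10
    sweeps = 0
    while res > 1e-4 and sweeps < 10:
        res = eq.sweep(dt=dt)
        sweeps += 1
    viewer.plot()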

How to find point along arc in 3D given center, start & end points + radius + center angle?

If I have three points, let's say:
start: (14.5, 10.1, 2.8)
end: (-12.3, 6.4, 7.7)
center: (0, 0, 0)
And the following additional information that has been determined:
Radius: 15
Center Angle: 109 degrees
Arc length (from Pt A to Pt B): 29
How can I approach finding points along the arc between the starting and ending points?
UPDATE: Vectors are marked with a °.
The normal n° of the plane p in which the circle (or the arc) lies is
n° = cross product of start°, end°
p contains all points X° satisfying the equation
dot product of n° and X° = 0
// ^^^ This is only for completeness, you needn't calculate it.
Now we want two orthogonal unit vectors X°, Y° lying in p:
X° = start° / norm(start°)
Y° = cross_prod(n°, start°) / norm(cross_prod(n°, start°))
(where norm(V°) is sqrt(V[1]^2 + V[2]^2 + V[3]^2),
and by dividing a vector V° by a scalar S I mean dividing each vector component by S:
V° / S := (V°[1]/S, V°[2]/S, V°[3]/S)
)
In 2d coordinates, we could draw a circle with the parametrization
t -> 15*(cos(t), sin(t)) = 15*cos(t) * X° + 15*sin(t) * Y°
where X° = (1, 0) and Y° = (0, 1).
Now in 3d, in plane p, having two orthogonal unit vectors X° and Y°, we can analogously do
t -> 15*cos(t) * X° + 15*sin(t) * Y°
where X° and Y° are as defined before, and t goes from 0 to 109 degrees.
For t=0, we get point start°. For t=109, we should get end°. If that goes wrong, change Y° to -Y°. For t between 0 and 109, we get the arc between start° and end°.
Depending on your sin/cos implementation, you may need to specify the angles in radians rather than degrees.
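A small NumPy sketch of this parametrization, using the numbers from the question (illustrative only; as noted above, flip the sign of Y if the arc runs the wrong way):

import numpy as np

start = np.array([14.5, 10.1, 2.8])
end = np.array([-12.3, 6.4, 7.7])
radius = 15.0
angle = np.radians(109.0)

n = np.cross(start, end)             # normal of the plane containing the arc
X = start / np.linalg.norm(start)    # first in-plane unit vector
Y = np.cross(n, start)
Y = Y / np.linalg.norm(Y)            # second in-plane unit vector, orthogonal to X

for t in np.linspace(0.0, angle, 10):
    print(radius * np.cos(t) * X + radius * np.sin(t) * Y)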

Orthographic projection with origin at screen bottom left

I'm using the Python OpenGL bindings and trying to use only modern OpenGL calls. I have a VBO with vertices, and I am trying to render with an orthographic projection matrix passed to the vertex shader.
At present I am calculating my projection matrix with the following values:
from numpy import array

w = float(width)
h = float(height)
n = 0.5
f = 3.0
matrix = array([
    [2/w, 0,   0,       0],
    [0,   2/h, 0,       0],
    [0,   0,   1/(f-n), -n/(f-n)],
    [0,   0,   0,       1],
], 'f')

# later
projectionUniform = glGetUniformLocation(shader, 'projectionMatrix')
glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, matrix)
I got that code from here:
Formula for a orthogonal projection matrix?
This seems to work fine, but I would like my origin to be in the bottom-left corner of the screen. Is there a function I can apply to my matrix so everything "just works", or must I translate every object by w/2, h/2 manually?
Side note: will the coordinates match pixel positions once this is working correctly?
Because I'm using modern OpenGL techniques, I don't think I should be using gluOrtho2d or GL_PROJECTION calls.
glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, matrix)
Your matrix is stored in row-major order, so you should either pass GL_TRUE or convert your matrix to column-major order before uploading it.
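For instance, reusing the names from the question's snippet (a sketch, not tested code), either of these should have the same effect:

glUniformMatrix4fv(projectionUniform, 1, GL_TRUE, matrix)
# or, equivalently, upload a transposed (column-major) copy:
# glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, matrix.T.copy())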
I'm not completely familiar with projections yet, as I've only started OpenGL programming recently, but your current matrix does not translate any points. The diagonal applies scaling, while the rightmost column applies translation. The link Dirk gave provides a projection matrix that will make your origin (0,0 is what you want, yes?) the bottom-left corner of your screen.
A matrix I've used to do this (each row is actually a column to OpenGL):
OrthoMat = mat4(
vec4(2.0/(screenDim.s - left), 0.0, 0.0, 0.0),
vec4(0.0, 2.0/(screenDim.t - bottom), 0.0, 0.0),
vec4(0.0, 0.0, -1 * (2.0/(zFar - zNear)), 0.0),
vec4(-1.0 * (screenDim.s + left)/(screenDim.s - left), -1.0 * (screenDim.t + bottom)/(screenDim.t - bottom), -1.0 * (zFar + zNear)/(zFar - zNear), 1.0)
);
The screenDim math is effectively the width or height, since left and bottom are both set to 0. zFar and zNear are 1 and -1, respectively (since it's 2D, they're not extremely important).
This matrix takes values in pixels, and the vertex positions need to be in pixels as well. The point (0, 32) will always be at the same position when you resize the screen too.
Hope this helps.
Edit #1: To be clear, the left/bottom/zFar/zNear values I stated are just the ones I chose; you can change them as you see fit.
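For the original NumPy setup, a rough sketch of the same matrix with left = bottom = 0 might look like this (the helper name and the near/far defaults are my own choices; it is written row-major, so either pass GL_TRUE as the transpose flag or upload a transposed copy):

from numpy import array

def ortho_bottom_left(w, h, near=-1.0, far=1.0):
    # maps x in [0, w] and y in [0, h] to [-1, 1], so (0, 0) is the bottom-left corner
    return array([
        [2.0/w, 0.0,   0.0,                -1.0],
        [0.0,   2.0/h, 0.0,                -1.0],
        [0.0,   0.0,   -2.0/(far - near),  -(far + near)/(far - near)],
        [0.0,   0.0,   0.0,                 1.0],
    ], 'f')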
You can use a more general projection matrix which additionally takes the left and right positions.
See Wikipedia for the definition.

Intersection of lines in maple

How do I find the intersection of two lines in maple when plotted as follows:
a:=line([1,-1,-1],[0,0,1]):
b:=line([1,1,1],[0,-1,0]):
I attempted to use the intersection command but it returned this:
intersection(CURVES([[1., -1., -1.], [0., 0., 1.]]), CURVES([[1., 1., 1.], [0., -1., 0.]]))
Thanks very much for any help
In the geom3d package, line's first argument is the name of the line you are symbolically defining.
Instead of using
a:=line([1,-1,-1],[0,0,1]):
b:=line([1,1,1],[0,-1,0]):
try the following:
with(geom3d):
point(p1,[1,-1,-1]):
point(p2,[0,0,1]):
point(p3,[1,1,1]):
point(p4,[0,-1,0]):
line(l1,[p1, p2]):
line(l2,[p3, p4]):
intersection(P,l1,l2):
coordinates(P)
See the help on intersection and line for more detail.
