Water wave normals calculation slightly off - math

I implemented "The Sum of Sines Approximation" in this article:
https://developer.nvidia.com/gpugems/gpugems/part-i-natural-effects/chapter-1-effective-water-simulation-physical-models
My geometry is working fine, but the normals are somewhat wrong. Their x- and z-values come out sign-flipped, and I wonder what I did wrong in the implementation.
The following equations from the article are used (reconstructed here to match the code below; $\Phi_i$ is the per-wave phase):

$$\frac{\partial W_i}{\partial x} = k\, D_{i,x}\, w_i\, A_i \left(\frac{\sin\Phi_i + 1}{2}\right)^{k-1} \cos\Phi_i, \qquad \Phi_i = w_i\,(\mathbf{D}_i \cdot (x, y)) + t\,\varphi_i$$

and likewise for $\partial W_i/\partial y$ with $D_{i,y}$.
The generation of the normals looks like this in the shader:
vec3 generateWaveSineSumNormal(sineParams _params[sineCount])
{
    vec2 pos = vec2(aPos.x, aPos.z);
    vec3 normal = vec3(0.0, 1.0, 0.0);
    for (int i = 0; i < sineCount; i++)
    {
        sineParams curParams = _params[i];
        float phase = dot(curParams.direction, pos) * curParams.frequency + curTime * curParams.speed;
        // Partial derivative of the exponent-k sine wave, shared by both axes:
        float deriv = sineExponent * curParams.frequency * curParams.amplitude
                    * pow((sin(phase) + 1.0) / 2.0, sineExponent - 1)
                    * cos(phase);
        normal.x += curParams.direction.x * deriv;
        normal.z += curParams.direction.y * deriv;
    }
    // Without these negations the normals come out pointing the wrong way:
    return vec3(-normal.x, normal.y, -normal.z);
}
When the x- and z-values are negated like this, the normals are fine. I'm just wondering what I did wrong in the implementation, since I can't find the mistake.

Your normal.xz calculates the gradient of the heightmap correctly. The gradient points uphill, i.e. towards increasing height values; the normal, projected onto the horizontal plane, points the opposite way. So the negation is expected, not a mistake.
This can be derived mathematically through the cross product of two tangents, which is what that article does in the "Normals and Tangents" section. In particular, the relevant result is the formula for the normal in Equation 6b, which includes the negative signs:

$$\mathbf{N}(x, y) = \left(-\frac{\partial H}{\partial x},\; -\frac{\partial H}{\partial y},\; 1\right)$$

(The article treats z as the up axis; in your y-up shader the constant 1 moves to the middle component.)
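For completeness, here is a sketch of that cross-product derivation in the shader's y-up convention, with height field $H(x, z, t)$: take the two surface tangents along x and z and cross them:

$$\mathbf{T}_x = \left(1,\; \frac{\partial H}{\partial x},\; 0\right), \qquad \mathbf{T}_z = \left(0,\; \frac{\partial H}{\partial z},\; 1\right)$$

$$\mathbf{N} = \mathbf{T}_z \times \mathbf{T}_x = \left(-\frac{\partial H}{\partial x},\; 1,\; -\frac{\partial H}{\partial z}\right)$$

The negations fall out of the cross product; they are not an error in your gradient.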

Related

Visual Bug with Quaternion LookAt Rotation

I have the following function:
Quaternion Quaternion::LookAt(Vector3f direction, Vector3f forward, Vector3f up) {
    Quaternion rot1 = RotationBetweenVectors(forward, direction);
    Vector3f right = Vector3f::CrossProduct(direction, up);
    up = Vector3f::CrossProduct(right, direction);
    Vector3f realUp(0, 1, 0);
    Vector3f newUp = rot1 * realUp;
    Quaternion rot2 = RotationBetweenVectors(newUp, up);
    Quaternion res = rot2 * rot1;
    return Quaternion(res.x, res.y, res.z, res.w);
}
And the other function:
Quaternion Quaternion::RotationBetweenVectors(Vector3f forward, Vector3f direction) {
    forward = Vector3f::Normalize(forward);
    direction = Vector3f::Normalize(direction);
    float cosTheta = Vector3f::DotProduct(forward, direction);
    Vector3f axis;
    if (cosTheta < -1 + 0.001f) {
        // special case when vectors in opposite directions:
        // there is no "ideal" rotation axis,
        // so guess one; any will do as long as it's perpendicular to start
        axis = Vector3f::CrossProduct(Vector3f(0.0f, 0.0f, 1.0f), forward);
        if (axis.Length() * axis.Length() < 0.01)
            axis = Vector3f::CrossProduct(Vector3f(1.0f, 0.0f, 0.0f), forward);
        axis = Vector3f::Normalize(axis);
        return Quaternion(axis.x, axis.y, axis.z, DegreesToRadians(180));
    }
    axis = Vector3f::CrossProduct(forward, direction);
    float s = sqrt((1 + cosTheta) * 2);
    float invs = 1 / s;
    return Quaternion(
        axis.x * invs,
        axis.y * invs,
        axis.z * invs,
        s * 0.5f
    );
}
These functions work very well, but a visual bug appears: the object is stretched out when the direction and forward vectors point in opposite directions, the special case mentioned in the comment in the second function.
I followed this tutorial: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-17-quaternions/ (under the section "How do I find the rotation between 2 vectors?"), and it suggests the code I have implemented.
As said, a visual bug appears, so is there any better way of doing it?
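No answer is recorded here, but one plausible cause worth checking (my own guess, not from the thread): in the opposite-direction branch, Quaternion(axis.x, axis.y, axis.z, DegreesToRadians(180)) stuffs an angle into the w component instead of building a unit quaternion from the angle and axis, and a non-unit quaternion scales the vertices it rotates, which would look like stretching. The tutorial's glm::angleAxis call converts angle/axis into quaternion components. A minimal sketch of that conversion, assuming the same Vector3f/Quaternion API as the question:
#include <cmath>

// Hypothetical helper (not from the tutorial): build a unit quaternion from
// an angle in radians and a unit axis: q = (axis * sin(angle/2), cos(angle/2)).
static Quaternion FromAngleAxis(float angleRadians, const Vector3f &axis) {
    float half = angleRadians * 0.5f;
    float s = std::sin(half);
    return Quaternion(axis.x * s, axis.y * s, axis.z * s, std::cos(half));
}

// In the opposite-direction branch this reduces to (axis, 0), since cos(90 deg) = 0:
// return FromAngleAxis(DegreesToRadians(180), axis);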

How can I extract the angle from a function that determines if a circle is colliding with a line?

So I have the following code that works fine. I've taken it from https://codereview.stackexchange.com/questions/192477/circle-line-segment-collision
I am now trying to figure out where along this function I can determine the angle at which the circle is hitting the line. Instead of returning true, I'd like it to return that angle. Since I didn't write the function myself, going through it to figure this out has been a bit of a struggle. Wondering if someone excellent at math can help me figure this out at first glance. Thank you!
// Function to check intercept of line seg and circle
// A,B end points of line segment
// C center of circle
// radius of circle
// returns true if touching or crossing else false
function doesLineInterceptCircle(A, B, C, radius) {
    var dist;
    const v1x = B.x - A.x;
    const v1y = B.y - A.y;
    const v2x = C.x - A.x;
    const v2y = C.y - A.y;
    // get the unit distance along the line of the closest point to
    // circle center
    const u = (v2x * v1x + v2y * v1y) / (v1y * v1y + v1x * v1x);
    // if the point is on the line segment get the distance squared
    // from that point to the circle center
    if (u >= 0 && u <= 1) {
        dist = (A.x + v1x * u - C.x) ** 2 + (A.y + v1y * u - C.y) ** 2;
    } else {
        // if closest point not on the line segment
        // use the unit distance to determine which end is closest
        // and get dist square to circle
        dist = u < 0 ?
            (A.x - C.x) ** 2 + (A.y - C.y) ** 2 :
            (B.x - C.x) ** 2 + (B.y - C.y) ** 2;
    }
    return dist < radius * radius;
}
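There is no answer recorded here, but the geometry for extracting an angle is already inside the function: the closest point on the segment is A + u * v1, and the vector from that point to the circle center C is the collision normal; its atan2 gives the angle at which the circle meets the line. A hedged C++ sketch of that idea (the names are mine, not from the original code):
#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// Angle (radians) of the collision normal: the direction from the closest
// point on segment AB to the circle center C. Call this once a hit is found.
float circleHitAngle(Vec2 A, Vec2 B, Vec2 C) {
    float v1x = B.x - A.x, v1y = B.y - A.y;
    float v2x = C.x - A.x, v2y = C.y - A.y;
    // Same projection as in the JavaScript above.
    float u = (v2x * v1x + v2y * v1y) / (v1x * v1x + v1y * v1y);
    u = std::clamp(u, 0.0f, 1.0f);          // clamp to the segment's endpoints
    float px = A.x + v1x * u;               // closest point on the segment
    float py = A.y + v1y * u;
    return std::atan2(C.y - py, C.x - px);  // angle of the collision normal
}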

OpenGL (Qt): binding properly + two rotations at the same time

I'm trying to get some experience in OpenGL, but now I'm facing "1.5" problems ;).
The first problem / question is: how can I get a rotation in two directions "simultaneously"?
I want to draw a coordinate system which can be rotated around the x- and y-axis. But I'm only able to rotate around the x-axis or the y-axis; I can't figure out how to do both at the same time.
My other half-problem is not really a problem, but as you can see I rebuild and bind my shaders anew every time I move my mouse. Is there a better way to do this?
void GLWidget::mouseMoveEvent(QMouseEvent *event)
{
    differencePostition.setX(event->x() - lastPosition.x());
    differencePostition.setY(event->y() - lastPosition.y());
    shaderProgram.removeAllShaders();
    shaderProgram.addShaderFromSourceFile(QGLShader::Vertex, "../Vector/yRotation.vert");
    shaderProgram.addShaderFromSourceFile(QGLShader::Fragment, "../Vector/CoordinateSystemLines.frag");
    shaderProgram.link();
    shaderProgram.bind();
    shaderProgram.setAttributeValue("angle", differencePostition.x());
    //shaderProgram.release();
    //shaderProgram.addShaderFromSourceFile(QGLShader::Vertex, "../Vector/xRotation.vert");
    //shaderProgram.addShaderFromSourceFile(QGLShader::Fragment, "../Vector/CoordinateSystemLines.frag");
    //shaderProgram.link();
    //shaderProgram.bind();
    //shaderProgram.setAttributeValue("angle", differencePostition.y());
    updateGL();
}

void GLWidget::mousePressEvent(QMouseEvent *event)
{
    lastPosition = event->posF();
}
xRotation.vert
#version 330
in float angle;
const float PI = 3.14159265358979323846264;

void main(void)
{
    float rad_angle = angle * PI / 180.0;
    vec4 oldPosition = gl_Vertex;
    vec4 newPosition = oldPosition;
    newPosition.y = oldPosition.y * cos(rad_angle) - oldPosition.z * sin(rad_angle);
    newPosition.z = oldPosition.y * sin(rad_angle) + oldPosition.z * cos(rad_angle);
    gl_Position = gl_ModelViewProjectionMatrix * newPosition;
}
yRotation.vert
#version 330
in float angle;
const float PI = 3.14159265358979323846264;

void main(void)
{
    float rad_angle = angle * PI / 180.0;
    vec4 oldPosition = gl_Vertex;
    vec4 newPosition = oldPosition;
    newPosition.x = oldPosition.x * cos(rad_angle) + oldPosition.z * sin(rad_angle);
    newPosition.z = oldPosition.z * cos(rad_angle) - oldPosition.x * sin(rad_angle);
    gl_Position = gl_ModelViewProjectionMatrix * newPosition;
}
Rotation in more than one direction at the same time requires a combination of matrices (commonly called a general rotation matrix).
There are several sites that show how this matrix is generated, if you are interested.
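For reference, the combined rotation below is just the product of the two single-axis rotation matrices (x-rotation applied first, then y, matching the shader at the end of this answer):

$$R = R_y(\theta_y)\, R_x(\theta_x), \qquad R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix}, \qquad R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}$$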
As to your second problem: the shaders are usually initialized in the init section.
Example: http://doc-snapshot.qt-project.org/5.0/qtopengl/cube-mainwidget-cpp.html
You only need to call shaderProgram.bind() each time before you want to draw an object with your shader. Loading and linking are usually done only once, during the initialization of your program. Only call shaderProgram.setAttributeValue in your mouseMoveEvent method.
EDIT
A quick way to solve your rotation problem is to write a shader that does both rotations one after the other. Add a second in variable and set both using the setAttributeValue method.
#version 330
in float angleX;
in float angleY;
const float PI = 3.14159265358979323846264;

void main(void)
{
    float rad_angle_x = angleX * PI / 180.0;
    vec4 oldPosition = gl_Vertex;
    vec4 newPositionX = oldPosition;
    newPositionX.y = oldPosition.y * cos(rad_angle_x) - oldPosition.z * sin(rad_angle_x);
    newPositionX.z = oldPosition.y * sin(rad_angle_x) + oldPosition.z * cos(rad_angle_x);

    float rad_angle_y = angleY * PI / 180.0;
    vec4 newPositionXY = newPositionX;
    newPositionXY.x = newPositionX.x * cos(rad_angle_y) + newPositionX.z * sin(rad_angle_y);
    newPositionXY.z = newPositionX.z * cos(rad_angle_y) - newPositionX.x * sin(rad_angle_y);

    gl_Position = gl_ModelViewProjectionMatrix * newPositionXY;
}
This way you don't need to deal with the matrix multiplication yourself.
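With that shader, the mouseMoveEvent shrinks to setting the two attributes on the already-linked program (a sketch reusing the question's own variable names, with the loading and linking moved into the widget's initialization as described above):
void GLWidget::mouseMoveEvent(QMouseEvent *event)
{
    differencePostition.setX(event->x() - lastPosition.x());
    differencePostition.setY(event->y() - lastPosition.y());
    shaderProgram.bind();
    // Horizontal mouse motion drives the y-axis rotation, vertical the x-axis.
    shaderProgram.setAttributeValue("angleY", differencePostition.x());
    shaderProgram.setAttributeValue("angleX", differencePostition.y());
    updateGL();
}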

Correct GLSL affine texture mapping

I'm trying to code correct 2D affine texture mapping in GLSL.
Explanation:
...none of these images is correct for my purposes. The right one (labeled "Correct") has perspective correction, which I do not want. So this: Getting to know the Q texture coordinate solution (without further improvements) is not what I'm looking for.
I'd like to simply "stretch" the texture inside the quadrilateral, something like this:
but composed from two triangles. Any advice (GLSL) please?
This works well as long as you have a trapezoid, and its parallel edges are aligned with one of the local axes. I recommend playing around with my Unity package.
GLSL:
varying vec2 shiftedPosition, width_height;

#ifdef VERTEX
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    shiftedPosition = gl_MultiTexCoord0.xy; // left and bottom edges zeroed.
    width_height = gl_MultiTexCoord1.xy;
}
#endif

#ifdef FRAGMENT
uniform sampler2D _MainTex;
void main() {
    gl_FragColor = texture2D(_MainTex, shiftedPosition / width_height);
}
#endif
C#:
// Zero out the left and bottom edges,
// leaving a right trapezoid with two sides on the axes and a vertex at the origin.
var shiftedPositions = new Vector2[] {
    Vector2.zero,
    new Vector2(0, vertices[1].y - vertices[0].y),
    new Vector2(vertices[2].x - vertices[1].x, vertices[2].y - vertices[3].y),
    new Vector2(vertices[3].x - vertices[0].x, 0)
};
mesh.uv = shiftedPositions;

var widths_heights = new Vector2[4];
widths_heights[0].x = widths_heights[3].x = shiftedPositions[3].x;
widths_heights[1].x = widths_heights[2].x = shiftedPositions[2].x;
widths_heights[0].y = widths_heights[1].y = shiftedPositions[1].y;
widths_heights[2].y = widths_heights[3].y = shiftedPositions[2].y;
mesh.uv2 = widths_heights;
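Why this works (my reading, not part of the original answer): both varyings are interpolated linearly across each triangle, so the fragment shader evaluates a ratio of two linearly interpolated quantities,

$$uv(\lambda) = \frac{(1-\lambda)\,\mathbf{p}_a + \lambda\,\mathbf{p}_b}{(1-\lambda)\,\mathbf{s}_a + \lambda\,\mathbf{s}_b},$$

which is the same rational-linear form as hardware perspective correction, but driven by the trapezoid's widths and heights instead of depth. That is also why it needs the parallel, axis-aligned edges.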
I recently managed to come up with a generic solution to this problem for any type of quadrilateral. The calculations and GLSL may be of help. There's a working demo in Java (that runs on Android), but it is compact and readable and should be easily portable to Unity or iOS: http://www.bitlush.com/posts/arbitrary-quadrilaterals-in-opengl-es-2-0
In case anyone's still interested, here's a C# implementation that takes a quad defined by the clockwise screen verts (x0,y0) (x1,y1) ... (x3,y3) and an arbitrary pixel at (x,y), and calculates the u and v of that pixel. It was originally written to CPU-render an arbitrary quad to a texture, but it's easy enough to split the algorithm across CPU, vertex and pixel shaders; I've commented accordingly in the code.
float Ax, Bx, Cx, Dx, Ay, By, Cy, Dy, A, B, C;
float u, v; // output texture coordinates for the pixel at (x, y)

// These are all uniforms for a given quad. Calculate on CPU.
Ax = (x3 - x0) - (x2 - x1);
Bx = (x0 - x1);
Cx = (x2 - x1);
Dx = x1;
Ay = (y3 - y0) - (y2 - y1);
By = (y0 - y1);
Cy = (y2 - y1);
Dy = y1;
float ByCx_plus_AyDx_minus_BxCy_minus_AxDy = (By * Cx) + (Ay * Dx) - (Bx * Cy) - (Ax * Dy);
float ByDx_minus_BxDy = (By * Dx) - (Bx * Dy);
A = (Ay * Cx) - (Ax * Cy);

// These must be calculated per-vertex, and passed through as interpolated values to the pixel shader.
B = (Ax * y) + ByCx_plus_AyDx_minus_BxCy_minus_AxDy - (Ay * x);
C = (Bx * y) + ByDx_minus_BxDy - (By * x);

// These must be calculated per-pixel using the interpolated B, C and x from the vertex shader along with some of the other uniforms.
u = ((-B) - Mathf.Sqrt(B * B - (4.0f * A * C))) / (A * 2.0f);
v = (x - (u * Cx) - Dx) / ((u * Ax) + Bx);
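For context, my reconstruction of where these coefficients come from (the original answer doesn't spell it out): the quad is parameterized by a bilinear map,

$$x(u,v) = A_x u v + B_x v + C_x u + D_x, \qquad y(u,v) = A_y u v + B_y v + C_y u + D_y,$$

which hits the corner $(x_1, y_1)$ at $(u,v)=(0,0)$, $(x_2, y_2)$ at $(1,0)$, $(x_0, y_0)$ at $(0,1)$ and $(x_3, y_3)$ at $(1,1)$. Eliminating $v$ between the two equations yields the quadratic $A u^2 + B u + C = 0$ with exactly the $A$, $B$, $C$ defined above; $u$ comes from the quadratic formula and $v$ back-substitutes from the $x$ equation, which is what the last two lines compute.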
Tessellation solves this problem. Subdividing the quad adds vertices, which gives the interpolation more hints for the in-between pixels.
Check out this link:
https://www.youtube.com/watch?v=8TleepxIORU&feature=youtu.be
I had a similar question ( https://gamedev.stackexchange.com/questions/174857/mapping-a-texture-to-a-2d-quadrilateral/174871 ), and at gamedev they suggested using an imaginary Z coordinate, which I calculate using the following C code. It appears to work in the general case (not just trapezoids):
// usual euclidean distance
float distance(int ax, int ay, int bx, int by) {
    int x = ax - bx;
    int y = ay - by;
    return sqrtf((float)(x*x + y*y));
}

void gfx_quad(gfx_t *dst  // destination texture, we are rendering into
             ,gfx_t *src  // source texture
             ,int *quad   // quadrilateral vertices
             )
{
    int *v = quad; // quad vertices
    float top = distance(v[0],v[1],v[2],v[3]); // top
    float bot = distance(v[4],v[5],v[6],v[7]); // bottom
    float lft = distance(v[0],v[1],v[4],v[5]); // left
    float rgt = distance(v[2],v[3],v[6],v[7]); // right

    // By default all vertices lie on the screen plane
    float az = 1.0;
    float bz = 1.0;
    float cz = 1.0;
    float dz = 1.0;

    // Move Z away from the screen, based on the edge-length ratios
    if (top < bot) {
        az *= top/bot;
        bz *= top/bot;
    } else {
        cz *= bot/top;
        dz *= bot/top;
    }
    if (lft < rgt) {
        az *= lft/rgt;
        cz *= lft/rgt;
    } else {
        bz *= rgt/lft;
        dz *= rgt/lft;
    }

    // draw our quad as two textured triangles
    gfx_textured(dst, src
                , v[0],v[1],az, v[2],v[3],bz, v[4],v[5],cz
                , 0.0,0.0, 1.0,0.0, 0.0,1.0);
    gfx_textured(dst, src
                , v[2],v[3],bz, v[4],v[5],cz, v[6],v[7],dz
                , 1.0,0.0, 0.0,1.0, 1.0,1.0);
}
I'm doing it in software to scale and rotate 2D sprites; for an OpenGL 3D app you would need to do it in the pixel/fragment shader, unless you can map these imaginary az, bz, cz, dz into your actual 3D space and use the usual pipeline. DMGregory gave exact code for OpenGL shaders: https://gamedev.stackexchange.com/questions/148082/how-can-i-fix-zig-zagging-uv-mapping-artifacts-on-a-generated-mesh-that-tapers
I ran into this issue while trying to implement homography warping in OpenGL. Some of the solutions I found relied on a notion of depth, but that was not feasible in my case, since I am working with 2D coordinates.
I based my solution on this article, and it seems to work for all the cases I could try. I am leaving it here in case it is useful for someone else, as I could not find anything similar. The solution makes the following assumptions:
The vertex coordinates are the 4 points of a quad in Lower Right, Upper Right, Upper Left, Lower Left order.
The coordinates are given in OpenGL's reference system (range [-1, 1], with origin at bottom left corner).
std::vector<cv::Point2f> points;
// ... fill 'points' with the 4 quad vertices, in the order described above ...

// Convert points to homogeneous coordinates to simplify the problem.
Eigen::Vector3f p0(points[0].x, points[0].y, 1);
Eigen::Vector3f p1(points[1].x, points[1].y, 1);
Eigen::Vector3f p2(points[2].x, points[2].y, 1);
Eigen::Vector3f p3(points[3].x, points[3].y, 1);

// Compute the intersection point between the lines described by opposite vertices using cross products. Normalization is only required at the end.
// See https://leimao.github.io/blog/2D-Line-Mathematics-Homogeneous-Coordinates/ for a quick summary of this approach.
auto line1 = p2.cross(p0);
auto line2 = p3.cross(p1);
auto intersection = line1.cross(line2);
intersection = intersection / intersection(2);

// Compute the distance from each point to the intersection.
std::vector<float> distances;
for (const auto &pt : points) {
    auto distance = std::sqrt(std::pow(pt.x - intersection(0), 2) +
                              std::pow(pt.y - intersection(1), 2));
    distances.push_back(distance);
}

// Assumes same order as above.
std::vector<cv::Point2f> texture_coords_unnormalized = {
    {1.0f, 1.0f},
    {1.0f, 0.0f},
    {0.0f, 0.0f},
    {0.0f, 1.0f}
};

std::vector<float> texture_coords;
for (size_t i = 0; i < texture_coords_unnormalized.size(); ++i) {
    float u_i = texture_coords_unnormalized[i].x;
    float v_i = texture_coords_unnormalized[i].y;
    float d_i = distances.at(i);
    float d_i_2 = distances.at((i + 2) % 4);
    float scale = (d_i + d_i_2) / d_i_2;
    texture_coords.push_back(u_i * scale);
    texture_coords.push_back(v_i * scale);
    texture_coords.push_back(scale);
}
Pass the texture coordinates to your shader (use vec3). Then:
gl_FragColor = vec4(texture2D(textureSampler, textureCoords.xy/textureCoords.z).rgb, 1.0);
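In other words (a brief gloss in my own words, not part of the original answer): each vertex carries projective texture coordinates $(s_i u_i,\; s_i v_i,\; s_i)$ with $s_i = (d_i + d_{i+2})/d_{i+2}$; the rasterizer interpolates all three components linearly across the triangle, and the per-fragment division by the interpolated third component produces the projective warp:

$$(u, v)_{\text{frag}} = \left(\frac{\overline{s\,u}}{\overline{s}},\; \frac{\overline{s\,v}}{\overline{s}}\right),$$

where the bars denote linear (barycentric) interpolation.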
Thanks for the answers, but after experimenting I found a solution.
The two triangles on the left have UVs (strq) assigned according to this, and the two triangles on the right are a modified version of this perspective correction.
Numbers and shader:
tri1 = [Vec2(-0.5, -1), Vec2(0.5, -1), Vec2(1, 1)]
tri2 = [Vec2(-0.5, -1), Vec2(1, 1), Vec2(-1, 1)]
d1 = length of top edge = 2
d2 = length of bottom edge = 1
tri1_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(d2 / d1, 0, 0, d2 / d1), Vec4(1, 1, 0, 1)]
tri2_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(1, 1, 0, 1), Vec4(0, 1, 0, 1)]
Only the right triangles are rendered using this GLSL shader (the left side uses the fixed pipeline):
void main()
{
    gl_FragColor = texture2D(colormap, vec2(gl_TexCoord[0].x / gl_TexCoord[0].w, gl_TexCoord[0].y));
}
So only U is perspective-corrected and V stays linear.
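To spell out why that works (my gloss, not in the original answer): the stored U is premultiplied by the w component (e.g. the narrow-edge vertex stores $u \cdot w = 1 \cdot d_2/d_1 = 0.5$), so after linear interpolation the shader's single division reconstructs a perspective-style U while V is interpolated plainly:

$$u_{\text{frag}} = \frac{\overline{u\,w}}{\overline{w}}, \qquad v_{\text{frag}} = \overline{v}$$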

Quaternion - Conversion between YawPitchRoll and EulerAngles produces incorrect result only with pitch of Pi

I have spent some time implementing a couple of algorithms for converting between EulerAngles and Quaternions.
I am testing that the quaternion values are the same with this code:
Quaternion orientation0 = Prototype1.Mathematics.ToolBox.QuaternionFromYawPitchRoll(0, 0, 0);
Vector3 rotation = orientation0.ToEulerAngles();
Quaternion orientation1 = Prototype1.Mathematics.ToolBox.QuaternionFromYawPitchRoll(rotation.Y, rotation.X, rotation.Z);
Console.WriteLine(orientation0);
Console.WriteLine(orientation1);
I have used a previous method discussed here, and have since implemented another method described here:
public static Quaternion QuaternionFromYawPitchRoll(float yaw, float pitch, float roll)
{
    float rollOver2 = roll * 0.5f;
    float sinRollOver2 = (float)Math.Sin((double)rollOver2);
    float cosRollOver2 = (float)Math.Cos((double)rollOver2);
    float pitchOver2 = pitch * 0.5f;
    float sinPitchOver2 = (float)Math.Sin((double)pitchOver2);
    float cosPitchOver2 = (float)Math.Cos((double)pitchOver2);
    float yawOver2 = yaw * 0.5f;
    float sinYawOver2 = (float)Math.Sin((double)yawOver2);
    float cosYawOver2 = (float)Math.Cos((double)yawOver2);

    // X = PI is giving incorrect result (pitch)
    // Heading = Yaw
    // Attitude = Pitch
    // Bank = Roll
    Quaternion result;
    //result.X = cosYawOver2 * cosPitchOver2 * cosRollOver2 + sinYawOver2 * sinPitchOver2 * sinRollOver2;
    //result.Y = cosYawOver2 * cosPitchOver2 * sinRollOver2 - sinYawOver2 * sinPitchOver2 * cosRollOver2;
    //result.Z = cosYawOver2 * sinPitchOver2 * cosRollOver2 + sinYawOver2 * cosPitchOver2 * sinRollOver2;
    //result.W = sinYawOver2 * cosPitchOver2 * cosRollOver2 - cosYawOver2 * sinPitchOver2 * sinRollOver2;
    result.W = cosYawOver2 * cosPitchOver2 * cosRollOver2 - sinYawOver2 * sinPitchOver2 * sinRollOver2;
    result.X = sinYawOver2 * sinPitchOver2 * cosRollOver2 + cosYawOver2 * cosPitchOver2 * sinRollOver2;
    result.Y = sinYawOver2 * cosPitchOver2 * cosRollOver2 + cosYawOver2 * sinPitchOver2 * sinRollOver2;
    result.Z = cosYawOver2 * sinPitchOver2 * cosRollOver2 - sinYawOver2 * cosPitchOver2 * sinRollOver2;
    return result;
}
public static Vector3 ToEulerAngles(this Quaternion q)
{
    // Store the Euler angles in radians
    Vector3 pitchYawRoll = new Vector3();
    double sqx = q.X * q.X;
    double sqy = q.Y * q.Y;
    double sqz = q.Z * q.Z;
    double sqw = q.W * q.W;
    // If quaternion is normalised the unit is one, otherwise it is the correction factor
    double unit = sqx + sqy + sqz + sqw;
    double test = q.X * q.Y + q.Z * q.W;
    //double test = q.X * q.Z - q.W * q.Y;
    if (test > 0.4999f * unit) // 0.4999f OR 0.5f - EPSILON
    {
        // Singularity at north pole
        pitchYawRoll.Y = 2f * (float)Math.Atan2(q.X, q.W); // Yaw
        pitchYawRoll.X = PIOVER2;                          // Pitch
        pitchYawRoll.Z = 0f;                               // Roll
        return pitchYawRoll;
    }
    else if (test < -0.4999f * unit) // -0.4999f OR -0.5f + EPSILON
    {
        // Singularity at south pole
        pitchYawRoll.Y = -2f * (float)Math.Atan2(q.X, q.W); // Yaw
        pitchYawRoll.X = -PIOVER2;                          // Pitch
        pitchYawRoll.Z = 0f;                                // Roll
        return pitchYawRoll;
    }
    else
    {
        pitchYawRoll.Y = (float)Math.Atan2(2f * q.Y * q.W - 2f * q.X * q.Z, sqx - sqy - sqz + sqw);  // Yaw
        pitchYawRoll.X = (float)Math.Asin(2f * test / unit);                                         // Pitch
        pitchYawRoll.Z = (float)Math.Atan2(2f * q.X * q.W - 2f * q.Y * q.Z, -sqx + sqy - sqz + sqw); // Roll
        //pitchYawRoll.Y = (float)Math.Atan2(2f * q.X * q.W + 2f * q.Y * q.Z, 1 - 2f * (sqz + sqw)); // Yaw
        //pitchYawRoll.X = (float)Math.Asin(2f * (q.X * q.Z - q.W * q.Y));                           // Pitch
        //pitchYawRoll.Z = (float)Math.Atan2(2f * q.X * q.Y + 2f * q.Z * q.W, 1 - 2f * (sqy + sqz)); // Roll
    }
    return pitchYawRoll;
}
All my implementations work except for when the pitch value is ±PI.
Quaternion orientation0 = Prototype1.Mathematics.ToolBox.QuaternionFromYawPitchRoll(0, PI, 0);
Vector3 rotation = orientation0.ToEulerAngles();
Quaternion orientation1 = Prototype1.Mathematics.ToolBox.QuaternionFromYawPitchRoll(rotation.Y, rotation.X, rotation.Z);
Console.WriteLine(orientation0);
Console.WriteLine(orientation1); // Not the same quaternion values
Why will this not work for that particular value? If it is a singularity, it is not being detected as one by the algorithm; the 'test' value instead ends up very close to 0.
Rotation space wraps onto itself. Obviously if you rotate by 2PI around any axis, you end up back where you started. Likewise, if you rotate by PI around an axis, it's the same thing as rotating by -PI around the same axis. Or if you rotate by any angle around an axis, it's the same as rotating by the negation of that angle around the negation of that axis.
All of this means that your quaternion conversion algorithms have to decide what to do when dealing with redundancy. The two orientations that you provide in the comments are the same orientation: (0,0,0,1) and (0,0,0,-1) [I prefer having 'w' in alphabetical order].
You should make sure that you always normalize your quaternions or else you'll eventually get some strange drifting. Other than that, what seems to be happening is that when you rotate by PI around the 'z' axis, floating point round-off or a 'less-than' vs. 'less-than-or-equal-to' discrepancy is pushing the representation around the circle to the point that your algorithm decides to represent the angle as a rotation by -PI around the z-axis. That's the same thing.
In a similar manner, if you rotate by 2PI around any axis, your quaternion might be (-1,0,0,0). But if you rotate by zero, it will be (1,0,0,0). The Euler angle representation coming back from either of those quaternions, however, should be (0,0,0).
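A common way to handle that redundancy in practice (a generic sketch, not from this answer): normalize, and pick a canonical sign, so that q and -q, which encode the same orientation, map to one representative before comparing or converting:
#include <cmath>

struct Quat { float x, y, z, w; };

// Normalize and force w >= 0, so a quaternion and its negation
// (the same orientation) compare equal after canonicalization.
Quat canonicalize(Quat q) {
    float n = std::sqrt(q.x * q.x + q.y * q.y + q.z * q.z + q.w * q.w);
    float s = (q.w < 0.0f ? -1.0f : 1.0f) / n;
    return { q.x * s, q.y * s, q.z * s, q.w * s };
}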
