Difference between two quaternions - math

Solved
I'm making a 3D portal system in my engine (like the Portal game). Each portal stores its own orientation in a quaternion. To render the virtual scene seen through one of the portals, I need to calculate the difference between the two quaternions and use the result to rotate the virtual scene.
When the first portal is created on the left wall and the second one on the right wall, the rotation from one to the other happens about only one axis. But when, for example, the first portal is created on the floor and the second one on the right wall, the rotation from one to the other can involve two axes, and that's the problem, because then the rotation goes wrong.
I think the problem is that the orientation about, for example, the X axis and the Z axis is stored together in one quaternion, while I would need them separately to manually multiply X * Z (or Z * X). But how can I do that with only one quaternion (the difference quaternion)? Or is there another way to rotate the scene correctly?
EDIT:
Here in this picture there are two portals, P1 and P2; the arrows show how they are rotated. Looking into P1, I should see what P2 sees. To find the rotation I need to apply to the main scene so that it matches the virtual scene in this picture, I do the following:
Get the difference from quaternion P2 to quaternion P1
Rotate the result by 180 degrees around the Y axis (the portal's UP)
Use the result to rotate the virtual scene
The method above works only when the difference involves a single axis. When one portal is on the floor or on the ceiling it does not work, because the difference quaternion is built from more than one axis. As suggested, I tried multiplying P1's quaternion by P2's quaternion, and the other way around, but that isn't working.
EDIT 2:
To find the difference from P2 to P1 I'm doing the following:
Quat q1 = P1->getOrientation();
Quat q2 = P2->getOrientation();
Quat diff = Quat::diff(q2, q1); // q2 * diff = q1 //
Here's the Quat::diff function:
GE::Quat GE::Quat::diff(const Quat &a, const Quat &b)
{
    Quat inv = a;
    inv.inverse();
    return inv * b;
}
Inverse:
void GE::Quat::inverse()
{
    Quat q = (*this);
    q.conjugate();
    (*this) = q / Quat::dot((*this), (*this));
}
Conjugate:
void GE::Quat::conjugate()
{
    Quat q;
    q.x = -this->x;
    q.y = -this->y;
    q.z = -this->z;
    q.w = this->w;
    (*this) = q;
}
Dot product:
float GE::Quat::dot(const Quat &q1, const Quat &q2)
{
    return q1.x*q2.x + q1.y*q2.y + q1.z*q2.z + q1.w*q2.w;
}
Operator*:
const GE::Quat GE::Quat::operator* ( const Quat &q) const
{
    Quat qu;
    qu.x = this->w*q.x + this->x*q.w + this->y*q.z - this->z*q.y;
    qu.y = this->w*q.y + this->y*q.w + this->z*q.x - this->x*q.z;
    qu.z = this->w*q.z + this->z*q.w + this->x*q.y - this->y*q.x;
    qu.w = this->w*q.w - this->x*q.x - this->y*q.y - this->z*q.z;
    return qu;
}
Operator/:
const GE::Quat GE::Quat::operator/ (float s) const
{
    Quat q = (*this);
    return Quat(q.x / s, q.y / s, q.z / s, q.w / s);
}
All of this works; I have verified it against the GLM library.

If you want to find a quaternion diff such that diff * q1 == q2, then you need to use the multiplicative inverse:
diff * q1 = q2 ---> diff = q2 * inverse(q1)
where: inverse(q1) = conjugate(q1) / abs(q1)^2
and: conjugate( quaternion(re, i, j, k) ) = quaternion(re, -i, -j, -k)
If your quaternions are rotation quaternions, they should all be unit quaternions. This makes finding the inverse easy: since abs(q1) = 1, your inverse(q1) = conjugate(q1) can be found by just negating the i, j, and k components.
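For instance, with GLM (which the question already uses for verification), a minimal sketch of this difference might look like the following; it is only an illustration, not the asker's Quat class:
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Sketch: diff is chosen so that diff * q1 == q2, assuming q1 and q2 are rotation quaternions.
glm::quat rotationDiff(const glm::quat &q1, const glm::quat &q2)
{
    // For unit quaternions, inverse(q1) is just the conjugate.
    return q2 * glm::conjugate(glm::normalize(q1));
}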
However, for the kind of scene-based geometric configuration you describe, you probably don't actually want to do the above, because you also need to compute the translation correctly.
The most straightforward way to do everything correctly is to convert your quaternions into 4x4 rotation matrices, and multiply them in the appropriate order with 4x4 translation matrices, as described in most introductory computer graphics texts.
It is certainly possible to compose Euclidean transformations by hand, keeping your rotations in quaternion form while applying the quaternions incrementally to a separate translation vector. However, this method tends to be technically obscure and prone to coding error: there are good reasons why the 4x4 matrix form is conventional, and one of the big ones is that it appears to be easier to get it right that way.
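As a rough GLM sketch of that idea (the orientation quaternion and position vector are assumed to be whatever the portal stores), the rotation and the translation can be folded into one 4x4 transform:
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: combine a quaternion orientation and a position into a single 4x4 matrix.
glm::mat4 toTransform(const glm::quat &orientation, const glm::vec3 &position)
{
    return glm::translate(glm::mat4(1.0f), position) * glm::mat4_cast(orientation);
}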

I solved my problem. As it turned out, I don't need the difference between the two rotations at all. I just multiply one rotation by a 180-degree rotation and then by the inverse of the second rotation, like this (using matrices):
Matrix m1 = p1->getOrientation().toMatrix();
Matrix m2 = p2->getOrientation().toMatrix();
Matrix model = m1 * Matrix::rotation(180, Vector3(0,1,0)) * Matrix::inverse(m2);
and I calculate the translation this way:
Vector3 position = -p2->getPosition();
position = model * position + p1->getPosition();
model = Matrix::translation(position) * model;

No, you have to multiply two quaternions together to get the final quaternion you desire.
Let's say that your first rotation is q1 and the second is q2. You want to apply them in that order.
The resulting quaternion will be q2 * q1, which represents your composite rotation (recall that quaternions compose by left multiplication, so q2 is applied to q1 by multiplying from the left).
Reference
For a brief tutorial on computing a single quaternion, refer to my previous stack overflow answer
Edit:
To clarify, you'd face a similar problem with rotation matrices and Euler angles. You define your transformations about X, Y, and Z, and then multiply them together to get the resulting transformation matrix (wiki). You have the same issue here. Rotation matrices and quaternions are equivalent in most ways for representing rotations. Quaternions are preferred mostly because they're a bit easier to work with (and they make it easier to avoid gimbal lock).
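As a small illustration (a GLM sketch, since the thread already tests against GLM), composing a yaw and a pitch and applying them to a vector:
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Sketch: q1 is applied first, then q2; the composite is q2 * q1.
void example()
{
    glm::quat q1 = glm::angleAxis(glm::radians(90.0f), glm::vec3(0, 1, 0)); // yaw about Y
    glm::quat q2 = glm::angleAxis(glm::radians(45.0f), glm::vec3(1, 0, 0)); // pitch about X
    glm::vec3 v(0, 0, -1);
    glm::vec3 a = (q2 * q1) * v; // composite applied once
    glm::vec3 b = q2 * (q1 * v); // q1 first, then q2 -- same result as 'a'
}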

Quaternions work the following way: the local frame of reference is represented by the imaginary quaternion directions i, j, k. For instance, for an observer standing in portal door 1 and looking in the direction of the arrow, direction i may represent the direction of the arrow, j is up and k = ij points to the right of the observer. In global coordinates represented by the quaternion q1, the axes in 3D coordinates are
q1*(i,j,k)*q1^-1=q1*(i,j,k)*q1',
where the prime denotes the conjugate, and for unit quaternions the conjugate is the inverse.
Now the task is to find a unit quaternion q so that directions q*(i,j,k)*q' in local frame 1 expressed in global coordinates coincide with the rotated directions of frame 2 in global coordinates. From the sketch that means forwards becomes backwards and left becomes right, that is
q1*q*(i,j,k)*q'*q1'=q2*(-i,j,-k)*q2'
=q2*j*(i,j,k)*j'*q2'
which is readily achieved by equating
q1*q=q2*j or q=q1'*q2*j.
But details may be different, mainly that another axis may represent the direction "up" instead of j.
If the global system of the sketch is from the bottom, so that global-i points forward in the vertical direction, global-j up and global-k to the right, then local1-(i,j,k) is global-(-i,j,-k), giving
q1=j.
local2-(i,j,k) is global-(-k,j,i) which can be realized by
q2=sqrt(0.5)*(1+j),
since
(1+j)*i*(1-j)=i*(1-j)^2=-2*i*j=-2*k and
(1+j)*k*(1-j)=(1+j)^2*k= 2*j*k= 2*i
Comparing this to the actual values in your implementation will indicate how the assignment of axes and quaternion directions has to be changed.
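A quick numeric check of the last identity, sketched with GLM (whose quat constructor is w-first, i.e. glm::quat(w, x, y, z)):
glm::quat j (0.0f, 0.0f, 1.0f, 0.0f);                            // the pure quaternion j
glm::quat q1 = j;                                                 // frame 1 as derived above
glm::quat q2 = glm::normalize(glm::quat(1.0f, 0.0f, 1.0f, 0.0f)); // sqrt(0.5)*(1 + j)
glm::quat q  = glm::conjugate(q1) * q2 * j;                       // q = q1' * q2 * j
// q1 * q and q2 * j should now have equal components.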

Check https://www.emis.de/proceedings/Varna/vol1/GEOM09.pdf
Imagine you want to get dQ from Q1 to Q2; I'll explain why dQ = Q1*·Q2 instead of Q2·Q1*.
This rotates the frame instead of the object. For any vector v in R3, the frame-rotation action of the operator is
L(v) = Q*·v·Q
It's not Q·v·Q*, which is the object-rotation action.
Since Q2 = Q1·(Q1*·Q2), rotating by Q2 can be written as rotating by Q1 followed by Q1*·Q2, so you can write
(Q1·Q1*·Q2)*·v·(Q1·Q1*·Q2) = (Q1*·Q2)*·Q1*·v·Q1·(Q1*·Q2) = dQ*·Q1*·v·Q1·dQ
So dQ = Q1*·Q2
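Written with GLM (a sketch, assuming Q1 and Q2 are unit quaternions), that difference is simply:
glm::quat dQ = glm::conjugate(Q1) * Q2; // dQ such that Q1 * dQ == Q2, in the frame convention above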

Related

How to calculate the angles of projection in 3D for an object to land at a given point?

I need to calculate the angles at which to throw the ball in that direction, for a given speed, so that it lands at the given point.
The horizontal angle is easy (we know both the start and landing points). How do I calculate the vertical angle of projection? Gravity acts on the object.
The time of travel will be the usual bowling time (the time between the ball's release and its landing), as in the video.
Is there a way to do this directly in Unity3D?
Watch this video for 8 seconds for a clear understanding of this question.
According to the Wikipedia page Trajectory of a projectile, the "Angle of reach" (The angle you want to know) is calculated as follows:
θ = 1/2 * arcsin(gd/v²)
In this formula, g is the gravitational acceleration (9.81 m/s²), d is the distance you want the projectile to travel, and v is the velocity at which the object is thrown.
Code to calculate this could look something like this:
float ThrowAngle(Vector3 destination, float velocity)
{
    const float g = 9.81f;
    float distance = Vector3.Distance(transform.position, destination);
    // assuming you want degrees, otherwise just drop the Rad2Deg.
    return Mathf.Rad2Deg * (0.5f * Mathf.Asin((g * distance) / Mathf.Pow(velocity, 2f)));
}
This will give you the angle assuming no air resistance etc. exist in your game.
If your destination and your "throwing point" are not at the same height, you may want to set both to y=0 first, otherwise, errors may occur.
EDIT:
Considering that your launch point is higher up than the destination, this formula from the same page should work:
θ = arctan((v² ± √(v⁴ − g(gx² + 2yv²))) / (gx))
Here, x is the range, or distance, and y is the altitude (relative to the launch point).
Code:
float ThrowAngle(Vector3 start, Vector3 destination, float v)
{
    const float g = 9.81f;
    float xzd = Mathf.Sqrt(Mathf.Pow(destination.x - start.x, 2) + Mathf.Pow(destination.z - start.z, 2));
    float yd = destination.y - start.y;
    // assuming you want degrees, otherwise just drop the Rad2Deg. Split into two lines for better readability.
    float sqrt = Mathf.Sqrt(Mathf.Pow(v, 4) - g * (g * Mathf.Pow(xzd, 2) + 2 * yd * Mathf.Pow(v, 2)));
    // you could also implement a solution which uses the '-' root in some way, but I left that out for simplicity.
    return Mathf.Rad2Deg * Mathf.Atan((Mathf.Pow(v, 2) + sqrt) / (g * xzd));
}

Calculate point on a circle in 3D space

I have been scratching my head for some time now about how to do this.
I have two defined vectors in 3D space, say vector X at (0,0,0) and vector Y at (3,3,3). I will get a random point on the line between those two vectors, and around this point I want to form a circle (some number of points) perpendicular to the line between X and Y, at a given radius.
Hopefully it's clear what I am looking for. I have looked through many similar questions but just can't figure it out based on those. Thanks for any help.
Edit:
(Couldn't put everything into a comment so I'm adding it here)
@WillyWonka
Hi, thanks for your reply. I had some moderate success implementing your solution but ran into some trouble with it. It works most of the time, except for specific scenarios where point Y is at a position like (20,20,20). If it sits directly on an axis it's fine.
But as soon as it gets onto a diagonal, the distance between the perpendicular point and the origin gets smaller for some reason, and at very specific diagonal positions it kind of flips the perpendicular points.
IMAGE
Here is the code for you to look at
public Vector3 X = new Vector3(0,0,0);
public Vector3 Y = new Vector3(0,0,20);
Vector3 A;
Vector3 B;
List<Vector3> points = new List<Vector3>();
void FindPerpendicular(Vector3 x, Vector3 y)
{
    Vector3 direction = (x - y);
    Vector3 normalized = (x - y).normalized;
    float dotProduct1 = Vector3.Dot(normalized, Vector3.left);
    float dotProduct2 = Vector3.Dot(normalized, Vector3.forward);
    float dotProduct3 = Vector3.Dot(normalized, Vector3.up);
    Vector3 dotVector = ((1.0f - Mathf.Abs(dotProduct1)) * Vector3.right) +
                        ((1.0f - Mathf.Abs(dotProduct2)) * Vector3.forward) +
                        ((1.0f - Mathf.Abs(dotProduct3)) * Vector3.up);
    A = Vector3.Cross(normalized, dotVector.normalized);
    B = Vector3.Cross(A, normalized);
}
What you want to do first is to find the two orthogonal basis vectors of the plane perpendicular to the line XY, passing through the point you choose.
You first need to find a vector which is perpendicular to XY. To do this:
Normalize the vector XY first
Dot XY with the X-axis
If the absolute value of this is close to 1 (for numerical stability, let's say > 0.9), then XY is nearly parallel/anti-parallel to the X-axis, so we choose the Y-axis instead
If not, then we choose the X-axis
For whichever chosen axis, cross it with XY to get one of the basis vectors; cross this with XY again to get the second vector.
Normalize them (not strictly necessary but very useful)
You now have two basis vectors to calculate your circle coordinates, call them A and B. Call the point you chose P.
Then any point on the circle can be parametrically calculated by
Q(r, t) = P + r * (A * cos(t) + B * sin(t))
where t is an angle (between 0 and 2π), and r is the circle's radius.
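A minimal sketch of the whole procedure (written here in C++ with GLM rather than Unity's API; P, the XY direction, the radius and the point count are assumed inputs):
#include <cmath>
#include <vector>
#include <glm/glm.hpp>

// Sketch: 'count' points on a circle of radius r around P, in the plane perpendicular to dir.
std::vector<glm::vec3> circlePoints(const glm::vec3 &P, const glm::vec3 &dir, float r, int count)
{
    glm::vec3 n = glm::normalize(dir);
    // Pick a helper axis that is not (anti)parallel to n.
    glm::vec3 helper = (std::abs(glm::dot(n, glm::vec3(1, 0, 0))) < 0.9f) ? glm::vec3(1, 0, 0)
                                                                          : glm::vec3(0, 1, 0);
    glm::vec3 A = glm::normalize(glm::cross(n, helper));
    glm::vec3 B = glm::cross(n, A); // already unit length, since n and A are unit and perpendicular
    std::vector<glm::vec3> pts;
    for (int i = 0; i < count; ++i) {
        float t = 2.0f * 3.14159265f * i / count;
        pts.push_back(P + r * (A * std::cos(t) + B * std::sin(t)));
    }
    return pts;
}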

Perspective projection - help me duplicate blender's default scene

I'm currently attempting to teach myself perspective projection; my reference is the Wikipedia page on the subject here: http://en.wikipedia.org/wiki/3D_projection#cite_note-3
My understanding is that you take the object to be projected and rotate and translate it into "camera space", such that your camera is now assumed to be at the origin looking directly down the z axis. (This matrix op from the Wikipedia page: http://upload.wikimedia.org/math/5/1/c/51c6a530c7bdd83ed129f7c3f0ff6637.png)
You then project your new points into 2D space using this equation: http://upload.wikimedia.org/math/6/8/c/68cb8ee3a483cc4e7ee6553ce58b18ac.png
The first step I can do flawlessly. Granted, I wrote my own matrix library to do it, but I verified it was spitting out the right answer by typing the results into Blender, moving the camera to 0,0,0, and checking that it renders the same as the default scene.
However, the projection part is where it all goes wrong.
From what I can see, I ought to be taking the field of view, which by default in Blender is 28.842 degrees, and using it to calculate the value Wikipedia calls ez, by doing
ez = 1 / tan(fov / 2);
which is approximately 3.88 in this case.
I should then for every point be doing:
x = (ez / dz) * dx;
y = (ez / dz) * dy;
to get x and y coordinates in the range of -1 to 1 which I can then scale appropriately for the screen width.
However, when I do that my projected image is mirrored in the x axis and in any case doesn't match the cube Blender renders. What am I doing wrong, and what should I be doing to get the right projected coordinates?
I'm aware that you can do this whole thing with one matrix op, but for the moment I'm just trying to understand the equations, so please just stick to the question asked.
From what you say in your question it's unclear whether you're having trouble with the Projection matrix or the Model matrix.
Like I said in my comments, you can Google glFrustum and gluLookAt to see exactly what these matrices look like. If you're familiar with matrix math (and it looks like you are), you will then understand how the coordinates are transformed into a 2D perspective.
Here is some sample OpenGL code to make the View and Projection Matrices and Model matrix for a simple 30 degree rotation about the Y axis so you can see how the components that go into these matrices are calculated.
// The Projection Matrix
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
near = -camera.viewPos.z - shapeSize * 0.5;
if (near < 0.00001)
    near = 0.00001;
far = -camera.viewPos.z + shapeSize * 0.5;
if (far < 1.0)
    far = 1.0;
radians = 0.0174532925 * camera.aperture / 2; // half aperture degrees to radians
wd2 = near * tan(radians);
ratio = camera.viewWidth / (float) camera.viewHeight;
if (ratio >= 1.0) {
    left = -ratio * wd2;
    right = ratio * wd2;
    top = wd2;
    bottom = -wd2;
} else {
    left = -wd2;
    right = wd2;
    top = wd2 / ratio;
    bottom = -wd2 / ratio;
}
glFrustum (left, right, bottom, top, near, far);
// The View Matrix
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();
gluLookAt (camera.viewPos.x, camera.viewPos.y, camera.viewPos.z,
           camera.viewPos.x + camera.viewDir.x,
           camera.viewPos.y + camera.viewDir.y,
           camera.viewPos.z + camera.viewDir.z,
           camera.viewUp.x, camera.viewUp.y, camera.viewUp.z);
// The Model Matrix
glRotatef (30.0, 0.0, 1.0, 0.0);
You'll see that glRotatef takes an axis-angle rotation (an angle of rotation plus a vector about which to rotate), which carries essentially the same information as a quaternion.
You could also do separate rotations about the X, Y and Z axis.
There's lots of information on the web about how to form 4x4 matrices for rotations, translations and scales. If you do each of these separately, you'll need to multiply them to get the Model matrix. e.g.:
If you have 4X4 matrices rotateX, rotateY, rotateZ, translate, scale, you might form your Model matrix by:
Model = scale * rotateX * rotateZ * rotateY * translate.
Order matters when you form the Model matrix. You'll get different results if you do the multiplication in a different order.
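For comparison, here is a rough GLM sketch of composing a Model matrix from separate transforms (the order shown is just one common choice; as noted above, a different order gives a different result):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: translate, then rotate about Y, then scale (applied to vertices in reverse order).
glm::mat4 makeModel(const glm::vec3 &translation, float angleDegrees, const glm::vec3 &scale)
{
    glm::mat4 M(1.0f);
    M = glm::translate(M, translation);
    M = glm::rotate(M, glm::radians(angleDegrees), glm::vec3(0.0f, 1.0f, 0.0f));
    M = glm::scale(M, scale);
    return M;
}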
If your object is at the origin, I doubt you want to also put the camera at the origin.

Rotate quaternion on all 3 axis from axis angle in GLM

I use quaternions for rotations in my OpenGL engine. Currently, in order to create a rotation matrix for X, Y and Z rotations, I create a quaternion per axis rotation. Then I multiply these to get the final quaternion:
void RotateTo3(const float xr, const float yr, const float zr){
    quat qRotX = angleAxis(xr, X_AXIS);
    quat qRotY = angleAxis(yr, Y_AXIS);
    quat qRotZ = angleAxis(zr, Z_AXIS);
    quat resQuat = normalize(qRotX * qRotY * qRotZ);
    resQuat = normalize(resQuat);
    _rotMatrix = mat4_cast(resQuat);
}
Now it's all good, but I want to create only one quaternion from all 3 axis angles and skip the final multiplication. One of the quat constructors takes a Euler-angles vector, which goes like this:
quat resQuat(vec3(yr,xr,zr))
So if I try this, the final rotation is wrong. (I also tried quat(vec3(xr,yr,zr)).) Isn't there a way in GLM to fill the final quaternion from all 3 axes in one go?
Now, one more thing:
As Nicol Bolas suggested, I could use glm::eulerAngleYXZ() to fill a rotation matrix right away, since in his opinion it is pointless to do the intermediate quaternion step. But what I found is that the function doesn't work properly, at least for me. For example, this:
mat4 ex = eulerAngleX(radians(xr));
mat4 ey = eulerAngleY(radians(yr));
mat4 ez = eulerAngleZ(radians(zr));
rotMatrix = ex * ey * ez;
doesn't return the same as this:
rotMatrix = eulerAngleYXZ(radians(yr), radians(xr), radians(zr));
And from my comparisons against the correct rotation state, the first way gives the correct rotations while the second gives wrong ones.
Contrary to popular belief, quaternions are not magical "solve the Gimbal lock" devices, such that any uses of quaternions make Euler angles somehow not Euler angles.
Your RotateTo3 function takes 3 Euler angles and converts them into a rotation matrix. It doesn't matter how you perform this process; whether you use 3 matrices, 3 quaternions or glm::eulerAngleYXZ. The result will still be a matrix composed from 3 axial rotations. It will have all of the properties and failings of Euler angles. Because it is Euler angles.
Using quaternions as intermediaries here is pointless. It gains you nothing; you may as well just use matrices built from successive glm::rotate calls.
If you want to do orientation without Gimbal lock or the other Euler angle problems, then you need to stop representing your orientation as Euler angles.
In answer to the question you actually asked, you can use glm::eulerAngleYXZ to compute the rotation matrix directly from the three angles.
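For reference, a minimal sketch of that call (glm::eulerAngleYXZ lives in <glm/gtx/euler_angles.hpp> and expects radians; the argument order matches the call shown in the question):
#include <glm/glm.hpp>
#include <glm/gtx/euler_angles.hpp>

// Sketch: build the rotation matrix directly from three Euler angles given in degrees.
glm::mat4 makeRotation(float xr, float yr, float zr)
{
    // Applied in Y, then X, then Z order, as the function name suggests.
    return glm::eulerAngleYXZ(glm::radians(yr), glm::radians(xr), glm::radians(zr));
}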
Do you mean something like this:
quat formQuaternion(double x, double y, double z, double angle){
    quat out;
    // x, y, and z form a normalized vector which is now the axis of rotation.
    out.w = cosf( angle/2 );
    out.x = x * sinf( angle/2 );
    out.y = y * sinf( angle/2 );
    out.z = z * sinf( angle/2 );
    return out;
}
Sorry I don't actually know the quat class you are using, but it should still have some way to set the 4 dimensions. Source: Quaternion tutorial
eulerAngleYXZ gives one possible set of Euler angles which, if recombined in the order indicated by the API name, will yield the same orientation as the given quaternion.
It's not a wrong result - it's one of several correct results.
Use a quaternion to store your orientation internally - to rotate it, multiply your orientation quat by another quat representing the amount to rotate by, which can be built from angle/axis to achieve what you want.
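A minimal sketch of that incremental update with GLM (the delta angle in radians and the axis are assumed inputs):
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Sketch: 'orientation' is the persistent state; nudge it by a small angle/axis rotation.
void rotateBy(glm::quat &orientation, float deltaAngle, const glm::vec3 &axis)
{
    glm::quat delta = glm::angleAxis(deltaAngle, glm::normalize(axis));
    orientation = glm::normalize(delta * orientation); // apply the delta on top of the current orientation
}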

Perturb vector by some angle

I have a unit vector in 3D space whose direction I wish to perturb by some angle within the range 0 to theta, with the position of the vector remaining the same. What is a way I can accomplish this?
Thanks.
EDIT: After thinking about the way I posed the question, it seems to be a bit too general. I'll attempt to make it more specific: Assume that the vector originates from the surface of an object (i.e. sphere, circle, box, line, cylinder, cone). If there are different methods to finding the new direction for each of those objects, then providing help for the sphere one is fine.
EDIT 2: I was going to type this in a comment but it was too much.
So I have orig_vector, which I wish to perturb in some direction between 0 and theta. Theta can be thought of as forming a cone around my vector (with theta being the angle between the center and one side of the cone), and I wish to generate a new vector within that cone. I can generate a point lying on the plane that is tangent to my vector, and thus create a unit vector in the direction of that point; call it rand_vector. At this point, orig_vector and rand_vector are two unit vectors perpendicular to each other.
I generate my first angle, angle1, between 0 and 2pi, and rotate rand_vector around orig_vector by angle1, forming rand_vector2. I looked up a resource online and it said that the second angle, angle2, should be between 0 and sin(theta) (where theta is the original "cone" angle). Then I rotate rand_vector2 by acos(angle2) around the vector defined by the cross product between rand_vector2 and orig_vector.
When I do this, I don't obtain the desired results. That is, when theta = 0, I still get perturbed vectors, whereas I expect to get orig_vector back. If anyone can explain the reason for the angles and why they are the way they are, I would greatly appreciate it.
EDIT 3: This is the final edit, I promise =). So I fixed my bug and everything that I described above works (it was an implementation bug, not a theory bug). However, my question about the angles still stands (i.e. why is angle2 = sin(theta)*rand(), and why is perturbed_vector = rand_vector2.Rotate(rand_vector2.Cross(orig_vector), acos(angle2))?). Thanks so much!
Here's the algorithm that I've used for this kind of problem before. It was described in Ray Tracing News.
1) Make a third vector perpendicular to the other two to build an orthogonal basis:
cross_vector = unit( cross( orig_vector, rand_vector ) )
2) Pick two uniform random numbers in [0,1]:
s = rand( 0, 1 )
r = rand( 0, 1 )
3) Let h be the cosine of the cone's angle:
h = cos( theta )
4) Modify uniform sampling on a sphere to pick a random vector in the cone around +Z:
phi = 2 * pi * s
z = h + ( 1 - h ) * r
sinT = sqrt( 1 - z * z )
x = cos( phi ) * sinT
y = sin( phi ) * sinT
5) Change of basis to reorient it around the original angle:
perturbed = rand_vector * x + cross_vector * y + orig_vector * z
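A sketch of those five steps in C++ with GLM (orig_vector and rand_vector are assumed to be the perpendicular unit vectors described in the question; rand()/RAND_MAX stands in for any uniform [0,1] generator):
#include <cmath>
#include <cstdlib>
#include <glm/glm.hpp>

// Sketch of the cone-sampling steps above; theta is the cone's half-angle in radians.
glm::vec3 perturbInCone(const glm::vec3 &origVec, const glm::vec3 &randVec, float theta)
{
    glm::vec3 crossVec = glm::normalize(glm::cross(origVec, randVec)); // step 1
    float s = rand() / (float)RAND_MAX;                                // step 2
    float r = rand() / (float)RAND_MAX;
    float h = std::cos(theta);                                         // step 3
    float phi = 2.0f * 3.14159265f * s;                                // step 4
    float z = h + (1.0f - h) * r;
    float sinT = std::sqrt(1.0f - z * z);
    float x = std::cos(phi) * sinT;
    float y = std::sin(phi) * sinT;
    return randVec * x + crossVec * y + origVec * z;                   // step 5
}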
If you have another vector to represent an axis of rotation, there are libraries that will take the axis and the angle and give you a rotation matrix, which can then be multiplied by your starting vector to get the result you want.
However, the axis of rotation should be at right angles to your starting vector, to get the amount of rotation you expect. If the axis of rotation does not lie in the plane perpendicular to your vector, the result will be somewhat different than theta.
That being said, if you already have a vector at right angles to the one you want to perturb, and you're not picky about the direction of the perturbation, you can just as easily take a linear combination of your starting vector with the perpendicular one, adjusting for magnitude as needed.
I.e., if P and Q are vectors of identical magnitude and are perpendicular, and you want to rotate P in the direction of Q, then the vector R given by R = P*cos(theta) + Q*sin(theta) will satisfy the constraints you've given. If P and Q have differing magnitudes, then there will be some scaling involved.
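A one-function GLM sketch of that linear combination (P and Q assumed to be unit length and perpendicular, theta in radians):
#include <cmath>
#include <glm/glm.hpp>

// Sketch: rotate unit vector P towards the perpendicular unit vector Q by angle theta.
glm::vec3 rotateTowards(const glm::vec3 &P, const glm::vec3 &Q, float theta)
{
    return P * std::cos(theta) + Q * std::sin(theta);
}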
You may be interested in 3D-coordinate transformations to change your vector angle.
I don't know how many directions you want to change your angle in, but transforming your Cartesian coordinates to spherical coordinates should allow you to make your angle change as you like.
Actually, it is very easy to do that. All you have to do is multiply your vector by the correct rotation matrix. The resulting vector will be your rotated vector. Now, how do you obtain such a rotation matrix? That depends on the 3D framework/engine you are using. Any 3D framework must provide functions for obtaining rotation matrices, normally as static methods of the Matrix class.
Good luck.
As said in other comments, you can rotate your vector using a rotation matrix.
The rotation is described by two angles. You can pick them with a random number generator, but just picking both from a flat (uniform) generator is not correct. To get a uniformly distributed direction, you have to pick one random angle φ from a flat generator and the other from a generator flat in cos θ; this ensures that your solid-angle element d cos(θ) dφ is sampled correctly (φ and θ defined as usual for spherical coordinates).
Example: picking a random direction with no restriction on range, where random() generates a flat value in [0,1]:
angle1 = acos(2*random() - 1)
angle2 = 2*pi*random()
My code in Unity - tested and working:
/*
 * This is used to perturb the given vector 'direction' by rotating it away from the base direction
 * by an angle of at most 'angle'. Used to introduce errors for player-playing algorithms.
 */
Vector3 perturbDirection( Vector3 direction, float angle ) {
    // division by zero protection
    if( Mathf.Approximately( direction.z, 0f )) {
        direction.z = 0.0001f;
    }
    // 1) get some vector orthogonal to direction (solve direction/orthogonal dot product = 0; assume x = 1, y = 1, then z as below)
    Vector3 orthogonal = new Vector3( 1f, 1f, - ( direction.x + direction.y ) / direction.z );
    // 2) get a random vector on the circle lying in the plane orthogonal to the direction vector; use the full (-180, 180) range so the whole cone can be reached
    float orthoAngle = UnityEngine.Random.Range( -180f, 180f );
    Quaternion rotateTowardsDirection = Quaternion.AngleAxis( orthoAngle, direction );
    Vector3 randomOrtho = rotateTowardsDirection * orthogonal;
    // 3) rotate direction towards the random orthogonal vector by an angle from our available range
    float perturbAngle = UnityEngine.Random.Range( 0f, angle ); // range (0, angle); full cone coverage is guaranteed by the previous (-180, 180) range
    Quaternion rotateDirection = Quaternion.AngleAxis( perturbAngle, randomOrtho );
    Vector3 perturbedDirection = rotateDirection * direction;
    return perturbedDirection;
}
