I was reading the answers to the question about finding the angle between two vectors in 3D space, Signed angle between two 3D vectors with same origin within the same plane. The answer given there,
atan2((Vb x Va) . Vn, Va . Vb)
is exactly what I need, but I don't understand what the comma operator is. I know that the x's and dots are cross products and dot products respectively. I don't think the comma is an inner product (the same thing as a dot product)? Perhaps it is the syntax of a programming language?
The language is (I think [1]) Matlab, and the comma is actually an argument separator (NOT an operator [2]) in a method call.
[1] This is consistent with the context where you found this expression, though I suspect that the author was actually using Matlab syntax as a way of expressing a mathematical concept.
[2] According to https://au.mathworks.com/help/matlab/matlab_prog/matlab-operators-and-special-characters.html
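If it helps to see the same expression executed, here is a minimal NumPy sketch (the names va, vb and vn are mine; vn is assumed to be the unit normal of the plane containing the two vectors):
import numpy as np

def signed_angle(va, vb, vn):
    # atan2((Vb x Va) . Vn, Va . Vb), in radians
    return np.arctan2(np.dot(np.cross(vb, va), vn), np.dot(va, vb))

print(signed_angle(np.array([1.0, 0.0, 0.0]),
                   np.array([0.0, 1.0, 0.0]),
                   np.array([0.0, 0.0, 1.0])))   # -pi/2; the sign convention follows the quoted expression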
I am attempting to express the GCD of two numbers as a linear combination of them. The code I have so far can find the expanded solution. I have done all of the (hard) math for it (i.e. find the GCD using Euclid's Algorithm, then essentially work backward), and it will produce something like this, for example (the two starting numbers are 1215 and 960):
((960-(3*(1215-(1*960))))-(3*((1215-(1*960))-(1*(960-(3*(1215-(1*960))))))))
In my actual solution there is a space between every component (e.g. '( ( 960 - (3 * '...)
but I am trying to simplify this into the equation:
((-15*1215)+(19*960))
I feel like the best approach is to create an expression tree, but I don't know how to do so without actually just evaluating the answer.
It sounds like you are looking for a symbolic computation system. Here's one approach using Maxima (https://maxima.sourceforge.net). I'll enable stardisp to show * between terms of a product. I'll also input numbers like 960 as symbols in order to suppress arithmetic on them by writing them as \960 etc. Note that 1 and 3 are input as ordinary numbers so arithmetic is carried out on them.
(%i13) stardisp:true;
(%o13) true
(%i14) 2*3;
(%o14) 6
(%i15) \2*\3;
(%o15) 2*3
(%i16) ((\960-(3*(\1215-(1*\960))))-(3*((\1215-(1*\960))-(1*(\960-(3*(\1215-(1*\960))))))));
(%o16) 960 - 3*(1215 - 960) - 3*((- 2*960) + 3*(1215 - 960)
+ 1215)
(%i17) factor(%);
(%o17) 19*960 - 15*1215
Maybe you want to replace the numbers with symbols a, b, c, etc to get a more general solution.
There are many other symbolic computation systems; a web search will find them. Good luck and have fun.
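For comparison, here is a rough sketch of the same idea in Python with SymPy (an assumption on my part, not something from the question): treat 1215 and 960 as symbols a and b so that only the small integer coefficients are evaluated, then expand and read off the coefficients.
from sympy import symbols, expand

a, b = symbols('a b')   # a stands for 1215, b for 960
expr = (b - 3*(a - 1*b)) - 3*((a - 1*b) - 1*(b - 3*(a - 1*b)))
print(expand(expr))     # -15*a + 19*b, i.e. (-15*1215) + (19*960)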
I'm working through the first edition of "Introduction to Functional Programming" by Bird & Wadler, which uses a theoretical lazy language with Haskell-ish syntax.
Exercise 3.2.3 asks:
Using a list comprehension, define a function for counting the number
of negative numbers in a list
Now, at this point we're still scratching the surface of lists. I would assume the intention is that only concepts that have been introduced at that point should be used, and the following have not been introduced yet:
A function for computing list length
List indexing
Pattern matching i.e. f (x:xs) = ...
Infinite lists
All the functions and operators that act on lists - with one exception - e.g. ++, head, tail, map, filter, zip, foldr, etc
What tools are available?
A maximum function that returns the maximal element of a numeric list
List comprehensions, with possibly multiple generator expressions and predicates
The notion that the output of the comprehension need not depend on the generator expression, implying the generator expression can be used for controlling the size of the generated list
Finite arithmetic sequence lists i.e. [a..b] or [a, a + step..b]
I'll admit, I'm stumped. Obviously one can extract the negative numbers from the original list fairly easily with a comprehension, but how does one then count them, with no notion of length or indexing?
The availability of the maximum function would suggest the end game is to construct a list whose maximal element is the number of negative numbers, with the final result of the function being the application of maximum to said list.
I'm either missing something blindingly obvious, or a smart trick, with a horrible feeling it may be the former. Tell me SO, how do you solve this?
My old -- and very yellowed -- copy of the first edition has a note attached to Exercise 3.2.3: "This question needs # (length), which appears only later". The moral of the story is to be more careful when setting exercises. I am currently finishing a third edition, which contains answers to every question.
By the way, did you answer Exercise 1.2.1, which asks you to write down all the ways that
square (square (3 + 7)) can be reduced to normal form? It turns out that there are 547 ways!
I think you may be assuming too many restrictions - taking the length of the filtered list seems like the blindingly obvious solution to me.
A couple of alternatives, but both involve using some other function that you say wasn't introduced:
sum [1 | x <- xs, x < 0]
maximum (0:[index | (index, ()) <- zip [1..] [() | x <- xs, x < 0]])
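For what it's worth, the first trick translates almost word for word into a Python generator expression (xs here is just an example list):
xs = [3, -1, 4, -1, -5, 9]
print(sum(1 for x in xs if x < 0))   # 3: one 1 per negative element, then summed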
Maybe this question is better suited to the math section of the site, but I guess Stack Overflow is suitable too. In mathematics, a vector has a direction and a magnitude, but in programming, a vector is usually defined as:
Vector v (3, 1, 5);
Where are the direction and magnitude? To me, this is a point, not a vector... So what gives? Probably I am not getting something, so if anybody can explain this to me it would be much appreciated.
If we are working in Cartesian coordinates, and take (0,0,0) to be the origin, then a point p = (3,1,5) can be written as
p = 3i + 1j + 5k
where i, j and k are the unit vectors in the x, y and z directions. For convenience's sake, the unit vectors are dropped from programming constructs.
The magnitude of the vector is
|p| = sqrt(3^2 + 1^2 + 5^2) = sqrt(35)
and its direction cosines with respect to the x, y and z axes are
3/sqrt(35), 1/sqrt(35) and 5/sqrt(35)
respectively, both of which can be done programmatically. You can also take dot products and cross-products, which I'm sure you know about. So the usage is consistent between programming and mathematics. The difference in notations is mostly because of convenience.
However, as Tomas pointed out, in programming it is also common to define a vector of strings or objects, which really have no mathematical meaning. You can consider such vectors to be one-dimensional arrays or lists of items that can be accessed or manipulated easily by indexing.
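As a small illustration of doing this programmatically, here is a plain Python sketch (no particular library assumed) for the point/vector (3, 1, 5):
import math

v = (3.0, 1.0, 5.0)
magnitude = math.sqrt(sum(c * c for c in v))          # sqrt(3^2 + 1^2 + 5^2) = sqrt(35) ~ 5.92
direction_cosines = tuple(c / magnitude for c in v)   # components of the unit vector from the origin
print(magnitude, direction_cosines)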
In mathematics, it is easy to represent a vector by a point - just say that the "base" of the vector is implied to be the origin. Thus, a mathematical point for all practical purposes is also a representation of a mathematical vector, and the vector in your example has the magnitude sqrt(3^2 + 1^2 + 5^2) = sqrt(35) ≈ 5.92 and the direction (3, 1, 5)/sqrt(35) ≈ (0.51, 0.17, 0.85) (a normalized vector from the origin).
However, a vector in programming usually has no geometrical use, which means you really aren't interested in things like magnitude or direction. A vector in programming is rather just an ordered list of items. Important here is that the items need not be numbers - it can be anything handled by the language in question! Thus, ("Hello", "little", "world") is also a vector in programming, although it (obviously) has no vector interpretation in the mathematical sense.
Practically speaking (!):
A vector in mathematics is only a direction without a position (actually something more general, but let's stay with your terminology). In programming you often use vectors for points. You can think of your vector as the vector pointing from the origin (0,0,0) to the point (3,1,5), called the position vector of the point. Consult texts on analytical and affine geometry for more insight.
A vector in computer science is a "one-dimensional" data structure (an array, which can be thought of as the direction) with a usually dynamic size (the length/magnitude). For that reason it is called a vector, but it is still just an array.
A vector also means a set of coordinates, and this is how it is used in programming: just a set of numbers. You might want to represent position vectors, velocity vectors, momentum vectors or force vectors with a vector object, or you may wish to represent them in any way that suits you.
Many times vector quantities are represented by 4 coordinates instead of 3 (see homogeneous coordinates in computer graphics), so a physical vector is represented by a computer vector with 4 elements. Alternatively you can store direction and magnitude separately, or encode them with 3, 4 or more coordinates.
I guess what I am getting at is that computer languages are not designed to represent physical models, but rather provide abstract data containers that the programmer uses as tools for his or her modeling.
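A tiny Python illustration of the 4-coordinate (homogeneous) representation mentioned above; translate is a made-up helper, and the convention shown (w = 1 for points, w = 0 for directions) is the usual one in computer graphics:
def translate(v, tx, ty, tz):
    # translation in homogeneous coordinates; w decides whether it has any effect
    x, y, z, w = v
    return (x + tx * w, y + ty * w, z + tz * w, w)

print(translate((3.0, 1.0, 5.0, 1.0), 10.0, 0.0, 0.0))  # point moves to (13.0, 1.0, 5.0, 1.0)
print(translate((3.0, 1.0, 5.0, 0.0), 10.0, 0.0, 0.0))  # direction stays (3.0, 1.0, 5.0, 0.0)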
A vector in math is an element of an n-dimensional space over some field (e.g. real/complex numbers, functions, strings). It may have infinite dimension, e.g. the function space L^2. I don't remember infinite-dimensional vectors being used in programming (infinite vectors are not vectors of unlimited length, but vectors with an infinite number of elements).
The most rigorous statement is that a mathematical vector is a first-order tensor that transforms from one coordinate system to another according to tensor transformation rules. The physical idea to keep in mind is that vectors have both magnitude and direction.
Programming vectors are data structures that need not transform according to any rules and may or may not have a notion of a coordinate system as reference. If you happen to use a vector data structure to hold numbers, they may conform to the mathematical definition. But if you have a vector of objects, it's unlikely that they have anything to do with coordinate transformations.
This is over my head; can someone explain it to me better? http://mathworld.wolfram.com/Reflection.html
I'm making a 2D breakout fighting game, so I need the ball to be able to reflect when it hits a wall, paddle, or enemy (or an enemy hits it).
All their formulas are like: x_1^'-x_0=v-2(v·n^^)n^^.
And I can't follow that. (What does ' mean, or x_0, or ^^?)
The formula for reflection is easier to understand if you think about the geometric meaning of the "dot product" operation.
The dot product between two 3d vectors is mathematically defined as
<a, b> = ax*bx + ay*by + az*bz
but it has a nice geometric interpretation
The dot product between a and b is the length
of the projection of a onto b, taken with
a negative sign if the two vectors point in
opposite directions, multiplied by the length of b.
Something that is immediately obvious using this definition, but not evident if you only look at the formula, is for example that the dot product of two vectors doesn't change if the coordinate system is rotated, that the dot product of two perpendicular vectors is 0 (the length of the projection is clearly zero in this case), or that the dot product of a vector with itself is the square of its length.
Something that is instead less obvious using the geometric interpretation is that the dot product is commutative, i.e. that <a, b> = <b, a> (a fact that is clear from the formula).
An important point to consider is also that if the length of b is 1 then the dot product <a, b> is simply the length of the projection of a over b (taken with the proper sign).
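A quick numeric check of that interpretation in Python (the vectors are arbitrary examples of mine):
import math

a = (2.0, 3.0, 0.0)
b = (4.0, 0.0, 0.0)
dot = sum(ai * bi for ai, bi in zip(a, b))        # <a, b> = 8.0
proj = dot / math.sqrt(sum(bi * bi for bi in b))  # projection of a onto b: 2.0 (the x component of a)
print(dot, proj)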
Given this interpretation the formula for computing the reflection over a plane is quite easy to understand:
To compute the reflected vector r, given a vector a and a plane with normal n you just need to use the formula:
r = a - 2<a, n> n
the height h (the length of the projection of a onto n) is in this case just <a, n> (note that n is assumed to be of unit length), and so it should be clear that you need to move twice that height in the direction of the normal.
If you consider the proper dot product signs you should see that the formula applies also when the incident vector a and the plane normal n are facing in the same direction.
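A minimal 2D sketch of that formula in Python, assuming the surface normal n is already a unit vector (the function name reflect is mine):
def reflect(a, n):
    # r = a - 2 <a, n> n, with n of unit length
    d = a[0] * n[0] + a[1] * n[1]                        # <a, n>
    return (a[0] - 2 * d * n[0], a[1] - 2 * d * n[1])

# e.g. a ball moving down-right bouncing off a horizontal wall whose normal points up:
print(reflect((3.0, -2.0), (0.0, 1.0)))                  # (3.0, 2.0)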
The prime (') indicates the second form of a number/point/structure. In this case, x₁' refers to the reflected form of x₁.
The subscript (0) shows various states of the same. In this case, x₀ is the point of reflection.
The caret notation (^) shows that something is a unit vector. In this case, n̂ is the unit normal vector.
Is this just about the equation formatting? Because I see nicely formatted equations, not the LaTeX-style markup appearing in your question. So step 1: try viewing the page in a different web browser and see if it looks clearer.
More substantively, I'd recommend a different kind of resource. Fundamentally, you're looking at collisions, which are normally better treated in a physics text than a math text. Any introductory physics textbook will have a chapter on collisions, which should be directly applicable to your game.
In C the atan2 function has the following signature:
double atan2( double y, double x );
Other languages do this as well. This is the only function I know of that takes its arguments in Y,X order rather than X,Y order, and it screws me up regularly because when I think coordinates, I think (X,Y).
Does anyone know why atan2's argument order convention is this way?
Because I believe it is related to arctan(y/x), so y appears on top.
Here's a nice link talking about it a bit: Angles and Directions
My assumption has always been that this is because of the trig definition, i.e. that
tan(theta) = opposite / adjacent
When working with the canonical angle from the origin, opposite is always Y and adjacent is always X, so:
atan2(opposite, adjacent) = theta
I.e., it was done that way so there's no ordering confusion with respect to the mathematical definition.
Suppose a right triangle with its opposite side called y and adjacent side called x:
tan(angle) = y/x
arctan(tan(angle)) = arctan(y/x)
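A quick Python check of that convention (math.atan2 takes y first, just like C's atan2):
import math

x, y = 3.0, 4.0
print(math.atan2(y, x))       # angle of the point (3, 4) measured from the +x axis
print(math.atan(y / x))       # same value here, but only because x > 0
print(math.atan2(1.0, -1.0))  # 3*pi/4: atan2 resolves the quadrant, atan(y/x) alone cannot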
It's because in school, the mnemonic for calculating the gradient is rise over run, or in other words dy/dx, or more briefly y/x. And this order has snuck into the arguments of arctangent functions. So it's a historical artefact. For me it depends on what I'm thinking about when I use atan2. If I'm thinking about differentials, I get it right, and if I'm thinking about coordinate pairs, I get it wrong.
The order is ATAN2(X,Y) in Excel, so I think the reverse order is a programming thing. atan(Y/X) can easily be changed to atan2(Y,X) by putting a '2' between the 'n' and the '(', and replacing the '/' with a ',' - only 2 operations. The opposite order would take 4 operations, and some of the operations would be more complex (cut and paste).
I often work out my math in Excel then port it to .NET, so will get hung up on atan2 sometimes. It would be best if atan2 could be standardized one way or the other.
It would be more convenient if atan2 had its arguments reversed. Then you wouldn't need to worry about flipping the arguments when computing polar angles. The Mathematica equivalent does just that: https://reference.wolfram.com/language/ref/ArcTan.html
Way back in the dawn of time, FORTRAN had an ATAN2 function with the less convenient argument order that, in this reference manual, is (somewhat inaccurately) described as arctan(arg1 / arg2).
It is plausible that the initial creator was fixated on atan2(arg1, arg2) being (more or less) arctan(arg1 / arg2), and that the decision was blindly copied from FORTRAN to C to C++ and Python and Java and JavaScript.