I'm learning about the arctangent implementation for the TMS320C55x.
This is the source code:
;* AR0 assigned to _x
;* AR1 assigned to _r
;* T0 assigned to _nx
PSH T3
|| BSET FRCT ;fractional mode
SUB #1, T0 ;nx-1
MOV T0, BRC0 ;repeat nx times
MOV #2596 << #16, AC3 ; AC3.Hi = C5
MOV #-9464 << #16, AC1 ; AC1.Hi = C3
MOV #32617 << #16, AC2 ; AC2.Hi = C1
*
* Note: loading T3 on the instruction before a multiply that uses it will
* cause a 1-cycle delay.
*
MPYMR T3=*AR0+, AC3, AC0 ; (Prime the Pump)
|| RPTBLOCAL loop1-1
MACR AC0, T3, AC1, AC0
MPYR T3, AC0
||MOV *AR0+, T1 ; (for next iteration)
MACR AC0, T3, AC2, AC0
MPYR T3, AC0
||MOV T1, T3
MOV HI(AC0), *AR1+ ;save result
||MPYR T1, AC3, AC0 ; (for next iteration)
loop1:
POP T3
|| BCLR FRCT ;return to standard C
MOV #0, T0 ;return OK value (no possible error)
|| RET
where _x is the input vector and _r is the output; nx is the number of elements.
The question is about the constants assigned to AC3, AC1, AC2. I guess they are coefficients for a polynomial approximation, but I don't understand how to calculate them.
I do not follow the assembly code, but I could guess where those magic coefficients come from.
The code comments suggest C1, C3, C5 are coefficients of a polynomial approximation, and arctan is an odd function, so its Taylor expansion around 0 has indeed only odd powers of x. Comparing C1 = 32617 to 1 in the Taylor expansion y = x - 1/3 x^3 + 1/5 x^5 - 1/7 x^7 + ..., and given the computational context, this further suggests that the result of the calculation is scaled by 2^15 = 32768.
It turns out that y = (32617 x - 9464 x^3 + 2596 x^5) / 32768 is in fact a pretty good approximation of arctan(x) over the interval [-1, 1]. As shown below (verified in Wolfram Alpha), the largest absolute error of the approximation is less than 1/1000, and is negligible at the endpoints x = ±1 corresponding to y = ±π/4, which is probably desirable in graphics calculations.
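If you want to double-check that bound numerically, here is a small sketch (plain Python with NumPy; the coefficients are simply plugged in as above):

import numpy as np

# Compare y = (32617 x - 9464 x^3 + 2596 x^5) / 32768 against arctan(x) on [-1, 1].
x = np.linspace(-1.0, 1.0, 100001)
approx = (32617*x - 9464*x**3 + 2596*x**5) / 32768.0
err = np.abs(approx - np.arctan(x))
print(err.max())                  # largest absolute error, expected below 1e-3
print(abs(approx[-1] - np.pi/4))  # error at the endpoint x = 1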
As to how the coefficients were actually derived, a crude polynomial best fit using just 9 control points gives y = 32613 x - 9443 x^3 + 2573 x^5, with coefficients already close to the ones used in the posted code. More control points and/or additional conditions to minimize the error at the endpoints would result in slightly different coefficients, but it's hard to match the ones in the code exactly without documentation or clues about the optimization criteria actually used there.
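For illustration, that kind of crude fit can be reproduced along these lines (a sketch only: the control points and the fitting criterion actually used are unknown, so the numbers will only be in the same ballpark):

import numpy as np

# Least-squares fit of 2^15 * arctan(x) on [-1, 1] using only the odd powers x, x^3, x^5.
x = np.linspace(-1.0, 1.0, 9)            # 9 control points, a guess at "crude"
A = np.column_stack([x, x**3, x**5])
y = 32768.0 * np.arctan(x)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coeffs))                  # coefficients in the ballpark of 32613, -9443, 2573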
I fully understand how the RSA algorithm works, but now I am trying to reason about the formula. I want to know:
Why do the public key e and the private key d in RSA encryption have to satisfy the equation ed = 1 mod (p − 1)(q − 1)?
Is it because of the standard modular arithmetic rule where 1 mod anything is 1, or is there more to it than that?
Say you have a message called x and you want to encrypt it with your public key (pq, e). If you encrypt x, you get x^e mod pq. Someone who knows d can get x^(ed) mod pq.
Because ed = 1 mod (p - 1)(q - 1), we can write ed = 1 + k(p - 1)(q - 1) for some integer k, so x^(ed) = x * (x^(p - 1))^(k(q - 1)). By Fermat's little theorem, x^(p - 1) = 1 mod p (when p does not divide x; the case p | x is trivial), so x^(ed) = x mod p, and likewise x^(ed) = x mod q. Combining the two gives x^(ed) mod pq = x, thus decrypting the message. If ed != 1 mod (p - 1)(q - 1), raising to the power d would not, in general, recover x.
A link to Fermat's little theorem:
https://en.wikipedia.org/wiki/Fermat%27s_little_theorem
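If it helps, here is a tiny round trip with small toy primes (obviously not secure parameters), sketched in Python, showing the congruence doing its job:

# Toy RSA round trip (Python 3.8+ for the modular inverse via pow).
p, q = 61, 53
n = p * q                  # public modulus pq
phi = (p - 1) * (q - 1)    # (p - 1)(q - 1)
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: e*d = 1 mod phi

x = 65                     # message, with 0 <= x < n
c = pow(x, e, n)           # encrypt: x^e mod pq
print(pow(c, d, n) == x)   # decrypt: x^(ed) mod pq == x  ->  True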
You know how flatmap takes a sequence of items and converts each one into a new subsequence, aggregating all the subsequences:
[A, B, C] -> [A1, A2, B1, B2, B3, C1]
Is there a name for the transform which does the opposite? Something like:
[A1, A2, B1, B2, B3, C1] -> [A, B, C]
The specific example that got me thinking about this was doing evaluation of mathematical expressions:
1 * 2 + 3 * 4 + 5 + 6 * 7 * 8
-> 2 + 12 + 5 + 336
-> 355
Individually, the evaluation of 6 * 7 * 8 seems like a classic reduce step, while deciding which blocks need to be reduced would require repeated takeWhile steps.
I know how to do this in the classic iterative way, keeping track of indices and all that, and I've found nice functional replacements for most iterative patterns. Is there a name for a single operation that does this, or a simple set of operations that can be composed to create this effect?
I think the opposite of flatmap is groupby.
$ python3
>>> from itertools import groupby
>>> groups = groupby(['A1', 'A2', 'B1', 'B2', 'B3', 'C1'], lambda x: x[0])
>>> [key for key, _ in groups]
['A', 'B', 'C']
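Applied to the arithmetic example, the same idea composes with reduce; here is a sketch that assumes a flat token list containing only numbers, '+' and '*' (no parentheses):

>>> from itertools import groupby
>>> from functools import reduce
>>> import operator
>>> tokens = [1, '*', 2, '+', 3, '*', 4, '+', 5, '+', 6, '*', 7, '*', 8]
>>> # split the stream at '+' with groupby, then reduce each chunk with '*'
>>> chunks = [list(g) for is_plus, g in groupby(tokens, lambda t: t == '+') if not is_plus]
>>> products = [reduce(operator.mul, (t for t in c if t != '*')) for c in chunks]
>>> products
[2, 12, 5, 336]
>>> sum(products)
355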
To address the question, we start with the following toy problem, which serves here just as a case study:
Given two circles in the plane (their centers c1 and c2 and radii r1 and r2) as well as a positive number r3, find all circles of radius r3 (i.e. all points c3 that are centers of circles of radius r3) tangent (externally or internally) to the two given circles.
In general, depending on Circle[c1, r1], Circle[c2, r2] and r3, there are 0, 1, 2, ..., 8 possible solutions. A typical case with 8 solutions:
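For reference, the count of at most 8 follows directly from the tangency conditions that the code below encodes:

Norm[{x, y} - c1]^2 == (r3 + s1 r1)^2,  Norm[{x, y} - c2]^2 == (r3 + s2 r2)^2,  with s1, s2 in {-1, +1}.

Each of the 4 sign choices (s1, s2) puts the center c3 = {x, y} on the intersection of two fixed circles (the loci of admissible centers), which contains at most 2 points, hence at most 4 x 2 = 8 centers; squaring also absorbs the sign of r3 - ri, so both external and internal tangency are covered. This is exactly the (k, l) table in the Table/Solve call below.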
I slightly modified a neat Mathematica implementation by Jaime Rangel-Mondragon from the Wolfram Demonstrations Project, but its core is similar:
Manipulate[{c1, a, c2, b} = pts;
{r1, r2} = Map[Norm, {a - c1, b - c2}];
w = Table[
Solve[{radius[{x, y} - c1]^2 == (r + k r1)^2,
radius[{x, y} - c2]^2 == (r + l r2)^2}
] // Quiet,
{k, -1, 1, 2}, {l, -1, 1, 2}
];
w = Select[
Cases[Flatten[{{x, y}, r} /. w, 2],
{{_Real, _Real}, _Real}
],
Last[#] > 0 &
];
Graphics[
{{Opacity[0.35], EdgeForm[Thin], Gray,
Disk[c1, r1], Disk[c2, r2]},
{EdgeForm[Thick], Darker[Blue,.5],
Circle[First[#], Last[#]]& /@ w}
},
PlotRange -> 8, ImageSize -> {915, 915}
],
"None" -> {{pts, {{-3, 0}, {1, 0}, {3, 0}, {7, 0}}},
{-8, -8}, {8, 8}, Locator},
{{r, 0.3, "r3"}, 0, 8},
TrackedSymbols -> True,
Initialization :> (radius[z_] := Sqrt[z.z])
]
We can easily conclude that in the generic case we have an even number of solutions (0, 2, 4, 6, 8), while cases with an odd number of solutions (1, 3, 5, 7) are exceptional: they are of measure zero in terms of the control ranges. Thus, changing c1, r1, c2, r2, r3 in the Manipulate, one can observe that it is much more difficult to track the cases with an odd number of circles.
One could modify the above approach on a basic level: solving the equations for c3 purely symbolically, as well as redesigning the Manipulate structure with an emphasis on the changing number of solutions. If I'm not wrong, Solve can work only numerically with Locator in Manipulate; however, here Locator seems crucial both for the simplicity of controlling c1, r1, c2, r2 and for the whole implementation.
Let's state the questions:
1. How can we force Manipulate to track seamlessly the cases with an odd number of solutions (circles)?
2. Is there any way to make Solve find exact solutions of the underlying equations?
(I find the answer by Daniel Lichtblau to be the best approach to question 2, but it seems that in this instance there is still an essential need to sketch a general technique for emphasizing measure-zero sets of solutions while working with Manipulate.)
These considerations are of less importance when dealing with exact solutions.
For example, Solve[x^2 - 3 == 0, x] yields {{x -> -Sqrt[3]}, {x -> Sqrt[3]}},
while in the case of the slightly more difficult equations extracted from the Manipulate above, setting the following arguments:
c1 = {-Sqrt[3], 0}; a = {1, 0}; c2 = {6 - Sqrt[3], 0}; b = {7, 0};
{r1, r2} = Map[ Norm, {a - c1, b - c2 }];
r = 2.0 - Sqrt[3];
to:
w = Table[Solve[{radius[{x, y} - c1]^2 == (r + k r1)^2,
      radius[{x, y} - c2]^2 == (r + l r2)^2}],
{k, -1, 1, 2}, {l, -1, 1, 2}];
w = Select[ Cases[ Flatten[ {{x, y}, r} /. w, 2], {{_Real, _Real}, _Real}],
Last[#] > 0 &]
we get two solutions:
{{{1.26795, -3.38871*10^-8}, 0.267949}, {{1.26795, 3.38871*10^-8}, 0.267949}}
similarly, under the same arguments and equations, putting:
r = 2 - Sqrt[3];
we get no solutions: {}
but in fact there is exactly one solution, which we would like to emphasize:
{ {3 - Sqrt[3], 0 }, 2 - Sqrt[3] }
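As a quick check that this really is a solution, the tangency conditions can be verified by hand:

Norm[{3 - Sqrt[3], 0} - c1] = (3 - Sqrt[3]) - (-Sqrt[3]) = 3 = (2 - Sqrt[3]) + (1 + Sqrt[3]) = r3 + r1,
Norm[{3 - Sqrt[3], 0} - c2] = (6 - Sqrt[3]) - (3 - Sqrt[3]) = 3 = (2 - Sqrt[3]) + (1 + Sqrt[3]) = r3 + r2,

so the circle of radius 2 - Sqrt[3] centered at {3 - Sqrt[3], 0} is externally tangent to both given circles; the distance between c1 and c2 is 6 = r1 + 2 r3 + r2, so it fits exactly in the gap between them.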
In fact, when passed to Graphics, such a small difference between the two nearby solutions and the unique one is indistinguishable; however, working with Manipulate we cannot track the merging of the two circles with the desired accuracy, and usually the last configuration observed when lowering r3, just before all solutions vanish (reminiscent of so-called structural instability), looks like this:
Manipulate is a rather powerful tool, not only a toy, and mastering it can be very useful. The issues considered here are frequently critical when they appear in serious research, for example in studying solutions of nonlinear differential equations, the occurrence of singularities in their solutions, the qualitative behavior of dynamical systems, bifurcations, phenomena in catastrophe theory, and so on.
As this is a measure zero set, tools that require some granularity will generally have trouble with the concept. Perhaps better is to look for the singularity locus explicitly, where solutions have multiplicity or in other ways depart from the nearby solution behavior(s). It will be a part of the discriminant variety. In particular, you can grab the relevant part by setting your defining polynomials to zero and simultaneously making the Jacobian determinant zero.
Here is your example. I will eventually (wlog) put one center at the origin and the other at (1,0).
centers = Array[c, {2, 2}];
radii = Array[r, 3];
circ[cen_, rad_, x_, y_] := ({x, y} - cen).({x, y} - cen) - rad^2
I'll use your 'k' for both polynomials. Your formulation has pairs (k,l) where each is +-1. We can just use k, arrange by squaring to get a polynomial in k^2, and replace that with 1.
polys =
Table[Expand[
circ[centers[[j]], radii[[3]] + k*radii[[j]], x, y]], {j, 2}]
Out[18]= {x^2 + y^2 - 2 x c[1, 1] + c[1, 1]^2 - 2 y c[1, 2] +
c[1, 2]^2 - k^2 r[1]^2 - 2 k r[1] r[3] - r[3]^2,
x^2 + y^2 - 2 x c[2, 1] + c[2, 1]^2 - 2 y c[2, 2] + c[2, 2]^2 -
k^2 r[2]^2 - 2 k r[2] r[3] - r[3]^2}
We'll remove the part that is linear in k, square the rest, square that removed part, and equate the two. We also then replace k with unity.
p2 = polys - k*Coefficient[polys, k];
polys2 = Expand[p2^2 - (k*Coefficient[polys, k])^2] /. k -> 1;
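To spell out that step in formulas: since k = ±1, the k^2 terms are constants (k^2 = 1), and each polynomial splits into an even part plus an odd part in k,

p = E(x, y) + k B(x, y),  where B = Coefficient[polys, k],

so the two sign choices are combined by multiplying them:

p(+1) p(-1) = (E + B)(E - B) = E^2 - B^2,

which is exactly what polys2 holds.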
We now get the determinant of the Jacobian and add that to the brew.
discrim = Det[D[polys2, #] & /@ {x, y}];
allrelations = Join[polys2, {discrim}];
Now set the centers as noted earlier (could have done this from the beginning, one would suppose).
ar2 =
allrelations /. {c[1, 1] -> 0, c[1, 2] -> 0, c[2, 1] -> 0,
c[2, 2] -> 0}
Out[38]= {x^4 + 2 x^2 y^2 + y^4 - 2 x^2 r[1]^2 - 2 y^2 r[1]^2 +
r[1]^4 - 2 x^2 r[3]^2 - 2 y^2 r[3]^2 - 2 r[1]^2 r[3]^2 + r[3]^4,
x^4 + 2 x^2 y^2 + y^4 - 2 x^2 r[2]^2 - 2 y^2 r[2]^2 + r[2]^4 -
2 x^2 r[3]^2 - 2 y^2 r[3]^2 - 2 r[2]^2 r[3]^2 + r[3]^4, 0}
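A side note on that trailing 0: after this substitution both polynomials depend on x and y only through x^2 + y^2; for instance the first one is

(x^2 + y^2)^2 - 2 (x^2 + y^2) (r[1]^2 + r[3]^2) + (r[1]^2 - r[3]^2)^2,

so their gradients are everywhere parallel and the Jacobian determinant (the third entry) vanishes identically here.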
We now eliminate x and y to get the locus in r[1],r[2],r[3] parameter space that determines where we'll have multiplicity in our solutions.
gb = GroebnerBasis[ar2, radii, {x, y},
MonomialOrder -> EliminationOrder]
{r[1]^6 - 3 r[1]^4 r[2]^2 + 3 r[1]^2 r[2]^4 - r[2]^6 -
8 r[1]^4 r[3]^2 + 8 r[2]^4 r[3]^2 + 16 r[1]^2 r[3]^4 -
16 r[2]^2 r[3]^4}
If I did this all correctly, then we now have the polynomial defining the locus in parameter space where solution sets can get silly. Off this set they should never have multiplicity, and real counts should always be even. The intersection of this set with real space will be a 2d surface in the 3d space of the radii parameters. It will separate regions that have 0, 2, 4, 6, or 8 real solutions from one another.
Last, I'll point out that in this example the variety in question reduces nicely into a product of planes. I guess from a geometric view this is not too surprising.
Factor[gb[[1]]]
Out[43]= (r[1] - r[2]) (r[1] + r[2]) (r[1] - r[2] - 2 r[3]) (r[1] +
r[2] - 2 r[3]) (r[1] - r[2] + 2 r[3]) (r[1] + r[2] + 2 r[3])