How to make Sage simplify the expression $(x^y)^y$ to $x^{y \cdot y}$ where both $x$ and $y$ are variables - sage

See the title. I just want it to do a symbolic manipulation. I have tried "simplify", "expand", "canonicalize_radical" and several variants of them.
I also went through all the functions listed in the Sage documentation and tried every one that seemed relevant. None of them seems to change (x^y)^y to x^(y*y).
Can I somehow tell Sage that x and y are "positive integers", so that the various worries about branches of the roots are irrelevant here?
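For reference, a minimal Sage sketch of declaring such assumptions; whether canonicalize_radical() then produces exactly x^(y*y) may depend on your Sage/Maxima version, so treat it as a starting point rather than a guaranteed recipe:
# Declare the variables and the assumptions that remove branch-cut worries.
x, y = var('x y')
assume(x > 0)
assume(y, 'integer')
assume(y > 0)
expr = (x^y)^y
print(expr.canonicalize_radical())   # hoped-for output: x^(y^2), i.e. x^(y*y)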

Related

Julia and Bessel functions

I am interested in Julia and would like to understand a couple of things before I dive into it. I would like to have a look at some working code which calculates this expression.
In that expression everything is a constant except for the Bessel functions, of course. The number n is an integer and "e" is an eccentricity (ranging from 0 to, say, 0.999).
For a given value of n I would like to derive h_c,n. E.g. if n = 2, then h_c,2.
No, I am not tricking you into coding for me.
I am used to working with shell scripts, bc, and plotting with gnuplot. I would like to have something more flexible than all of this, and this would be a good example to start looking at Julia. Thanks!
For the best tutorial on doing equations/mathematics in Julia, have a look at https://github.com/mossr/BeautifulAlgorithms.jl
This will give you an excellent overview along with an initial feel for the language.

In how many functions maxterm/minterm of n binary variables can be expressed

I am confused. Semantically we can construct 2^(2^n) Boolean functions, but I read in Digital Electronics by Morris Mano that we can construct 2^(2n) combinations of minterms/maxterms. How?
Samsamp, could you point to a specific place in the book, or even better provide an exact quote? In the copy I found on the Internet I was not able to find such a claim after a quick glance. The closest thing I found is:
Since the function can be either 1 or 0 for each minterm, and since there are 2^n minterms, one can calculate the possible functions that can be formed with n variables to be 2^(2^n).
which looks OK to me.
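As a sanity check (plain Python, not from the book), a brute-force sketch that enumerates every truth table on n variables and confirms the 2^(2^n) count:
from itertools import product

def count_boolean_functions(n):
    # Each of the 2^n minterms (input rows) can independently be mapped to 0 or 1,
    # so enumerating all output assignments enumerates all Boolean functions.
    rows = list(product([0, 1], repeat=n))
    return sum(1 for _ in product([0, 1], repeat=len(rows)))

for n in range(1, 4):
    print(n, count_boolean_functions(n), 2**(2**n))   # prints 1 4 4, 2 16 16, 3 256 256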

Wreath Product Of Groups In Sagemath

Can anyone help me with taking Wreath Products of Groups in Sagemath?
I haven't been able to find an online reference, and it doesn't appear to be built in, as far as I can tell.
As far as I know, you would have to use GAP to compute them within Sage (and then can manipulate them from within Sage as well). See e.g. this discussion from 2012. This question has information about it, here is the documentation, and here it is within Sage:
F = AbelianGroup(3, [2]*3)                       # (Z/2)^3 as an abelian group
G = PermutationGroup([[(1,2,3),(4,5)], [(3,4)]])
Gp = gap.StandardWreathProduct(gap(F), gap(G))   # computed on the GAP side
print(Gp)
However, if you try to get this back into Sage, you will get a NotImplementedError, because Sage doesn't understand what GAP returns in this wacky case (which I hope is even legitimate). Presumably, if a recognized group were returned, one could eventually get it back into Sage for further processing. In this case, you might be better off doing your computations in GAP and then putting the results back into Sage after all your group work is done (which isn't always the case).
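For instance, a small sketch of staying on the GAP side and only pulling simple values back into Sage (I haven't verified this against every Sage version, so take the .sage() conversion at the end as an assumption):
# Build the wreath product in GAP and interrogate it through Sage's GAP
# interface, without converting the group object itself back to Sage.
F = AbelianGroup(3, [2]*3)
G = PermutationGroup([[(1,2,3),(4,5)], [(3,4)]])
W = gap.StandardWreathProduct(gap(F), gap(G))
print(W.Size())          # order of the wreath product, computed by GAP
print(W.IsAbelian())     # any GAP property test works the same way
order = W.Size().sage()  # plain values (integers, booleans) come back fine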

Recursive hypothesis-building with ambiguites - what's it called?

There's a problem I've encountered a lot (in the broad fields of data analysis or AI). However, I can't name it, probably because I don't have a formal CS background. Please bear with me, I'll give two examples:
Imagine natural language parsing:
The flower eats the cow.
You have a program that takes each word, and determines its type and the relations between them. There are two ways to interpret this sentence:
1) flower (subject) -- eats (verb) --> cow (object)
using the usual SVO word order, or
2) cow (subject) -- eats (verb) --> flower (object)
using a more poetic word order. The program would rule out other possibilities, e.g. "flower" as a verb, since it follows "the". It would then rank the remaining possibilities: 1) has a more natural word order than 2), so it gets more points. But factoring in the world knowledge that flowers can't eat cows, 2) still wins. So it might return both hypotheses, giving 1) a score of 30 and 2) a score of 70.
Then, it remembers both hypotheses and continues parsing the text, branching off. One branch assumes 1), one 2). If a branch reaches a contradiction, or a ranking of ~0, it is discarded. In the end it presents ranked hypotheses again, but for the whole text.
For a different example, imagine optical character recognition:
** **
** ** *****
** *******
******* **
* ** **
** **
I could look at the strokes and say, sure this is an "H". After identifying the H, I notice there are smudges around it, and give it a slightly poorer score.
Alternatively, I could run my smudge recognition first, and notice that the horizontal line looks like an artifact. After removal, I recognize that this is ll or Il, and give it some ranking.
After processing the whole image, it can be Hlumination, lllumination or Illumination. Using a dictionary and the total ranking, I decide that it's the last one.
The general problem is always some kind of parsing / understanding. Examples:
Natural languages or ambiguous languages
OCR
Path finding
Dealing with ambiguous or incomplete user input - which interpretations make sense, which is the most plausible?
It's recursive.
It can bail out early (when a branch / interpretation doesn't make sense, or will certainly end up with a score of 0). So it's probably some kind of backtracking.
It keeps all options in mind in light of ambiguities.
It's based on simple rules at the bottom, e.g. can_eat(cow, flower) = true.
It keeps a plausibility ranking of interpretations.
It's recursive on a meta level: It can fork / branch off into different 'worlds' where it assumes different hypotheses when dealing with the next part of data.
It'll forward the individual rankings, probably using Bayesian probability, to dependent hypotheses.
In practice, there will be methods to train this thing, determine ranking coefficients, and there will be cutoffs if the tree becomes too big.
I have no clue what this is called. One might guess 'decision tree' or 'recursive descent', but I know those terms mean different things.
I know Prolog can solve simple cases of this, like genealogies and finding out who is whose uncle. But you have to give all the data in code, and it doesn't seem convenient or powerful enough to do this for my real-life cases.
I'd like to know: what is this problem called, and are there common strategies for dealing with it? Is there good literature on the topic? Are there libraries, ideally for C(++) or Python, where you can just define a bunch of rules and it works out all the rankings and hypotheses?
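To make the kind of machinery I have in mind concrete, here is a rough Python sketch with toy rules and made-up scores (nothing more than an illustration of the branching and pruning described above):
# Hypotheses branch as new data arrives, scores multiply, and implausible
# branches are pruned early. All rules and weights here are invented.
CAN_EAT = {("cow", "flower"): True, ("flower", "cow"): False}

def score_parse(subject, obj):
    # Rank one interpretation of "The X eats the Y." (toy weights).
    word_order = 0.7 if subject == "flower" else 0.3   # SVO reading preferred
    plausible = 0.9 if CAN_EAT.get((subject, obj)) else 0.1
    return word_order * plausible

def extend(hypotheses, interpretations, cutoff=0.01, beam=10):
    # Branch every live hypothesis with every interpretation of the next
    # sentence, multiply the scores, discard hopeless branches, keep the best.
    branched = [
        (score * score_parse(subj, obj), history + [(subj, "eats", obj)])
        for score, history in hypotheses
        for subj, obj in interpretations
    ]
    branched = [h for h in branched if h[0] > cutoff]   # bail out early
    return sorted(branched, reverse=True)[:beam]        # ranked hypotheses

hypotheses = [(1.0, [])]                                # start with an empty parse
hypotheses = extend(hypotheses, [("flower", "cow"), ("cow", "flower")])
for score, history in hypotheses:
    print(round(score, 3), history)                     # cow-eats-flower wins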
I don't think there is one answer that fits all the bullet points you have. But I hope my links will lead you closer to an answer or might give you a different question.
I think the closest answer is a Bayesian network, since, as I understand it, you have probabilities affecting each other. It is also related to conditional probability and fuzzy logic.
You also describe a bit of genetic programming, as well as artificial neural networks.
I can name drop some more topics which might be related:
http://en.wikipedia.org/wiki/Rule-based_programming
http://en.wikipedia.org/wiki/Expert_system
http://en.wikipedia.org/wiki/Knowledge_engineering
http://en.wikipedia.org/wiki/Fuzzy_system
http://en.wikipedia.org/wiki/Bayesian_inference

Function point to kloc ratio as a software metric... the "Name That Tune" metric?

What do you think of using a metric of function point to lines of code as a metric?
It makes me think of the old game show "Name That Tune". "I can name that tune in three notes!" I can write that functionality in 0.1 klocs! Is this useful?
It would certainly seem to promote library usage, but is that what you want?
I think it's a terrible idea. Just as bad as paying programmers by lines of code that they write.
In general, I prefer concise code over verbose code, but only as long as it still expresses the programmers' intention clearly. Maximizing function points per kloc is going to encourage everyone to write their code as briefly as they possibly can, which goes beyond concise and into cryptic. It will also encourage people to join adjacent lines of code into one line, even if said joining would not otherwise be desirable, just to reduce the number of lines of code. The maximum allowed line length would also become an issue.
KLOC is tolerable if you strictly enforce code standards, kind of like using page requirements for a report: no putting five statements on a single line or removing most of the whitespace from your code.
I guess one way you could decide how effective it is for your environment is to look at several different applications and modules, get a rough estimate of the quality of the code, and compare that to the size of the code. If you can demonstrate that code quality is consistent within your organization, then KLOC isn't a bad metric.
In some ways, you'll face the same battle with any similar metric. If you count feature or function points, or simply features or modules, you'll still want to weight them in some fashion. Ultimately, you'll need some sort of subjective supplement to the objective data you'll collect.
"What do you think of using a metric of function point to lines of code as a metric?"
I don't get the question. The above ratio is -- for a given language and team -- a simple statistical fact. And it tends toward a mean value with a small standard deviation.
There are lots of degrees of freedom: how you count function points, what language you're using, how (collectively) clever the team is. If you don't change those things, the value stays steady.
After a few projects together, you have a solid expectation that 1200 function points will be 12,000 lines of code in your preferred language/framework/team organization.
KSloc / FP is a bare statistical observation. Clearly, there's something else about this that's bothering you. Could you be more specific in your question?
The metric of Function Points to Lines of Code is actually used to generate the language level charts (actually, it is Function Points to Statements) to give an approximate sense of how powerful a programming language is. Here is an example: http://web.cecs.pdx.edu/~timm/dm/functionpoints.html
I wouldn't recommend using that ratio for anything else, except high level approximations like the language level chart.
Promoting library usage is a good thing, but the other thing to keep in mind is you will lose in the ratio when you are building the libraries, and will only pay it off with dividends of savings over time. Bean-counters won't understand that.
I personally would like to see a function point to ABC metric ratio, as I am curious how the ABC metric (which indicates size and includes complexity as part of the info) would relate -- perhaps linearly, perhaps exponentially, etc.: www.softwarerenovation.com/ABCMetric.pdf
All metrics suck. My theory has always been that if you have to have them, then use the easiest thing you can to gather them and be done with it and onto important things.
That generally means something along the lines of
grep -c ";" *.h *.cpp | awk -F: '/:/ {x += $2} END {print x}'
If you are looking for a "metric" to track code efficiency, don't. If you insist, again try something stupid but easy, like source file size (see the grep command above, w/o the awk pipe) or McCabe (with a counter program).
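If you'd rather not depend on the shell, the same quick-and-dirty count fits in a few lines of Python (counting semicolons as a rough stand-in for statements, in the same spirit as the grep pipeline above):
# Crude statement count for C/C++ sources: tally semicolons across files.
# Deliberately dumb -- it is only meant as a ballpark number.
import glob

total = 0
for pattern in ("*.h", "*.cpp"):
    for path in glob.glob(pattern):
        with open(path, errors="ignore") as f:
            total += f.read().count(";")
print(total)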
