Would this be considered a fractal? [closed]

I was wondering whether this would be considered a fractal or just a recursive shape. It seems like a fractal to me, but our lab says, "However, it is not just enough that the shape was generated by some recursive process, because there are shapes you could generate recursively which are not fractals," and then explains further. I just wanted to make sure.
Thank you very much!

No, it is not a fractal, because it doesn't demonstrate self-similarity. See: http://mathworld.wolfram.com/Fractal.html

A fractal has recursive properties, but not all recursive figures are fractals.
Here's my rule of thumb for deciding whether a shape is a fractal:
Zoom into the object by a factor of X (say).
Count how many copies of the original object are in the zoomed-in version, let's call it N.
The dimension of the object is the logarithm of N, to the base X.
E.g., zoom into a square by a factor of 2 and you'll have 4 copies of the square that "fit inside" the larger square. Since log 4 (base 2) is 2, this is a 2D object.
Look at the Koch curve:
Zooming in 3x will give you 4 copies of the original curve, hence its "dimension" is log 4 (base 3), which is a number between 1 and 2... a fractional dimension (hence the name fractal).
Applying this rule to your recursive figure, if you zoom in 2x, you will still see the original figure (N = 1). Its dimension works out to be log 1 (base 2), which is zero.
Since zero is not a fraction, your figure is not a fractal.
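A tiny R sketch of this rule of thumb (the N and X values are the ones from the examples above):

    # Similarity dimension from the rule of thumb: zooming by a factor X
    # yields N copies of the original shape, and the dimension is log_X(N).
    similarity_dim <- function(N, X) log(N) / log(X)

    similarity_dim(4, 2)  # square: 2 (an integer, so not a fractal)
    similarity_dim(4, 3)  # Koch curve: ~1.26 (fractional, so a fractal)
    similarity_dim(1, 2)  # the figure in the question, per this answer: 0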

predict.lars command for lasso regression: what are the "s" and "p" parameters? [closed]

In help(predict.lars) we can read that the parameter s is "a value, or vector of values, indexing the path. Its values depends on the mode= argument. By default (mode="step"), s should take on values between 0 and p (e.g., a step of 1.3 means .3 of the way between step 1 and 2.)"
What does "indexing the path" mean? Also, s must take a value between 1 and p, but what is p? The parameter p is not mentioned elsewhere in the help file.
I know this is basic, but there is not a single question up on SO about predict.lars.
It is easiest to use the mode="norm" option. In this case, s should just be your L1-regularization coefficient (\lambda).
To understand mode=step, you need to know a little more about the LARS algorithm.
One problem that LARS can solve is the L1-regularized regression problem: min_w ||y - Xw||^2 + \lambda ||w||_1, where y is the vector of outputs, X is the matrix of input vectors, and w is the vector of regression weights.
A simplified explanation of how LARS works is that it greedily builds a solution to this problem by adding or removing dimensions from the regression weight vector.
Each of these greedy steps can be interpreted as a solution to a L1 regularized problem with decreasing values of \lambda. The sequence of these steps is known as the path.
So, given the LARS path, to get the solution for a user-supplied \lambda, you iterate along the path until the next step's \lambda is less than the input \lambda, and then take a partial step (\lambda decreases linearly between steps).
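Here is a rough sketch of how s and mode fit together in predict.lars; the data (x, y, newx) are made up for illustration:

    library(lars)
    set.seed(1)
    x    <- matrix(rnorm(100 * 5), 100, 5)
    y    <- drop(x %*% c(2, 0, -1, 0, 0) + rnorm(100))
    newx <- matrix(rnorm(10 * 5), 10, 5)

    fit <- lars(x, y, type = "lasso")

    # mode = "step": s indexes the LARS path itself; s = 1.3 means 30% of
    # the way between the solutions at steps 1 and 2.
    predict(fit, newx, s = 1.3, mode = "step")$fit

    # mode = "norm": s is given on the L1 scale described above rather
    # than as a step index.
    coef(fit, s = 2, mode = "norm")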

Finding private Key x Big integers [closed]

Is it possible to find the private key x in the equation y = g^x mod p (with big integers) if you have p, g, y, and q?
What method can be used, if one exists? Note: these are big integers.
This is called the discrete logarithm problem. You seem to be interested in the prime field special case of this problem.
For properly chosen fields with a sufficiently large p this is infeasible. I expect it to be reasonably cheap ($100 or so) for a 512-bit p and extremely expensive at 1024-bit p. Beyond that it quickly becomes infeasible even for state-level adversaries.
For some fields it's much cheaper. For example, solving discrete logs in binary fields (not prime fields, as in your example) has produced quite a few recent papers, such as "Discrete logarithm in GF(2^809) with FFS" and "On the Function Field Sieve and the Impact of Higher Splitting Probabilities: Application to Discrete Logarithms in F_2^1971".
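For intuition about what a generic method looks like, here is a baby-step giant-step sketch in R. It only handles tiny primes (R's native numbers overflow quickly), so it illustrates the algorithm rather than anything usable against cryptographic-size p:

    # Modular exponentiation by repeated squaring.
    powmod <- function(b, e, p) {
      r <- 1; b <- b %% p
      while (e > 0) {
        if (e %% 2 == 1) r <- (r * b) %% p
        b <- (b * b) %% p
        e <- e %/% 2
      }
      r
    }

    # Baby-step giant-step: solve g^x = y (mod p) for prime p in O(sqrt(p)) steps.
    bsgs <- function(g, y, p) {
      m <- ceiling(sqrt(p - 1))
      baby <- integer(m)                 # baby steps: g^j mod p, j = 0..m-1
      acc <- 1
      for (j in 0:(m - 1)) { baby[j + 1] <- acc; acc <- (acc * g) %% p }
      gm_inv <- powmod(g, p - 1 - m, p)  # g^(-m) mod p via Fermat's little theorem
      gamma <- y
      for (i in 0:(m - 1)) {             # giant steps: y * g^(-i*m) mod p
        j <- match(gamma, baby)
        if (!is.na(j)) return(i * m + (j - 1))
        gamma <- (gamma * gm_inv) %% p
      }
      NA
    }

    bsgs(3, 22, 101)   # 3^6 mod 101 = 22, so this returns 6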

Simulate ARFIMA process with custom initial values [closed]

My question deals with the fracdiff.sim function in R (in the fracdiff package), whose help page, just like that of arima.sim, is not really clear about initial values.
It's fine that stationary processes do not depend on their initial values as time grows, but my aim is to see in my simulations the return of my long-memory process (fitted with arfima) to its mean.
Therefore, I need to supply at least the p final values of my in-sample process (and possibly q innovations) if it is ARFIMA(p,d,q). In other words, I would like to set the burn-in period's length to 0 and give starting values instead.
Nevertheless, I'm currently not able to do this. I know that fracdiff.sim lets the user choose the length of the burn-in period (which leads to the stationary behavior) and the mean of the simulated process (it is simulated and then translated so the means match). There is also a condition: the length of the burn-in period must be >= p+q. I suppose the innov argument has something to do with this, but I'm really not sure.
This idea is inspired by the arima.sim function, which has a start.innov argument. However, even if my aim were only to simulate an ARMA(p,q), I'm not sure of the exact use of this argument (the help is quite sparse): should we supply only q innovations? Should the p last values of the in-sample process go with them? In which order?
To sum up, I want to simulate ARFIMA processes starting from a specific value and having a specific mean, in order to see the return to the mean and not only the long-term behavior. I found the beginnings of a solution for arima.sim on the internet, but nobody clearly answered, and even if that solution uses start.innov, how do I solve the problem for ARFIMA processes (fracdiff.sim doesn't have a start.innov argument)?
Hoping I have been clear enough.
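For reference, a minimal sketch of the arima.sim mechanism the question refers to, assuming start.innov is meant to hold the burn-in innovations (the coefficients and seed values below are made up):

    # Hypothetical ARMA(1,1); arima.sim requires n.start >= p + q.
    set.seed(1)
    p <- 1; q <- 1
    n.start  <- p + q
    my.start <- c(0.5, -0.2)   # made-up seed innovations, length n.start

    sim <- arima.sim(model = list(ar = 0.6, ma = 0.3), n = 200,
                     n.start = n.start, start.innov = my.start)
    plot(sim)

Whether this actually pins down the starting values of the series, rather than just the starting innovations, is exactly the ambiguity the question raises; fracdiff.sim has no equivalent argument.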

Divide 9×3 rect into 8 equal size square [closed]

You whip up your favorite brownie recipe and pour it into your new 9×3 inch baking dish. The brownies bake. The toothpick comes out clean. Now for the cutting.
A square is the most delicious shape for a brownie. You have eight people to serve. How can you cut your newly baked creation into exactly eight square pieces?
So this is essentially a variation on a bin packing problem (which is well known to be NP-hard!).
One solution is to use two 3×3 squares, one 2×2 square, and five 1×1 squares; one such placement is checked in the sketch below.
The solution is obviously non-unique, since the positions of the various squares can be permuted around.
Due to the NP-hardness, I imagine it would be difficult to come up with an efficient algorithm to divide a general N×M rectangle into exactly k square pieces. In fact there must be whole families of parameter values for which no solution is possible (for instance, if you started with a 6×1 rectangle it would be impossible to divide it into anything fewer than 6 squares).
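A quick sanity check in R of one such placement of the eight squares in the 9×3 grid (the coordinates are just one arrangement chosen for illustration):

    # One placement of 2 three-by-three, 1 two-by-two and 5 one-by-one squares.
    # x/y give the lower-left cell of each square (1-based), size its side length.
    squares <- data.frame(
      x    = c(1, 4, 7, 9, 9, 7, 8, 9),
      y    = c(1, 1, 1, 1, 2, 3, 3, 3),
      size = c(3, 3, 2, 1, 1, 1, 1, 1)
    )

    grid <- matrix(0L, nrow = 3, ncol = 9)   # rows = y (1..3), cols = x (1..9)
    for (i in seq_len(nrow(squares))) {
      s <- squares[i, ]
      rows <- s$y:(s$y + s$size - 1)
      cols <- s$x:(s$x + s$size - 1)
      grid[rows, cols] <- grid[rows, cols] + 1L
    }

    all(grid == 1L)   # TRUE: every cell of the 9 x 3 rectangle is covered exactly once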

Uniform Random Selection with Replacement [closed]

Suppose you have a deck of 100 cards, with the numbers 1-100 on one side. You select a card, note the number, replace the card, shuffle, and repeat.
Question #1: How many cards (on average) must you select to have drawn the same card twice? Why?
Question #2: How many cards (on average) must you select to have drawn all of the cards at least once? Why?
(Thanks! This has to do with random music playlists and adding an option so the shuffle doesn't repeat tracks.)
Q1: This relates to the birthday paradox problem.
As you can see in the "Cast as a collision problem" section of the Wikipedia article on the birthday problem, your question maps onto it exactly.
Cast as a collision problem
The birthday problem can be generalized as follows: given n random integers drawn from a discrete uniform distribution with range [1,d], what is the probability p(n;d) that at least two numbers are the same? (d=365 gives the usual birthday problem.)
You have a range [1, 100] from which you select random cards. The probability of a collision (two selected cards being the same) after n draws is p(n; d) = 1 - d! / (d^n (d - n)!), with d = 100.
Further down, the article gives the expected number of selections needed to see the first repeat: 1 + Q(d), where Q(d) = sum_{k=1}^{d} d! / ((d - k)! d^k).
Evaluating this at d = 100 gives your answer to Q1: about 13.2 draws on average.
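A quick Monte Carlo check of both expectations in R (Q2 is the classic coupon-collector problem, whose expectation is d * (1 + 1/2 + ... + 1/d)); the trial count below is arbitrary:

    set.seed(42)
    d <- 100
    trials <- 2000

    # Q1: draws until the first repeated card (birthday-problem expectation).
    first_repeat <- replicate(trials, {
      seen <- logical(d); n <- 0
      repeat {
        n <- n + 1
        card <- sample.int(d, 1)
        if (seen[card]) break
        seen[card] <- TRUE
      }
      n
    })
    mean(first_repeat)   # about 13.2 for d = 100

    # Q2: draws until every card has appeared (coupon-collector expectation).
    all_seen <- replicate(trials, {
      seen <- logical(d); remaining <- d; n <- 0
      while (remaining > 0) {
        n <- n + 1
        card <- sample.int(d, 1)
        if (!seen[card]) { seen[card] <- TRUE; remaining <- remaining - 1 }
      }
      n
    })
    mean(all_seen)       # about d * sum(1 / (1:d)) ~ 519 for d = 100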
