Why does Fractal Formula "sin(z^2)-cos(z^2)+c" work with XaoS and not UltraFractal? - formula

While using the real-time fractal zoomer XaoS to explore the infinite universe defined by the user formula "sin(z^2)-cos(z^2)+c", we start out here (included so that this universe can be identified and the formula matched against other zoomer applications, libraries, or frameworks):
I found the following fractal, which I really like:
As you, perhaps, can see, I have run up against the resolution wall (floor?), which means that the deeper I zoom at that point, the more pixelated and boxy the image gets:
Here is the XaoS source code (*.xpf file contents):
;Position file automatically generated by XaoS 4.2.1
; - a realtime interactive fractal zoomer
;Use xaos -loadpos <filename> to display it
(initstate)
(defaultpalette 0)
(formula 'user)
(usrform "sin(z^2)-cos(z^2)+c")
(angle 90)
(maxiter 1000)
(view -0.241140329885861603 1.09425325699643758E-015 8.78234117374088186E-014 8.78234139599431772E-014)
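(An aside on the resolution wall: XaoS computes in IEEE double precision. A rough check, assuming a window on the order of 800 pixels wide: the per-pixel step here is about 8.78e-14 / 800 ≈ 1.1e-16, while the spacing between adjacent representable doubles near the centre x ≈ -0.241 is about 2.2e-16 * 0.241 ≈ 5e-17. Neighbouring pixels are therefore only a couple of representable coordinate values apart, so any further zoom makes adjacent pixels collapse onto identical coordinates and the image turns blocky.)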
And Fractal->View... yields:
So I tried to enter the formula into UltraFractal, but no matter what I try, I can't get it to work.
This works:
init:
z = #start
loop:
z = sin(z^#power) + cos(z^#power) + #pixel
This does not work:
init:
z = #start
loop:
z = sin(z^#power) - cos(z^#power) + #pixel
The only difference in the above two is the minus sign.
This does not work:
init:
z = #start
loop:
z2p = z^#power
ss = sin(z2p)
cc = cos(z2p)
z = ss - cc + #pixel
This works:
init:
z = #start
loop:
z2p = z^#power
ss = sin(z2p)
cc = cos(z2p)
z = ss + cc + #pixel
but if I add a line negating cc, it stops working again:
init:
z = #start
loop:
z2p = z^#power
ss = sin(z2p)
cc = cos(z2p)
cc = -cc
z = ss + cc + #pixel
Finally, one last example.
This works:
init:
z = #start
loop:
z2p = z^#power
ss = sin(z2p)
cc = cos(z2p)
z = ss + 1 - ( 1 - cc ) + #pixel
but this doesn't:
init:
z = #start
loop:
z2p = z^#power
ss = sin(z2p)
cc = cos(z2p)
z = ss + 1 - ( 1 + cc ) + #pixel
As I think I've proven, negation and subtraction do work in themselves. Yet each time I change only a minus sign (or an equivalent subtraction), the formula stops working, so I suspect there is a bug (or an arbitrary limitation that rules out this particular formula? - which doesn't make sense).
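(Put differently: since ss + 1 - ( 1 - cc ) = ss + cc and ss + 1 - ( 1 + cc ) = ss - cc, every variant above that fails reduces algebraically to sin(z^#power) - cos(z^#power) + #pixel, and every variant that works reduces to the "+" form.)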
In the "Fractal Mode" pane, lower right part of screen, I click the third item down, "Switch Mode", and it displays "Contains errors." This is the only helpful feedback I've been able to find.
This is a paid program, so I do not expect to be having this problem. Anyone? (secondary question - what zoomer will render this well? Thanks!)
Here is my current UltraFractal source code:
comment {
This file contains standard fractal types for Ultra Fractal. Many of the
fractal formulas here were written by other formula authors, as noted in the
comments with each formula. All formulas have been edited and simplified by
Frederik Slijkerman.
These formulas are also available as objects for the common.ulb framework in
Standard.ulb.
}
sin2_minus_cos2 {
;
; Generic Mandelbrot set.
;
init:
z = #start
z9 = 1
loop:
z2p = z^#power
ss = sin(z2p)
cc = cos(z2p)
cc = -cc
z = ss + cc + #pixel
bailout:
|z| <= #bailout
$IFDEF VER60
perturbinit:
#dz = 0
perturbloop:
if #power == (2, 0)
#dz = 2 * #z * #dz + sqr(#dz) + #dpixel
elseif #power == (3, 0)
complex z2 = sqr(#z)
complex dz2 = sqr(#dz)
#dz = 3 * z2 * #dz + 3 * #z * dz2 + #dz * dz2 + #dpixel
else ; power 4
complex z2 = sqr(#z)
complex dz2 = sqr(#dz)
complex zdz4 = 4*#z*#dz
#dz = #dpixel + zdz4*z2 + 6*z2*dz2 + zdz4*dz2 + sqr(dz2)
endif
$ENDIF
default:
title = "sin2_minus_cos2"
center = (-0.5, 0)
$IFDEF VER50
rating = recommended
$ENDIF
$IFDEF VER60
perturb = #power == (2, 0) || #power == (3, 0) || #power == (4, 0)
$ENDIF
param start
caption = "Starting point"
default = (0,0)
hint = "The starting point parameter can be used to distort the Mandelbrot \
set. Use (0, 0) for the standard Mandelbrot set."
endparam
param power
caption = "Power"
default = (2,0)
hint = "This parameter sets the exponent for the Mandelbrot formula. \
Increasing the real part to 3, 4, and so on, will add discs to \
the Mandelbrot figure. Non-integer real values and non-zero \
imaginary values will create distorted Mandelbrot sets. Use (2, 0) \
for the standard Mandelbrot set."
endparam
float param bailout
caption = "Bailout value"
default = 4.0
min = 1.0
$IFDEF VER40
exponential = true
$ENDIF
hint = "This parameter defines how soon an orbit bails out while \
iterating. Larger values give smoother outlines; values around 4 \
give more interesting shapes around the set. Values less than 4 \
will distort the fractal."
endparam
switch:
type = "Julia"
seed = #pixel
power = power
bailout = bailout
}
I just punched the formula into a zoomer on my Android, and it seems to work.
Here is that coordinate, zoomed out just a little, so that you can barely see the pixelated fractal (just another example of great beauty from this wonderful infinite universe!):
EDIT:
This formula works:
init:
z = #start
loop:
; in XaoS, user formula sin(z)^2-cos(z)^2+c
z = sin(z)^2 - cos(z)^2 + #pixel
And here's the result in UltraFractal:
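(For the record, the two formulas are not equivalent: sin(z)^2 - cos(z)^2 = -cos(2z), whereas the original sin(z^2) - cos(z^2) can be rewritten as sqrt(2)*sin(z^2 - pi/4). So the formula in the EDIT renders, but it is a related fractal rather than the same universe as the XaoS one.)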

Related

Double integration with a differentiation inside in R

I need to integrate the following function where there is a differentiation term inside. Unfortunately, that term is not easily differentiable.
Is this possible to do something like numerical integration to evaluate this in R?
You can assume 30,50,0.5,1,50,30 for l, tau, a, b, F and P respectively.
UPDATE: What I tried
InnerFunc4 <- function(t,x){digamma(gamma(a*t*(LF-LP)*b)/gamma(a*t))*(x-t)}
InnerIntegral4 <- Vectorize(function(x) { integrate(InnerFunc4, 1, x, x = x)$value})
integrate(InnerIntegral4, 30, 80)$value
It shows the following error:
Error in integrate(InnerFunc4, 1, x, x = x) : non-finite function value
UPDATE2:
InnerFunc4 <- function(t,L){digamma(gamma(a*t*(LF-LP)*b)/gamma(a*t))*(L-t)}
t_lower_bound = 0
t_upper_bound = 30
L_lower_bound = 30
L_upper_bound = 80
step_size = 0.5
integral = 0
t <- t_lower_bound + 0.5*step_size
while (t < t_upper_bound){
  L = L_lower_bound + 0.5*step_size
  while (L < L_upper_bound){
    volume = InnerFunc4(t,L)*step_size**2
    integral = integral + volume
    L = L + step_size
  }
  t = t + step_size
}
Since it seems that your problem is only the derivative, you can get rid of it by means of partial integration (integration by parts):
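A minimal sketch of the identity being used, under the assumption (mine) that the inner integral has the same form as in the code above, i.e. an integral of f'(t)*(x - t) over t from 1 to x, where f' is the differentiated term:

\int_1^x f'(t)\,(x - t)\,dt = \Big[f(t)\,(x - t)\Big]_{t=1}^{t=x} + \int_1^x f(t)\,dt = -f(1)\,(x - 1) + \int_1^x f(t)\,dt

so the derivative disappears from the integrand and only f itself needs to be evaluated numerically.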
Edit: this approach is not applicable for a lower integration bound of 0.

How can I plot frequency response for a vibratory system with nonlinear differential equations?

I have a system of nonlinear differential equations for a 3 degree of freedom vibratory system.
system of differential equations
First I want to plot y, y_L and y_R against time (for a given value for Omega) and then I want to plot the domains (max values of y, y_L and y_R) against various amounts of Omega.
Unfortunately, I am not good at Octave. I have written the following code in Octave (based on a sample given by one of the users), but it ends with this error: "anonymous function bodies must be single expressions".
I would be grateful if anyone can help me.
Here is the code:
Me = 4000;
me = 20;
c = 2000;
c1 = 700;
c2 = 700;
k = 20000;
k1 = 250000;
k2 = 20000;
a0 = 0.01;
om = 25;
mu1 = (c+2*c2)/(Me);
mu2 = (c2)/(Me);
mu3 = (c1+c2)/(me);
mu4 = (c2)/(me);
w12 = (2*k2)/(Me);
w22 = (k1+k2)/(me);
a1 = (k2)/(me);
a2 = (k)/(Me);
F0 = (k1*a0)/(Me);
couplode = @(t,y) [y(2); mu4*y(4) - mu3*y(2) - w22*y(1) + a1*y(3) + F0*cos(om*t); y(4); mu2*(y(2)+y(6)) - mu1*y(4) - w12*y(3) + 0.5*w12*(y(1)+y(5)) + a2((y(3)).^3; y(6); mu4*y(4) - mu3*y(6) - w22*y(5) + a1*y(3) + F0*cos(om*t)];
[t,y] = ode45(couplode, [0 0.49*pi], [1;1;1;1;1;1]*1E-8);
figure(1)
plot(t, y)
grid
str = {'$$ \dot{y_L} $$', '$$ y_L $$', '$$ \dot{y} $$', '$$ y $$', '$$ \dot{y_R} $$', '$$ y_R $$'};
legend(str, 'Interpreter','latex', 'Location','NW')
You have a strange term near the end of the vector definition
... + a2((y(3)).^3
You certainly meant
... + a2*y(3).^3
You get better visibility and easier debugging by breaking that into separate lines
couplode = @(t,y) [ y(2);
mu4*y(4)-mu3*y(2)-w22*y(1)+a1*y(3)+F0*cos(om*t);
y(4);
mu2*(y(2)+y(6)) - mu1*y(4) - w12*y(3) + 0.5*w12*(y(1)+y(5)) + a2*y(3).^3;
y(6);
mu4*y(4)-mu3*y(6)-w22*y(5)+a1*y(3)+F0*cos(om*t)];
At least in this form, spaces or no spaces make no difference. In general, in Matlab/Octave [a +b -c] is the same as [a, +b, -c], so one has to be careful that an expression is not interpreted as a matrix row: for example, [1 -2] is the two-element row vector [1, -2], while [1 - 2] is the scalar -1. Spaces on both sides of the operator switch back to the single-expression interpretation.

Find nearest 3D point

I have two data files, each of which contains a large number of 3-dimensional points (file A stores approximately 50,000 points, file B approximately 500,000 points). My goal is to find, for every point (a) in file A, the point (b) in file B which has the smallest distance to (a). I store the points in two lists like this:
List A nodes:
(ID X Y Z)
[ ['478277', -107.0, 190.5674, 128.1634],
['478279', -107.0, 190.5674, 134.0172],
['478282', -107.0, 190.5674, 131.0903],
['478283', -107.0, 191.9798, 124.6807],
... ]
List B data:
(X Y Z Data)
[ [-28.102, 173.657, 229.744, 14.318],
[-28.265, 175.549, 227.824, 13.648],
[-27.695, 175.925, 227.133, 13.142],
...]
My first approach was to simply iterate through the first and second list with a nested loop and compute the distance between every pair of points, like this:
outfile = open(job[0] + '/' + output, 'wb');
dist_min = float(job[5]);
dist_max = float(job[6]);
dists = [];
for node in nodes:
    shortest_distance = 1000.0;
    shortest_data = 0.0;
    for entry in data:
        dist = math.sqrt((node[1] - entry[0])**2 + (node[2] - entry[1])**2 + (node[3] - entry[2])**2);
        if (dist_min <= dist <= dist_max) and (dist < shortest_distance):
            shortest_distance = dist;
            shortest_data = entry[3];
    outfile.write(node[0] + ', ' + str('%10.5f' % shortest_data + '\n'));
outfile.close();
I realized that the number of loop iterations Python has to run is far too big (50,000 x 500,000 ≈ 25,000,000,000 distance computations), so I had to speed up my code. I tried first calculating all the distances with list comprehensions, but the code is still too slow:
p_x = [row[1] for row in nodes];
p_y = [row[2] for row in nodes];
p_z = [row[3] for row in nodes];
q_x = [row[0] for row in data];
q_y = [row[1] for row in data];
q_z = [row[2] for row in data];
dx = [[(px - qx) for px in p_x] for qx in q_x];
dy = [[(py - qy) for py in p_y] for qy in q_y];
dz = [[(pz - qz) for pz in p_z] for qz in q_z];
dx = [[dxxx * dxxx for dxxx in dxx] for dxx in dx];
dy = [[dyyy * dyyy for dyyy in dyy] for dyy in dy];
dz = [[dzzz * dzzz for dzzz in dzz] for dzz in dz];
D = [[(dx[i][j] + dy[i][j] + dz[i][j]) for j in range(len(dx[0]))] for i in range(len(dx))];
D = [[(DDD**(0.5)) for DDD in DD] for DD in D];
To be honest, at this point I do not know which of the two approaches is better; either way, neither of them seems feasible. I'm not even sure it is possible to write code that calculates all the distances in an acceptable time. Is there perhaps another way to solve my problem without calculating all distances?
Edit: I forgot to mention that I am running on Python 2.5.1 and am not allowed to install or add any new libraries...
Just in case someone is interested in the solution:
I found a way to speed up the whole process by not calculating all distances:
I created a 3D list representing a grid over the given 3D space, divided in X, Y and Z by a given step size (e.g. (max - min) / 1,000). I then iterated over every 3D point to put it into the grid. After that I iterated over the points of set A again, checking whether there are points from B in the same cube; if not, the search radius is increased, so the process looks in the 26 adjacent cubes for points. The radius keeps increasing until at least one point is found. The resulting list is comparatively small, can be sorted quickly, and the nearest point is found.
The processing time went down to a couple minutes and it is working fine.
p_x = [row[1] for row in nodes];
p_y = [row[2] for row in nodes];
p_z = [row[3] for row in nodes];
q_x = [row[0] for row in data];
q_y = [row[1] for row in data];
q_z = [row[2] for row in data];
min_x = min(p_x + q_x);
min_y = min(p_y + q_y);
min_z = min(p_z + q_z);
max_x = max(p_x + q_x);
max_y = max(p_y + q_y);
max_z = max(p_z + q_z);
max_n = max(max_x, max_y, max_z);
min_n = min(min_x, min_y, min_z);
gridcount = 1000;
step = (max_n - min_n) / gridcount;
ruler_x = [min_x + (i * step) for i in range(gridcount + 1)];
ruler_y = [min_y + (i * step) for i in range(gridcount + 1)];
ruler_z = [min_z + (i * step) for i in range(gridcount + 1)];
grid = [[[0 for i in range(gridcount)] for j in range(gridcount)] for k in range(gridcount)];
for node in nodes:
    loc_x = self.abatemp_get_cell(node[1], ruler_x);
    loc_y = self.abatemp_get_cell(node[2], ruler_y);
    loc_z = self.abatemp_get_cell(node[3], ruler_z);
    if grid[loc_x][loc_y][loc_z] is 0:
        grid[loc_x][loc_y][loc_z] = [[node[1], node[2], node[3], node[0]]];
    else:
        grid[loc_x][loc_y][loc_z].append([node[1], node[2], node[3], node[0]]);
for entry in data:
    loc_x = self.abatemp_get_cell(entry[0], ruler_x);
    loc_y = self.abatemp_get_cell(entry[1], ruler_y);
    loc_z = self.abatemp_get_cell(entry[2], ruler_z);
    if grid[loc_x][loc_y][loc_z] is 0:
        grid[loc_x][loc_y][loc_z] = [[entry[0], entry[1], entry[2], entry[3]]];
    else:
        grid[loc_x][loc_y][loc_z].append([entry[0], entry[1], entry[2], entry[3]]);
out = [];
outfile = open(job[0] + '/' + output, 'wb');
for node in nodes:
    neighbours = [];
    radius = -1;
    loc_nx = self.abatemp_get_cell(node[1], ruler_x);
    loc_ny = self.abatemp_get_cell(node[2], ruler_y);
    loc_nz = self.abatemp_get_cell(node[3], ruler_z);
    reloop = True;
    while reloop:
        if neighbours:
            reloop = False;
        radius += 1;
        start_x = 0 if ((loc_nx - radius) < 0) else (loc_nx - radius);
        start_y = 0 if ((loc_ny - radius) < 0) else (loc_ny - radius);
        start_z = 0 if ((loc_nz - radius) < 0) else (loc_nz - radius);
        end_x = (len(ruler_x) - 1) if ((loc_nx + radius + 1) > (len(ruler_x) - 1)) else (loc_nx + radius + 1);
        end_y = (len(ruler_y) - 1) if ((loc_ny + radius + 1) > (len(ruler_y) - 1)) else (loc_ny + radius + 1);
        end_z = (len(ruler_z) - 1) if ((loc_nz + radius + 1) > (len(ruler_z) - 1)) else (loc_nz + radius + 1);
        for i in range(start_x, end_x):
            for j in range(start_y, end_y):
                for k in range(start_z, end_z):
                    if not grid[i][j][k] is 0:
                        for grid_entry in grid[i][j][k]:
                            if not isinstance(grid_entry[3], basestring):
                                neighbours.append(grid_entry);
    dists = [];
    for n in neighbours:
        d = math.sqrt((node[1] - n[0])**2 + (node[2] - n[1])**2 + (node[3] - n[2])**2);
        dists.append([d, n[3]]);
    dists = sorted(dists);
    outfile.write(node[0] + ', ' + str(dists[0][-1]) + '\n');
outfile.close();
Function to get the position of a point:
def abatemp_get_cell(self, n, ruler):
    for i in range(len(ruler) - 1):
        if ruler[i] <= n <= ruler[i + 1]:
            return i;
    return False;
The gridcount variable gives you a way to tune the speed of the process: with a small gridcount, sorting the points into the grid is very fast, but the neighbour lists in the search loop get bigger and that part of the process needs more time. With a big gridcount more time is needed at the beginning, but the search loop runs faster.
The only issue I have now is that there are cases where the process has found neighbours, but there are other points, not yet found, which are closer to the query point (see picture). So far I have worked around this by incrementing the search radius one extra time once neighbours have been found. Even then there are points which are closer but not in the neighbours list, although it is a very small number (92 out of ~100,000). I could solve this by incrementing the radius two extra times after neighbours are found, but that solution does not seem very smart. Maybe you guys have an idea...
This is the first working draft of the process, I think it will be possible to improve it even more, just to give you an idea of how it is working...
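(One way to make the ring search exact, for what it's worth: with cell size step, any point lying in a cell at Chebyshev distance r from the query point's cell is at least (r - 1)*step away from the query point. So after the rings up to radius r have been searched, every still-unexamined point is at distance at least r*step; instead of stopping as soon as the neighbour list is non-empty, keep expanding until the best distance found so far is <= radius*step, and the closest candidate collected is then guaranteed to be the true nearest neighbour.)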
It took me a bit of thought, but in the end I think I have found a solution for you.
Your problem is not in the code you wrote but in the algorithm it implements.
There is an algorithm called Dijkstra's algorithm and here is the gist of it: https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm .
Now what you need to do is to use this algorithm in a clever way:
Create a node S (standing for "source").
Now link edges from S to all the nodes in group B.
After you have done that, link edges from each point b in B to each point a in A.
Set the cost of the links from the source to 0, and the cost of each other edge to the distance between its two points (in 3D).
Now if we run Dijkstra's algorithm, the output we get is the cost to travel from S to each point in the graph (we are only interested in the distances to the points in group A).
So, since the cost from S to each point b in B is 0 and S is only connected to points in B, the path to any point a in A must include a node in B (actually exactly one, since the shortest distance between two points is a straight line).
I am not sure if this will speed up your code, but as far as I know a way to solve this problem without calculating all distances does not exist, and this algorithm has the best time complexity one could hope for.
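(For scale: the graph described above has one zero-cost edge from S to each of the ~500,000 points in B, plus one weighted edge per pair (b, a), i.e. roughly 50,000 * 500,000 = 2.5e10 edges, each of which needs its 3D distance computed once just to build the graph.)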
Take a look at this generic 3D data structure:
https://github.com/m4nh/skimap_ros
It has a very fast RadiusSearch feature that is ready to use. This solution (similar to an octree, but faster) saves you from having to create the regular grid first (you don't have to fix the MAX/MIN size along each axis), and it saves a lot of memory.

Implementing the Izhikevich neuron model

I'm trying to implement the spiking neuron of the Izhikevich model. The formula for this type of neuron is really simple:
v[n+1] = 0.04*v[n]^2 + 5*v[n] + 140 - u[n] + I
u[n+1] = a*(b*v[n] - u[n])
where v is the membrane potential and u is a recovery variable.
If v gets above 30, it is reset to c and u is reset to u + d.
Given such a simple equation I wouldn't expect any problems. But while the graph should look like the published example, all I'm getting is this:
I'm completely at a loss as to what I'm doing wrong, precisely because there's so little to do wrong. I've looked for other implementations, but the code I'm looking for is always hidden in a DLL somewhere. However, I'm pretty sure I'm doing exactly what the Matlab code of the author (2) is doing. Here is my full R code:
v = -70
u = 0
a = 0.02
b = 0.2
c = -65
d = 6
history <- c()
for (i in 1:100) {
  if (v >= 30) {
    v = c
    u = u + d
  }
  v = 0.04*v^2 + 5*v + 140 - u + 0
  u = a*(b*v-u);
  history <- c(history, v)
}
plot(history, type = "l")
To anyone who's ever implemented an Izhikevich model, what am I missing?
Useful links:
(1) http://www.opensourcebrain.org/projects/izhikevichmodel/wiki
(2) http://www.izhikevich.org/publications/spikes.pdf
Answer
So it turns out I read the formula wrong. Apparently v' means new v = v + 0.04*v^2 + 5*v + 140 - u + I. My teachers would have written this as v' = 0.04*v^2 + 6*v + 140 - u + I. I'm very grateful for your help in pointing this out to me.
Take a look at the code that implements the Izhikevich model in R below. It results in the following R plots:
Regular Spiking Cell:
Chattering Cell:
And the R code:
# Simulation parameters
dt = 0.01 # ms
simtime = 500 # ms
t = 0
# Injection current
I = 15
delay = 100 # ms
# Model parameters (RS)
a = 0.02
b = 0.2
c = -65
d = 8
# Params for chattering cell (CH)
# c = -50
# d = 2
# Initial conditions
v = -80 # mv
u = 0
# Input current equation
current = function()
{
  if(t >= delay)
  {
    return(I)
  }
  return (0)
}
# Model state equations
deltaV = function()
{
  return (0.04*v*v+5*v+140-u+current())
}
deltaU = function()
{
  return (a*(b*v-u))
}
updateState = function()
{
  v <<- v + deltaV()*dt
  u <<- u + deltaU()*dt
  if(v >= 30)
  {
    v <<- c
    u <<- u + d
  }
}
# Simulation code
runsim = function()
{
  steps = simtime / dt
  resultT = rep(NA, steps)
  resultV = rep(NA, steps)
  for (i in seq(steps))
  {
    updateState()
    t <<- dt*(i-1)
    resultT[i] = t
    resultV[i] = v
  }
  plot(resultT, resultV,
       type="l", xlab = "Time (ms)", ylab = "Membrane Potential (mV)")
}
runsim()
Some notes:
I've picked the parameters for the "Regular Spiking (RS)" cell from Izhikevich's site. You can pick other parameters from the two upper-right plots on that page. Uncomment the CH parameters to get a plot for the "Chattering" type cell.
As commenters have suggested, the first two equations in the question are incorrectly implemented as if they were difference equations. The correct way to implement the first one is something like v[n+1] = v[n] + (0.04*v[n]^2 + 5*v[n] + 140 - u[n] + I) * dt (see the code above, and the equations written out after these notes). dt is the user-specified integration time step, and usually dt << 1 ms.
In the for loop in the question, the state variables u and v should be updated first, and the reset condition checked afterwards.
As noted by others, a current source is needed for both of these cell types. I've used I = 15 (I believe the units are picoamps) from this page on the author's site (the bottom value for I in the screenshot). I've also implemented a delay for the current onset (the 100 ms parameter).
The simulation code should implement some kind of time tracking, so it's easier to tell when the spikes occur in the resulting plot. The code above does this, and runs the simulation for 500 ms.
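For reference, the equations as published (Izhikevich 2003, link (2) in the question) are a pair of differential equations plus a reset rule, and the code above is essentially their explicit Euler discretization:
dv/dt = 0.04*v^2 + 5*v + 140 - u + I
du/dt = a*(b*v - u)
if v >= 30 mV: v <- c, u <- u + d
One Euler step of size dt is then v <- v + dt*(0.04*v^2 + 5*v + 140 - u + I) followed by u <- u + dt*a*(b*v - u), which is what updateState() above does, with the reset check applied after the update.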

simulate data from a linear fractional stable motion

I have to simulate some data from a linear fractional stable motion. I have found an article where they simulate such data using Matlab. The code is from the article "Simulation methods for linear fractional stable motion and FARIMA using the Fast Fourier Transform" by Stilian Stoev and Murad S. Taqqu. The following is the Matlab code:
% Written by Stilian Stoev 05.06.2002, sstoev@math.bu.edu
%
% Usage:
% y = fftlfsn(H,alpha,m,M,C,N,n)
%
mh = 1/m;
d = H-1/alpha;
t0 = [mh:mh:1];
t1 = [1+mh:mh:M];
A = mh^(1/alpha)*[t0.^d, t1.^d-(t1-1).^d];
C = C*(sum(abs(A).^alpha)^(-1/alpha));
A = C*A;
Na = m*(M+N);
A = fft(A,Na);
y = [];
for i=1:n,
if alpha<2,
Z = rstab(alpha,0,Na)';
elseif alpha==2,
Z = randn(1,Na);
end;
Z = fft(Z,Na);
w = real(ifft(Z.*A,Na));
y = [y; w(1:m:N*m)];
end;
Example:
The commands
H = 0.2; alpha =1.5; m = 256; M = 6000; N = 2^14 - M;
y = fftlfsn(H,alpha,m,M,1,N,1);
x = cumsum(y);
generate a simulated path y of length N of linear
fractional stable noise and a path x of LFSM.
In the following I have tried to translate it, but I have some questions; I have commented on them in the code.
fftlfsn <- function(H,alpha,m,M,C,N,n){
mh = 1/m;
d = H-1/alpha;
t0 = seq(mh,mh, by =1);
t1 = seq(1+mh,mh, by=M);
# Is the following the right way to translate the matlab code into R?
A = mh^(1/alpha)*matrix(c(t0^d, t1^d-(t1-1)^d), ncol = length(t0), nrow = length(t1));
C = C*(sum(abs(A)^alpha)^(-1/alpha));
A = C*A;
Na = m*(M+N);
# I don't know if it is right to use the function "fft" here.
#Does this respond directly to the function "fft" in matlab?
A = fft(A,Na);
#how can I do somthing similar in R?
#I think they create an empty matrix? Could I just write y=0?
y = [];
for (i in 1:n)
{
if(alpha<2){
# The function "rstab" generates symmetric alpha-stable variables. Is there a similar function in R, or do you know how to write one?
Z = t(rstab(alpha,0,Na))
}
else if(alpha==2){
Z = matrix (rnorm(Na, mean = 0, sd = 1), nrow = 1, ncol = Na)
}
# Again, can I just use the R-function "fft" directly?
Z = fft(Z,Na);
w = Re(fft(Z*A,Na, inverse= TRUE));
#I have trouble understanding the following and therefore I can't translate it.
y = [y; w(1:m:N*m)];
}
}
Any help appreciated!
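For what it's worth, here is a sketch of R equivalents for the specific constructs asked about in the comments above. It is untested and not a full translation of fftlfsn(); in particular, the substitute for rstab is an assumption whose parameterisation should be checked.
# Sketch of Matlab -> R equivalents for the pieces asked about above.
# Not a tested translation of fftlfsn(); the stable random number
# generator in particular is an assumption to be verified.
H <- 0.2; alpha <- 1.5; m <- 256; M <- 6000; N <- 2^14 - M
mh <- 1/m
d <- H - 1/alpha
# Matlab t0 = [mh:mh:1] and t1 = [1+mh:mh:M]:
t0 <- seq(mh, 1, by = mh)
t1 <- seq(1 + mh, M, by = mh)
# Matlab [t0.^d, t1.^d-(t1-1).^d] concatenates two row vectors,
# so use c() rather than matrix():
A <- mh^(1/alpha) * c(t0^d, t1^d - (t1 - 1)^d)
# Matlab fft(A, Na) zero-pads A to length Na; R's fft() has no length
# argument, so pad explicitly:
Na <- m * (M + N)
Afft <- fft(c(A, rep(0, Na - length(A))))
# Matlab rstab(alpha, 0, Na) draws Na symmetric alpha-stable variates.
# One possible substitute (an assumption - check the parameterisation):
#   Z <- stabledist::rstable(Na, alpha = alpha, beta = 0)
Z <- rnorm(Na)            # placeholder: the Gaussian (alpha == 2) case
Zfft <- fft(Z)
# Matlab ifft(x) corresponds to fft(x, inverse = TRUE) / length(x) in R:
w <- Re(fft(Zfft * Afft, inverse = TRUE) / Na)
# Matlab y = []; ...; y = [y; w(1:m:N*m)] grows y one row per loop pass:
y <- NULL
y <- rbind(y, w[seq(1, N * m, by = m)])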
