I've written a rudimentary algorithm in Fortran 95 to calculate the gradient of a function (an example of which is prescribed in the code) using central differences augmented with a procedure known as Richardson extrapolation.
function f(n,x)
! The scalar multivariable function to be differentiated
integer :: n
real(kind = kind(1d0)) :: x(n), f
f = x(1)**5.d0 + cos(x(2)) + log(x(3)) - sqrt(x(4))
end function f
!=====!
!=====!
!=====!
program gradient
!==============================================================================!
! Calculates the gradient of the scalar function f at a point x using a finite      !
! difference approximation, with a low order Richardson extrapolation. !
!==============================================================================!
parameter (n = 4, M = 25)
real(kind = kind(1d0)) :: x(n), xhup(n), xhdown(n), d(M), r(M), dfdxi, h0, h, gradf(n)
h0 = 1.d0
x = 3.d0
! Loop through each component of the vector x and calculate the appropriate
! derivative
do i = 1,n
! Reset step size
h = h0
! Carry out M successive central difference approximations of the derivative
do j = 1,M
xhup = x
xhdown = x
xhup(i) = xhup(i) + h
xhdown(i) = xhdown(i) - h
d(j) = ( f(n,xhup) - f(n,xhdown) ) / (2.d0*h)
h = h / 2.d0
end do
r = 0.d0
do k = 3,M
r(k) = ( 64.d0*d(k) - 20.d0*d(k-1) + d(k-2) ) / 45.d0
if ( abs(r(k) - r(k-1)) < 0.0001d0 ) then
dfdxi = r(k)
exit
end if
end do
gradf(i) = dfdxi
end do
! Print out the gradient
write(*,*) " "
write(*,*) " Grad(f(x)) = "
write(*,*) " "
do i = 1,n
write(*,*) gradf(i)
end do
end program gradient
In single precision it runs fine and gives me decent results. But when I try to change to double precision as shown in the code, I get an error when trying to compile claiming that the assignment statement
d(j) = ( f(n,xhup) - f(n,xhdown) ) / (2.d0*h)
is producing a type mismatch, real(4)/real(8). I have tried several different declarations of double precision and appended d0 to every appropriate double-precision constant in the code, and I get the same error every time. I'm a little stumped as to how the function f could possibly be producing a single-precision number.
The problem is that f is not explicitly declared in your main program; therefore it is implicitly assumed to be of single precision, which is the type real(4) for gfortran.
I completely agree with High Performance Mark's comment that you really should use implicit none in all your Fortran code, to make sure all objects are explicitly declared. That way, you would have obtained a more appropriate error message about f not being explicitly declared.
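For completeness: the quickest fix, without restructuring anything, is simply to declare the type of the external function in the program unit that calls it. A minimal sketch (the name quickfix and the trimmed-down main program are just for illustration):

program quickfix
  implicit none
  real(kind = kind(1d0)), external :: f   ! declare the external function's result kind
  real(kind = kind(1d0)) :: x(4)
  x = 3.d0
  write(*,*) f(4, x)
end program quickfix

function f(n, x)
  integer :: n
  real(kind = kind(1d0)) :: x(n), f
  f = x(1)**5 + cos(x(2)) + log(x(3)) - sqrt(x(4))
end function f

That said, the module-based approach suggested below is the better long-term fix, since it also gives the compiler an explicit interface to check.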
Also, you could consider two more things:
Define your function within a module and import that module in the main program. It is good practice to define all subroutines/functions within modules only, so that the compiler can make extra checks on the number and types of the arguments when you invoke the function.
You could (again in a module) introduce a constant for the precision and use it everywhere the kind of a real must be specified. Taking the example below, by changing only the line
integer, parameter :: dp = kind(1.0d0)
into
integer, parameter :: dp = kind(1.0)
you would change all your real variables from double to single precision. Also note the _dp suffix for the literal constants instead of the d0 suffix, which would automatically adjust their precision as well.
module accuracy
implicit none
integer, parameter :: dp = kind(1.0d0)
end module accuracy
module myfunc
use accuracy
implicit none
contains
function f(n,x)
integer :: n
real(dp) :: x(n), f
f = 0.5_dp * x(1)**5 + cos(x(2)) + log(x(3)) - sqrt(x(4))
end function f
end module myfunc
program gradient
use myfunc
implicit none
real(dp) :: x(n), xhup(n), xhdown(n), d(M), r(M), dfdxi, h0, h, gradf(n)
:
end program gradient
How do I evaluate a function in only one of its variables? That is, I hope to obtain another function after partially evaluating the original one. I have the following piece of code.
deff ('[F] = fun (x, y)', 'F = x ^ 2-3 * y ^ 2 + x * y ^ 3');
fun (4, y)
I hope to get 16 - 3*y^2 + 4*y^3
If what you want to do is write x = fun(4,y) and later just do x(2) to get 36, that is called partial application:
Intuitively, partial function application says "if you fix the first arguments of the function, you get a function of the remaining arguments".
This is a very useful feature, and it is very common in functional programming languages such as Haskell, but even JS and Python can do it now. It is also possible to do this in MATLAB and GNU/Octave using anonymous functions (see this answer). In Scilab, however, this feature is not available.
Workaround
Nonetheless, Scilab itself uses a workaround to carry a function together with its arguments without fully evaluating it. You can see this being used in ode(), fsolve(), optim(), and others:
Create a list containing the function and the arguments for partial evaluation: list(f,arg1,arg2,...,argn)
Use another function to evaluate such a list together with the last argument: evalPartList(list(...),last_arg)
The implementation of evalPartList() can be something like this:
function y = evalPartList(fList,last_arg)
//fList: list in which the first element is a function
//last_arg: last argument to be applied to the function
func = fList(1); //extract function from the list
y = func(fList(2:$),last_arg); //each element of the list, from second
//to last, becomes an argument
endfunction
You can test it on Scilab's console:
--> deff ('[F] = fun (x, y)', 'F = x ^ 2-3 * y ^ 2 + x * y ^ 3');
--> x = list(fun,4)
x =
x(1)
[F]= x(1)(x,y)
x(2)
4.
--> evalPartList(x,2)
ans =
36.
This is a very simple implementation of evalPartList(), and you have to be careful not to pass too many or too few arguments.
In the way you're asking, you can't.
What you're looking for is called symbolic (or formal) computational mathematics, because you don't pass actual numerical values to functions.
Scilab is numerical software, so it can't do such a thing. But there is a toolbox, scimax (installation guide), that relies on the free computer algebra system wxMaxima.
BUT
An ugly, stupid, but still sort-of-working solution is to take advantage of strings:
function F = fun (x, y) // Here we define a function that may return a constant or string depending on the input
fmt = '%10.3E'
if (type(x)==type('')) & (type(y)==type(0)) // x is string so is F
ys = msprintf(fmt,y)
F = x+'^2 - 3*'+ys+'^2 + '+x+'*'+ys+'^3'
end
if (type(y)==type('')) & (type(x)==type(0)) // y is string so is F
xs = msprintf(fmt,x)
F = xs+'^2 - 3*'+y+'^2 + '+xs+'*'+y+'^3'
end
if (type(y)==type('')) & (type(x)==type('')) // x&y are strings so is F
F = x+'^2 - 3*'+y+'^2 + '+x+'*'+y+'^3'
end
if (type(y)==type(0)) & (type(x)==type(0)) // x&y are constant so is F
F = x^2 - 3*y^2 + x*y^3
end
endfunction
// Then we can use this 'symbolic' function
deff('F2 = fun2(y)',' F2 = '+fun(4,'y'))
F2=fun2(2) // does compute fun(4,2)
disp(F2)
I am new to Fortran and I have the following code; it basically solves a simple quadratic equation and outputs the solutions.
!solving ax^2 + bx + c = 0
program output
implicit none
real :: a,b,c
character :: response
do
print*, 'Enter three coefficients a,b and c'
read *, a,b,c
call quad(a,b,c)
print*, 'Press Y to continue. Anykey for otherwise'
read *, response
if ( response /= 'y' .and. response /= 'Y') stop
end do
end program output
!Function to calculate Xs
subroutine quad(a,b,c)
implicit none
real :: a,b,c,xplus, xminus
xplus = ((-b)+sqrt((b**2)-(4*a*c)))/(2*a)
xminus = ((-b)-sqrt((b**2)-(4*a*c)))/(2*a)
if (xplus == xminus) then
print*, 'There exists 1 root only: '
write(*,12) xplus
12 format(2f10.2)
else
print*, 'Solutions of quadratic equation are'
write(*,10) xplus, xminus
10 format(1f10.5)
end if
end subroutine quad
This works. However, how would I go about handling complex solutions? I.e., how would this line change to make the format work for complex numbers?
10 format(1f10.5)
Thank you so much.
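One way to handle the complex case is to test the discriminant and switch to complex variables when it is negative; a complex item is edited by two real edit descriptors, so the format line changes accordingly. Here is a sketch (quad_complex is just an illustrative name, not from the original program):

!Sketch: quadratic roots, allowing complex results
subroutine quad_complex(a, b, c)
  implicit none
  real, intent(in) :: a, b, c
  real :: disc
  complex :: xplus, xminus
  disc = b**2 - 4.0*a*c
  if (disc >= 0.0) then
    xplus  = cmplx( (-b + sqrt(disc)) / (2.0*a), 0.0 )
    xminus = cmplx( (-b - sqrt(disc)) / (2.0*a), 0.0 )
  else
    xplus  = cmplx( -b / (2.0*a),  sqrt(-disc) / (2.0*a) )
    xminus = cmplx( -b / (2.0*a), -sqrt(-disc) / (2.0*a) )
  end if
  print*, 'Solutions of quadratic equation are'
  ! each complex value consumes two real edit descriptors:
  ! the real part, then (with sp forcing the sign) the imaginary part
  write(*,'(f10.5,sp,f10.5," i")') xplus
  write(*,'(f10.5,sp,f10.5," i")') xminus
end subroutine quad_complex

Since the real case is covered as well, this could simply replace quad in the program above.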
I cannot solve a problem in Scilab because it gets stuck on round-off errors. I get the message
!--error 9999
Error: Round-off error detected, the requested tolerance (or default) cannot be achieved. Try using bigger tolerances.
at line 2 of function scalpol called by :
at line 7 of function gram_schmidt_pol called by :
gram_schmidt_pol(a,-1/2,-1/2)
It's a Gram-Schmidt process whose scalar product is the integral, between -1 and 1, of the product of two functions and a weight.
gram_schmidt_pol is the process specialized for polynomials, and scalpol is the scalar product described for polynomials.
a and b are parameters for the weight, which is (1+x)^a*(1-x)^b.
The input is a matrix representing a set of vectors. It works well with the matrix [[1;2;3],[4;5;6],[7;8;9]], but it fails with the above error message on the matrix eye(2,2); on top of that, I need to do it on eye(9,9)!
I have looked for a "tolerance setting" in the menus, there is some in General->Preferences->Xcos->Simulation but I believe this is not for what I wan't, I have tried low settings (high tolerance) in it and it hasn't change anything.
So how can I solve this rounf-off problem ?
Feel free to tell me my message lacks of clearness.
Thank you.
Edit: Code of the functions :
// function that evaluate a polynomial (vector of coefficients) in x
function [y] = pol(p, x)
y = 0
for i=1:length(p)
y = y + p(i)*x^(i-1)
end
endfunction
// weight function evaluated in x, parametrized by a and b
// (poids = weight in french)
function [y] = poids(x, a, b)
y = (1-x)^a*(1+x)^b
endfunction
// scalpol compute scalar product between polynomial p1 and p2
// using integrate, the weight and the pol functions.
function [s] = scalpol(p1, p2, a, b)
s = integrate('poids(x,a, b)*pol(p1,x)*pol(p2,x)', 'x', -1, 1)
endfunction
// norm associated to scalpol
function [y] = normscalpol(f, a, b)
y = sqrt(scalpol(f, f, a, b))
endfunction
// finally the gram schmidt process on a family of polynome
// represented by a matrix
function [o] = gram_schmidt_pol(m, a, b)
[n,p] = size(m)
o(1:n) = m(1:n,1)/(normscalpol(m(1:n,1), a, b))
for k = 2:p
s =0
for i = 1:(k-1)
s = s + (scalpol(o(1:n,i), m(1:n,k), a, b) / scalpol(o(1:n,i),o(1:n,i), a, b) .* o(1:n,i))
end
o(1:n,k) = m(1:n,k) - s
o(1:n,k) = o(1:n,k) ./ normscalpol(o(1:n,k), a, b)
end
endfunction
By default, Scilab's integrate routine tries to achieve absolute error at most 1e-8 and relative error at most 1e-14. This is reasonable, but its treatment of relative error does not take into account the issues that occur when the exact value is zero. (See How to calculate relative error when true value is zero?). For this reason, even the simple
integrate('x', 'x', -1, 1)
throws an error (in Scilab 5.5.1).
And this is what happens in the process of running your program: some integrals are zero. There are two solutions:
(A) Give up on the relative error bound, by specifying it as 1:
integrate('...', 'x', -1, 1, 1e-8, 1)
(B) Add some constant to the function being integrated, then subtract from the result:
integrate('100 + ... ', 'x', -1, 1) - 200
(The latter should work in most cases, though if the integral happens to be exactly -200, you'll have the same problem again)
The above works for gram_schmidt_pol(eye(2,2), -1/2, -1/2), but for larger inputs, say gram_schmidt_pol(eye(9,9), -1/2, -1/2), it throws the error "The integral is probably divergent, or slowly convergent".
It appears that the adaptive integration routine can't handle functions of the kind you have. A fallback is to use the simple inttrap instead, which just applies the trapezoidal rule. Since the function poids is undefined at x = -1 and x = 1, the endpoints have to be excluded.
function [s] = scalpol(p1, p2, a, b)
t = -0.9995:0.001:0.9995
y = poids(t,a, b).*pol(p1,t).*pol(p2,t)
s = inttrap(t,y)
endfunction
In order for this to work, other related functions must be vectorized (* and ^ changed to .* and .^ where necessary):
function [y] = pol(p, x)
y = 0
for i=1:length(p)
y = y + p(i)*x.^(i-1)
end
endfunction
function [y] = poids(x, a, b)
y = (1-x).^a.*(1+x).^b
endfunction
This is guaranteed to produce a result, though the precision may be a bit lower: you are going to get some numbers like 3D-16 which are actually zeros.
I wrote a recursive program in Fortran to calculate the combinations of npoints points in ndim dimensions, as follows. I first wrote this program in MATLAB and it ran perfectly. But in Fortran, my problem is that after the first iteration it assigns absurd values to the list of points, with no explanation. Could somebody give me a hand?
PROGRAM MAIN
IMPLICIT NONE
INTEGER :: ndim, k, npontos, contador,i,iterate, TEST
integer, dimension(:), allocatable :: pontos
print*, ' '
print*, 'npoints?'
read *, npontos
print*, 'ndim?'
read *, ndim
k=1
contador = 1
open(450,file= 'combination.out',form='formatted',status='unknown')
write(450,100) 'Comb ','stat ',(' pt ',i,' ',i=1,ndim)
write(450,120) ('XXXXXXXXXX ',i=1,ndim+1)
allocate(pontos(ndim))
do i=1,4
pontos(i)=i
end do
TEST = iterate(pontos, ndim, npontos,k,contador)
end program MAIN
recursive integer function iterate(pontos, ndim, npontos, k,contador)
implicit NONE
integer, intent(in) :: ndim, k, npontos
integer,dimension(:) :: pontos
integer contador,inic,i,j,m
if (k.eq.ndim) then
inic=pontos(ndim)
do i = pontos(ndim),npontos
pontos(k)= i
write(*,*) pontos(:)
contador=contador+1
end do
pontos(ndim)= inic + 1
else
inic = pontos (k)
do j = pontos(k),(npontos-ndim+k)
pontos(k)=j
pontos= iterate(pontos, ndim, npontos, k+1,contador)
end do
end if
pontos(k)=inic+1;
if (pontos(k).gt.(npontos-ndim+k+1)) then
do m =k+1,ndim
pontos(m)=pontos(m-1)+1
end do
end if
end function iterate
There are too many issues in that code... I stopped debugging it. This is what I got so far, it's too much for a comment.
This doesn't make sense:
pontos= iterate(pontos, ndim, npontos, k+1,contador)
You are changing pontos inside iterate, and never set a return value within the function.
You need to a) provide a result clause for the recursive function (and actually set it) or b) convert it to a subroutine. Since you are modifying at least one dummy argument, you should go with b); a sketch follows after these notes.
Since you are using assumed-shape dummy arguments, you need to specify an interface to the function/subroutine, either explicitly or with a module.
Neither format 100 nor format 120 is specified in your code.
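To make option b) concrete, here is a minimal, self-contained sketch of the recursion as a recursive subroutine inside a module, so every caller automatically sees an explicit interface for the assumed-shape argument. The module/program names are illustrative and the body is a simplified rewrite, not your original algorithm:

module combinations
  implicit none
contains
  recursive subroutine iterate(pontos, ndim, npontos, k, contador)
    integer, intent(in)    :: ndim, npontos, k
    integer, intent(inout) :: pontos(:)    ! modified in place; no function result needed
    integer, intent(inout) :: contador
    integer :: j
    do j = pontos(k), npontos - ndim + k
      pontos(k) = j
      if (k == ndim) then
        write(*,*) pontos
        contador = contador + 1
      else
        pontos(k+1) = j + 1
        call iterate(pontos, ndim, npontos, k+1, contador)
      end if
    end do
  end subroutine iterate
end module combinations

program main
  use combinations
  implicit none
  integer, allocatable :: pontos(:)
  integer :: ndim, npontos, contador, i
  ndim = 3
  npontos = 5
  contador = 1
  allocate(pontos(ndim))
  pontos = (/ (i, i = 1, ndim) /)
  call iterate(pontos, ndim, npontos, 1, contador)
  write(*,*) 'total combinations:', contador - 1
end program main

Because pontos is modified in place there is no function result to worry about, and the intent attributes let the compiler check how each argument is used.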
I have a program in R which calls a couple of Fortran routines, which are OpenMP-enabled. There are two Fortran routines, sub_1 and sub_2. The first one is called twice in an R function, while the second is called once. Both routines are almost identical except for a few minor things. I call the first routine, then the second, then the first again. However, if I have both of them OpenMP-enabled, the function stops doing anything (it doesn't raise an error or stop execution, it just sits there) when it gets to the second use of the first Fortran routine.
If I disable the OpenMP in sub_1 then everything runs fine. If I instead disable the OpenMP in sub_2, then it again hangs in the same fashion on the second use of sub_1. This is odd because it obviously gets through the first use fine.
I thought it might have to do with the threads not closing properly or something (I don't know too much about OpenMP). However, another oddity is that the R function that calls these three routines is itself called four times, and if I only enable OpenMP in sub_2, then this works fine (i.e. the second, third, etc. call to sub_2 doesn't hang). I just have no idea why it would do this! For reference, this is the code for sub_1:
subroutine correlation_dd_rad(s_bins,min_s,end_s,n,pos1,dd,r)
!!! INTENT IN !!!!!!!!
integer :: s_bins !Number of separation bins
integer :: N !Number of objects
real(8) :: pos1(3,N) !Cartesian Positions of particles
real(8) :: min_s !The smallest separation calculated.
real(8) :: end_s !The largest separation calculated.
real(8) :: r(N) !The radii of each particle (ascending)
!!! INTENT OUT !!!!!!!
real(8) :: dd(N,s_bins) !The binned data.
!!! LOCAL !!!!!!!!!!!!
integer :: i,j !Iterators
integer :: bin
real(8) :: d !Distance between particles.
real(8) :: dr,mins,ends
real(8),parameter :: pi = 3.14159653589
integer :: counter
dd(:,:) = 0.d0
dr = (end_s-min_s)/s_bins
!Perform the separation binning
mins = min_s**2
ends = end_s**2
counter = 1000
!$OMP parallel do private(d,bin,j)
do i=1,N
!$omp critical (count_it)
counter = counter - 1
!$omp end critical (count_it)
if(counter==0)then
counter = 1000
write(*,*) "Another Thousand"
end if
do j=i+1,N
if(r(j)-r(i) .GT. end_s)then
exit
end if
d=(pos1(1,j)-pos1(1,i))**2+&
&(pos1(2,j)-pos1(2,i))**2+&
&(pos1(3,j)-pos1(3,i))**2
if(d.LT.ends .AND. d.GT.mins)then
d = Sqrt(d)
bin = Floor((d-min_s)/dr)+1
dd(i,bin) = dd(i,bin)+1.d0
dd(j,bin) = dd(j,bin)+1.d0
end if
end do
end do
!$OMP end parallel do
write(*,*) "done"
end subroutine
Does anyone have any clue why this would happen??
Cheers.
I'll add the smallest example I can think of that reproduces the problem (by the way, this must be an R problem: a small example of the type I present here, but written purely in Fortran, works fine). So I have the above code and the following code in Fortran, compiled to the shared object correlate.so:
subroutine correlation_dr_rad(s_bins,min_s,end_s,n,pos1,n2,pos2,dd,r1,r2)
!!! INTENT IN !!!!!!!!
integer :: s_bins !Number of separation bins
integer :: N !Number of objects
integer :: n2
real(8) :: pos1(3,N) !Cartesian Positions of particles
real(8) :: pos2(3,n2) !random particles
real(8) :: end_s !The largest separation calculated.
real(8) :: min_s !The smallest separation
real(8) :: r1(N),r2(N2) !The radii of particles (ascending)
!!! INTENT OUT !!!!!!!
real(8) :: dd(N,s_bins) !The binned data.
!!! LOCAL !!!!!!!!!!!!
integer :: i,j !Iterators
integer :: bin
real(8) :: d !Distance between particles.
real(8) :: dr,mins,ends
real(8),parameter :: pi = 3.14159653589
integer :: counter
dd(:,:) = 0.d0
dr = (end_s-min_s)/s_bins
!Perform the separation binning
mins = min_s**2
ends = end_s**2
write(*,*) "Got just before parallel dr"
counter = 1000
!$OMP parallel do private(d,bin,j)
do i=1,N
!$OMP critical (count)
counter = counter - 1
!$OMP end critical (count)
if(counter==0)then
write(*,*) "Another thousand"
counter = 1000
end if
do j=1,N2
if(r2(j)-r1(i) .GT. end_s)then
exit
end if
d=(pos1(1,j)-pos2(1,i))**2+&
&(pos1(2,j)-pos2(2,i))**2+&
&(pos1(3,j)-pos2(3,i))**2
if(d.GT.mins .AND. d.LT.ends)then
d = Sqrt(d)
bin = Floor((d-min_s)/dr)+1
dd(i,bin) = dd(i,bin)+1.d0
end if
end do
end do
!$OMP end parallel do
write(*,*) "Done"
end subroutine
Then in R, I have the following functions. The first two just wrap the above Fortran code; the third calls them in a similar way to my actual code:
correlate_dd_rad = function(pos,r,min_r,end_r,bins){
#A wrapper for the fortran routine of the same name.
dyn.load('correlate.so')
out = .Fortran('correlation_dd_rad',
s_bins = as.integer(bins),
min_s = as.double(min_r),
end_s = as.double(end_r),
n = as.integer(length(r)),
pos = as.double(t(pos)),
dd = matrix(0,length(r),bins), #The output matrix.
r = as.double(r))
dyn.unload('correlate.so')
return(out$dd)
}
correlate_dr_rad = function(pos1,r1,pos2,r2,min_r,end_r,bins){
#A wrapper for the fortran routine of the same name
N = length(r1)
N2 = length(r2)
dyn.load('correlate.so')
out = .Fortran('correlation_dr_rad',
s_bins = as.integer(bins),
min_s = as.double(min_r),
end_s = as.double(end_r),
n = N,
pos1 = as.double(t(pos1)),
n2 = N2,
pos2 = as.double(t(pos2)),
dr = matrix(0,nrow=N,ncol=bins),
r1 = as.double(r1),
r2 = as.double(r2))
dyn.unload('correlate.so')
return(out$dr)
}
the_calculation = function(){
#Generate some data to use
pos1 = matrix(rnorm(30000),10000,3)
pos2 = matrix(rnorm(30000),10000,3)
#Find the radii
r1 = sqrt(pos1[,1]^2 + pos1[,2]^2+pos1[,3]^2)
r2 = sqrt(pos2[,1]^2 + pos2[,2]^2+pos2[,3]^2)
#usually sort them but it doesn't matter here.
#Now call the functions
print("Calculating the data-data pairs")
dd = correlate_dd_rad(pos=pos1,r=r1,min_r=0.001,end_r=0.8,bins=15)
print("Calculating the data-random pairs")
dr = correlate_dr_rad(pos1,r1,pos2,r2,min_r=0.001,end_r=0.8,bins=15)
print("Calculating the random-random pairs")
rr = correlate_dd_rad(pos=pos2,r=r2,min_r=0.001,end_r=0.8,bins=15)
#Now we would do something with it but I don't care in this example.
print("Done")
}
Running this I get the output:
[1] "Calculating the data-data pairs"
Another Thousand
Another Thousand
Another Thousand
Another Thousand
Another Thousand
Another Thousand
Another Thousand
Another Thousand
Another Thousand
Another Thousand
done
[1] "Calculating the data-random pairs"
Got just before parallel dr
Another thousand
Another thousand
And then it just sits there... Actually, running it a few times has shown that it changes where it hangs each time. Sometimes it gets most of the way through the second call to correlate_dd_rad, and other times it only gets halfway through the call to correlate_dr_rad.
I am not sure if this will solve your problem, but there is indeed a bug. In subroutine correlation_dd_rad, where you intended to close the parallel region, you actually put a plain comment. To be clear, the line that reads:
!OMP end parallel do
should be converted to:
!$OMP end parallel do
As side notes:
you don't need to use omp_lib if you don't call the library functions
you can use the atomic construct (see section 2.8.5 of the latest OpenMP specifications) to access a specific storage location atomically, instead of a critical construct (a minimal sketch follows after this list)
always give a name to critical constructs since, as section 2.8.2 of the specifications puts it:
All critical constructs without a name are considered to have the same unspecified name.
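As an illustration of the atomic note, here is a minimal, self-contained sketch of an atomic update on a shared counter, which is the same pattern as the progress counter in the routines above (the demo program is purely illustrative; compile with -fopenmp):

program atomic_counter_demo
  implicit none
  integer :: i, counter
  counter = 0
  !$omp parallel do
  do i = 1, 1000
    ! a single read-modify-write of one storage location:
    ! cheaper than a named critical section
    !$omp atomic
    counter = counter + 1
  end do
  !$omp end parallel do
  write(*,*) 'counter =', counter   ! always 1000, whatever the thread count
end program atomic_counter_demo

If the updated value also has to be read back (as in the counter==0 test in the original loops), an atomic construct with the capture clause can return it.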