difference between exponential and log functions for numpy and math

This sounds like a naive question, but I can't figure out why there are two instances of functions like e, log, etc., one each for numpy and math. For example, numpy.e and math.e give me exactly the same result, 2.71828.... What's the reason for this duplication?

numpy functions are called ufuncs; you can use them on numpy arrays:
>>> import numpy
>>> numpy.exp(numpy.array([1, 2, 3]))
array([ 2.71828183, 7.3890561 , 20.08553692])
math functions are standard functions (part of the Python standard library), so they can be used only on standard types (such as int or float).
numpy functions are much more powerful than the math ones (when working on vectors, matrices, etc.), but numpy is not part of the standard library.
If you check the type of the exp function, you get the following:
>>> type(numpy.exp)
numpy.ufunc
>>> import math
>>> type(math.exp)
builtin_function_or_method
Where you can see that numpy has defined its own exp function, whereas the math.exp function is builtin.
You cannot use them interchangeably at will: numpy.exp will work where math.exp works, but the converse is not true (math.exp([1, 2, 3]) fails).
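For example (the exact TypeError wording below is from Python 3; older versions phrase it differently):
>>> math.exp([1, 2, 3])
Traceback (most recent call last):
  ...
TypeError: must be real number, not list
>>> numpy.exp([1, 2, 3])
array([ 2.71828183, 7.3890561 , 20.08553692])
Note that numpy.exp accepts a plain list as well, converting it to an array first.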

Related

How to convert a list or numpy array to a 1d torch tensor?

I have a list (or a numpy array) of float values. I want to create a 1d torch tensor that will contain all those values. I could create the torch tensor and run a loop to store the values, but I want to know whether there is any way to create a torch tensor with initial values from a list or array. Also, please suggest a pythonic way to achieve this, as I am working in pytorch.
These are general operations in pytorch and are covered in the documentation. PyTorch allows easy interfacing with numpy through a function called from_numpy; see the PyTorch documentation for details.
import numpy as np
import torch
array = np.arange(1, 11)          # numpy array [1, 2, ..., 10]
tensor = torch.from_numpy(array)  # 1d tensor that shares memory with `array`
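If you start from a plain Python list rather than a numpy array, torch.tensor builds the tensor directly (it copies the values, whereas from_numpy shares memory with the source array):
values = [1.0, 2.0, 3.0]
tensor = torch.tensor(values)  # new 1d float tensor copied from the list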

TensorFlow apply a function to each row of a matrix variable

Hi, I'm a newbie to TensorFlow. What I want to do is something like this in R:
mat = tf$Variable(matrix(1:4, nrow = 2))
apply(mat, 1, cumprod)
Is this doable in TensorFlow, either in the Python API or the R tensorflow package? Thanks!
EDIT: tf$cumprod is actually what I want.
The TensorFlow Python API includes the tf.map_fn(fn, elems) higher-order operator, which allows you to specify a (Python) function fn that will be applied to each slice of elems in the 0th dimension (i.e. to each row if elems is a matrix).
Note that, while tf.map_fn() is very general, it may be more efficient to use specialized ops that either broadcast their arguments on one or more dimensions (e.g. tf.multiply()), or reduce in parallel across one or more dimensions (e.g. tf.reduce_sum()). However, tf.map_fn() is useful when there is no built-in operator to do what you want.
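As a rough sketch of the Python equivalent (written against the TF 2.x API, so tf.math.cumprod here stands in for the tf$cumprod mentioned in the question):
import tensorflow as tf

# R's matrix(1:4, nrow = 2) fills column-major, so the rows are [1, 3] and [2, 4]
mat = tf.constant([[1, 3], [2, 4]])
# tf.map_fn applies the function to each slice along dimension 0, i.e. each row
row_cumprod = tf.map_fn(tf.math.cumprod, mat)  # [[1, 3], [2, 8]]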

How to optimize with integer parameters in OpenMDAO

I am currently trying to do some optimization for locations on a map using OpenMDAO 1.7.2. The (preexisting) modules that do the calculations only support integer coordinates (resolution of one meter).
For now I am optimizing using an IndepVarComp for each direction, each containing a float vector. These values are then rounded before use, but this is quite inefficient because the solver mainly tries variations smaller than one.
When I attempt to initialize an IndepVarComp with an integer vector, the first iteration works fine (it uses the initial values), but the second iteration fails because the data in the IndepVarComp is set to an empty ndarray.
Looking through the OpenMDAO source code I found out that this is because
indep_var_comp._init_unknowns_dict['x']['size'] == 0
which happens in Component's _add_variable() method whenever the data type is not differentiable.
Here is an example problem which illustrates how defining an integer IndepVarComp fails:
from openmdao.api import Component, Group, IndepVarComp, Problem, ScipyOptimizer
INITIAL_X = 1
class ResultCalculator(Component):
    def __init__(self):
        super(ResultCalculator, self).__init__()
        self.add_param('x', INITIAL_X)
        self.add_output('y', 0.)

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns['y'] = (params['x'] - 3) ** 2 - 4
problem = Problem()
problem.root = Group()
problem.root.add('indep_var_comp', IndepVarComp('x', INITIAL_X))
problem.root.add('calculator', ResultCalculator())
problem.root.connect('indep_var_comp.x', 'calculator.x')
problem.driver = ScipyOptimizer()
problem.driver.options['optimizer'] = 'COBYLA'
problem.driver.add_desvar('indep_var_comp.x')
problem.driver.add_objective('calculator.y')
problem.setup()
problem.run()
Which fails with
ValueError: setting an array element with a sequence.
Note that everything works out fine if I set INITIAL_X = 0.
How am I supposed to optimize for integers?
If you want to use integer variables, you need to pick a different kind of optimizer. You won't be able to force COBYLA to respect their integrality. Additionally, if you have integer rounding causing discontinuities in your analyses, then you really can't use COBYLA (or any other continuous optimizer) at all: they all make a fundamental assumption about the smoothness of the function which you would be violating.
It sounds like you should consider using a particle swarm or genetic algorithm for your problem. Alternatively, you could focus on making the analyses smooth and differentiable and scale some of your inputs to get a more reasonable resolution. You can also loosen the convergence tolerance of the optimizer so that it stops iterating once changes drop below physical significance in your design variables.
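For completeness, the example in the question does run once the design variable is declared as a float (mirroring the observation that INITIAL_X = 0. works), with the rounding pushed inside the component. This is only a sketch of that workaround, and it reintroduces exactly the discontinuity warned about above:
INITIAL_X = 1.  # a float, so OpenMDAO allocates a proper differentiable ndarray

class ResultCalculator(Component):
    def __init__(self):
        super(ResultCalculator, self).__init__()
        self.add_param('x', INITIAL_X)
        self.add_output('y', 0.)

    def solve_nonlinear(self, params, unknowns, resids):
        x = round(float(params['x']))  # snap to the one-meter grid
        unknowns['y'] = (x - 3) ** 2 - 4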

Python pow operation without math import

My Python script uses the pow operation. During a tidy-up some days ago, I removed the import math statement. I've just realised my pow operation is still working.
How can this be?
Ah, I get it. Python has a builtin pow function which works slightly differently from the math.pow one. It accepts an optional third parameter, in which case it returns the power reduced modulo that parameter, i.e. pow(x, y, z) == (x ** y) % z.
I was never using the math library version at all. I should have realised because I had never used the dot notation to get to the function!
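To spell out the difference:
import math

print(pow(2, 10))       # 1024   -- built-in pow; int arguments give an int
print(pow(2, 10, 100))  # 24     -- three-argument form: (2 ** 10) % 100
print(math.pow(2, 10))  # 1024.0 -- math.pow always returns a float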

numerical differentiation with Scipy

I was trying to learn Scipy, using it for mixed integrations and differentiations, but at the very first step I encountered the following problems.
For numerical differentiation, it seems that the only Scipy function that works on callable functions is scipy.derivative(), if I'm right? However, I couldn't work with it:
1st) when I am not going to specify the point at which the differentiation is to be taken, e.g. when the differentiation is under an integral, so that it is the integral that should assign the numerical values to its integrand's variable, not me. As a simple example I tried this code in Sage's notebook:
import scipy as sp
from scipy import integrate, derivative
var('y')
f=lambda x: 10^10*sin(x)
g=lambda x,y: f(x+y^2)
I=integrate.quad( sp.derivative(f(y),y, dx=0.00001, n=1, order=7) , 0, pi)[0]; show(I)
show( integral(diff(f(y),y),y,0,1).n() )
It also gives the warning "Warning: The occurrence of roundoff error is detected, which prevents the requested tolerance from being achieved. The error may be underestimated.", and I don't know what this warning means, as it persists even when increasing dx and decreasing the order.
2nd) when I want to find the derivative of a multivariable function like g(x,y) in the above example, something like sp.derivative(g(x,y), (x, 0.5), dx=0.01, n=1, order=3) gives an error, as is easily expected.
Looking forward to hearing from you about how to resolve the above cited problems with numerical differentiation.
Best Regards
There are some strange problems with your code that suggest you need to brush up on some python! I don't know how you even made these definitions in python since they are not legal syntax.
First, I think you are using an older version of scipy. In recent versions (at least from 0.12+) you need from scipy.misc import derivative. derivative is not in the scipy global namespace.
Second, var is not defined, although it is not necessary anyway (I think you meant to import sympy first and use sympy.var('y')). sin has also not been imported from math (or numpy, if you prefer). show is not a valid function in sympy or scipy.
^ is not the power operator in python. You meant **
You seem to be mixing up the idea of symbolic and numeric calculus operations here. scipy won't numerically differentiate an expression involving a symbolic object -- the second argument to derivative is supposed to be the point at which you wish to take the derivative (i.e. a number). As you say you are trying to do numeric differentiation, I'll resolve the issue for that purpose.
from scipy import integrate
from scipy.misc import derivative
from math import *  # provides sin and pi

f = lambda x: 10**10*sin(x)
# numerical derivative of f at x via central differences
df = lambda x: derivative(f, x, dx=0.00001, n=1, order=7)
I = integrate.quad(df, 0, pi)[0]
Now, this last expression generates the warning you mentioned, and the value returned, -0.0731642869874073, is not very close to zero in absolute terms, although that's not bad relative to the scale of f. You have to appreciate the issues of roundoff error in finite differencing. Your function f varies between 0 and 10^10 on your interval! It probably seems paradoxical, but making the dx value for differentiation too small can actually magnify roundoff error and cause numerical instability. See the second graph here ("Example showing the difficulty of choosing h due to both rounding error and formula error") for an explanation: http://en.wikipedia.org/wiki/Numerical_differentiation
In fact, in this case, you need to increase it, say to 0.001: df = lambda x: derivative(f, x, dx=0.001, n=1, order=7)
Then, you can integrate safely, with no terrible roundoff.
I = integrate.quad(df, 0, pi)[0]
I don't recommend throwing away the second return value from quad. It's an important verification of what happened, as it is "an estimate of the absolute error in the result". In this case, I == 0.0012846582250212652 and the abs error is ~ 0.00022, which is not bad (the interval that implies still does not include zero). Maybe some more fiddling with the dx and absolute tolerances for quad will get you an even better solution, but hopefully you get the idea.
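Concretely, keep both return values rather than indexing the first one away:
I, abserr = integrate.quad(df, 0, pi)  # integral value and estimated absolute error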
For your second problem, you simply need to create a proper scalar function (call it gx) that represents g(x,y) along y=0.5 (fixing an argument like this is partial application, often loosely called currying in computer science).
g = lambda x, y: f(x+y**2)
gx = lambda x: g(x, 0.5)
derivative(gx, 0.2, dx=0.01, n=1, order=3)
gives you a value of the derivative at x=0.2. Naturally, the value is huge given the scale of f. You can integrate using quad like I showed you above.
If you want to be able to differentiate g itself, you need a different numerical differentiation function. I don't think scipy or numpy supports this, although you could hack together a central difference calculation by making a fine 2D mesh (of spacing dx) and using numpy.gradient. There are probably other library solutions that I'm not aware of, but I know my PyDSTool software contains a function diff that will do this (if you rewrite g to take one array argument instead). It uses Ridder's method and is inspired by the Numerical Recipes pseudocode.
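A minimal sketch of that numpy.gradient hack, assuming the g from above and arbitrary illustrative mesh bounds and spacing:
import numpy as np

dx = 0.01
xs = np.arange(0.0, 1.0, dx)
ys = np.arange(0.0, 1.0, dx)
X, Y = np.meshgrid(xs, ys, indexing='ij')  # axis 0 is x, axis 1 is y
G = 10**10 * np.sin(X + Y**2)              # g evaluated on the whole mesh
dG_dx, dG_dy = np.gradient(G, dx, dx)      # central-difference partials of g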
