How to work with the result of a SymPy Wild (wildcard) match

I have the following code:
f = tan(x)*x**2
q = Wild('q')
s = f.match(tan(q))
# s == {q_: x}
How do I work with the result of the Wild match? How do I access the matched value without addressing it like an array, for example s[0]?

Wild can be used when you have an expression that is the result of some complicated calculation, but you know it has to be of the form tan(something) times something else. Then s[q] will be the SymPy expression for the "something", and s[p] for the "something else". This can be used to investigate both parts, or to work further with a simplified version of f, substituting p and q with new variables, which is especially useful when p and q are complex expressions involving multiple variables.
Many more use cases are possible.
Here is an example:
from sympy import *
from sympy.abc import x, y, z
p = Wild('p')
q = Wild('q')
f = tan(x) * x**2
s = f.match(p*tan(q))
print(f'f is the tangent of "{s[q]}" multiplied by "{s[p]}"')
g = f.xreplace({s[q]: y, s[p]: z})
print(f'f rewritten in simplified form as a function of y and z: "{g}"')
h = s[p] * s[q]
print(f'a new function h, combining parts of f: "{h}"')
Output:
f is the tangent of "x" multiplied by "x**2"
f rewritten in simplified form as a function of y and z: "z*tan(y)"
a new function h, combining parts of f: "x**3"
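Note that match returns None when the pattern does not fit the expression, so robust code should guard the lookup before indexing; a minimal sketch, reusing p, q and f from above:
s = f.match(p*tan(q))
if s is not None:
    print(s[q])  # only index into s after a successful match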
If you're interested in all the arguments of tan that appear in f when f is written as a product, you might try:
from sympy import *
from sympy.abc import x
f = tan(x+2)*tan(x*x+1)*7*(x+1)*tan(1/x)
if f.func == Mul:
    all_tan_args = [a.args[0] for a in f.args if a.func == tan]
    # note: the [0] is needed because args gives a tuple of arguments, and
    # in the case of tan you'd want the first (there is only one)
elif f.func == tan:
    all_tan_args = [f.args[0]]
else:
    all_tan_args = []
prod = 1
for a in all_tan_args:
    prod *= a
print(f'All the tangent arguments are: {all_tan_args}')
print(f'Their product is: {prod}')
Output:
All the tangent arguments are: [1/x, x**2 + 1, x + 2]
Their product is: (x + 2)*(x**2 + 1)/x
Note that neither method would work for f = tan(x)**2. For that, you'd need to write another match and decide whether you'd want to take the same power of the arguments.
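A minimal sketch of such a match, adding a third wildcard for the exponent (the result shown in the comment is an expectation, not verified output):
from sympy import *
from sympy.abc import x

p, q, r = Wild('p'), Wild('q'), Wild('r')
f = tan(x)**2
s = f.match(p * tan(q)**r)
print(s)  # expected: {p_: 1, q_: x, r_: 2}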

Related

Julia: Why is ridge regression not working (Optim)?

I am trying to implement ridge regression from scratch in Julia, but something is going wrong.
# Imports
using CSV
using DataFrames
using LinearAlgebra: norm, I
using Optim: optimize, LBFGS, minimizer
# Read Data
out = CSV.read(download("https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv"), DataFrame, header=0)
# Separate features and response
y = Vector(out[:, end])
X = Matrix(out[:, 1:(end-1)])
λ = 0.1
# Functions
loss(beta) = norm(y - X * beta)^2 + λ*norm(beta)^2
function grad!(G, beta)
    G = -2*transpose(X) * (y - X * beta) + 2*λ*beta
end
function hessian!(H, beta)
    H = X'X + λ*I
end
# Optimization
start = randn(13)
out = optimize(loss, grad!, hessian!, start, LBFGS())
However, the result of this is terrible: we essentially get back start, since the optimizer is not moving. Of course, I know I could simply use (X'X + λ*I) \ X'y or IterativeSolvers.lsmr(X, y), but I would like to implement this myself.
The problem is with the implementation of the grad! and hessian! functions: you should use dot (broadcast) assignment to change the contents of the G and H arrays in place:
G .= -2*transpose(X) * (y - X * beta) + 2*λ*beta
H .= X'X + λ*I
Without the dot, the assignment merely rebinds the function parameter to a freshly allocated array; the array actually passed to the function (which the optimizer then reads) remains unchanged, presumably all zeros, which is why you got back the start vector.

Reassign function and avoid recursive definition in Julia

I need to operate on a sequence of functions
h_k(x) = (I + f_k)^k g(x)
for each k = 1, ..., N, where I is the identity and the k-th power denotes k-fold composition.
A basic example (N=2, f_k=f) is the following:
f(x) = x^2
g(x) = x
h1(x) = g(x) + f(g(x))
h2(x) = g(x) + f(g(x)) + f(g(x) + f(g(x)))
println(h1(1)) # returns 2
println(h2(1)) # returns 6
I need to write this in a loop and it would be best to redefine g(x) at each iteration. Unfortunately, I do not know how to do this in Julia without conflicting with the syntax for a recursive definition of g(x). Indeed,
f(x) = x^2
g(x) = x
for i=1:2
    global g(x) = g(x) + f(g(x))
    println(g(1))
end
results in a StackOverflowError.
In Julia, what is the proper way to redefine g(x), using its previous definition?
P.S. For those who would suggest that this problem could be solved with recursion: I want to use a for loop because of how the functions f_k(x) (in the above, each f_k = f) are computed in the real problem that this derives from.
I am not sure if it is best, but a natural approach is to use anonymous functions here like this:
let
    f(x) = x^2
    g = x -> x
    for i=1:2
        l = g
        g = x -> l(x) + f(l(x))
        println(g(1))
    end
end
or like this
f(x) = x^2
g = x -> x
for i=1:4
    l = g
    global g = x -> l(x) + f(l(x))
    println(g(1))
end
(I prefer the former option using let, as it avoids global variables.)
The crucial point is that l is a loop-local variable that gets a fresh binding at each iteration, so each new closure captures the previous g rather than referring to itself, while g itself lives outside the loop.
You might also check out this section of the Julia manual.

How do I display a math function in Julia?

I'm new to Julia and I'm trying to learn how to do calculus with it. If I compute the gradient of a function with ForwardDiff, as in the code below, how can I then display the resulting function?
I know that if I plug in some values it gives me the value of the gradient at that point, but I just want to see the function itself (the gradient of f1).
julia> gradf1(x1, x2) = ForwardDiff.gradient(z -> f1(z[1], z[2]), [x1, x2])
gradf1 (generic function with 1 method)
To elaborate on Felipe Lema's comment, here are some examples using SymPy.jl for various tasks:
@vars x y z
f(x,y,z) = x^2 * y * z
VF(x,y,z) = [x*y, y*z, z*x]
diff(f(x,y,z), x) # ∂f/∂x
diff.(f(x,y,z), [x,y,z]) # ∇f, gradient
diff.(VF(x,y,z), [x,y,z]) |> sum # ∇⋅VF, divergence
J = VF(x,y,z).jacobian([x,y,z])
sum(diag(J)) # ∇⋅VF, divergence
Mx,Nx, Px, My,Ny,Py, Mz, Nz, Pz = J
[Py-Nz, Mz-Px, Nx-My] # ∇×VF
The divergence and gradient are also part of SymPy, but not exposed. Their use is more general, but cumbersome for this task. For example, this finds the curl:
import PyCall
PyCall.pyimport_conda("sympy.physics.vector", "sympy")
RF = sympy.physics.vector.ReferenceFrame("R")
v1 = get(RF,0)*get(RF,1)*RF.x + get(RF,1)*get(RF,2)*RF.y + get(RF,2)*get(RF,0)*RF.z
sympy.physics.vector.curl(v1, RF)
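For comparison, pure-Python SymPy also exposes these operators through its sympy.vector module; a minimal sketch (assuming a reasonably recent SymPy version, and untested here):
from sympy.vector import CoordSys3D, curl, divergence
R = CoordSys3D('R')
v = R.x*R.y*R.i + R.y*R.z*R.j + R.z*R.x*R.k  # same field as v1 above
print(curl(v))        # expected: (-R.y)*R.i + (-R.z)*R.j + (-R.x)*R.k
print(divergence(v))  # expected: R.x + R.y + R.z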

In Z3Py, prove returns no counterexample

How can Z3 return a valid counterexample?
The following code
from z3 import *
set_param(proof=True)
x = Real('x')
f = ForAll(x, x * x > 0)
prove(f)
outputs counterexample [].
I don't have to use prove, but I want to find a valid counterexample to a formula like f in the example. How can I do it?
To get a model, you should really use check, and assert the negation of your formula in a solver context. Note that in your version x is bound by the ForAll, so even when the negation is satisfiable there is no free variable for the model to assign, which is why the reported counterexample is empty. Drop the quantifier and negate the body instead:
from z3 import *
s = Solver()
x = Real('x')
f = x * x > 0
# Add negation of our formula
# So, if it's not valid, we'll get a model
s.add(Not(f))
print(s.check())
print(s.model())
This produces:
sat
[x = 0]
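This pattern can be wrapped into a small reusable helper; counterexample is a hypothetical name, and the sketch below just packages the steps above:
from z3 import *

def counterexample(formula):
    # A model of Not(formula) is exactly a counterexample to formula.
    s = Solver()
    s.add(Not(formula))
    if s.check() == sat:
        return s.model()
    return None  # no counterexample found: the formula is valid

x = Real('x')
print(counterexample(x * x > 0))   # e.g. [x = 0]
print(counterexample(x * x >= 0))  # None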

Symbolic math in sage: (a * x * y + z).subs(x + y == b)

Given a simple expression such as a * x * y + c, I would like to substitute a sub-term, such as x * y, by a placeholder b. I do this the following way:
sage: a,b,c,x,y = var('a,b,c,x,y')
sage: expr = a * x * y + c
sage: expr.subs(x * y == b)
From that, I would expect expr to be a * b + c. Instead, it remains the same. The result is:
a*x*y + c
I've come across the wild function, but it has not become clear to me what it actually does.
I now use sympy in Python 3.6 rather than Sage, but this should be similar. Let me know if the translation to Sage doesn't work well.
The subs method does not change the object it is called on; it returns a new expression with the substitution applied, and you have to store that result. Your line expr.subs(x * y == b) may display the substituted expression, but the result is then thrown away because you did not assign it to any variable.
from sympy import symbols
a,b,c,x,y = symbols('a,b,c,x,y')
expr = a * x * y + c
newexpr = expr.subs(x*y, b)
print(newexpr)
The resulting printout is as you expect:
a*b + c
For confirmation of how subs() works in Sage, find subs( in this Sage documentation page and note the phrase "The polynomial itself is not affected."
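Translated back to Sage, the fix should be the same, i.e. store the returned expression (an untested sketch):
expr = expr.subs(x * y == b)  # rebind expr to the substituted result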
