function for computing bicoherence - math

Dear all
I'm looking for a numpy/scipy function to compute the bicoherence and auto-bicoherence for studying 3-wave interactions.
Thank you for any help you can offer.
nicola

The best package for this in Python land is http://pypi.python.org/pypi/nitime
It has several coherence estimators, but I didn't look at those very carefully. It is a neuroimaging package, but the algorithms intentionally depend only on numpy and scipy, so they can be used by other applications.

Perhaps this Matlab toolbox will help; it's quite easy to translate Matlab into Python, generally.

Here is a function that relies on scipy.signal.spectrogram (SciPy >= 0.17) and computes the bicoherence between two signals.
The definition follows Hagihira 2001 and Hayashi 2007; see also the Wikipedia article on bicoherence.
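For reference, in the notation of those definitions, the quantity computed by the function below is

    b2(f1, f2) = |< S1(f1) S1(f2) S2*(f1 + f2) >|^2 / ( < |S1(f1) S1(f2)|^2 > < |S2(f1 + f2)|^2 > )

where S1 and S2 are the complex spectrograms (short-time Fourier transforms) of the two signals and < . > denotes the average over the spectrogram time segments.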
Hope this helps.
Regards,
def compute_bicoherence(s1, s2, rate, nperseg=1024, noverlap=512):
    """Compute the bicoherence between two signals of the same length, s1 and s2,
    using scipy.signal.spectrogram.
    """
    from scipy import signal
    import numpy

    # compute the short-time Fourier transforms
    f1, t1, spec_s1 = signal.spectrogram(s1, fs=rate, nperseg=nperseg,
                                         noverlap=noverlap, mode='complex')
    f2, t2, spec_s2 = signal.spectrogram(s2, fs=rate, nperseg=nperseg,
                                         noverlap=noverlap, mode='complex')

    # transpose (f, t) -> (t, f)
    spec_s1 = numpy.transpose(spec_s1, [1, 0])
    spec_s2 = numpy.transpose(spec_s2, [1, 0])

    # compute the bicoherence (integer division so the indices stay integral)
    arg = numpy.arange(f1.size // 2)
    sumarg = arg[:, None] + arg[None, :]
    num = numpy.abs(
        numpy.mean(spec_s1[:, arg, None] * spec_s1[:, None, arg]
                   * numpy.conjugate(spec_s2[:, sumarg]), axis=0)
    ) ** 2
    denum = numpy.mean(
        numpy.abs(spec_s1[:, arg, None] * spec_s1[:, None, arg]) ** 2, axis=0
    ) * numpy.mean(
        numpy.abs(numpy.conjugate(spec_s2[:, sumarg])) ** 2, axis=0
    )
    bicoh = num / denum
    return f1[arg], bicoh
# example of use and display (s1 and s2 are the two signals, sampled at `rate`)
import matplotlib.pyplot as plt

freqs, bicoh = compute_bicoherence(s1, s2, rate)
f = plt.figure(figsize=(9, 9))
plt.pcolormesh(freqs, freqs, bicoh,
               # cmap='inferno',
               )
plt.colorbar()
plt.clim(0, 0.5)
plt.show()

If you mean the normalized cross-spectral density (as defined on Wikipedia), then matplotlib.mlab.cohere would do the trick.
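For instance, a minimal sketch (assuming two equal-length signals sampled at fs; scipy.signal.coherence is an alternative with a similar interface):

import numpy as np
import matplotlib.mlab as mlab

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 50 * t + 0.3) + 0.5 * np.random.randn(t.size)

# magnitude-squared coherence estimate between x and y
Cxy, freqs = mlab.cohere(x, y, NFFT=256, Fs=fs, noverlap=128)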

Related

MethodError: some problems with JuMP in Julia

I have some problems with JuMP. When I run it, it says:
MethodError: no method matching (::Interpolations.Extrapolation{Float64, 1, ScaledInterpolation{Float64, 1, Interpolations.BSplineInterpolation{Float64, 1, Vector{Float64}, BSpline{Linear{Throw{OnGrid}}}, Tuple{Base.OneTo{Int64}}}, BSpline{Linear{Throw{OnGrid}}}, Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}}}, BSpline{Linear{Throw{OnGrid}}}, Throw{Nothing}})(::AffExpr)
Use square brackets [] for indexing an Array.
thanks!
using JuMP
import Ipopt
β = 0.88
Nb = 1000
δ = 1.5
wage = 1
rate = 1
grid_b = range(0, 5, length = 1000)
w = 5 * (grid_b).^2
w_func = LinearInterpolation(grid_b, w)
choice1 = Model(Ipopt.Optimizer)
@variable(choice1, x >= 0)
@NLobjective(choice1, Max, x^δ/(1-δ) + β * (w_func.((grid_b[3]*(1+rate)+wage-x) * 3)))
optimize!(choice1)
If I try to run your code, I get
ERROR: UndefVarError: LinearInterpolation not defined
which package is that from? Also, what version of JuMP are you using?
You can’t use arbitrary functions in JuMP. You need to use a user-defined function:
https://jump.dev/JuMP.jl/stable/manual/nlp/#User-defined-Functions
But this will only work if it’s possible to automatically differentiate the function. I don’t know if that works with Interpolations.jl.
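Roughly, registering a user-defined scalar function looks like this (a sketch following the linked manual section; my_w is a placeholder introduced here, standing in for the interpolated value function, and whether automatic differentiation works through an Interpolations.jl object is exactly the open question above):

using JuMP
import Ipopt

β, δ = 0.88, 1.5
my_w(z) = 5 * z^2                      # placeholder for the interpolated value function

model = Model(Ipopt.Optimizer)
@variable(model, 0 <= x <= 10)

# register the scalar function so it can appear inside @NLobjective;
# autodiff = true asks JuMP to differentiate it with ForwardDiff
register(model, :my_w, 1, my_w; autodiff = true)

@NLobjective(model, Max, x^δ / (1 - δ) + β * my_w(10 - x))
optimize!(model)
value(x)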
p.s. Please provide a link when posting the same question in multiple places: https://discourse.julialang.org/t/methoderror-no-method-matching-in-jump/66543/2

ST-HOSVD in Julia

I am trying to implement the ST-HOSVD algorithm in Julia because I could not find a library that provides it.
See Algorithm 1 on page 7 of this paper:
https://people.cs.kuleuven.be/~nick.vannieuwenhoven/papers/01-STHOSVD.pdf
I cannot reproduce the (4,4,4,4) input tensor from the approximated tensor of Tucker rank (2,2,2,2).
I think I have a mistake in the matrix or tensor indexing, but I could not locate it.
How can I fix it?
If you know of a library that implements ST-HOSVD, please let me know.
ST-HOSVD is a really common way to compress tensors, so I hope this question helps many Julia users.
using TensorToolbox
using LinearAlgebra   # for svd and diagm

function STHOSVD(A, reqrank)
    N = ndims(A)
    S = copy(A)
    Sk = undef
    Uk = []
    for k = 1:N
        if k == 1
            Sk = tenmat(S, k)
        end
        Sk_svd = svd(Sk)
        U1 = Sk_svd.U[:, 1:reqrank[k]]
        V1t = Sk_svd.V[1:reqrank[k], :]
        Sigma1 = diagm(Sk_svd.S[1:reqrank[k]])
        Sk = Sigma1 * V1t
        push!(Uk, U1)
    end
    X = ttm(Sk, Uk[1], 1)
    for k = 2:N
        X = ttm(X, Uk[k], k)
    end
    return X
end

A = rand(4, 4, 4, 4)
X = STHOSVD(A, [2, 2, 2, 2])
EDIT
Here, Sk = tenmat(S, k) is the mode-k matricization of the tensor S:
S ∈ R^{I_1×I_2×…×I_N},  S_k ∈ R^{I_k×(Π_{m≠k} I_m)}.
The function is provided by TensorToolbox.jl; see "Basis" in the README.
The definition of mode-k matricization is given in the paper on page 460.
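For example (a quick shape check, assuming TensorToolbox's tenmat behaves as described above):

using TensorToolbox

S = rand(2, 3, 4, 5)
S3 = tenmat(S, 3)   # mode-3 matricization
size(S3)            # (4, 30), i.e. (I_3, I_1 * I_2 * I_4)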
It works. I followed page 26 of this slide deck.
using TensorToolbox
using LinearAlgebra
using Arpack

function STHOSVD(T, reqrank)
    N = ndims(T)
    tensor_shape = size(T)
    for i = 1:N
        # mode-i matricization of the current tensor
        T_i = tenmat(T, i)
        if reqrank[i] == tensor_shape[i]
            USV = svd(T_i)
        else
            # truncated SVD keeping reqrank[i] singular triplets
            USV = svds(T_i; nsv = reqrank[i])[1]
        end
        # project mode i onto the span of the leading left singular vectors
        T = ttm(T, USV.U * USV.U', i)
    end
    return T
end
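A quick check against the (4,4,4,4) example from the question (the relative error depends on the random tensor, so the printed number is only illustrative):

using LinearAlgebra

A = rand(4, 4, 4, 4)
X = STHOSVD(A, [2, 2, 2, 2])      # rank-(2,2,2,2) Tucker approximation of A
println(norm(A - X) / norm(A))    # relative approximation error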

Using bias in PyTorch for basic function approximation

Using R, it is very easy to approximate basic functions through a neural network:
library(nnet)
x <- sort(10*runif(50))
y <- sin(x)
nn <- nnet(x, y, size=4, maxit=10000, linout=TRUE, abstol=1.0e-8, reltol = 1.0e-9, Wts = seq(0, 1, by=1/12) )
plot(x, y)
x1 <- seq(0, 10, by=0.1)
lines(x1, predict(nn, data.frame(x=x1)), col="green")
predict( nn , data.frame(x=pi/2) )
A simple neural network with one hidden layer of a mere 4 neurons is sufficient to approximate a sine. (As per stackoverflow question Approximating function with Neural Network.)
But I cannot obtain the same result in PyTorch.
In fact, the neural network created by R contains not only an input neuron, four hidden neurons, and an output neuron, but also two "bias" neurons: the first connected to the hidden layer, the second to the output.
The plot above is obtained through the following:
library(devtools)
library(scales)
library(reshape)
source_url('https://gist.github.com/fawda123/7471137/raw/cd6e6a0b0bdb4e065c597e52165e5ac887f5fe95/nnet_plot_update.r')
plot.nnet(nn$wts,struct=nn$n, pos.col='#007700',neg.col='#FF7777') ### this plots the graph
plot.nnet(nn$wts,struct=nn$n, pos.col='#007700',neg.col='#FF7777', wts.only=1) ### this prints the weights
Attempting the same with PyTorch produces a different network: the bias neurons are missing.
Below is an attempt to do in PyTorch what was done above in R. The results are not satisfactory: the function is not approximated. The most evident difference is the absence of the bias neurons.
import torch
from torch.autograd import Variable
import random
import math

N, D_in, H, D_out = 1000, 1, 4, 1

l_x = []
l_y = []
for a in range(1000):
    r = random.random() * 10
    l_x.append([r])
    l_y.append([math.sin(r)])

tx = torch.cuda.FloatTensor(l_x)
ty = torch.cuda.FloatTensor(l_y)
x = Variable(tx, requires_grad=False)
y = Variable(ty, requires_grad=False)

w1 = Variable(torch.randn(D_in, H).type(torch.cuda.FloatTensor), requires_grad=True)
w2 = Variable(torch.randn(H, D_out).type(torch.cuda.FloatTensor), requires_grad=True)

learning_rate = 1e-5
for t in range(1000):
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    loss = (y_pred - y).pow(2).sum()
    if t < 10 or t % 100 == 1:
        print(t, loss.data[0])
    loss.backward()
    w1.data -= learning_rate * w1.grad.data
    w2.data -= learning_rate * w2.grad.data
    w1.grad.data.zero_()
    w2.grad.data.zero_()

t = [[math.pi]]
print(str(t) + " -> " + str(Variable(torch.cuda.FloatTensor(t)).mm(w1).clamp(min=0).mm(w2).data))
t = [[math.pi / 2]]
print(str(t) + " -> " + str(Variable(torch.cuda.FloatTensor(t)).mm(w1).clamp(min=0).mm(w2).data))
How can I make the network approximate the given function (a sine, in this case), either by inserting the "bias" neurons or by fixing whatever other detail is missing?
Moreover, I have difficulty understanding why R inserts the "bias". I found information suggesting that the bias is akin to the intercept in a regression model, but I still don't find it clear. Any information would be appreciated.
EDIT: an excellent explanation turned out to be at stackoverflow question Role of Bias in Neural Networks
EDIT: an example that obtains the result, though using the "fuller" framework ("not reinventing the wheel"), is as follows:
import torch
from torch.autograd import Variable
import torch.nn.functional as F
import math

N, D_in, H, D_out = 1000, 1, 4, 1

l_x = []
l_y = []
for a in range(1000):
    t = (a / 1000.0) * 10
    l_x.append([t])
    l_y.append([math.sin(t)])

x = Variable(torch.FloatTensor(l_x))
y = Variable(torch.FloatTensor(l_y))

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        self.to_hidden = torch.nn.Linear(n_feature, n_hidden)
        self.to_output = torch.nn.Linear(n_hidden, n_output)

    def forward(self, x):
        x = self.to_hidden(x)
        x = F.tanh(x)  # activation function
        x = self.to_output(x)
        return x

net = Net(n_feature=D_in, n_hidden=H, n_output=D_out)
learning_rate = 0.01
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)

for t in range(1000):
    y_pred = net(x)
    loss = (y_pred - y).pow(2).sum()
    if t < 10 or t % 100 == 1:
        print(t, loss.data[0])
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

t = [[math.pi]]
print(str(t) + " -> " + str(net(Variable(torch.FloatTensor(t)))))
t = [[math.pi / 2]]
print(str(t) + " -> " + str(net(Variable(torch.FloatTensor(t)))))
Unfortunately, while this code works properly, it does not solve the matter of making the original, more "low level" code work as expected (e.g. introducing the bias).
Following up on @jdhao's comment, this is a super-simple PyTorch model that computes exactly what you want:
import torch
import torch.nn as nn

class LinearWithInputBias(nn.Linear):
    def __init__(self, in_features, out_features, out_bias=True, in_bias=True):
        nn.Linear.__init__(self, in_features, out_features, out_bias)
        if in_bias:
            in_bias = torch.zeros(1, out_features)
            # in_bias.normal_()  # if you want it to be randomly initialized
            self._out_bias = nn.Parameter(in_bias)

    def forward(self, x):
        out = nn.Linear.forward(self, x)
        try:
            out = out + self._out_bias
        except AttributeError:
            pass
        return out
However, there's an additional bug in your code: from what I can see, you don't train it, i.e. you do not call an optimizer (like torch.optim.SGD(mod.parameters())) before you delete the gradient information by calling grad.data.zero_().
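For reference, a minimal sketch of the question's low-level loop with explicit bias terms added (b1 and b2 are names introduced here; the rest mirrors the code above, on CPU for simplicity):

import torch
from torch.autograd import Variable
import math, random

N, D_in, H, D_out = 1000, 1, 4, 1
data = [[random.random() * 10] for _ in range(N)]
x = Variable(torch.FloatTensor(data))
y = Variable(torch.FloatTensor([[math.sin(v[0])] for v in data]))

w1 = Variable(torch.randn(D_in, H), requires_grad=True)
b1 = Variable(torch.zeros(1, H), requires_grad=True)        # bias of the hidden layer
w2 = Variable(torch.randn(H, D_out), requires_grad=True)
b2 = Variable(torch.zeros(1, D_out), requires_grad=True)    # bias of the output layer

learning_rate = 1e-4
for t in range(5000):
    # the bias rows are broadcast over the batch dimension
    y_pred = (x.mm(w1) + b1).clamp(min=0).mm(w2) + b2
    loss = (y_pred - y).pow(2).sum()
    loss.backward()
    for p in (w1, b1, w2, b2):
        p.data -= learning_rate * p.grad.data
        p.grad.data.zero_()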

TensorFlow: 6-layer CNN: OOM (uses 10 GB of GPU memory)

I am using the following code to run a 6-layer CNN with 2 FC layers on top (on a Tesla K80 GPU).
Somehow it consumes the entire 10 GB of GPU memory and dies out of memory. I know that I can reduce the batch_size and then run it, but I also want to run with 15 or 20 CNN layers. What is wrong with the following code, and why does it take all the memory? How should I run the code for a 15-layer CNN?
Code:
import os
import tensorflow as tf
import model

with tf.Graph().as_default() as g_train:
    filenames = tf.train.match_filenames_once(FLAGS.train_dir + '*.tfrecords')
    filename_queue = tf.train.string_input_producer(filenames, shuffle=True,
                                                    num_epochs=FLAGS.num_epochs)
    feats, labels = get_batch_input(filename_queue, batch_size=FLAGS.batch_size)
    ### feats size=(batch_size, 100, 50)
    logits = model.inference(feats, FLAGS.batch_size)
    loss = model.loss(logits, labels, feats)
    tvars = tf.trainable_variables()
    global_step = tf.Variable(0, name='global_step', trainable=False)

    # Add to the Graph operations that train the model.
    train_op = model.training(loss, tvars, global_step, FLAGS.learning_rate,
                              FLAGS.clip_gradients)
    # Add the Op to compare the logits to the labels during evaluation.
    eval_correct = model.evaluation(logits, labels, feats)
    summary_op = tf.merge_all_summaries()
    saver = tf.train.Saver(tf.all_variables(), max_to_keep=15)

    # The op for initializing the variables.
    init_op = tf.initialize_all_variables()
    sess = tf.Session()
    sess.run(init_op)
    summary_writer = tf.train.SummaryWriter(FLAGS.model_dir, graph=sess.graph)

    # Start input enqueue threads.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        step = 0
        while not coord.should_stop():
            _, loss_value = sess.run([train_op, loss])
            if step % 100 == 0:
                print('Step %d: loss = %.2f' % (step, loss_value))
                # Update the events file.
                summary_str = sess.run(summary_op)
                summary_writer.add_summary(summary_str, step)
            if (step == 0) or (step + 1) % 1000 == 0 or (step + 1) == FLAGS.max_steps:
                ckpt_model = os.path.join(FLAGS.model_dir, 'model.ckpt')
                saver.save(sess, ckpt_model, global_step=step)
                # saver.save(sess, FLAGS.model_dir, global_step=step)
            step += 1
    except tf.errors.OutOfRangeError:
        print('Done training for %d epochs, %d steps.' % (FLAGS.num_epochs, step))
    finally:
        coord.join(threads)
    sess.close()
###################### File model.py ####################
def conv2d(x, W, b, strides=1):
    # Conv2D wrapper, with bias and relu activation
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2, s=2):
    # MaxPool2D wrapper
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, s, s, 1], padding='SAME')

def inference(feats, batch_size):
    # feats size (batch_size, 100, 50, 1), batch_size=256
    conv1_w = tf.get_variable("conv1_w", [filter_size, filter_size, 1, 256],
                              initializer=tf.uniform_unit_scaling_initializer())
    conv1_b = tf.get_variable("conv1_b", [256])
    conv1 = conv2d(feats, conv1_w, conv1_b, 2)
    conv1 = maxpool2d(conv1, k=2, s=2)
    ### This was replicated for 6 layers, and the 2 FC layers were added
    return logits

def training(loss, train_vars, global_step, learning_rate, clip_gradients):
    # Add a scalar summary for the snapshot loss.
    tf.scalar_summary(loss.op.name, loss)
    grads, _ = tf.clip_by_global_norm(
        tf.gradients(loss, train_vars, aggregation_method=1), clip_gradients)
    optimizer = tf.train.AdamOptimizer(learning_rate)
    train_op = optimizer.apply_gradients(zip(grads, train_vars),
                                         global_step=global_step)
    return train_op
I am not too sure what the model Python library is. If it is something you wrote and you can change the settings of the optimizer, I would suggest the following, which I use in my own code:
train_step = tf.train.AdamOptimizer(learning_rate).minimize(cost, aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)
By default the aggregation_method is ADD_N, but if you change it to EXPERIMENTAL_ACCUMULATE_N or EXPERIMENTAL_TREE, this can save a lot of memory. The main memory hog in these programs is that TensorFlow must keep the output values of every neuron so that it can compute the gradients. Changing the aggregation_method helps a lot in my experience.
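Since your training() function builds the gradients itself via tf.gradients and apply_gradients, the equivalent change there would look roughly like this (a sketch against the code above; it just replaces the opaque aggregation_method=1 with an explicit AggregationMethod constant):

grads, _ = tf.clip_by_global_norm(
    tf.gradients(loss, train_vars,
                 aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N),
    clip_gradients)
optimizer = tf.train.AdamOptimizer(learning_rate)
train_op = optimizer.apply_gradients(zip(grads, train_vars), global_step=global_step)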
Also, by the way, I don't think there is anything wrong with your code; I can run out of memory on small conv-nets as well.

Minimising log function in cvxpy

I am trying to simulate an exact line search experiment using CVXPY.
objective = cvx.Minimize(func(x+s*grad(x)))
s = cvx.Variable()
constraints = [ s >= 0]
prob = cvx.Problem(objective, constraints)
obj = cvx.Minimize(prob)
(cvxbook, Boyd, p. 472)
The above is my objective function, where func is defined as:
def func(x):
    np.random.seed(1235813)
    A = np.asmatrix(np.random.randint(-1, 1, size=(n, m)))
    b = np.asmatrix(np.random.randint(50, 100, size=(m, 1)))
    c = np.asmatrix(np.random.randint(1, 50, size=(n, 1)))
    fx = c.transpose() * x - sum(np.log(b - A.transpose() * x))
    return fx
Gradient Function
def grad(x):
    np.random.seed(1235813)
    A = np.asmatrix(np.random.randint(-1, 1, size=(n, m)))
    b = np.asmatrix(np.random.randint(50, 100, size=(m, 1)))
    c = np.asmatrix(np.random.randint(1, 50, size=(n, 1)))
    gradient = A * (1.0 / (b - A.transpose() * x)) + c
    return gradient
Using this to find the step size t by minimising the objective function results in the error: 'AddExpression' object has no attribute 'log'.
I am new to CVXPY and optimization. I would be grateful if someone could point out how to fix the errors.
Thanks
You need to use CVXPY functions, not NumPy functions. Something like this should work:
def func(x):
    np.random.seed(1235813)
    A = np.asmatrix(np.random.randint(-1, 1, size=(n, m)))
    b = np.asmatrix(np.random.randint(50, 100, size=(m, 1)))
    c = np.asmatrix(np.random.randint(1, 50, size=(n, 1)))
    fx = c.transpose() * x - cvxpy.sum_entries(cvxpy.log(b - A.transpose() * x))
    return fx
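With that change, the line-search problem can be assembled and solved roughly like this (a sketch; x0 and the sizes n, m are placeholders introduced here, and in CVXPY >= 1.0 cvxpy.sum_entries is replaced by cvxpy.sum):

import numpy as np
import cvxpy

n, m = 10, 20                      # hypothetical problem sizes
np.random.seed(1235813)
x0 = np.zeros((n, 1))              # hypothetical current iterate (b - A'x0 = b > 0)

s = cvxpy.Variable()
objective = cvxpy.Minimize(func(x0 + s * grad(x0)))
prob = cvxpy.Problem(objective, [s >= 0])
prob.solve()
print(s.value)                     # optimal step size along the gradient direction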
