rotate a pointcloud in z axis - point-cloud-library

I am trying to rotate a PCD, but I get the following error. How do I fix it?
import open3d as o3d
import numpy as np
xyz = o3d.io.read_point_cloud("data.pcd")
xyz = xyz.rotate(xyz.get_rotation_matrix_from_xyz((0.7 * np.pi, 0, 0.6 * np.pi)),center=True)
Error -
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: get_rotation_matrix_from_xyz(): incompatible function arguments. The following argument types are supported:
1. (rotation: numpy.ndarray[float64[3, 1]]) -> numpy.ndarray[float64[3, 3]]
Invoked with: array([[-0.30901699, -0.95105652, 0. ],
[-0.55901699, 0.18163563, -0.80901699],
[ 0.76942088, -0.25 , -0.58778525]]); kwargs: center=True

The center argument is not boolean, but should describe the rotation center (see docs):
center (numpy.ndarray[float64[3, 1]]) – Rotation center used for transformation
This would rotate around the origin (0,0,0):
import open3d as o3d
import numpy as np
xyz = o3d.io.read_point_cloud("data.pcd")
R = xyz.get_rotation_matrix_from_xyz((0.7 * np.pi, 0, 0.6 * np.pi))
xyz = xyz.rotate(R, center=(0,0,0))
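If you instead want to rotate the cloud about its own centroid rather than the world origin, Open3D point clouds also expose get_center(); a minimal sketch:

import open3d as o3d
import numpy as np
xyz = o3d.io.read_point_cloud("data.pcd")
R = xyz.get_rotation_matrix_from_xyz((0.7 * np.pi, 0, 0.6 * np.pi))
# Rotate in place around the cloud's centroid instead of (0, 0, 0).
xyz.rotate(R, center=xyz.get_center())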

Related

NetworkX: draw_networkx not drawing labels; draw results in '_AxesStack' object is not callable

Trying to draw a two-node graph with the latest networkx (3.0) and the latest matplotlib (3.6.3). Drawing does not show labels on the plot:
import networkx as nx
import matplotlib.pyplot as plt
G=nx.Graph()
G.add_node(1, text='foo')
G.add_node(2, text='bar')
G.add_edge(1,2)
print("Node labels: ", nx.get_node_attributes(G, 'text'))
nx.draw_networkx(G, with_labels=True)
>> Node labels: {1: 'foo', 2: 'bar'}
The graph is drawn without the text labels. Why?
And this results in error:
nx.draw(G)
plt.show()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_16652\3657615245.py in <module>
----> 1 nx.draw(G)
2 plt.show()
~\Anaconda3\lib\site-packages\networkx\drawing\nx_pylab.py in draw(G, pos, ax, **kwds)
111 cf.set_facecolor("w")
112 if ax is None:
--> 113 if cf._axstack() is None:
114 ax = cf.add_axes((0, 0, 1, 1))
115 else:
TypeError: '_AxesStack' object is not callable
<Figure size 640x480 with 0 Axes>
To answer the first question: the nx graph doesn't "know" that you want to look at its text attribute for the label, so it just goes with the initial node name.
Instead, you could pass a relabeled graph into the draw function:
import networkx as nx
import matplotlib.pyplot as plt
G=nx.Graph()
G.add_node(1, text='foo')
G.add_node(2, text='bar')
G.add_edge(1,2)
print("Node labels: ", nx.get_node_attributes(G, 'text'))
nx.draw_networkx(nx.relabel_nodes(G, nx.get_node_attributes(G, 'text')),
                 with_labels=True, node_color='orange')
plt.show()
The result: a plot of the two connected nodes, now labeled 'foo' and 'bar'.
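Alternatively, draw_networkx accepts an explicit labels mapping, which avoids relabeling the graph itself; a minimal sketch:

import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
G.add_node(1, text='foo')
G.add_node(2, text='bar')
G.add_edge(1, 2)
# Pass the node -> label dict directly; node ids 1 and 2 stay unchanged.
nx.draw_networkx(G, labels=nx.get_node_attributes(G, 'text'), node_color='orange')
plt.show()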

What problems can lead to a CuDNNError with ConvolutionND

I am using three-dimensional convolution links (with ConvolutionND) in my chain.
The forward computation runs smoothly (I checked the intermediate result shapes to be sure I understood the meaning of the parameters of convolution_nd correctly), but during the backward pass a CuDNNError is raised with the message CUDNN_STATUS_NOT_SUPPORTED.
The cover_all parameter of ConvolutionND has its default value of False, so from the docs I don't see what could be the cause of the error.
Here is how I defined one of the convolution layers:
self.conv1 = chainer.links.ConvolutionND(3, 1, 4, (3, 3, 3)).to_gpu(self.GPU_1_ID)
And the call stack is:
File "chainer/function_node.py", line 548, in backward_accumulate
gxs = self.backward(target_input_indexes, grad_outputs)
File "chainer/functions/connection/convolution_nd.py", line 118, in backward
gy, W, stride=self.stride, pad=self.pad, outsize=x_shape)
File "chainer/functions/connection/deconvolution_nd.py", line 310, in deconvolution_nd
y, = func.apply(args)
File "chainer/function_node.py", line 258, in apply
outputs = self.forward(in_data)
File "chainer/functions/connection/deconvolution_nd.py", line 128, in forward
return self._forward_cudnn(x, W, b)
File "chainer/functions/connection/deconvolution_nd.py", line 105, in _forward_cudnn
tensor_core=tensor_core)
File "cupy/cudnn.pyx", line 881, in cupy.cudnn.convolution_backward_data
File "cupy/cuda/cudnn.pyx", line 975, in cupy.cuda.cudnn.convolutionBackwardData_v3
File "cupy/cuda/cudnn.pyx", line 461, in cupy.cuda.cudnn.check_status
cupy.cuda.cudnn.CuDNNError: CUDNN_STATUS_NOT_SUPPORTED
So are there special points to take care of when using ConvolutionND?
A failing example is, for instance:
import chainer
from chainer import functions as F
from chainer import links as L
from chainer.backends import cuda
import numpy as np
import cupy as cp

chainer.global_config.cudnn_deterministic = False

NB_MASKS = 60
NB_FCN = 3
NB_CLASS = 17

class MFEChain(chainer.Chain):
    """docstring for Wavelphasenet."""

    def __init__(self,
                 FCN_Dim,
                 gpu_ids=None):
        super(MFEChain, self).__init__()
        self.GPU_0_ID, self.GPU_1_ID = (0, 1) if gpu_ids is None else gpu_ids
        with self.init_scope():
            self.conv1 = chainer.links.ConvolutionND(3, 1, 4, (3, 3, 3)).to_gpu(
                self.GPU_1_ID
            )

    def __call__(self, inputs):
        ### Pad input ###
        processed_sequences = []
        for convolved in inputs:
            ## Transform to sequences
            copy = convolved if self.GPU_0_ID == self.GPU_1_ID else F.copy(convolved, self.GPU_1_ID)
            processed_sequences.append(copy)

        reprocessed_sequences = []
        with cuda.get_device(self.GPU_1_ID):
            for convolved in processed_sequences:
                convolved = F.expand_dims(convolved, 0)
                convolved = F.expand_dims(convolved, 0)
                convolved = self.conv1(convolved)
                reprocessed_sequences.append(convolved)
            states = F.vstack(reprocessed_sequences)
        logits = states
        ret_logits = logits if self.GPU_0_ID == self.GPU_1_ID else F.copy(logits, self.GPU_0_ID)
        return ret_logits

def mfe_test():
    mfe = MFEChain(150)
    inputs = list(
        chainer.Variable(
            cp.random.randn(
                NB_MASKS,
                11,
                in_len,
                dtype=cp.float32
            )
        ) for in_len in [53248]
    )
    val = mfe(inputs)
    grad = cp.ones(val.shape, dtype=cp.float32)
    val.grad = grad
    val.backward()
    for i in inputs:
        print(i.grad)

if __name__ == "__main__":
    mfe_test()
cupy.cuda.cudnn.convolutionBackwardData_v3 is incompatible with some specific parameters, as described in an issue on the official GitHub.
Unfortunately, that issue only dealt with deconvolution_2d.py (not deconvolution_nd.py), so I guess the decision about whether cuDNN is used fails in your case.
You can check your parameters by confirming the following (see the snippet below):
- Check whether a dilation parameter (!= 1) or a group parameter (!= 1) is passed to the convolution.
- Print chainer.config.cudnn_deterministic, configuration.config.autotune, and configuration.config.use_cudnn_tensor_core.
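A quick way to dump those flags (the attribute names are taken from the list above; this assumes a recent Chainer where they all exist):

import chainer
from chainer import configuration

# Print the cuDNN-related configuration mentioned above.
print(chainer.config.cudnn_deterministic)
print(configuration.config.autotune)
print(configuration.config.use_cudnn_tensor_core)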
Further support may be obtained by raising an issue on the official GitHub.
The code you showed is rather complicated. To isolate the problem, a minimal reproduction like the code below would help.
from chainer import Variable, Chain
from chainer import links as L
from chainer import functions as F
import numpy as np
from six import print_

batch_size = 1
in_channel = 1
out_channel = 1

class MyLink(Chain):
    def __init__(self):
        super(MyLink, self).__init__()
        with self.init_scope():
            self.conv = L.ConvolutionND(
                3, 1, 1, (3, 3, 3), nobias=True,
                initialW=np.ones((in_channel, out_channel, 3, 3, 3)))

    def __call__(self, x):
        return F.sum(self.conv(x))

if __name__ == "__main__":
    my_link = MyLink()
    my_link.to_gpu(0)
    batch = Variable(np.ones((batch_size, in_channel, 3, 3, 3)))
    batch.to_gpu(0)
    loss = my_link(batch)
    loss.backward()
    print_(batch.grad)
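If cuDNN does turn out to be the culprit, a common workaround (my assumption, not something verified in this thread) is to disable cuDNN around the failing computation via Chainer's configuration:

import chainer

# Fall back to the plain CuPy kernels for both forward and backward.
with chainer.using_config('use_cudnn', 'never'):
    loss = my_link(batch)
    loss.backward()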

Check partial derivatives and finite differences error

In relation to my previous question Scaled paraboloid and derivatives checking, I see that you fixed the issue related to running the problem once. I wanted to try it, but I still have a problem with the derivatives check against finite differences, shown in the following code:
""" Unconstrained optimization of the scaled paraboloid component."""
from __future__ import print_function
import sys
import numpy as np
from openmdao.api import IndepVarComp, Component, Problem, Group, ScipyOptimizer
class Paraboloid(Component):
def __init__(self):
super(Paraboloid, self).__init__()
self.add_param('X', val=np.array([0.0, 0.0]))
self.add_output('f_xy', val=0.0)
def solve_nonlinear(self, params, unknowns, resids):
X = params['X']
x = X[0]
y = X[1]
unknowns['f_xy'] = (1000.*x-3.)**2 + (1000.*x)*(0.01*y) + (0.01*y+4.)**2 - 3.
def linearize(self, params, unknowns, resids):
""" Jacobian for our paraboloid."""
X = params['X']
J = {}
x = X[0]
y = X[1]
J['f_xy', 'X'] = np.array([[ 2000000.0*x - 6000.0 + 10.0*y,
0.0002*y + 0.08 + 10.0*x]])
return J
if __name__ == "__main__":
top = Problem()
root = top.root = Group()
#root.fd_options['force_fd'] = True # Error if uncommented
root.add('p1', IndepVarComp('X', np.array([3.0, -4.0])))
root.add('p', Paraboloid())
root.connect('p1.X', 'p.X')
top.driver = ScipyOptimizer()
top.driver.options['optimizer'] = 'SLSQP'
top.driver.add_desvar('p1.X',
lower=np.array([-1000.0, -1000.0]),
upper=np.array([1000.0, 1000.0]),
scaler=np.array([1000., 0.001]))
top.driver.add_objective('p.f_xy')
top.setup()
top.check_partial_derivatives()
top.run()
top.check_partial_derivatives()
print('\n')
print('Minimum of %f found at (%s)' % (top['p.f_xy'], top['p.X']))
The first check works fine, but the second check_partial_derivatives gives weird results for the FD:
[...]
Partial Derivatives Check
----------------
Component: 'p'
----------------
p: 'f_xy' wrt 'X'
Forward Magnitude : 1.771706e-04
Reverse Magnitude : 1.771706e-04
Fd Magnitude : 9.998228e-01
Absolute Error (Jfor - Jfd) : 1.000000e+00
Absolute Error (Jrev - Jfd) : 1.000000e+00
Absolute Error (Jfor - Jrev): 0.000000e+00
Relative Error (Jfor - Jfd) : 1.000177e+00
Relative Error (Jrev - Jfd) : 1.000177e+00
Relative Error (Jfor - Jrev): 0.000000e+00
Raw Forward Derivative (Jfor)
[[ -1.77170624e-04 -8.89040341e-10]]
Raw Reverse Derivative (Jrev)
[[ -1.77170624e-04 -8.89040341e-10]]
Raw FD Derivative (Jfd)
[[ 0.99982282 0. ]]
Minimum of -27.333333 found at ([ 6.66666658e-03 -7.33333333e+02])
And (maybe unrelated) when I try to set root.fd_options['force_fd'] = True (just to see), I get an error during the first check:
Partial Derivatives Check
----------------
Component: 'p'
----------------
Traceback (most recent call last):
File "C:\Program Files (x86)\Wing IDE 101 5.0\src\debug\tserver\_sandbox.py", line 59, in
File "d:\rlafage\OpenMDAO\OpenMDAO\openmdao\core\problem.py", line 1827, in check_partial_derivatives
u_size = np.size(dunknowns[u_name])
File "d:\rlafage\OpenMDAO\OpenMDAO\openmdao\core\vec_wrapper.py", line 398, in __getitem__
return self._dat[name].get()
File "d:\rlafage\OpenMDAO\OpenMDAO\openmdao\core\vec_wrapper.py", line 223, in _get_scalar
return self.val[0]
IndexError: index 0 is out of bounds for axis 0 with size 0
I work with OpenMDAO HEAD (d1e12d4).
This is just a step-size problem for that finite difference. The second FD occurs at a different point (the optimum), and the function must be more sensitive to the step size there.
I tried it with central differencing:
top.root.p.fd_options['form'] = 'central'
and got much better results:
----------------
Component: 'p'
----------------
p: 'f_xy' wrt 'X'
Forward Magnitude : 1.771706e-04
Reverse Magnitude : 1.771706e-04
Fd Magnitude : 1.771738e-04
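Another knob worth trying (my assumption; I have not confirmed it is needed here) is the FD step size itself, which lives in the same fd_options dictionary in OpenMDAO 1.x:

# Use central differencing and a smaller step for the 'p' component.
top.root.p.fd_options['form'] = 'central'
top.root.p.fd_options['step_size'] = 1e-8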
The exception when you set force_fd is a real bug related to the scaler on the desvar being an array. Thanks for the report; we'll get a story up to fix it.

RuntimeWarning: invalid value encountered in true_divide

I have to write a program using the NLMS method to reduce the noise in an ECG signal.
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import os

clear = lambda: os.system('cls')
clear()

meu = 1e-05
file = open('ecg.txt', 'r')  # this file is attached below
data = file.readlines()
x = [float(line) for line in data[:1000]]    # parse text lines to floats
xx = [float(line) for line in data[1001:2001]]
N = len(x)
NN = len(xx)
t = np.zeros((N, 1), float)
X = np.zeros((4, 1), float)
Z = np.array([0, 0, 0, 0], float)
w = np.random.rand(4, 1)
y = np.zeros((N, 1), float)
yp = np.zeros((N, 1), float)
e = np.zeros((N, 1))
j = np.zeros((N, 1), float)

for n in range(len(x)):
    # Shift the tap-delay line and push the new sample in front.
    X[1:len(X)-1] = X[0:len(X)-2]
    X[0] = x[n]
    total = 0
    for k in range(len(X)):
        total += (w[k].T * X[k])
    y[n] = total
    e[n] = np.subtract(X[0], y[n])
    j[n] = e[n] * e[n]
    for l in range(len(X)):
        # The division below is where the warning occurs.
        w[l] = w[l] + (meu*e[n]*X[l]) / ((X[l]*X[l])*(X[l]*X[l]))

MSE = np.mean(j, 1)
plt.plot(10 * np.log10(MSE))
plt.title('MSE')
plt.show()

print(w)

plt.plot(x)
plt.yscale('log')
plt.title('x')
plt.show()

plt.plot(y)
plt.yscale('log')
plt.title('y')
plt.show()

plt.plot(w)
plt.yscale('log')
plt.title('w')
plt.show()

plt.plot(e)
plt.title('e')
plt.show()

plt.plot(e)
plt.plot(x)
plt.plot(y)
plt.show()

plt.plot(yp)
plt.title('yp')
plt.show()
I keep getting this error:
Warning (from warnings module):
w[l] = w[l]+(meu*e[n]*X[l])/((X[l]*X[l])*(X[l]*X[l]));
RuntimeWarning: invalid value encountered in true_divide
Result:
>>> w
array([[ 0.86035037],
       [ 0.35119551],
       [ 0.40570589],
       [        nan]])
>>> X
array([[ 0.19258605],
       [ 0.19442064],
       [ 0.19243968],
       [ 0.        ]])
>>> w[l]
array([ nan])
>>> X[l]
array([ 0.])
I can't figure out what is wrong with the code.
ECG text file:
click this link to view the file
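A note on the displayed values: X[l] is exactly 0.0, so the update divides 0 by 0 and produces the nan seen in w. The usual NLMS remedy (a sketch, not part of the original post) is to normalize by the input power plus a small regularizer instead of dividing element-wise:

import numpy as np

mu = 1e-05
eps = 1e-8  # small constant to keep the denominator away from zero (hypothetical value)

def nlms_update(w, X, err):
    """One guarded NLMS weight update: normalize by ||X||^2 + eps."""
    power = float(np.dot(X.T, X)) + eps
    return w + (mu * err * X) / power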

sign recognition like hand written digits example in scikit-learn (python)

I am looking at this example: http://scikit-learn.org/stable/auto_examples/plot_digits_classification.html#example-plot-digits-classification-py
on handwritten digits in the scikit-learn Python library.
I would like to prepare a 3D array (N * a * b) where N is my number of images (75) and a * b is the matrix of an image (like the 8x8 shape in the example).
My problem is that my sign images come in a different shape for every image: (202, 230), (250, 322), ... which gives me
this error: ValueError: array dimensions must agree except for d_0 in this code:
#here there is the error:
grigiume = np.dstack(listagrigie)
print(grigiume.shape)
grigiume=np.rollaxis(grigiume,-1)
print(grigiume.shape)
Is there a way to resize all the images to a standard size (e.g. 200x200), or a way to build a 3D array from matrices (a, b) where a != b, that does not give me an error in this code:
data = digits.images.reshape((n_samples, -1))
classifier.fit(data[:n_samples / 2], digits.target[:n_samples / 2])
My code:
import os
import glob
import numpy as np
import cv2  # missing from the original snippet
import pylab as pl  # missing from the original snippet
from sklearn import svm, metrics  # missing from the original snippet

listagrigie = []
path = 'resize2/'
for infile in glob.glob(os.path.join(path, '*.jpg')):
    print("current file is: " + infile)
    colorato = cv2.imread(infile)
    grigiscala = cv2.cvtColor(colorato, cv2.COLOR_BGR2GRAY)
    listagrigie.append(grigiscala)

print(len(listagrigie))
# here there is the error:
grigiume = np.dstack(listagrigie)
print(grigiume.shape)
grigiume = np.rollaxis(grigiume, -1)
print(grigiume.shape)

# last step ('digits' refers to the dataset from the scikit-learn example)
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))

# Create a classifier: a support vector classifier
classifier = svm.SVC(gamma=0.001)

# We learn the digits on the first half of the digits
classifier.fit(data[:n_samples / 2], digits.target[:n_samples / 2])

# Now predict the value of the digit on the second half:
expected = digits.target[n_samples / 2:]
predicted = classifier.predict(data[n_samples / 2:])

print "Classification report for classifier %s:\n%s\n" % (
    classifier, metrics.classification_report(expected, predicted))
print "Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted)

for index, (image, prediction) in enumerate(
        zip(digits.images[n_samples / 2:], predicted)[:4]):
    pl.subplot(2, 4, index + 5)
    pl.axis('off')
    pl.imshow(image, cmap=pl.cm.gray_r, interpolation='nearest')
    pl.title('Prediction: %i' % prediction)
pl.show()
You have to resize all your images to a fixed size, for instance using the Image class of PIL or Pillow:
from PIL import Image
image = Image.open("/path/to/input_image.jpeg")
image.thumbnail((200, 200), Image.ANTIALIAS)
image.save("/path/to/output_image.jpeg")
Edit: the above won't work; try resize instead:
from PIL import Image
image = Image.open("/path/to/input_image.jpeg")
image = image.resize((200, 200), Image.ANTIALIAS)
image.save("/path/to/output_image.jpeg")
Edit 2: there might be a way to preserve the aspect ratio and pad the rest with black pixels, but I don't know how to do it in a few PIL calls. You could use PIL.Image.thumbnail and then use numpy to do the padding, though; see the sketch below.
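A sketch of that idea (shrink to fit with thumbnail, then pad with black up to a fixed size; the 200x200 target and the grayscale conversion are assumptions, not requirements):

import numpy as np
from PIL import Image

def to_fixed_size(path, size=(200, 200)):
    """Shrink to fit inside `size` keeping aspect ratio, then pad with black."""
    image = Image.open(path).convert("L")   # grayscale, matching the cv2 pipeline above
    image.thumbnail(size, Image.ANTIALIAS)  # in-place, preserves aspect ratio
    arr = np.asarray(image)
    padded = np.zeros(size, dtype=arr.dtype)  # black canvas
    padded[:arr.shape[0], :arr.shape[1]] = arr  # anchor the image at the top-left
    return padded

# Usage: build the (N, 200, 200) array the classifier code expects.
# images = np.array([to_fixed_size(p) for p in glob.glob('resize2/*.jpg')])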
