TensorFlow: Take L2 norm over multiple dimensions

I have a TensorFlow placeholder with 4 dimensions representing a batch of images. Each image is 32 x 32 pixels, and each pixel has 3 color channels. The first dimension represents the number of images.
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
For each image, I would like to take the L2 norm of all of that image's pixels. The output should therefore be a tensor with one dimension (i.e. one value per image). tf.norm() (see the documentation) accepts an axis parameter, but it only lets me specify at most two axes over which to take the norm, whereas I would like to take the norm over axes 1, 2, and 3. How do I do this?
n = tf.norm(X, ord=2, axis=0) # n.get_shape() is (?, ?, 3), not (?)
n = tf.norm(X, ord=2, axis=[1,2,3]) # ValueError

You do not need the flattening suggested in the other answer. If you read the documentation carefully, you will see:
axis: If axis is None (the default), the input is considered a vector
and a single vector norm is computed over the entire set of values in
the tensor, i.e. norm(tensor, ord=ord) is equivalent to
norm(reshape(tensor, [-1]), ord=ord)
Example:
import tensorflow as tf
import numpy as np
c = tf.constant(np.random.rand(3, 2, 3, 6))
d = tf.norm(c, ord=2)
with tf.Session() as sess:
    print(sess.run(d))

I tried Salvador's answer, but it looks like it returns one number for the whole minibatch instead of one number per image. So it looks like we may be stuck with taking the norm one axis at a time.
import tensorflow as tf
import numpy as np
batch = tf.constant(np.random.rand(3, 2, 3, 6))
x = tf.norm(batch, axis=3)  # shape (3, 2, 3)
x = tf.norm(x, axis=2)      # shape (3, 2)
x = tf.norm(x, axis=1)      # shape (3,) -- one value per "image"
with tf.Session() as sess:
    result = sess.run(x)
    print(result)
This might introduce a small amount of numerical error, but mathematically it is the same as taking the L2 norm of the whole image at once.
You might also consider taking the norm over only the height and width axes, so that you get one norm per channel; that per-channel case is presumably why TensorFlow supports reducing over two axes but not three.
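If you want to convince yourself that chaining the norms is equivalent to a single norm over all pixels, here is a small NumPy check (a sketch independent of TensorFlow; the array is made up):
import numpy as np

batch = np.random.rand(4, 32, 32, 3)   # 4 fake images

# One L2 norm over each flattened image ...
direct = np.linalg.norm(batch.reshape(batch.shape[0], -1), axis=1)
# ... versus the norm taken one axis at a time, as in the TensorFlow code above.
chained = np.linalg.norm(np.linalg.norm(np.linalg.norm(batch, axis=3), axis=2), axis=1)

print(np.allclose(direct, chained))    # True, up to floating-point error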

You can compute the L2 norm yourself like this:
tf.sqrt(tf.reduce_sum(tf.pow(images,2), axis=(1,2,3)))
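For reference, here is that expression wired into the placeholder from the question (a sketch assuming the TF 1.x placeholder/Session API used elsewhere in this question; the feed array is made up):
import numpy as np
import tensorflow as tf  # TF 1.x style, matching the rest of this question

X = tf.placeholder(tf.float32, [None, 32, 32, 3])
norms = tf.sqrt(tf.reduce_sum(tf.square(X), axis=(1, 2, 3)))  # shape (None,)

with tf.Session() as sess:
    batch = np.random.rand(4, 32, 32, 3)                 # 4 fake images
    print(sess.run(norms, feed_dict={X: batch}).shape)   # (4,) -- one value per image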

Related

R - spatstat: Calculate density for a new point

Is it possible to use spatstat to estimate the intensity function for a given ppp object and evaluate it at a new point? For example, can I evaluate D at new_point?
# packages
library(spatstat)
# define a random point within Window(swedishpines)
new_point <- ppp(x = 45, y = 45, window = Window(swedishpines))
# estimate density
(D <- density(swedishpines))
#> real-valued pixel image
#> 128 x 128 pixel array (ny, nx)
#> enclosing rectangle: [0, 96] x [0, 100] units (one unit = 0.1 metres)
Created on 2021-03-30 by the reprex package (v1.0.0)
I was thinking that maybe I could superimpose() the two ppp objects (i.e. swedishpines and new_point) and then run density with at = "points" and weights = c(rep(1, npoints(swedishpines)), 0), but I'm not sure whether that is the suggested approach (and whether the appended point is really ignored during the estimation).
I know that it may sound like a trivial question, but I read some docs and didn't find an answer or a solution.
There are two ways to do this.
The first is simply to take the pixel image of intensity, and extract the pixel values at the desired locations using [:
D <- density(swedishpines)
v <- D[new_points]
See the help for density.ppp and [.im.
The other way is to use densityfun:
f <- densityfun(swedishpines)
v <- f(new_points)
See the help for densityfun.ppp
The first route is more efficient and the second way is more accurate.
Technical issue: if some of the new_points lie outside the window of swedishpines, then the value at these points is (mathematically) undefined. Both of the methods described above will simply ignore such points, and the resulting vector v will be shorter than the number of new points. If you need to handle this contingency, the easiest way is to use D[new_points, drop=FALSE], which returns NA values for such locations.

Average every n element of axis 1 in a 3D numpy ndarray

I have a 3D numpy array of shape (900, 10, 54).
And I want to average the values of every two elements into one, for axis 1.
Expected outcome would have shape: (900,5,54).
One way to do this uses numpy array slicing:
import numpy as np
x = np.random.rand(900, 10, 54)
y = (x[:, ::2, :] + x[:, 1::2, :]) / 2  # average each pair of consecutive rows along axis 1
Another approach:
If you want to average a variable number n of consecutive elements along axis 1 (n was 2 above), you can use reshape and mean to achieve this.
n = 2
y = x.reshape(x.shape[0], x.shape[1] // n, n, x.shape[2]) # shape = (900, 5, 2, 54)
y = y.mean(axis = 2)
This averages every n consecutive rows of each inner matrix in your 3D array x.
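As a quick sanity check, both approaches agree (a sketch using a random array of the question's shape):
import numpy as np

x = np.random.rand(900, 10, 54)
n = 2

y_slicing = (x[:, ::2, :] + x[:, 1::2, :]) / 2
y_reshape = x.reshape(x.shape[0], x.shape[1] // n, n, x.shape[2]).mean(axis=2)

print(y_slicing.shape)                    # (900, 5, 54)
print(np.allclose(y_slicing, y_reshape))  # True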

Plot of function, DomainError. Exponentiation yielding a complex result requires a complex argument

Background
I read here that Newton's method fails on the function x^(1/3) when its starting point is 1. I am trying to test this in a Julia Jupyter notebook.
I want to plot the function x^(1/3)
and then run this code:
f = x->x^(1/3)
D(f) = x->ForwardDiff.derivative(f, float(x))
x = find_zero((f, D(f)),1, Roots.Newton(),verbose=true)
Problem:
How do I plot the function x^(1/3) over a range such as (-1, 1)?
I tried
f = x->x^(1/3)
plot(f,-1,1)
I got a DomainError (quoted in the answer below).
I changed the code to
f = x->(x+0im)^(1/3)
plot(f,-1,1)
I got another unexpected result. I want my plot to look like the plot of x^(1/3) that Google shows, but I cannot get more than half of it.
That's because x^(1/3) does not always return a real result, let alone the real cube root of x. For negative numbers, exponentiation with non-integer powers (1/3, 1.254, and presumably any non-integer) has to return a Complex. Because of Julia's type-stability requirements, this operation applied to a negative Real therefore throws a DomainError. This behavior is also noted in the Frequently Asked Questions section of the Julia manual.
julia> (-1)^(1/3)
ERROR: DomainError with -1.0:
Exponentiation yielding a complex result requires a complex argument.
Replace x^y with (x+0im)^y, Complex(x)^y, or similar.
julia> Complex(-1)^(1/3)
0.5 + 0.8660254037844386im
Note that this behavior of returning a complex number for exponentiation of negative values is not really different from, say, MATLAB's behavior:
>> (-1)^(1/3)
ans =
0.5000 + 0.8660i
What you want, however, is to plot the real cube root.
You can go with
plot(x -> x < 0 ? -(-x)^(1//3) : x^(1//3), -1, 1)
to enforce real cube root or use the built-in cbrt function for that instead.
plot(cbrt, -1, 1)
It also has an alias ∛.
plot(∛, -1, 1)
f(x) is an odd function, so you only need to evaluate it on [0, 1]; the plot on [-1, 0] then follows by symmetry.
The code is below:
import numpy as np
import matplotlib.pyplot as plt
# Function f
f = lambda x: x**(1/3)
fig, ax = plt.subplots()
x1 = np.linspace(0, 1, num=100)    # positive half of the domain
x2 = np.linspace(-1, 0, num=100)   # negative half of the domain
ax.plot(x1, f(x1))                 # plot f on [0, 1]
ax.plot(x2, -f(x1[::-1]))          # mirror it onto [-1, 0] using oddness: f(-x) = -f(x)
ax.axhline(y=0, color='k')
ax.axvline(x=0, color='k')
plt.show()
That Google plot makes no sense to me. For x > 0 it's fine, but for negative values of x the principal result is complex, and the Google plot appears to be showing the negative of the cube root of the absolute value, which is strange.
Below you can see the output from Matlab, which is less fussy about types than Julia. As you can see it does not agree with your plot.
From the plot you can see that positive x values give a real-valued answer, while negative x give a complex-valued answer. The reason Julia errors for negative inputs, is that they are very concerned with type stability. Having the output type of a function depend on the input value would cause a type instability, which harms performance. This is less of a concern for Matlab or Python, etc.
If you want a plot similar to the one above in Julia, you can define your function like this:
f = x -> sign(x) * abs(complex(x)^(1/3))
Edit: Actually, a better and faster version is
f = x -> sign(x) * abs(x)^(1/3)
Yeah, it looks awkward, but that's because you want a really strange plot, which imho makes no sense for the function x^(1/3).
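For completeness, the same real-cube-root trick in Python/NumPy (a sketch mirroring the matplotlib answer above; np.cbrt is NumPy's built-in real cube root):
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 1, 201)

# Two equivalent ways to get the real cube root for negative inputs.
y_manual = np.sign(x) * np.abs(x) ** (1 / 3)
y_builtin = np.cbrt(x)
print(np.allclose(y_manual, y_builtin))  # True

plt.plot(x, y_builtin)
plt.axhline(0, color='k')
plt.axvline(0, color='k')
plt.show()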

Dimensions of fractals: boxing count, hausdorff, packing in R^n space

I would like to calculate the dimensions of a fractal written as an n-dimensional array of 0s and 1s: the box-counting, Hausdorff, and packing dimensions.
I only have an idea of how to code the box-counting dimension: just count the 1's in the n-dimensional matrix and then use the formula
box_count = log(v)/log(n);
where v is the number of 1's and n is the linear size of the array (in R^n).
This approach simulates counting boxes at the minimal resolution, 1 x 1 x ... x 1, so numerically it is like taking the limit eps -> 0. What do you think about this solution?
Do you have any idea (or maybe code) for calculating hausdorff or packing dimension?
The Hausdorff and packing dimension are purely mathematical tools based in measure theory. They have wonderful properties in that context but are not well suited for experimentation. In short, there is no reason to expect that you can estimate their values based on a single matrix approximation to some set.
Box counting dimension, by contrast, is well suited for numerical investigation. Specifically, let N(e) denote the number of squares of side length e required to cover your fractal set. As you seem to know, the box counting dimension of your set is the limit as e->0 of
log(N(e))/log(1/e)
However, I don't think that just choosing the smallest available value of e is generally a good idea. The standard interpretation in the physics literature, as I understand it, is to presume that the relationship between N(e) and e should hold over a broad range of scales. A standard way to compute the box-counting dimension is to compute N(e) for several choices of e taken from a sequence that tends geometrically to zero, and then fit a line to the points in a log-log plot of N(e) versus 1/e. The box-counting dimension is approximately the slope of that line.
Example
As a concrete example, the following Python code generates a binary matrix that describes a fractal structure.
import numpy as np
size = 1024
first_row = np.zeros(size, dtype=int)
first_row[int(size/2)-1] = 1
rows = np.zeros((int(size/2),size),dtype=int)
rows[0] = first_row
for i in range(1, int(size/2)):
    rows[i] = (np.roll(rows[i-1], -1) + rows[i-1] + np.roll(rows[i-1], 1)) % 2
m = int(np.log(size)/np.log(2))
rows = rows[0:2**(m-1),0:2**m]
We can view the fractal structure by simply interpreting each 1 as a black pixel and each zero as white pixel.
import matplotlib.pyplot as plt
plt.matshow(rows, cmap = plt.cm.binary)
This matrix makes a nice test since it can be shown that there is an actual limiting object whose fractal dimension is log(1+sqrt(5))/log(2) or approximately 1.694, yet it's complicated enough to make the box counting estimate a little tricky.
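As a quick check of that constant in Python:
import numpy as np
print(np.log(1 + np.sqrt(5)) / np.log(2))   # approximately 1.6942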
Now, this matrix is 512 rows by 1024 columns; it decomposes naturally into 2 matrices that are 512 by 512. Each of those decomposes naturally into 4 matrices that are 256 by 256, etc. For each such decomposition, we need to count the number of sub matrices that have at least one non-zero element. We can perform this analysis as follows:
cnts = []
for lev in range(m):
    block_size = 2**lev
    cnt = 0
    for j in range(int(size/(2*block_size))):
        for i in range(int(size/block_size)):
            cnt = cnt + rows[j*block_size:(j+1)*block_size, i*block_size:(i+1)*block_size].any()
    cnts.append(cnt)
data = np.array([(2**(m-(k+1)),cnts[k]) for k in range(m)])
data
# Out:
# array([[ 512, 45568],
#        [ 256, 22784],
#        [ 128,  7040],
#        [  64,  2176],
#        [  32,   672],
#        [  16,   208],
#        [   8,    64],
#        [   4,    20],
#        [   2,     6],
#        [   1,     2]])
Now, your idea is to simply compute log(45568)/log(512) or approximately 1.7195, which is not too bad. I'm recommending that we examine a log-log plot of this data.
xs = np.log(data[:,0])
ys = np.log(data[:,1])
plt.plot(xs,ys, 'o')
This indeed looks close to linear, indicating that we might expect our box-counting technique to work reasonably well. First, though, it might be reasonable to exclude the one point that appears to be an outlier. In fact, that's one of the desirable characteristics of this approach. Here's how to do so:
plt.plot(xs, ys, 'o')
xs = xs[1:]   # drop the first (finest-scale) point, which looks like the outlier
ys = ys[1:]
A = np.vstack([xs, np.ones(len(xs))]).T
m, b = np.linalg.lstsq(A, ys)[0]   # least-squares fit of a line
def line(x): return m*x + b
ys = line(xs)
plt.plot(xs, ys)
m
# Out: 1.6902585379630133
Well, the result looks pretty good. In particular, this is a definitive example that this approach can work better than the simple idea of using just one data point. In fairness, though, it's not hard to find examples where the simple approach works better. Also, this set is regular enough that we get some nice results. Generally, one can't really expect box-counting computations to be too reliable.
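To tie the steps together, here is one way to wrap the procedure above into a reusable helper (a sketch, limited to 2D arrays with power-of-two sides; the function name is made up):
import numpy as np

def box_counting_dimension(a, drop=1):
    # Estimate the box-counting dimension of a 2D 0/1 array by fitting a line
    # to a log-log plot of occupied-box counts versus the number of boxes per side.
    # `drop` finest-scale points are discarded before fitting, as in the example above.
    size = max(a.shape)
    m = int(np.log2(size))
    scales, counts = [], []
    for lev in range(m):
        b = 2 ** lev                                   # box side length in pixels
        cnt = sum(a[j*b:(j+1)*b, i*b:(i+1)*b].any()
                  for j in range(a.shape[0] // b)
                  for i in range(a.shape[1] // b))
        scales.append(size / b)                        # proportional to 1/e
        counts.append(cnt)
    xs = np.log(scales)[drop:]
    ys = np.log(counts)[drop:]
    slope, _intercept = np.polyfit(xs, ys, 1)
    return slope

# With the `rows` matrix constructed above, this lands near 1.69:
# print(box_counting_dimension(rows))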

Scipy - data interpolation from one irregular grid to another irregular spaced grid

I am struggling with the interpolation between two grids, and I couldn't find an appropriate solution for my problem.
I have 2 different 2D grids, whose node points are defined by their X and Y coordinates. The grids are not rectangular, but form more or less a parallelogram (so the X coordinate of (i,j) is not the same as that of (i,j+1), and the Y coordinate of (i,j) differs from that of (i+1,j)).
Both grids have a 37*5 shape and they overlap almost entirely.
For the first grid I have, for each point, the X coordinate, the Y coordinate, and a pressure value. Now I would like to interpolate this pressure distribution of the first grid onto the second grid (for which X and Y are also known at each point).
I tried different interpolation methods, but my end result was never correct due to the irregular distribution of my grid points.
Functions such as interp2d or griddata require 1D arrays as input, but when I do this the interpolated solution is wrong (even if I interpolate the pressure values from the original grid back onto the original grid, the new pressure values are miles away from the original ones).
For 1D interpolation on different irregular grids I use:
import numpy

def interpolate(X, Y, xNew):
    if xNew < X[0]:
        print('Interp Warning:', xNew, 'is under the interval [', X[0], ',', X[-1], ']')
        yNew = Y[0]
    elif xNew > X[-1]:
        print('Interp Warning:', xNew, 'is above the interval [', X[0], ',', X[-1], ']')
        yNew = Y[-1]
    elif xNew == X[-1]:
        yNew = Y[-1]
    else:
        # index of the interval [X[ind], X[ind+1]) containing xNew
        ind = numpy.argmax(numpy.bitwise_and(X[:-1] <= xNew, X[1:] > xNew))
        yNew = Y[ind] + ((xNew - X[ind]) / (X[ind+1] - X[ind])) * (Y[ind+1] - Y[ind])
    return yNew
but for 2D I thought griddata would be easier to use. Does anyone have experience with an interpolation where my input is a 2D array for the mesh and for the data?
Have another look at interp2d. http://docs.scipy.org/scipy/docs/scipy.interpolate.interpolate.interp2d/#scipy-interpolate-interp2d
Note the second example in the 'x,y' section under 'Parameters'. 'x' and 'y' are 1-D in a loose sense but they can be flattened arrays.
Should be something like this:
import scipy.interpolate

f = scipy.interpolate.interp2d([0.25, 0.5, 0.27, 0.58], [0.4, 0.8, 0.42, 0.83], [3, 4, 5, 6])
znew = f(.25, .4)
print(znew)         # [ 3.]
znew = f(.26, .41)  # midway between (0.25, 0.4, 3) and (0.27, 0.42, 5)
print(znew)         # [ 4.01945345] - should be 4; close enough?
I would have thought you could pass flattened 'xnew' and 'ynew' arrays to 'f()', but I couldn't get that to work. The 'f()' function will accept the row/column syntax, which isn't useful to you. Because of this limitation with 'f()', you will have to evaluate 'znew' in a loop; you might look at nditer for that. Make sure also that it does what you want when '(xnew, ynew)' is outside of the '(x, y)' domain.
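Since the question also mentions griddata: for scattered (irregular) points, scipy.interpolate.griddata interpolates directly to arbitrary target coordinates and avoids the evaluation loop above. A sketch with made-up 37 x 5 coordinate and pressure arrays:
import numpy as np
from scipy.interpolate import griddata

# Made-up irregular source grid (37 x 5) with a pressure value per node.
x1 = np.random.rand(37, 5)
y1 = np.random.rand(37, 5)
p1 = np.sin(x1) * np.cos(y1)

# Made-up target grid of the same shape.
x2 = np.random.rand(37, 5)
y2 = np.random.rand(37, 5)

# griddata takes flattened scattered source points; the target coordinates
# can keep their 2D shape, and the result has that same shape.
p2 = griddata(
    (x1.ravel(), y1.ravel()),   # source point coordinates
    p1.ravel(),                 # values at the source points
    (x2, y2),                   # target coordinates
    method='linear',            # 'nearest' or 'cubic' are alternatives
)
print(p2.shape)                 # (37, 5); NaN where targets fall outside the convex hull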
