RuntimeWarning: overflow encountered in exp when synthesising a Planck curve

Can anyone please correct this code?
Its aim is to synthesize a Planck curve and to integrate it.
import matplotlib.pyplot as plt
import numpy as np
from scipy.constants import c, h, k, pi
import scipy.integrate

f = np.linspace(10**9, 10**19, 1000)
for T in [3000., 5000., 10000., 20000., 30000., 40000.]:  # temperatures to inspect
    # Planck law, spectral radiance per unit frequency
    L = (2*h*f**3/c**2) / (np.exp(h*f/(k*T)) - 1)
    totalL = scipy.integrate.trapz(L, f) * pi  # integrate over frequency
    print("Radiosity at T=%g K (integrated from f=%g to %g): %g W/m^2"
          % (T, np.min(f), np.max(f), totalL))
    print("\t\tCompared to the radiosity of a black body over the entire spectrum"
          " (Stefan-Boltzmann law): %.4g" % (totalL / 5.670367e-8 / T**4))
    plt.plot(f, L, label="T = %g K" % T)

## Finish the plot + save
plt.yscale('log')
plt.xscale('linear')
plt.xlabel(u"Radiation frequency [Hz]")
plt.ylabel(u"Spectral radiosity [W/m$^2$/Hz]")
plt.grid()
plt.legend(prop={'size': 10}, loc='upper right')
plt.savefig("output.png", bbox_inches='tight')
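The warning itself is harmless here but easy to avoid. At the upper end of the frequency range, h*f/(k*T) exceeds ~709, the largest argument np.exp can take in float64, so the exponential overflows to inf (the resulting radiance then correctly collapses to zero). A minimal sketch of an overflow-safe evaluation, rewriting 1/(exp(x) - 1) as exp(-x)/(1 - exp(-x)) so the exponential can only underflow:

import numpy as np
from scipy.constants import c, h, k

def planck(f, T):
    # 1/(exp(x)-1) == exp(-x) / (1 - exp(-x)); for large x, exp(-x)
    # underflows quietly to 0, and -np.expm1(-x) keeps full precision
    # for small x.
    x = h * f / (k * T)
    return (2 * h * f**3 / c**2) * np.exp(-x) / (-np.expm1(-x))

L = planck(np.linspace(10**9, 10**19, 1000), 3000.)  # no RuntimeWarning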


How to use GeoPandas or Fiona to find the inner part of the polygon through subtraction?

[image 1: the full polygon; image 2: the desired upper part]
I am trying to find a way to get the upper part of the polygon (image 2) from the bigger polygon (image 1) using GeoPandas/Fiona functions. The other way round is quite easy using the overlay set operation "difference", but for the direction I want, the GeoPandas functions/tools do not work.
You simply need the overlay set operations: https://geopandas.org/en/stable/docs/user_guide/set_operations.html#
You have not provided any geometry, so I have used Ukraine as the larger geometry and a number of box geometries that overlap it.
Full example below:
import shapely
import geopandas as gpd
import numpy as np

world = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))
dims = (10, 10)
a, b, c, d = world.loc[world["name"].eq("Ukraine")].total_bounds
g = .1
# generate a grid of small box polygons that overlap Ukraine
gdf_grid = gpd.GeoDataFrame(
    geometry=[
        shapely.geometry.box(minx + g, miny + g, maxx - g, maxy - g)
        for minx, maxx in zip(
            np.linspace(a, c, dims[0]), np.linspace(a, c, dims[0])[1:]
        )
        for miny, maxy in zip(
            np.linspace(b, d, dims[1]), np.linspace(b, d, dims[1])[1:]
        )
    ],
    crs="epsg:4326",
).sample(8, random_state=1)

# the big polygon minus the small ones
big_without_small = world.loc[world["name"].eq("Ukraine")].overlay(gdf_grid, how="difference")
big_without_small.plot()
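If you need the opposite direction, i.e. the parts of the small geometries clipped against the big one, the same overlay works with the operands swapped (a sketch using the same frames as above; how="intersection" would instead keep only the overlapping parts):

# the small polygons minus the big one
small_without_big = gdf_grid.overlay(world.loc[world["name"].eq("Ukraine")], how="difference")
small_without_big.plot()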

How to detect and calculate an angle in image? (spray)

I'm trying to write code to find a nozzle spray angle (image) after a binary threshold. I read about using the Hough transform for this but I can't apply it correctly. (Calculating the angle between two lines in an image in Python and How to find angle of hough lines detected?)
For a while I have this:
import cv2
import numpy as np
import math
import matplotlib.pyplot as plt
img_grey = cv2.imread('sprayang.jpg', cv2.IMREAD_GRAYSCALE)
img = cv2.GaussianBlur(img_grey, (5, 5), 0)
th = 0        # ignored here: with THRESH_OTSU the threshold is computed automatically
max_val = 255
ret, o1 = cv2.threshold(img, th, max_val, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
The generated image is this:
Using the angle measurement tool in ImageJ I measured about 90°.
cast 2 horizontal rays (one from each direction)
find the first hit with your spray
let's call the 2D points p0, p1 (from the left) and p2, p3 (from the right)
compute the angle
the smaller unsigned angle between 2 vectors in 2D is defined by the dot product like this:

ang = acos( dot( p1-p0 , p2-p1 ) / (|p1-p0| * |p2-p1|) )

                    (x1-x0)*(x2-x1) + (y1-y0)*(y2-y1)
ang = acos( ------------------------------------------------------------- )
             sqrt( (x1-x0)^2 + (y1-y0)^2 ) * sqrt( (x2-x1)^2 + (y2-y1)^2 )
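A minimal sketch of the same dot-product formula in Python (the point coordinates are made up; in practice they come from the ray-casting step above):

import numpy as np

def angle_deg(v1, v2):
    # smaller unsigned angle between two 2D vectors, via the dot product
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

p0, p1, p2, p3 = map(np.array, [(0, 0), (40, 100), (200, 0), (160, 100)])  # example hits
ang = angle_deg(p1 - p0, p2 - p1)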

Bridge sampling for Brownian motion

I am writing code for bridge sampling of Brownian motion to simulate sample paths, but I keep getting all zeros for my answer. My code and a picture of the algorithm are below.
#Brownian bridge for GBM
Z <- rnorm(1, mean=0, sd=1)
T = 1
W_[0] = 0
W_[T] = sqrt(T)*Z
k = 5
j <- 2^(k-1)
W_ = rep(NA, nt-1)
for (k in 1:K) {
    h = h/2
    for (j in 1:j) {
        Z <- rnorm(1, mean=0, sd=1)
        W_[2*(j-1)*h] = 0.5*(W_[2*(j-1)*h] + W_[2*j*h]) + sqrt(h)*Z
        print(W_[2*(j-1)*h])
    }
}
The algorithm is below:
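As posted, the R code cannot run as intended: nt, K, and h are used before they are defined, W_ is indexed before it is created, and R's 1-based vectors silently discard assignments to index 0 or to fractional indices like 2*(j-1)*h, which would explain the path never getting filled in.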
If I am not mistaken with the formulas, in Python it should be something like this:
import numpy as np
import pandas as pd

# Samples a Brownian bridge on the [a, b] interval,
# see https://en.wikipedia.org/wiki/Brownian_bridge#General_case
def BB(
    T: int = 252,
    a: float = 0,
    b: float = 0,
    alpha: float = 0,
    sigma: float = 1,
) -> pd.Series:
    X0 = a
    X = pd.Series([X0])
    t1 = 0
    t2 = T
    for i in np.arange(1, T):
        t = i
        mu = a + (t - t1)/(t2 - t1)*(b - a)
        s = (t2 - t)*(t - t1)/(t2 - t1)
        ei = np.random.normal(alpha + mu, sigma + s)
        X.loc[i] = ((T - t)/T**0.5) * ei * ((t/(T - t))**0.5)
    XT = b
    X[T] = XT
    return X
Show it:
from matplotlib import pyplot as plt
plt.plot(BB(a=0,b=0, T=100))
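For reference, a minimal sketch of the standard construction, which builds the bridge directly from a standard Brownian path W via X_t = a + (t/T)(b - a) + W_t - (t/T) W_T, so both endpoints are pinned by construction:

import numpy as np
import matplotlib.pyplot as plt

def brownian_bridge(T=1.0, n=500, a=0.0, b=0.0):
    t = np.linspace(0.0, T, n + 1)
    dW = np.random.normal(0.0, np.sqrt(T / n), size=n)
    W = np.concatenate([[0.0], np.cumsum(dW)])       # standard Brownian path
    X = a + (t / T) * (b - a) + W - (t / T) * W[-1]  # pin both endpoints
    return t, X

t, X = brownian_bridge()
plt.plot(t, X)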

Monitor GPU performance with nvprof does not work

I am trying to use nvprof to monitor the performance of the GPU. I would like to know the time consumed by HtoD (host to device) and DtoH (device to host) transfers and by device execution.
It worked very well with a standard code from the numba cuda website:

from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    tx = cuda.threadIdx.x  # this is the unique thread ID within a 1D block
    ty = cuda.blockIdx.x   # similarly, this is the unique block ID within the 1D grid
    block_size = cuda.blockDim.x  # number of threads per block
    grid_size = cuda.gridDim.x    # number of blocks in the grid
    start = tx + ty * block_size
    stride = block_size * grid_size
    # assuming x and y inputs are same length
    for i in range(start, x.shape[0], stride):
        out[i] = x[i] + y[i]

if __name__ == "__main__":
    import numpy as np
    n = 100000
    x = np.arange(n).astype(np.float32)
    y = 2 * x
    out = np.empty_like(x)
    threads_per_block = 128
    blocks_per_grid = 30
    add_kernel[blocks_per_grid, threads_per_block](x, y, out)
    print(out[:10])
Here is the outcome from nvprof:
However, when I add the usage of multiprocessing with the following code:

import multiprocessing as mp
from numba import cuda

def fun():
    @cuda.jit
    def add_kernel(x, y, out):
        tx = cuda.threadIdx.x  # this is the unique thread ID within a 1D block
        ty = cuda.blockIdx.x   # similarly, this is the unique block ID within the 1D grid
        block_size = cuda.blockDim.x  # number of threads per block
        grid_size = cuda.gridDim.x    # number of blocks in the grid
        start = tx + ty * block_size
        stride = block_size * grid_size
        # assuming x and y inputs are same length
        for i in range(start, x.shape[0], stride):
            out[i] = x[i] + y[i]

    import numpy as np
    n = 100000
    x = np.arange(n).astype(np.float32)
    y = 2 * x
    out = np.empty_like(x)
    threads_per_block = 128
    blocks_per_grid = 30
    add_kernel[blocks_per_grid, threads_per_block](x, y, out)
    print(out[:10])
    return out

# check gpu condition
p = mp.Process(target=fun)
p.daemon = True
p.start()
p.join()
nvprof seems to attach to the process, but it doesn't output anything, even though it reports that it is profiling:
Furthermore, when I used Ray (a package for doing distributed computation):

if __name__ == "__main__":
    import multiprocessing

    def fun():
        from numba import cuda
        import ray

        @ray.remote(num_gpus=1)
        def call_ray():
            @cuda.jit
            def add_kernel(x, y, out):
                tx = cuda.threadIdx.x  # this is the unique thread ID within a 1D block
                ty = cuda.blockIdx.x   # similarly, this is the unique block ID within the 1D grid
                block_size = cuda.blockDim.x  # number of threads per block
                grid_size = cuda.gridDim.x    # number of blocks in the grid
                start = tx + ty * block_size
                stride = block_size * grid_size
                # assuming x and y inputs are same length
                for i in range(start, x.shape[0], stride):
                    out[i] = x[i] + y[i]

            import numpy as np
            n = 100000
            x = np.arange(n).astype(np.float32)
            y = 2 * x
            out = np.empty_like(x)
            threads_per_block = 128
            blocks_per_grid = 30
            add_kernel[blocks_per_grid, threads_per_block](x, y, out)
            print(out[:10])
            return out

        ray.shutdown()
        ray.init(redis_address="***")
        out = ray.get(call_ray.remote())

    # check gpu condition
    p = multiprocessing.Process(target=fun)
    p.daemon = True
    p.start()
    p.join()
nvprof doesn't show anything! It doesn't even print the line saying that it is profiling the process (though the code is indeed executed):
Does anyone know how to figure this out? Or do I have any other choice to acquire these data for distributed computation?
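One thing worth checking (a suggestion, not part of the original post): by default nvprof only profiles the process it launches itself, so kernels run in a forked worker are invisible to it. nvprof has flags for exactly this case (script.py stands for your own file):

# profile the launching process and any children it forks/spawns
nvprof --profile-child-processes python script.py

# or attach system-wide, for workers nvprof did not launch itself
# (e.g. an already-running Ray cluster)
nvprof --profile-all-processes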

Using CVXOPT to run mean variance optimisation

I see this question was asked about 4 years ago, but I still struggled with the answer, so I am reposting it. The code is pasted below.
If, hypothetically, I wanted to make sure that asset 1 stays between 20% and 40% and asset 2 stays below 30% (or, say, within 10-15%), what changes do I make to G and h? I.e. what will the values for G and h be in the code?
The rest is easy to understand, but I am struggling a bit with the quadratic optimiser. Appreciate any help!
Thanks,
PK
import numpy as np
import cvxopt as opt
from cvxopt import blas, solvers
import matplotlib.pyplot as plt

np.random.seed(123)
# Turn off progress printing
solvers.options['show_progress'] = False

# Number of assets
n_assets = 4
# Number of observations
n_obs = 2000

## Generating random returns for our 4 securities
return_vec = np.random.randn(n_assets, n_obs)

def rand_weights(n):
    '''
    Produces n random weights that sum to 1
    '''
    k = np.random.rand(n)
    return k / sum(k)

def random_portfolio(returns):
    '''
    Returns the mean and standard deviation of returns for a random portfolio
    '''
    p = np.asmatrix(np.mean(returns, axis=1))
    w = np.asmatrix(rand_weights(returns.shape[0]))
    C = np.asmatrix(np.cov(returns))
    mu = w * p.T
    sigma = np.sqrt(w * C * w.T)
    # This recursion reduces outliers to keep plots pretty
    if sigma > 2:
        return random_portfolio(returns)
    return mu, sigma

def optimal_portfolios(returns):
    n = len(returns)
    returns = np.asmatrix(returns)
    N = 50000
    # Risk-aversion levels to trace out the efficient frontier
    mus = [10 ** (5.0 * t / N - 1.0) for t in range(N)]
    # Convert to cvxopt matrices
    S = opt.matrix(np.cov(returns))
    pbar = opt.matrix(np.mean(returns, axis=1))
    # Create constraint matrices
    G = -opt.matrix(np.eye(n))   # negative n x n identity matrix
    h = opt.matrix(0.0, (n, 1))
    A = opt.matrix(1.0, (1, n))
    b = opt.matrix(1.0)
    # Calculate efficient frontier weights using quadratic programming
    portfolios = [solvers.qp(mu * S, -pbar, G, h, A, b)['x'] for mu in mus]
    ## Calculate the risk and returns of the frontier
    returns = [blas.dot(pbar, x) for x in portfolios]
    risks = [np.sqrt(blas.dot(x, S * x)) for x in portfolios]
    return returns, risks

n_portfolios = 25000
means, stds = np.column_stack([random_portfolio(return_vec) for x in range(n_portfolios)])
returns, risks = optimal_portfolios(return_vec)
Well, the constraint part does not work like that;
in this case the G matrix should be 4 x 4 and h should be 4 x 1, as below.
If a minimum threshold is required, it can be put into the h matrix, as in my example below (20% for the first asset). Yet, I could not manage to add a max constraint.
from cvxopt import matrix

G = matrix([[-1.0, 0.0, 0.0, 0.0],
            [0.0, -1.0, 0.0, 0.0],
            [0.0, 0.0, -1.0, 0.0],
            [0.0, 0.0, 0.0, -1.0]])
h = matrix([-0.2, 0.0, 0.0, 0.0])
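A max constraint fits the same Gx <= h form: an upper bound w_i <= u is just a row with +1 in position i and u in the corresponding entry of h. A sketch stacking the bounds asked about (asset 1 in [0.2, 0.4], asset 2 <= 0.3, plus non-negativity for the rest; the numbers come from the question, the row stacking is standard QP practice):

import numpy as np
from cvxopt import matrix

# each row of G is one inequality: G @ w <= h
G = matrix(np.array([
    [-1.0,  0.0,  0.0,  0.0],   # -w1 <= -0.2  ->  w1 >= 0.2
    [ 1.0,  0.0,  0.0,  0.0],   #  w1 <=  0.4
    [ 0.0,  1.0,  0.0,  0.0],   #  w2 <=  0.3
    [ 0.0, -1.0,  0.0,  0.0],   #  w2 >= 0
    [ 0.0,  0.0, -1.0,  0.0],   #  w3 >= 0
    [ 0.0,  0.0,  0.0, -1.0],   #  w4 >= 0
]))
h = matrix([-0.2, 0.4, 0.3, 0.0, 0.0, 0.0])
# G and h drop into solvers.qp(mu * S, -pbar, G, h, A, b) unchanged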
