Adjust gamma of image using ImageSharp

I'm porting an existing application to .NET Core and I need to adjust the gamma of an image using ImageSharp.
I've tried image.Mutate(i => i.Brightness(value)); but it doesn't give quite the same result as the original code, which adjusts the gamma.
The original code uses imgAttribs.SetGamma(value, ColorAdjustType.Bitmap); but I can't use System.Drawing.Common because it's missing a dependency on AWS Lambda.
Is it possible to change the gamma using ImageSharp, and if so, how?

Gamma adjustment is simply a non-linear adjustment of individual pixel values, so you don't need a built-in function to do it. Loop through the pixels and adjust each one.
The algorithm, from memory, is something like Math.Pow(component, gamma) for each R, G and B component of a pixel, with each component first normalised to the 0–1 range.
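A minimal sketch of that per-pixel loop using ImageSharp's pixel indexer, assuming an Image<Rgba32> (ApplyGamma is just a hypothetical helper name, and whether the exponent should be gamma or 1/gamma depends on the convention the original SetGamma call used):

using System;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;

static void ApplyGamma(Image<Rgba32> image, double gamma)
{
    for (int y = 0; y < image.Height; y++)
    {
        for (int x = 0; x < image.Width; x++)
        {
            Rgba32 p = image[x, y];
            // normalise each channel to 0..1, apply the power curve, scale back
            p.R = (byte)(Math.Pow(p.R / 255.0, gamma) * 255.0 + 0.5);
            p.G = (byte)(Math.Pow(p.G / 255.0, gamma) * 255.0 + 0.5);
            p.B = (byte)(Math.Pow(p.B / 255.0, gamma) * 255.0 + 0.5);
            image[x, y] = p;
        }
    }
}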

Related

How to specify boundary condition as DIRICHLET in pcl poisson surface reconstruction?

I'm working on a surface reconstruction job, but I've run into an issue:
I want to use the DIRICHLET boundary condition in Poisson reconstruction, but it seems that PCL's Poisson implementation doesn't support specifying the boundary condition; it always uses the NEUMANN boundary condition.
So I'm wondering how to use the DIRICHLET boundary condition in PCL's Poisson reconstruction.
BTW: my goal is to calculate the volume of a container. My point cloud isn't watertight, so I need the algorithm to 'imagine' the surface over the holes. CloudCompare supports specifying the boundary condition, and it works well, but in PCL the result with the NEUMANN boundary condition is terrible.
(Screenshot: the mesh generated by PCL Poisson with the NEUMANN condition.)
(Screenshot: the mesh generated by CloudCompare with the DIRICHLET condition specified.)
The original PoissonRecon code is at this GitHub repository. You can also find prebuilt executables for the Windows command line there (--bType sets the boundary condition). This option is available in the command-line executable starting from version 9.0.
[--bType ] This integer specifies the boundary type for the finite elements. Valid values are:
1: Free boundary constraints
2: Dirichlet boundary constraints
3: Neumann boundary constraints
The default value for this parameter is 3 (Neumann).
CloudCompare uses version 7.
PCL (1.12.0 at the moment of this post) uses version 4 of PoissonRecon.
Open3D (0.14.1 at the moment of this post) includes a wrapper over version 12, which supports both boundary conditions. However, it is hard-coded to use NEUMANN. You should be able to change the enum fairly easily and compile a version of Open3D that uses the DIRICHLET condition (I've never tried this myself).
Alternatively (if you can't use the original console app or recompile Open3D), you can try to work with what you've got.
You can try to identify the "imaginary" faces based on their area (smaller density means larger triangle area) and remove them; a short Open3D sketch of this idea is shown below. The original repository offers a SurfaceTrimmer tool (another console project) that does just that, based on the density values.
Then close the remaining open mesh using either some hole-closing method or a convex hull.
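A minimal sketch of the density-based trimming using Open3D's Python API (the file name and the 5% density cut-off are just placeholders, and this still runs Poisson with the default NEUMANN condition):

import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("container.ply")   # placeholder file name
pcd.estimate_normals()
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
# low-density vertices are the "imagined" surface far from any input points
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))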

Error: Required number of iterations = 1087633109 exceeds iterMax = 1e+06 ; either increase iterMax, dx, dt or reduce sigma

I am getting this error, and this post tells me that I should decrease sigma. But here is the thing: this code was working fine a couple of months ago, and nothing has changed in the data or the code. I am wondering why this error appears out of the blue.
Second, when I lower sigma to something like 13.1, it appears to run (but I have been waiting for an hour).
sigma=203.9057
dimyx1=1024
A22den = density(Lnetwork, sigma, distance="path", continuous=TRUE, dimyx=dimyx1)
About Lnetwork
Point pattern on linear network
69436 points
Linear network with 8417 vertices and 8563 lines
Enclosing window: rectangle = [143516.42, 213981.05] x [3353367, 3399153] units
Error: Required number of iterations = 1087633109 exceeds iterMax = 1e+06 ; either increase iterMax, dx, dt or reduce sigma
This is a question about the spatstat package.
The code for handling data on a linear network is still under active development. It has changed in recent public releases of spatstat, and has changed again in the development version. You need to specify exactly which version you are using.
The error report says that the required number of iterations of the algorithm is too large. This occurs because either the smoothing bandwidth sigma is too large, or the spacing dx between sample points along the network is too small. The number of iterations is proportional to (sigma/dx)^2 in most cases.
First, check that the value of sigma is physically reasonable.
Normally you shouldn't have to worry about the algorithm parameter dx because it is determined automatically by default. However, it's possible that your data are causing the code to choose a very small value of dx.
The internal code which automatically determines the spacing dx of sample points along the network has been changed recently, in order to fix several bugs.
I suggest that you specify the algorithm parameters manually. See the help file for densityHeat for information on how to control the spacings. Setting the parameters manually will also ensure greater consistency of the results between different versions of the software.
The quickest solution is to set finespacing=FALSE. This is not the best solution because it still uses some of the automatic rules which may be giving problems. Please read the help file to understand what that does.
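With the call from the question, that quick fix would look something like:

A22den = density(Lnetwork, sigma, distance="path", continuous=TRUE, dimyx=dimyx1, finespacing=FALSE)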
Did you update spatstat since this last worked? Probably the internal code for determining spacing on the network etc. changed a bit. The actual computations are done by the function densityHeat(), and you can see how to manually set spacing etc. in its help file.

Taking advantage of Julia's integration abilities

One of the main reasons I wanted to use Julia for my project is because of its speed, especially for calculating integrals.
I would like to integrate a 1-d function f(x) over some interval [a,b]. In general Julia's quadgk function would be a fast and accurate solution. However, I do not have the function f(x), but only its values f(xi) for a discrete set of points xi in [a,b], stored in an array. The xi's are regularly spaced, and I can get the spacing to be however small I like.
Naively, I could simply define a function f that interpolates the values f(xi) and feed it to quadgk (making the spacing as small as possible). However, then I won't know what my error is, which is a shame because QuadGK reports the error of its estimate.
Another solution is to write a function myself to integrate the array (with trapezoid rule for example), but that would defeat the purpose of using Julia...
What is the easiest way to accurately integrate a function only given discrete values using Julia?
Since you only have values, not the function itself, the trapezoid rule is probably your best bet. The Trapz package (https://github.com/francescoalemanno/Trapz.jl) provides this. However, I think it is worth seeing how easy it is to write a pretty good implementation yourself.
function trap(A, dx=1)
    # composite trapezoid rule for uniformly spaced samples; dx is the spacing of the xi
    return (sum(A) - (A[begin] + A[end])/2) * dx
end
This takes about 2.9 ms for an array of 10 million Float64 values, and about the same if they are Int. It would still work for complex numbers (taking about 8.9 ms).
A method like this is a good example of how simple it can be to write pretty fast code in Julia that is still fully generic.
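A quick usage example with made-up sample values, where dx plays the role of the spacing between the xi:

dx = 1e-4
xs = 0:dx:1
A = sin.(xs)    # stand-in for the tabulated f(xi) values
trap(A, dx)     # ≈ 1 - cos(1) ≈ 0.4597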

Semi-total derivative approximation with varying finite difference steps

I recently learned about the semi-total derivative approximation feature, and I've started using it with B-splines and an explicit component. My current problem is that my design variables come from two different components, similar to the XDSM below. As far as I can see, it is not possible to set different finite-difference steps for different design variables. So, looking at the XDSM again, the control points, x and z, would all have identical FD steps, i.e.
model.approx_totals(step=1)
works but
model.approx_totals(step=np.ones(5))
won't work. I guess one remedy is to use a relative step size, but some of my input bounds vary from 0 to xx, so maybe a relative step size is not the best. Is there a way to feed in FD steps as a vector, or something similar to:
for out in outputs:
    for dep, fdstep in zip(inputs, inputsteps):
        self.declare_partials(of=out, wrt=dep, method='fd', step=fdstep, form='central')
As of OpenMDAO V2.4, you don't have the ability to set per-variable FD step sizes when using approx_totals. The best option is just to use relative step sizes.
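For completeness, a single relative step for the whole approximation would look something like the sketch below (I believe the finite-difference option is called step_calc, but check the docs for the OpenMDAO version you are using):

model.approx_totals(method='fd', step=1e-6, step_calc='rel', form='central')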

Why use image1d_buffer_t? Does it support normal samplers?

I am trying to understand the image1d_buffer_t type in OpenCL. From what I can tell, it is a 1D image made from a buffer. The advantage over an image not made from a buffer is that the buffer-backed image can usually be much larger (it depends on the hardware, but the minimum size per this page is larger). Am I correct, however, that you cannot use the linear interpolation of a sampler? I am looking here.
So why even use the image rather than just a buffer?
Yes, you are correct that you can only use the sampler-less read functions with the image1d_buffer_t type, and therefore cannot make use of linear interpolation or the edge-handling features.
It's a minor convenience, but when using the image read/write functions you have the ability to change the data-type used to store the pixel values without having to change your kernel code. Similarly, you have the (sampler-less) read_imagef function, which will normalise the pixel value for you (and the corresponding write_imagef function).
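A minimal kernel sketch of that sampler-less access (assuming the image was created with a channel type that read_imagef can normalise, such as CL_UNORM_INT8):

// read_imagef returns normalised floats regardless of the stored channel type,
// which a plain __global uchar4* buffer read would not do for you
__kernel void scale(__read_only image1d_buffer_t src, __global float4 *dst)
{
    int i = get_global_id(0);
    float4 px = read_imagef(src, i);   // no sampler argument allowed with image1d_buffer_t
    dst[i] = px * 2.0f;
}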
