Can I use perf_event_open to work in counting mode and sampling mode at the same time?

Can I use perf_event_open in counting mode and sampling mode at the same time? If so, how do I do it?

Related

Problem with QT GUI Frequency Sink in GNU Radio on Windows

I tried to generate a Gaussian noise signal with a standard deviation of 1 by setting the amplitude value in the noise source configuration to 1.
When I displayed it using the QT GUI Frequency Sink, I initially expected the resulting PSD to fluctuate around 0 dB across all frequency points. However, I found that the PSD displayed by the QT GUI Frequency Sink fluctuates around -40 dB (instead of 0 dB) over all frequency points.
From a signal processing theory point of view, this result is clearly incorrect.
Is there a bug in my application that I am not aware of?
I am using GNU Radio v3.8.2.0-57-gd71cd177 (Python 3.9.0) on Windows 10.
Two things. First, there is a bug in your expectation:
the sum of the powers over all bins needs to be around 1, i.e. 0 dB, because that is the total power and hence the variance, so the individual bins need to be lower; in fact, the longer your FFT, the lower each bin.
Second, there is also undocumented scaling in the frequency sink, so we can call that a GNU Radio bug if we want. GNU Radio is aware of the problem. It is not easy to address, because unexpected behaviour is bad, but breaking existing applications is also bad.
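As a rough illustration of the first point, here is a NumPy-only sketch (not GNU Radio itself; the FFT length of 1024 is assumed for illustration). Unit-variance noise has a total power of about 0 dB, but that power is spread over N bins, so each bin sits near -10*log10(N) dB:
import numpy as np

N = 1024                                   # assumed FFT length
x = np.random.normal(0.0, 1.0, N)          # sigma = 1, so variance = 1
X = np.fft.fft(x) / N                      # normalized DFT
bin_power = np.abs(X) ** 2                 # power in each bin

print("total power  [dB]:", 10 * np.log10(bin_power.sum()))    # close to 0 dB
print("per-bin power [dB]:", 10 * np.log10(bin_power.mean()))  # about -10*log10(N), i.e. roughly -30 dB here
Any additional offset in the actual sink (such as the -40 dB you observe) would then come from the FFT length the sink really uses plus the undocumented scaling mentioned above.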

Error: Required number of iterations = 1087633109 exceeds iterMax = 1e+06 ; either increase iterMax, dx, dt or reduce sigma

I am getting this error, and this post tells me that I should decrease sigma. But here is the thing: this code was working fine a couple of months ago, and nothing has changed in the data or the code. I am wondering why this error appears out of the blue.
Secondly, when I lower sigma, for example to 13.1, it appears to run (but I have been waiting for an hour).
sigma = 203.9057
dimyx1 = 1024
A22den = density(Lnetwork, sigma, distance = "path", continuous = TRUE, dimyx = dimyx1)
About Lnetwork
Point pattern on linear network
69436 points
Linear network with 8417 vertices and 8563 lines
Enclosing window: rectangle = [143516.42, 213981.05] x [3353367, 3399153] units
Error: Required number of iterations = 1087633109 exceeds iterMax = 1e+06 ; either increase iterMax, dx, dt or reduce sigma
This is a question about the spatstat package.
The code for handling data on a linear network is still under active development. It has changed in recent public releases of spatstat, and has changed again in the development version. You need to specify exactly which version you are using.
The error report says that the required number of iterations of the algorithm is too large. This occurs because either the smoothing bandwidth sigma is too large, or the spacing dx between sample points along the network is too small. The number of iterations is proportional to (sigma/dx)^2 in most cases.
First, check that the value of sigma is physically reasonable.
Normally you shouldn't have to worry about the algorithm parameter dx because it is determined automatically by default. However, it's possible that your data are causing the code to choose a very small value of dx.
The internal code which automatically determines the spacing dx of sample points along the network has been changed recently, in order to fix several bugs.
I suggest that you specify the algorithm parameters manually. See the help file for densityHeat for information on how to control the spacings. Setting the parameters manually will also ensure greater consistency of the results between different versions of the software.
The quickest solution is to set finespacing=FALSE. This is not the best solution because it still uses some of the automatic rules which may be giving problems. Please read the help file to understand what that does.
Did you update spatstat since this last worked? The internal code for determining the spacing on the network has probably changed a bit. The actual computations are done by the function densityHeat(), and its help file shows how to set the spacing and other parameters manually.

How to decide which mode to use for 'kaiming_normal' initialization

I have read several code bases that do layer initialization using PyTorch's nn.init.kaiming_normal_(). Some use fan-in mode, which is the default. Of the many examples, one can be found here and is shown below.
init.kaiming_normal(m.weight.data, a=0, mode='fan_in')
However, I sometimes see people using fan-out mode, as seen here and shown below.
if isinstance(m, nn.Conv2d):
    nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
Can someone give me some guidelines or tips to help me decide which mode to select? Furthermore, I am working on image super-resolution and denoising tasks in PyTorch; which mode would be more beneficial for those?
According to the documentation:
Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass. Choosing 'fan_out' preserves the magnitudes in the backwards pass.
and according to Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (He, K. et al., 2015):
We note that it is sufficient to use either Eqn.(14) or Eqn.(10)
where Eqn.(10) and Eqn.(14) correspond to fan_in and fan_out respectively. Furthermore:
This means that if the initialization properly scales the backward signal, then this is also the case for the forward signal; and vice versa. For all models in this paper, both forms can make them converge
So, all in all, it does not matter much; it is more about what you are after. I assume that if you suspect your backward pass might be more "chaotic" (greater variance), it is worth changing the mode to fan_out. This might happen when the loss oscillates a lot (e.g. very easy examples followed by very hard ones).
The correct choice of nonlinearity is more important, where nonlinearity is the activation you are using after the layer you are currently initializing. The current defaults set it to leaky_relu with a=0, which is effectively the same as relu. If you are using leaky_relu, you should change a to its negative slope.
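As a minimal sketch of how that might look in practice (the toy model, the 0.2 slope, and the init_weights helper are made up for illustration, not taken from the question):
import torch.nn as nn

def init_weights(model, mode='fan_in', slope=0.2):
    # Kaiming-normal init for every conv layer; `nonlinearity` names the
    # activation that follows the layer, and `a` is its negative slope
    # (only relevant for leaky_relu).
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            nn.init.kaiming_normal_(m.weight, a=slope, mode=mode,
                                    nonlinearity='leaky_relu')
            if m.bias is not None:
                nn.init.zeros_(m.bias)

model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.LeakyReLU(0.2),
                      nn.Conv2d(64, 3, 3, padding=1))
init_weights(model, mode='fan_in', slope=0.2)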

Change brightness of video using avconv

In contrast to ffmpeg, it no longer seems to be possible to adjust brightness or other "quick" settings of videos using avconv; at least, a grep of the man page for "brightness" did not give a single result. Gamma correction seems to be hidden in some kind of LUT filter.
Can anyone point me to an option (in ffmpeg, those were mp, eq2, and later eq) that allows me to do so?
(As a side note, can anyone explain why this fundamental and useful functionality has been stripped or hidden from the user?)
Indeed, as indicated at https://www.libav.org/avconv.html#lut_002c-lutrgb_002c-lutyuv, you can use the following filters to change the gamma (use them with -vf):
lutyuv=y=gammaval(0.5)
or:
lutyuv="y=2*val"
If you are willing to play with RGB or YUV values, you can probably get something better using formulas like those in the examples on that page. For example, to increase saturation based on the formulas in https://stackoverflow.com/a/8810735/6040014:
lutyuv="y=2*val:u=(val-128)*2+128:v=(val-128)*2+128"
An experiment with y=(val-128)*2+128 seems to give a contrast increase (though "contrast" is a technical term that probably deserves a better formula than this).
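To see what these expressions do numerically, here is a NumPy sketch of the per-pixel arithmetic (not an avconv invocation; full-range 8-bit values 0..255 are assumed for simplicity):
import numpy as np

val = np.arange(256)                                      # all possible 8-bit luma values

gamma_half = np.clip(255 * (val / 255) ** 0.5, 0, 255)    # roughly what y=gammaval(0.5) computes
brighter   = np.clip(2 * val, 0, 255)                     # y=2*val
contrast   = np.clip((val - 128) * 2 + 128, 0, 255)       # y=(val-128)*2+128

print(contrast[[0, 64, 128, 192, 255]])                   # dark values pushed down, bright values pushed up
The clipping to 0..255 is done here only to keep the results in a valid 8-bit range.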

OpenMDAO 1.x relevance reduction

I have a component in OpenMDAO without outputs that serves to provide inputs to the rest of the group. apply_linear on that component is being called despite the fact that its output is not connected. Shouldn't the relevance reduction algorithm in OpenMDAO 1.x figure out that apply_linear for this component never needs to be called?
As it turns out, relevance reduction on a per-variable basis isn't turned on by default. You can turn it on with:
prob.root.ln_solver = LinearGaussSeidel()
prob.root.ln_solver.options['single_voi_relevance_reduction'] = True
This option is set to False by default because it does use more memory by allocating separate vectors for each quantity of interest (each vector is smaller because it only contains relevant variables, but the total size may be larger). Also, relevance reduction is only applicable when using LinearGaussSeidel as the top linear solver.
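A minimal sketch of where these lines might sit in a script (the import path, the surrounding setup, and the ordering relative to setup() are assumptions based on typical OpenMDAO 1.x usage, not taken from the question):
from openmdao.api import Problem, Group, LinearGaussSeidel

# Hypothetical skeleton: build the model, then swap the root linear solver
# for LinearGaussSeidel and enable per-variable relevance reduction.
prob = Problem(root=Group())
# ... add components and connections to prob.root here ...
prob.root.ln_solver = LinearGaussSeidel()
prob.root.ln_solver.options['single_voi_relevance_reduction'] = True
prob.setup()
prob.run()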
To add to the answer above: if you're not running under MPI, activating single_voi_relevance_reduction is essentially free. The real increase in memory use isn't due to the vectors themselves; it's due to the index arrays that we store in order to transfer the data from source arrays to target arrays. We're forced to use index arrays under MPI because PETSc requires it, but when we're not using MPI we use Python slice objects to do our data transfers, and slice objects require very little memory.

Resources