Local Localisation with particle filter - mobile-robots

I am doing local localisation with sonar and a particle filter (i.e. all particles initially start at the robot's pose).
I have a grid map of the environment. When I execute the algorithm in the environment (where doors are closed/open), the particles are not able to follow the robot.
I don't have random particles, since I know the initial position of the robot exactly.
Adding random particles would change the estimated pose of the robot (I take the median of the particles as the robot pose).
Any ideas/methods for how to improve local localisation?
Thank you.
Update:
Here I am using a noisy motion model, then a sensor model, and then resampling to converge the particles to the robot pose. My issue is how to make this robust when the environment differs from the map (mainly doors opening/closing and clutter inside the room).
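One common fix, taken from the beam-model sensor likelihoods in the probabilistic-robotics literature, is to mix a uniform "random measurement" term into each sonar beam's likelihood, so that readings the map cannot explain (an open door, clutter) do not drive a particle's weight to zero. A minimal Python sketch of the reweighting step; z_hit, z_rand and sigma are assumed tuning values, and the ray-cast expected ranges are assumed to come from your grid map:
import numpy as np

def beam_weight(z_meas, z_exp, z_max, z_hit=0.8, z_rand=0.2, sigma=0.1):
    # Gaussian "hit" term around the range the map predicts...
    gauss = np.exp(-0.5 * ((z_meas - z_exp) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    # ...plus a uniform term that absorbs readings the map cannot explain.
    return z_hit * gauss + z_rand / z_max

def reweight(expected, z_meas, z_max):
    # expected: (n_particles, n_beams) ranges ray-cast in the grid map
    w = np.ones(expected.shape[0])
    for b in range(expected.shape[1]):
        w *= beam_weight(z_meas[b], expected[:, b], z_max)
    return w / w.sum()
Because an unexplained beam only scales every particle's weight by a bounded amount instead of pushing it to near zero, the filter tolerates map changes without needing injected random particles.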

Related

Min Max component, objective function

I would like to perform some optimizations by minimizing the maximum of a specific path variable within Dymos, or the maximum of the absolute value of such a variable.
In linear programming methods, this can be done by introducing slack variables.
Do you know if this has been attempted before with Dymos, or if there was a reason not to include it?
I understand gradient-based methods are not entirely suitable for these problems, though I think some "functions" can be introduced to mitigate this.
For example, take the space shuttle reentry problem from [Betts][1], used as a [test example][2] in Dymos; the original source contains a variant in which the maximum heat flux is minimized. Such functionality could be implemented with a "loc" argument:
phase.add_objective('q_c', loc='max')
[1]: J. Betts. Practical Methods for Optimal Control and Estimation Using Nonlinear Programming. Society for Industrial and Applied Mathematics, second edition, 2010. URL: https://epubs.siam.org/doi/abs/10.1137/1.9780898718577, doi:10.1137/1.9780898718577.
[2]: https://openmdao.github.io/dymos/examples/reentry/reentry.html
This has been done with pseudospectral methods before. Dymos currently doesn't have any direct way of implementing this, for a few reasons:
As you said, doing this naively can introduce discontinuous gradients that confuse the optimizer. When the node at which the maximum occurs switches, this tends to cause a sharp edge discontinuity in the gradient.
Since the pseudospectral methods are discrete, you cannot guarantee that the maximum will occur at a node. It's often fine to assume it does, but sometimes your requirements might demand more precision.
There are two possible ways to get around this.
The KSComp in OpenMDAO can be used as a "differentiable maximum". Add one after the trajectory, feed it the timeseries data for the output of interest, and set it up such that it returns a smooth approximation to the maximum. The KS function is a bit conservative, so it won't pick out the precise maximum, but depending on the value of the rho option it can be tuned to get pretty close.
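Wired up, that might look roughly like the sketch below; the trajectory path, num_nodes, and the rho value are assumptions on my part, not fixed Dymos conventions:
import numpy as np
import openmdao.api as om

# KSComp acting as a differentiable maximum over the q_c timeseries.
p.model.add_subsystem('ks_max_q', om.KSComp(width=num_nodes, rho=50.0))
p.model.connect('traj.phase0.timeseries.q_c', 'ks_max_q.g',
                src_indices=np.arange(num_nodes), flat_src_indices=True)
# 'KS' is a smooth, slightly conservative over-estimate of max(q_c);
# increasing rho tightens the approximation.
p.model.add_objective('ks_max_q.KS')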
When a more precise value of a maximum is needed, it's pretty common to set up a trajectory such that a phase ends when the maximum or minimum is reached.
If the variable whose maximum is being sought is a state, this can be done by adding a boundary constraint on the rate source for that state.
This ensures that the maximum occurs at the first or last node in the phase (depending on whether it is an initial or final boundary constraint). That lets you capture its value more accurately.
If the variable being sought is not a state, it's possible to use the polynomials that are used for fitting states and controls in a phase to interpolate the variable of interest. Taking the time derivative of that polynomial then gives a reasonably good approximation of its rate. The master branch of Dymos has a method add_timeseries_rate_output that does this. And soon, within a few weeks hopefully, we'll add add_boundary_rate_constraint so that these interpolated rates can easily be used as boundary constraints.
In the meantime, you should be able to achieve this by adding the timeseries rate output and then manually applying the OpenMDAO method 'add_constraint' to the resulting timeseries output, using either indices=[0] or indices=[-1] to treat it as an initial or final constraint.
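In code, that interim workaround might look roughly like this; the rate-output name 'q_c_rate' and the trajectory path are assumptions:
# Add the interpolated rate of the variable of interest to the timeseries:
phase.add_timeseries_rate_output('q_c')
# Force that rate to zero at the final node, pinning the maximum to the boundary:
p.model.add_constraint('traj.phase0.timeseries.q_c_rate', equals=0.0, indices=[-1])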
This is a common enough request that we'll add some documentation on how to achieve this behavior using both the KSComp approach and the boundary constraint approach.
Personally I'm not as much of a fan of the KSComp, because I've had trouble getting those types of objectives to converge in the past. I've used the slack-variable approach and that has worked well. In the following example, we take a guess at the rotor power in the static analysis, and then we run a trajectory and get the actual rotor power during the mission. The objective was to minimize aircraft weight, so a large amount of power in statics costs more weight. The constraint shown below prevents us from decreasing our updated guess of rotor power in statics below the maximum power required during the trajectory.
p.model.add_subsystem(
    'static_power_check',
    om.ExecComp('Power_check = Power_ODE - Power_statics',
                Power_check={'value': np.ones(nn_timeseries_main_tx), 'units': 'kW'},
                Power_ODE={'value': np.ones(nn_timeseries_main_tx), 'units': 'kW'},
                Power_statics={'value': 0.0, 'units': 'kW'}),
    promotes_inputs=[('Power_ODE', 'hop0.main_phase.timeseries.Power_R'),
                     ('Power_statics', 'Power_{rotor,slack}')],
    promotes_outputs=['Power_check'])
p.model.add_constraint('Power_check', upper=0, ref=1)
The constraint on the slack variable effectively helped us ensure that our slack rotor power matched the maximum rotor power during the mission. This allowed us to get the right sizes for the rotor parts (i.e. motors).

Signal to filter in R

I need to filter a signal without losing its properties, so that the signal can later be fed into an artificial neural network. I'm using R and the signal library; I thought about using a low-pass filter or an FFT.
This is the signal to be filtered; it represents pixel shifts in a video. I calculated the resultant of the X and Y vectors to obtain a single value and thus generate this graph/signal:
Using the signal library and the fftfilt function, I obtained the following signal, which seems easier for a neural network to train on, but I did not understand what the function is doing and whether the signal's properties have been preserved.
resulting <- fftfilt(rep(1,50)/50,resulting)
Could someone explain how this function works, or suggest a better method to filter this signal?
As for the fftfilt(...) function, I can tell you roughly what it does: it is an approximate finite impulse response (FIR) filter implementation that uses the FFT of the filter's impulse response, with some padding, as a window. It takes the signal's FFT within the window and the filter's impulse-response FFT, generates the filtered signal in the frequency domain by simply multiplying the two, and then uses the inverse FFT to get the actual result in the time domain. If your FIR filter has a huge number of coefficients (even though in many cases that is just a sign of bad system design and should not be needed), the function is much faster than filter(Ma(...), ...). With a more reasonable number of coefficients (definitely below 100), the direct, exact approach is actually faster.
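The principle is easy to show outside R; here is a minimal Python sketch of FFT-based FIR filtering (fftfilt's exact padding and windowing details are glossed over):
import numpy as np

def fft_fir(b, x):
    # FIR filtering via the FFT: multiply the two spectra, then invert.
    n = len(x) + len(b) - 1                # full linear-convolution length
    nfft = 1 << (n - 1).bit_length()       # next power of two, for speed
    y = np.fft.irfft(np.fft.rfft(x, nfft) * np.fft.rfft(b, nfft), nfft)
    return y[:len(x)]                      # truncate to the input length

# The 50-point moving average from the question:
# smoothed = fft_fir(np.ones(50) / 50, resulting)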
As for proper filtering methods, there are so many of them that entire books cover just this topic. My personal experience in the field is skewed towards rather specific tricks for very low-power microcontroller-based sensor-signal DSP: fixed-point arithmetic, exact unity gain at a selected pass-band point, power-of-two scaling coefficients, and staged implementations, so I doubt it would help you in this case. Basically, you first need to know what result you want from your filter (and whether you even need to filter at all, or whether something like peak detection and decimation is enough), and then know what you are doing. From your message it is hard to guess what your neural network's requirements are, what you think you need for it, and what you actually need.

Predicting SPC (Statistical Process Control)

I will give a brief explanation of my scenario. The company mass-produces components like valves/nuts/bolts which need to be measured for dimensions (like length, radius, thickness) for quality purposes. As it is not feasible to inspect all the pieces, they are sampled in batches. For example, from every batch of 100 pieces, 5 are randomly selected and the mean of their dimensions is measured and noted for drawing SPC control charts (mean dimension on the y axis, batch number on the x axis).
Even though there are a number of factors (like operator efficiency, machine/tool condition etc) which affect the quality of the product, they don't seem to be measurable.
My objective is to develop a machine learning model to predict the (mean) product dimensions of the coming batch samples. This would help the operator forecast whether there is going to be any significant dimensional variation, so that he can pause work, figure out potential reasons, and thus prevent wastage of product/material.
I have some idea about R programming and machine learning techniques like decision trees/regression, but I couldn't land on a proper model for this, mainly because I couldn't think of the independent variables for this situation. I don't have much idea about time series modelling, though.
Will someone throw some insights/ideas/suggestions about how to tackle this?
I am sorry that I had to write a long story, but I just wanted to make things as clear as possible.
Thanks in advance.
Sreenath
Your requirement can be tackled at three levels, in steps:
1. Fundamental
Automatically apply SPC rules with machine learning: for example, identify SPC chart patterns with the Nelson rules, and extend this to new patterns of variation in your specific process (see the sketch after this list).
Nelson rules
ML system for SPC reference
2. Supplemental
Predict Cp and the SPC trend from multivariate data collection with machine learning. For example, smoke particles affect wafer yield rate; this could be caught earlier if the data analysis model links SPC with the worker shift schedule.
Improve SPC by PPC
3. Intelligent agent
Automate the process response by integrating SPC with a reaction plan. The agent is modelled by linking SPC with FMEA, and is built with a CEP engine in a BAM architecture.
Process integration
System integration
Intelligent Agent
CEP
BAM
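As a concrete starting point for the "Fundamental" level, here is a minimal Python sketch of two Nelson-rule checks; the baseline mean and sigma are assumed to come from your existing control chart:
import numpy as np

def nelson_rule_1(x, mean, sigma):
    # Rule 1: a point more than 3 standard deviations from the mean.
    return np.abs(x - mean) > 3.0 * sigma

def nelson_rule_2(x, mean, run=9):
    # Rule 2: nine or more consecutive points on the same side of the mean.
    side = np.sign(x - mean)
    flags = np.zeros(len(x), dtype=bool)
    count = 0
    for i in range(1, len(x)):
        count = count + 1 if side[i] == side[i - 1] and side[i] != 0 else 0
        if count >= run - 1:
            flags[i] = True
    return flags

# batch_means = np.array([...])  # per-batch means from the control chart
# alarms = nelson_rule_1(batch_means, batch_means.mean(), batch_means.std())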

Neural network back propagation weight change effect on predictions

I am trying to understand how a neural network can predict different outputs by learning different input/output patterns. I know that weight changes are the mechanism of learning, but if an input brings about weight adjustments to achieve a particular output in the backpropagation algorithm, won't this knowledge (the weight updates) be knocked out when the network is presented with a different input pattern, thus making the network forget what it had previously learnt?
The key to avoiding "destroying" the network's current knowledge is to set the learning rate to a sufficiently low value.
Let's take a look at the mathematics for a perceptron. The standard weight update rule is w_i ← w_i + η(t − o)x_i, where η is the learning rate, t the target output, o the actual output, and x_i the i-th input.
The learning rate η is always specified to be < 1. This forces the backpropagation algorithm to take many small steps towards the correct setting, rather than jumping there in large steps. The smaller the steps, the easier it is to "jitter" the weight values into the perfect settings.
If, on the other hand, we used a learning rate of 1, we could start to experience the convergence trouble you mentioned. A high learning rate implies that backpropagation always prefers to satisfy the currently observed input pattern.
Tuning the learning rate to a "perfect value" is unfortunately more of an art than a science. There are of course implementations with adaptive learning rates; refer to this tutorial from Willamette University. Personally, I've just used a static learning rate in the range [0.03, 0.1].
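To make the effect of η concrete, here is a minimal Python sketch of that update rule (a simple threshold perceptron; the 0.05 default is just an illustrative choice):
import numpy as np

def perceptron_step(w, x, t, eta=0.05):
    # One learning step: w <- w + eta * (t - o) * x
    o = 1.0 if np.dot(w, x) > 0 else 0.0   # thresholded output
    return w + eta * (t - o) * x

# With eta close to 1, each new pattern largely overwrites the weights and
# earlier patterns are "forgotten"; with a small eta, the corrections from
# repeated presentations of all patterns average out.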

Python - Clustering MFCC Vectors

I am currently doing a speaker verification project using hidden Markov models. I have no accurate results on voice signals yet, though I have tested the system on various data samples (not voice).
I extracted the MFCCs of the voice signals using scikits.talkbox. I assumed that no parameters needed to be changed and that the defaults are already fit for such a project. I suspect that my problem is in the vector quantization of the MFCC vectors. I chose k-means as my algorithm, using scipy's k-means clustering function. I was wondering whether there is a prescribed number of clusters for this kind of work; I originally set mine to 32. The sample rates of my voice files are 8000 and 22050 Hz. Additionally, I recorded them and manually removed the silence using Audacity.
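For reference, the quantization step described above looks roughly like this with scipy (mfcc is assumed to be an (n_frames, n_coeffs) array from the feature extraction):
import numpy as np
from scipy.cluster.vq import whiten, kmeans, vq

features = whiten(mfcc)                      # scale each coefficient to unit variance
codebook, distortion = kmeans(features, 32)  # 32-entry codebook, as in the question
codes, dists = vq(features, codebook)        # assign each frame to its nearest centroid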
Any suggestions?
