Is a Markov chain the same as a finite state machine?

Is a finite state machine just an implementation of a Markov chain? What are the differences between the two?

Markov chains can be represented by finite state machines. The idea is that a Markov chain describes a process in which the transition to a state at time t+1 depends only on the state at time t. The main thing to keep in mind is that the transitions in a Markov chain are probabilistic rather than deterministic, which means that you can't always say with perfect certainty what will happen at time t+1.
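To make the probabilistic-transition point concrete, here is a minimal sketch of a Markov chain driven like a state machine (the three-state transition matrix is made up for illustration):

import numpy as np

# Row i of P is the distribution over next states given current state i;
# each row sums to 1. The numbers are invented for this example.
P = np.array([[0.9, 0.1, 0.0],
              [0.4, 0.4, 0.2],
              [0.0, 0.5, 0.5]])

rng = np.random.default_rng(0)
state = 0
for t in range(10):
    # The distribution of the next state depends only on the current
    # state (the Markov property), but the transition itself is random.
    state = rng.choice(3, p=P[state])
    print(t + 1, state)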
The Wikipedia article on finite-state machines has a subsection on finite Markov-chain processes; I'd recommend reading that for more information. The Wikipedia article on Markov chains also has a brief sentence describing the use of finite state machines to represent a Markov chain. It states:

A finite state machine can be used as a representation of a Markov chain. Assuming a sequence of independent and identically distributed input signals (for example, symbols from a binary alphabet chosen by coin tosses), if the machine is in state y at time n, then the probability that it moves to state x at time n + 1 depends only on the current state.

Whilst a Markov chain is a finite state machine, it is distinguished by its transitions being stochastic, i.e. random, and described by probabilities.

The two are similar, but the other explanations here are slightly wrong. Only a FINITE Markov chain can be represented by an FSM; Markov chains in general allow for an infinite state space. As was pointed out, the transitions of a Markov chain are described by probabilities, but it is also important to mention that the transition probabilities can only depend on the current state. Without this restriction, it would just be called a "discrete-time stochastic process".

I believe this should answer your question:
https://en.wikipedia.org/wiki/Probabilistic_automaton
And you are on to the right idea - they are almost the same: subsets, supersets, and modifications, depending on what adjective describes the chain or the automaton. Automata typically take an input as well, but I am sure there have been papers utilizing 'Markov chains' with inputs.
Think Gaussian distribution vs. normal distribution - same idea, different fields. Automata belong to computer science; Markov belongs to probability and statistics.

I think most of the answers are not appropriate. A Markov process is generated by a (probabilistic) finite state machine, but not every process generated by a probabilistic finite state machine is a Markov process. E.g., hidden Markov processes are basically the same as processes generated by probabilistic finite state machines, yet not every hidden Markov process is a Markov process.

Leaving the inner working details aside, a finite state machine is like a plain value, while a Markov chain is like a random variable (probability added on top of the plain value). So the answer to the original question is no, they are not the same. In the probabilistic sense, a Markov chain is an extension of a finite state machine.

Related

Min Max component, objective function

I would like to perform some optimizations by minimizing the maximum of a specific path variable within Dymos, or the maximum of the absolute value of such a variable.
In linear programming methods, this can be done by introducing slack variables.
Do you know if this has been attempted before with Dymos, or if there was a reason not to include it?
I understand gradient-based methods are not entirely suitable for these problems, though I think some "functions" can be introduced to mitigate this.
For example, the space shuttle reentry problem from [Betts][1] is used as a [test example][2] in Dymos, and the original source contains a variant in which the maximum heat flux is minimized. Such functionality could be implemented with a "loc" argument as:
phase.add_objective('q_c', loc='max')
[1]: J. Betts. Practical Methods for Optimal Control and Estimation Using Nonlinear Programming. Society for Industrial and Applied Mathematics, second edition, 2010. doi:10.1137/1.9780898718577. URL: https://epubs.siam.org/doi/abs/10.1137/1.9780898718577
[2]: https://openmdao.github.io/dymos/examples/reentry/reentry.html
This has been done with pseudospectral methods before. Dymos currently doesn't have any direct way of implementing this, for a few reasons:
As you said, doing this naively can introduce discontinuous gradients that confuse the optimizer. When the node at which the maximum occurs switches, this tends to cause a sharp edge discontinuity in the gradient.
Since the pseudospectral methods are discrete, you cannot guarantee that the maximum will occur at a node. It's often fine to assume it does, but sometimes your requirements might demand more precision.
There are two possible ways to get around this.
The KSComp in OpenMDAO can be used as a "differentiable maximum". Add one after the trajectory, feed it the timeseries data for the output of interest, and set it up such that it returns a smooth approximation to the maximum. The KS function is a bit conservative, so it won't pick out the precise maximum, but depending on the value of the rho option it can be tuned to get pretty close.
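For instance, a hedged sketch of that wiring (names like 'traj.phase0.timeseries.q_c' and the node count are placeholders for your model, and the connection may need src_indices depending on the timeseries shape):

import openmdao.api as om

# ... after building the Problem `p` and its trajectory ...
nn = 30  # assumed number of timeseries nodes

# KSComp gives a smooth, conservative approximation of max(g); a larger
# rho tracks the true maximum more closely at the cost of smoothness.
p.model.add_subsystem('ks', om.KSComp(width=nn, rho=50.0))
p.model.connect('traj.phase0.timeseries.q_c', 'ks.g')

# Minimize the smoothed maximum instead of the raw, non-smooth maximum.
p.model.add_objective('ks.KS')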
When a more precise value of a maximum is needed, it's pretty common to set up a trajectory such that a phase ends when the maximum or minimum is reached.
If the variable whose maximum is being sought is a state, this can be done by adding a boundary constraint on the rate source for that state.
This ensures that the maximum occurs at the first or last node in the phase (depending on whether it's an initial or final boundary constraint). That lets you more accurately capture its value.
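As a sketch, assuming the state is named q and its rate source is an ODE output named q_dot (both names are placeholders):

# Force dq/dt = 0 at the end of the phase, so the phase ends exactly
# where q attains its interior extremum.
phase.add_boundary_constraint('q_dot', loc='final', equals=0.0)

# The final value of q is then the extremum, usable e.g. as an objective.
phase.add_objective('q', loc='final')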
If the variable being sought is not a state, it's possible to use the polynomials that are used for fitting states and controls in a phase to interpolate the variable of interest. By then taking the time derivative of that polynomial, we can get a reasonably good approximation for its rate. The master branch of Dymos has a method add_timeseries_rate_output that does this. And soon, within a few weeks hopefully, we'll add add_boundary_rate_constraint so that these interpolated rates can be easily used as boundary constraints.
In the meantime, you should be able to achieve this by adding the timeseries rate output and then manually applying the OpenMDAO method 'add_constraint' to the resulting timeseries output, using either indices=[0] or indices=[-1] to treat it as an initial or final constraint.
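Something along these lines (a sketch; the timeseries path and the '_rate' suffix on the output name are assumptions about your model):

# Request the interpolated rate of the (non-state) variable of interest.
phase.add_timeseries_rate_output('q_c')

# Then pin the rate to zero at the final point only, via plain OpenMDAO
# constraint indices (use indices=[0] for an initial constraint instead).
p.model.add_constraint('traj.phase0.timeseries.q_c_rate',
                       equals=0.0, indices=[-1])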
This is a common enough request that we'll add some documentation on how to achieve this behavior using both the KSComp approach and the boundary constraint approach.
Personally I'm not as much of a fan of KSComp because I've had trouble getting those types of objectives to converge in the past. I've used the slack variable approach and that has worked well. In the following example, we take a guess at the rotor power in static analysis, and then we run a trajectory and get the actual rotor power during the mission. The objective was to minimize aircraft weight, so if you have a large amount of power in statics, that costs more weight. The constraint shown below prevents us from decreasing our updated guess of rotor power in statics below the maximum power required during the trajectory.
p.model.add_subsystem(
    'static_power_check',
    om.ExecComp('Power_check = Power_ODE - Power_statics',
                Power_check={'value': np.ones(nn_timeseries_main_tx), 'units': 'kW'},
                Power_ODE={'value': np.ones(nn_timeseries_main_tx), 'units': 'kW'},
                Power_statics={'value': 0.0, 'units': 'kW'}),
    promotes_inputs=[('Power_ODE', 'hop0.main_phase.timeseries.Power_R'),
                     ('Power_statics', 'Power_{rotor,slack}')],
    promotes_outputs=['Power_check'])

p.model.add_constraint('Power_check', upper=0, ref=1)
The constraint on the slack variable effectively helped us ensure that our slack rotor power matched the maximum rotor power during the mission. This allowed us to get the right sizes for the rotor parts (i.e. motors).

Speed of linear dynamical system trajectory

[warning: biologist asking a math question]
In a linear dynamical system (LDS), what feature of the matrix controls the speed of the trajectory in state space?
Say I have a matrix M describing how the LDS evolves per discrete time unit t. The state after 10 steps is given by M^10 applied to the initial state, and I'll call it the final state.
For the same initial condition, how should I modify M to make it reach the final state in arbitrarily fewer (or more) time steps? Is it trivial?
thanks,
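To make the setup concrete (a sketch, not part of the original post; the 2x2 matrix is invented): one candidate answer is to replace M with a fractional power of itself, since (M^(10/k))^k = M^10 whenever the fractional power is well defined (e.g., no eigenvalues on the negative real axis).

import numpy as np
from scipy.linalg import fractional_matrix_power

M = np.array([[0.9, 0.1],
              [0.0, 0.8]])
x0 = np.array([1.0, 1.0])

x_final = np.linalg.matrix_power(M, 10) @ x0   # state after 10 steps

# Reach the same final state in k = 5 steps by rescaling the dynamics.
k = 5
M_fast = fractional_matrix_power(M, 10 / k)
x_fast = np.linalg.matrix_power(M_fast, k) @ x0

print(np.allclose(x_fast, x_final))            # True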

Condition for a Markov chain's stationary distribution to be independent of the initial state

As a property of a Markov chain, the stationary distribution is widely used in many fields, e.g. PageRank.
The distribution is a property of the transition matrix alone and has nothing to do with the initial state of the chain.
So what condition on the transition matrix guarantees that the chain converges to the stationary distribution after n iterations, regardless of the initial state?
Markov chains aren't guaranteed to have unique stationary distributions. For example, consider a two-state Markov chain whose transition matrix is the identity matrix: whatever the initial state is, it never changes. So in that case there is no stationary distribution that is independent of the initial state.
Where there is a unique stationary distribution, unless the initial state is the stationary distribution itself, the stationary distribution is only reached in the limit as n tends to infinity. Iteration n+1 will be closer to it than iteration n, but however large n is, it won't ever actually be the stationary distribution. However, for practical purposes (i.e. to the limit of the accuracy of floating point numbers in computers), the stationary state may well be reached after a handful of iterations.
You need the underlying graph to be strongly connected (irreducible) and aperiodic. If you want to find the stationary distribution of a periodic Markov chain just by running some chain, add "stay put" self-transitions with some constant probability to each node and scale the other transitions down accordingly.
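As a small illustration (the 3-state matrix is made up), here is the "stay put" trick in code: the lazy chain 0.5*I + 0.5*P has the same stationary distribution as P but is guaranteed aperiodic, so power iteration converges from any initial distribution:

import numpy as np

P = np.array([[0.0, 1.0, 0.0],      # a periodic chain: it cycles 0 -> 1 -> 2 -> 0
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

P_lazy = 0.5 * np.eye(3) + 0.5 * P  # same stationary distribution, aperiodic

pi = np.array([1.0, 0.0, 0.0])      # an arbitrary initial distribution
for _ in range(100):
    pi = pi @ P_lazy
print(pi)                            # ~ [1/3, 1/3, 1/3], independent of the start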

What is the purpose in this part of the Monte Carlo path tracing algorithm?

In all of the simple path-tracing algorithms that use lots of Monte Carlo samples, the path-tracing part of the algorithm randomly chooses between returning the emitted value for the current surface and continuing by tracing another ray from that surface's hemisphere (for example, in the slides here). Like so:
TracePath(p, d) returns (r, g, b) [and calls itself recursively]:
    Trace ray (p, d) to find nearest intersection p'
    Select with probability (say) 50%:
        Emitted:
            return 2 * (Le_red, Le_green, Le_blue)          // 2 = 1/(50%)
        Reflected:
            generate ray in random direction d'
            return 2 * fr(d -> d') * (n dot d') * TracePath(p', d')
Is this just a way of using Russian roulette to terminate a path while remaining unbiased? Surely it would make more sense to count the emissive and reflective properties for all ray paths together and use Russian roulette just to decide whether to continue tracing or not.
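To illustrate the proposal in the question (a toy, grayscale sketch with made-up constants, not taken from the slides): always accumulate emission, and let Russian roulette govern only whether the path continues. Dividing the continued term by the continuation probability keeps the estimator unbiased.

import random

EMITTED = 0.1        # made-up emitted radiance at every hit
REFLECTANCE = 0.7    # made-up scalar reflectance at every hit

def trace_path():
    radiance = EMITTED           # emission is always counted
    q = 0.5                      # probability of continuing the path
    if random.random() < q:
        # Dividing by q compensates for the terminated paths, so
        # E[trace_path()] = EMITTED + REFLECTANCE * E[trace_path()].
        radiance += REFLECTANCE * trace_path() / q
    return radiance

n = 100_000
print(sum(trace_path() for _ in range(n)) / n)  # ~ EMITTED / (1 - REFLECTANCE)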
And here's a follow-up question: why do some of these algorithms I'm seeing (like in the book 'Physically Based Rendering') only compute emission once, instead of taking into account all the emissive properties of an object? The rendering equation is basically
L_o = L_e + integral of (light exiting other surfaces into the hemisphere of this surface)
which seems like it counts the emissive properties both in this L_o and in the integral of all the other L_o's, so the algorithms should follow.
In reality, the single emission-vs-reflection choice is a bit too simplistic. To answer the first question, the coin flip is used to terminate the ray, but it leads to much higher variance. The second question is a bit more complex...
In the abstract of Shirley, Wang and Zimmerman TOG 94, the authors briefly summarize the benefits and complexities of Monte Carlo sampling:

In a distribution ray tracer, the crucial part of the direct lighting calculation is the sampling strategy for shadow ray testing. Monte Carlo integration with importance sampling is used to carry out this calculation. Importance sampling involves the design of integrand-specific probability density functions which are used to generate sample points for the numerical quadrature. Probability density functions are presented that aid in the direct lighting calculation from luminaires of various simple shapes. A method for defining a probability density function over a set of luminaires is presented that allows the direct lighting calculation to be carried out with one sample, regardless of the number of luminaires.
If we start dissecting that abstract, here are some of the important points:
Lights aren't points: in real life, we're almost never dealing with a point light source (e.g., a single LED).
Shadows are usually soft: this is a consequence of the non-point lights. It's very rare to see a truly hard-edged shadow in real life.
Noise (especially bright sampling artifacts) is disproportionately distracting: humans have a lot of intuition about how things should look. Look at slide 5 (the glass sphere on a table) in the OP's linked presentation, and note the bright specks in the shadow.
When rendering for more visual realism, both the reflected visibility rays and the lighting calculation rays must be sampled and weighted according to the surface's bidirectional reflectance distribution function.
Note that this is a guided sampling method that's distinctly different from the original question's "generate ray in random direction" method in that it is both:
More accurate: the images in the linked PDF suffer a bit from the PDF conversion process, but Figure 10 is a reasonable representation of the original - note the lack of bright speckle artifacts that you will sometimes see (as in figure 5 of the original presentation).
Significantly faster: as the original presentation notes, unguided Monte Carlo sampling can take quite a while to converge. More sampling rays = much more computation = more time.
After reading the slides (thank you for posting), I'll amend my answer as best I can.
Is this just a way of using Russian roulette to terminate a path while remaining unbiased? Surely it would make more sense to count the emissive and reflective properties for all ray paths together and use Russian roulette just to decide whether to continue tracing or not.
Perhaps the emitted and reflected properties are treated differently because the reflected path depends on the incident path in a way that emitted paths do not (at least for a specular surface). Does the algorithm take a Bayesian approach and use prior information about the incidence angle as a prior for predicting the reflection angle? Or is this a Feynman-style integration over all paths to come up with a probability? It's hard to tell without digging deeper into the details of the theory.
My earlier black body comment is quite incorrect. I see that the slides talk about (R, G, B) components; black body emissivities are integrated over all wavelengths.
And here's a follow-up question: why do some of these algorithms I'm seeing (like in the book 'Physically Based Rendering') only compute emission once, instead of taking into account all the emissive properties of an object? The rendering equation is basically
L_o = L_e + integral of (light exiting other surfaces into the hemisphere of this surface)
A single emissivity for the surface would assume that there's no functional dependence on wavelength or direction. I don't know how significant that is for rendering photo-realistic images.
The ones that are posted are certainly impressive. I wonder how different they would look if the complexities that you have in mind were included?
Thank you for posting a nice question - I'm voting it up. It's been a long time since I've thought about this kind of problem. I wish I could be more helpful.
Yes, that is a very basic implementation of Russian roulette, though normally the probability of terminating would take into account the light intensity (i.e. less light means the value contributes less to the final summation, so use a higher probability of terminating).
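Continuing the toy sketch from above (still with made-up constants): tying the continuation probability to the path throughput kills dim paths more aggressively while remaining unbiased, because every continued bounce is still divided by its own continuation probability.

import random

EMITTED = 0.1
REFLECTANCE = 0.7

def trace_path(throughput=1.0):
    radiance = EMITTED
    # Dimmer paths (lower throughput) get a lower continuation probability.
    q = min(0.95, max(0.05, throughput))
    if random.random() < q:
        radiance += REFLECTANCE * trace_path(throughput * REFLECTANCE) / q
    return radiance

n = 100_000
print(sum(trace_path() for _ in range(n)) / n)  # still ~ 1/3, with fewer bounces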

fitness function and Selection for a Genetic Algorithm

I'm trying to design a nonlinear fitness function where I maximize variable A and minimize variable B. The issue is that maximizing A is much more important at single-digit values, almost logarithmic. B needs to be minimized and, in contrast to A, becomes less important when small (less than one) and more important when it's larger (>1), so exponential decay.
The main goal is to optimize A, so I guess an analog is A = profits, B = costs.
Should I aim to keep everything positive so that I can use roulette wheel selection, or would it be better to use a rank/tournament kind of system? The purpose of my algorithm is shape optimization.
Thanks
When considering a multi-objective problem, the goal is usually to identify all solutions that lie on the Pareto curve - the Pareto optimal set. Have a look here for a 2-dimensional visual example. When the algorithm completes, you want a set of solutions that are not dominated by any other solution. You therefore need to define a Pareto ranking mechanism that takes into account both objectives - for a more in-depth explanation, as well as links to further reading, go here.
With this in mind, in order to effectively explore all solutions along the Pareto front, you do not want an implementation that encourages premature convergence, otherwise your algorithm will only explore the search space in one specific area of the Pareto curve. I would implement a selection operator that keeps all members of each iteration's optimal set of solutions (that is, all solutions which are not dominated by another) plus a parameter-controlled percentage of other solutions. This way you encourage exploration all along the Pareto curve.
You also need to ensure your mutation and crossover operators are tuned correctly. With any novel application of evolutionary algorithms, part of the problem is trying to identify an optimal parameter set for the problem domain... this is where it gets really interesting!
The description is very vague, but assuming that you actually have an idea of what the function should look like and you're just wondering whether you need to modify it so that proportional selection can be used easily, then no. Regardless of fitness function, you should probably default to using something like tournament selection. Controlling selection pressure is one of the most important things you have to do in order to get consistently good results, and roulette wheel selection doesn't allow you that control. You typically get enormous pressure very early, which drives premature convergence. That might be preferable in a few cases, but it's not where I'd start my investigations.
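For what it's worth, here is a minimal sketch of tournament selection, plus one possible, entirely illustrative shaping of the fitness described in the question - logarithmic reward on A, exponentially growing penalty on B; the 0.01 weight and the bounds are placeholders to tune:

import math
import random

def fitness(ind):
    A, B = ind
    # log1p: big gains for single-digit A, diminishing returns after.
    # expm1: B barely matters below 1, dominates as it grows.
    return math.log1p(max(A, 0.0)) - 0.01 * math.expm1(max(B, 0.0))

def tournament_select(population, k=3):
    # Pick the best of k random individuals; larger k raises selection
    # pressure in a controlled way, unlike roulette wheel selection.
    return max(random.sample(population, k), key=fitness)

population = [(random.uniform(0, 10), random.uniform(0, 3)) for _ in range(50)]
parent = tournament_select(population)
print(parent, fitness(parent))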
