Source of crackle in phase modulation synthesis

I'm trying to make a simple phase modulation synthesizer based on wavetables and DDS. I have a 12-bit wavetable containing 4096 samples of a sine wave, and I'm using a 32-bit phase accumulator.
Implementing my idea works, but there is a lot of low-level crackle associated with modulating the depth of phase modulation. I'm generating my samples like so:
Modulator = Modulation*SineWavetable[PhaseAc2>>20];
Sample = SineWavetable[(PhaseAc1 + Modulator)>>20];
I thought the crackle might be caused by modulating the "Modulation" parameter a bit too hard/fast, but this doesn't seem to be the cause of the problem. Could anybody enlighten me on potential problems with this method of phase modulation?
As ever, thanks!

As it turns out, typecasting is a very big deal here! I was mixing an int32_t (Modulator) with a uint32_t (PhaseAc1), which caused strange overflow problems where the phase would momentarily glitch, producing the audible crackle. The fix was to update the phase accumulator outside of the array-index expression, with an explicit cast, like so:
Modulator = Modulation*SineWavetable[PhaseAc2>>20];
PhaseAc1 += (int32_t)Modulator;
Sample = SineWavetable[(PhaseAc1 + Modulator)>>20];
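For anyone who wants to experiment with the arithmetic outside the firmware, here is a minimal Python sketch of the same fixed-point scheme (4096-entry table, 32-bit accumulators, the same >>20 index shift). The names and increments are made up for illustration; the key detail is masking the sum back to 32 bits so a negative modulator wraps the phase instead of producing a bad table index.
import math

TABLE_BITS = 12
SHIFT = 32 - TABLE_BITS            # 20, matching the snippets above
MASK32 = 0xFFFFFFFF

# 12-bit signed sine table with 4096 entries
SineWavetable = [int(2047 * math.sin(2 * math.pi * i / 4096)) for i in range(4096)]

def render(num_samples, inc1, inc2, modulation):
    phase1 = phase2 = 0
    out = []
    for _ in range(num_samples):
        modulator = int(modulation * SineWavetable[phase2 >> SHIFT])
        # keep the sum in unsigned 32-bit range so a negative modulator
        # wraps the phase rather than producing an out-of-range index
        index = ((phase1 + modulator) & MASK32) >> SHIFT
        out.append(SineWavetable[index])
        phase1 = (phase1 + inc1) & MASK32
        phase2 = (phase2 + inc2) & MASK32
    return out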

Recommendations (functions/solutions) to apply in OpenMDAO instead of boolean conditions (if/else)

I have been working with OpenMDAO for a couple of months and I find myself struggling with my code when I want to impose conditions to replicate a physical/engineering behaviour.
I have tried using sigmoid functions, but I am still not convinced by them, due to the difficulty of trading off sensitivity against numerical stability. Most of the time I get overflows in exp, so I end up including other conditionals (like np.where) and thereby losing linearity.
outputs['sigmoid'] = 1 / (1 + np.exp(-x))
I was looking for another kind of step function or something like that, able to keep linearity and differentiability to ease the optimization. I don't know if something like that exists, or if there is any strategy that can help me. If it helps, I am working with an OpenConcept benchmark, which uses vectorized computations and Simpson's rule numerical integration.
Thank you very much.
PS: This is my first ever question on Stack Overflow, so I would like to apologize in advance for any error or bad practice committed. I hope to eventually collaborate and become active in the community.
Update after Justin's answer:
I will take the opportunity to define my problem and the strategy I tried a little further. I am trying to monitor and control the thermodynamic conditions inside a tank. One of the things I need is to take action when the pressure P1 reaches a certain threshold P2. To define this:
eval= (inputs['P1'] - inputs['P2']) / (inputs['P1'] + inputs['P2'])
# P2 = threshold [Pa]
# P1 = calculated pressure [Pa]
k=100 #steepness control
outputs['sigmoid'] = (1 / (1 + np.exp(-eval * k)))
eval was defined to avoid overflows by normalizing the values, so that corrections are taken when the threshold is reached. In a very similar way, I defined a function to check whether there is still mass (so flow can continue between systems):
eval= inputs['mass']/inputs['max']
k=50
outputs['sigmoid'] = (1 / (1 + np.exp(-eval*k)))**3
max is also used to normalize the value, and the exponent is added so that the function reaches zero before entering the negative domain.
Plot omitted (it seems I cannot post images yet because of my reputation).
It may be important to highlight that both mass and pressure are calculated from coupled ODE integration, in which these activation functions take part. I guess the OpenConcept solver 'explores' a lot of possible values before arriving at the solution, so much of the time it produces negative, infeasible values for mass and pressure, creating overflows. For that reason I sometimes include:
eval[np.where(eval > 1.5)] = 1.5
eval[np.where(eval < -1.5)] = -1.5
That is not a beautiful solution, but it is sometimes effective. I try to avoid it, since I suspect these bounds make the solver's and optimizer's work harder.
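One way to keep the sigmoid itself from overflowing, no matter what values the solver explores, is to clip the argument of the exponential rather than the physical inputs. A minimal sketch (the helper name and the clip limit of 30 are my own choices, not OpenMDAO/OpenConcept API):
import numpy as np

def stable_sigmoid(eval_, k=100.0, limit=30.0):
    # clip the exponent argument, not the physical variables, so np.exp
    # can never overflow while the output still saturates at 0 or 1
    z = np.clip(-k * eval_, -limit, limit)
    return 1.0 / (1.0 + np.exp(z))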
I could give you a more complete answer if you distilled your question down to a specific code example of the function you're wrestling with and its expected input range. If you provide that code-sample, I'll update my answer.
Broadly, this is a common challenge when using gradient-based optimization. You want some kind of behavior like an if-condition to turn something on or off, and in many cases that's a fundamentally discontinuous function.
To work around that we often use sigmoid functions, but these do have some of the numerical challenges you pointed out. You could try a hyperbolic tangent as an alternative, though it may suffer from the same kinds of problems.
I will give you two broad options:
Option 1
Sometimes it's OK (even if not ideal) to leave the purely discrete conditional in the code. Let's say you wanted to represent a simple piecewise function:
y = 2x; x>=0
y = 0; x < 0
There is a sharp corner in that function right at 0. That corner is not differentiable, but the function is fine everywhere else. This is very much like the absolute value function in practice, though you might not draw the analogy looking at the piecewise definition of the function because the piecewise nature of abs is often hidden from you.
If you know (or at least can check after the fact) that your final answer will not lie right on or very near that C1 discontinuity, then it's probably fine to leave the code the way it is. Your derivatives will be well defined everywhere except right at 0, and there you can simply pick either the left or the right value.
It's not strictly mathematically correct, but it works fine as long as you're not ending up stuck right there.
Option 2
Apply a smoothing function. This can be a sigmoid, or a simple polynomial. The exact nature of the smoothing function is highly specific to the kind of discontinuity you are trying to approximate.
In the case of the piecewise function above, you might be tempted to define that function as:
2x*sig(x)
That would give you roughly the correct behavior, and it would be differentiable everywhere. But Wolfram Alpha shows that it actually undershoots a little. That's probably undesirable, so you can make the sigmoid steeper to mitigate it. This, however, is where you start to get underflow and overflow problems.
So to work around that, and make a better-behaved function all around, you could instead define a three-part piecewise polynomial:
y = 2x; x>=a
y = c0 + c1*x + c2*x**2; -a <= x < a
y = 0; x < -a
you can solve for the coefficients as a function of a by matching the value and the slope at both x = -a and x = a (double-check the algebra before relying on it):
c0 = a/2
c1 = 1
c2 = 1/(2a)
The nice thing about this approach is that it will never overshoot and go negative. You can also make the half-width a reasonably small and still get decent numerics. But if you make a too small, c2 will obviously blow up.
In general, I consider the sigmoid function to be a bit of a blunt instrument. It works fine in many cases, but if you try to make it approximate a step function too closely, it's a nightmare. If you want to represent physical processes, I find that polynomial fillet functions work more nicely.
It takes a little effort to derive that polynomial, because you want it to be C1 continuous on both sides of the blend. So you have to construct the system of equations to solve for it as a function of the polynomial order and the specific relaxation width a you want (e.g. 0.1 here).
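As an illustration of Option 2, here is a small NumPy sketch of the blended function and its derivative using the coefficients above; the function name and the default a = 0.1 are just for this example.
import numpy as np

def smooth_2x(x, a=0.1):
    # y = 2x for x >= a, y = 0 for x <= -a, quadratic fillet in between;
    # c0 = a/2, c1 = 1, c2 = 1/(2a) make the value and slope match at +/- a
    c0, c1, c2 = a / 2.0, 1.0, 1.0 / (2.0 * a)
    y = np.where(x >= a, 2.0 * x,
                 np.where(x <= -a, 0.0, c0 + c1 * x + c2 * x ** 2))
    dy_dx = np.where(x >= a, 2.0,
                     np.where(x <= -a, 0.0, c1 + 2.0 * c2 * x))
    return y, dy_dx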
My goto has generally been to consult the table of activation functions on wikipedia: https://en.wikipedia.org/wiki/Activation_function
I've had good luck with sigmoid and the hyperbolic tangent, scaling them such that we can choose the lower and upper values as well as choosing the location of the activation on the x-axis and the steepness.
Dymos uses a vectorization approach that I think is similar to OpenConcept's, and I've had success with numpy.where there as well, providing derivatives for each possible "branch" taken. It is true that you may have issues with derivative mismatches if you have an analysis point right on the transition, but I've often had success despite that. If the derivative at the transition becomes a hindrance, then implementing a sigmoid or a ReLU is more appropriate.
If x is of a magnitude such that it can cause overflows, consider applying units or using scaling to put it within reasonable limits if you cannot bound it directly.
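For the scaled hyperbolic tangent mentioned above, one possible parameterization (the names are mine, not any library's) that lets you choose the lower and upper values, the activation location, and the steepness is:
import numpy as np

def scaled_tanh(x, lower=0.0, upper=1.0, x0=0.0, steepness=1.0):
    # moves smoothly from `lower` (x << x0) to `upper` (x >> x0),
    # switching around x0 at a rate set by `steepness`
    return lower + 0.5 * (upper - lower) * (1.0 + np.tanh(steepness * (x - x0)))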

Trouble implementing a very simple mass flow source

I am currently learning Modelica by trying some very simple examples. I have defined a connector Incompressible for an incompressible fluid like this:
connector Incompressible
  flow Modelica.SIunits.VolumeFlowRate V_dot;
  Modelica.SIunits.SpecificEnthalpy h;
  Modelica.SIunits.Pressure p;
end Incompressible;
I now wish to define a mass or volume flow source:
model Source_incompressible
  parameter Modelica.SIunits.VolumeFlowRate V_dot;
  parameter Modelica.SIunits.Temperature T;
  parameter Modelica.SIunits.Pressure p;
  Incompressible outlet;
equation
  outlet.V_dot = V_dot;
  outlet.h = enthalpyWaterIncompressible(T); // quick'n'dirty enthalpy function
  outlet.p = p;
end Source_incompressible;
However, when checking Source_incompressible, I get this:
The problem is structurally singular for the element type Real.
The number of scalar Real unknown elements are 3.
The number of scalar Real equation elements are 4.
I am at a loss here. Clearly, there are three equations in the model - where does the fourth equation come from?
Thanks a lot for any insight.
Dominic,
There are a couple of issues going on here. As Martin points out, the connector is unbalanced (you don't have matching "through" and "across" pairs in that connector). For fluid systems, this is acceptable. However, intensive fluid properties (e.g., enthalpy) have to be marked as so-called "stream" variables.
This topic is, admittedly, pretty complicated. I'm planning on adding an advanced chapter to my online Modelica book on this topic but I haven't had the time yet. In the meantime, I would suggest you have a look at the Modelica.Fluid library and/or this presentation by one of its authors, Francesco Casella.
That connector is not a physical connector. You need one flow variable for each potential variable. This is the OpenModelica error message if it helps a little:
Warning: Connector .Incompressible is not balanced: The number of potential variables (2) is not equal to the number of flow variables (1).
Error: Too many equations, over-determined system. The model has 4 equation(s) and 3 variable(s).
Error: Internal error Found Equation without time dependent variables outlet.V_dot = V_dot
This is because the unconnected connector will generate one equation for the flow:
outlet.V_dot = 0.0;
This means outlet.V_dot is replaced in:
outlet.V_dot = V_dot;
And you get:
0.0 = V_dot;
But V_dot is a parameter and can not be assigned to in an equation section (needs an initial equation if the parameter has fixed=false, or a binding equation in the default case).

Strange behavior when implementing Back propagation in DBN

Currently I'm trying to implement a Deep Belief Network. But I've met a very strange problem. My source code can be found here: https://github.com/mistree/GoDeep/blob/master/GoDeep/
I first implemented the RBM using CD and it works perfectly (by using the concurrency features of Golang it's quite fast). Then I started to implement a normal feed-forward network with back propagation, and that's where the strange thing happens. It seems very unstable. When I run it with an XOR gate test it sometimes fails; only when I set the hidden layer to 10 or more nodes does it never fail. Below is how I calculate it:
Step 1 : calculate all the activation with bias
Step 2 : calculate the output error
Step 3 : back propagate the error to each node
Step 4 : calculate the delta weight and bias for each node with momentum
For Steps 1 to 4 I do a full batch and sum up the delta weights and biases.
Step 5 : apply the averaged delta weights and biases
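For reference, here is a small NumPy sketch of Steps 1 to 5 (full-batch back propagation with momentum) applied to XOR. It is an independent illustration of the procedure, not the Go code from the repository, and the layer size, learning rate and momentum are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_hidden, lr, momentum = 4, 0.5, 0.9
W1 = rng.normal(0, 1, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 1, (n_hidden, 1)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

for epoch in range(5000):
    H = sigmoid(X @ W1 + b1)                        # Step 1: activations with bias
    Y = sigmoid(H @ W2 + b2)
    dY = (Y - T) * Y * (1 - Y)                      # Step 2: output error
    dH = (dY @ W2.T) * H * (1 - H)                  # Step 3: back propagate the error
    vW2 = momentum * vW2 - lr * (H.T @ dY) / len(X) # Step 4: batch-averaged deltas
    vb2 = momentum * vb2 - lr * dY.mean(axis=0)     #         with momentum
    vW1 = momentum * vW1 - lr * (X.T @ dH) / len(X)
    vb1 = momentum * vb1 - lr * dH.mean(axis=0)
    W2 += vW2; b2 += vb2; W1 += vW1; b1 += vb1      # Step 5: apply the averaged updates

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))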
I followed the tutorial here http://ufldl.stanford.edu/wiki/index.php/Backpropagation_Algorithm
And normally it works if I give it more hidden layer nodes. My test code is here https://github.com/mistree/GoDeep/blob/master/Test.go
So I think it should work, and I started to implement the DBN by combining the RBM and the normal NN. However, the result then becomes really bad. It can't even learn an XOR gate in 1000 iterations, and sometimes it goes totally wrong. To debug this, after the pre-training of the DBN I do a reconstruction. Most of the time the reconstruction looks good, but the back propagation fails even when the pre-training result is perfect.
I really don't know what's wrong with the back propagation. I must have misunderstood the algorithm or made some big mistake in the implementation.
If possible, please run the test code and you'll see how weird it is. The code itself is quite readable. Any hint would be a great help. Thanks in advance.
I remember Hinton saying you can't train RBMs on XOR; something about the vector space that doesn't allow a two-layer network to work. Deeper networks have less linear properties that allow it to work.

What is (functional) reactive programming?

I've read the Wikipedia article on reactive programming. I've also read the small article on functional reactive programming. The descriptions are quite abstract.
What does functional reactive programming (FRP) mean in practice?
What does reactive programming (as opposed to non-reactive programming?) consist of?
My background is in imperative/OO languages, so an explanation that relates to this paradigm would be appreciated.
If you want to get a feel for FRP, you could start with the old Fran tutorial from 1998, which has animated illustrations. For papers, start with Functional Reactive Animation and then follow up on links on the publications link on my home page and the FRP link on the Haskell wiki.
Personally, I like to think about what FRP means before addressing how it might be implemented.
(Code without a specification is an answer without a question and thus "not even wrong".)
So I don't describe FRP in representation/implementation terms as Thomas K does in another answer (graphs, nodes, edges, firing, execution, etc).
There are many possible implementation styles, but no implementation says what FRP is.
I do resonate with Laurence G's simple description that FRP is about "datatypes that represent a value 'over time' ".
Conventional imperative programming captures these dynamic values only indirectly, through state and mutations.
The complete history (past, present, future) has no first class representation.
Moreover, only discretely evolving values can be (indirectly) captured, since the imperative paradigm is temporally discrete.
In contrast, FRP captures these evolving values directly and has no difficulty with continuously evolving values.
FRP is also unusual in that it is concurrent without running afoul of the theoretical & pragmatic rats' nest that plagues imperative concurrency.
Semantically, FRP's concurrency is fine-grained, determinate, and continuous.
(I'm talking about meaning, not implementation. An implementation may or may not involve concurrency or parallelism.)
Semantic determinacy is very important for reasoning, both rigorous and informal.
While concurrency adds enormous complexity to imperative programming (due to nondeterministic interleaving), it is effortless in FRP.
So, what is FRP?
You could have invented it yourself.
Start with these ideas:
Dynamic/evolving values (i.e., values "over time") are first class values in themselves. You can define them and combine them, pass them into & out of functions. I called these things "behaviors".
Behaviors are built up out of a few primitives, like constant (static) behaviors and time (like a clock), and then with sequential and parallel combination. n behaviors are combined by applying an n-ary function (on static values), "point-wise", i.e., continuously over time.
To account for discrete phenomena, have another type (family) of "events", each of which has a stream (finite or infinite) of occurrences. Each occurrence has an associated time and value.
To come up with the compositional vocabulary out of which all behaviors and events can be built, play with some examples. Keep deconstructing into pieces that are more general/simple.
So that you know you're on solid ground, give the whole model a compositional foundation, using the technique of denotational semantics, which just means that (a) each type has a corresponding simple & precise mathematical type of "meanings", and (b) each primitive and operator has a simple & precise meaning as a function of the meanings of the constituents.
Never, ever mix implementation considerations into your exploration process. If this description is gibberish to you, consult (a) Denotational design with type class morphisms, (b) Push-pull functional reactive programming (ignoring the implementation bits), and (c) the Denotational Semantics Haskell wikibooks page. Beware that denotational semantics has two parts, from its two founders Christopher Strachey and Dana Scott: the easier & more useful Strachey part and the harder and less useful (for software design) Scott part.
If you stick with these principles, I expect you'll get something more-or-less in the spirit of FRP.
Where did I get these principles? In software design, I always ask the same question: "what does it mean?".
Denotational semantics gave me a precise framework for this question, and one that fits my aesthetics (unlike operational or axiomatic semantics, both of which leave me unsatisfied).
So I asked myself what is behavior?
I soon realized that the temporally discrete nature of imperative computation is an accommodation to a particular style of machine, rather than a natural description of behavior itself.
The simplest precise description of behavior I can think of is simply "function of (continuous) time", so that's my model.
Delightfully, this model handles continuous, deterministic concurrency with ease and grace.
It's been quite a challenge to implement this model correctly and efficiently, but that's another story.
In pure functional programming, there are no side-effects. For many types of software (for example, anything with user interaction) side-effects are necessary at some level.
One way to get side-effect like behavior while still retaining a functional style is to use functional reactive programming. This is the combination of functional programming, and reactive programming. (The Wikipedia article you linked to is about the latter.)
The basic idea behind reactive programming is that there are certain datatypes that represent a value "over time". Computations that involve these changing-over-time values will themselves have values that change over time.
For example, you could represent the mouse coordinates as a pair of integer-over-time values. Let's say we had something like (this is pseudo-code):
x = <mouse-x>;
y = <mouse-y>;
At any moment in time, x and y would have the coordinates of the mouse. Unlike non-reactive programming, we only need to make this assignment once, and the x and y variables will stay "up to date" automatically. This is why reactive programming and functional programming work so well together: reactive programming removes the need to mutate variables while still letting you do a lot of what you could accomplish with variable mutations.
If we then do some computations based on this the resulting values will also be values that change over time. For example:
minX = x - 16;
minY = y - 16;
maxX = x + 16;
maxY = y + 16;
In this example, minX will always be 16 less than the x coordinate of the mouse pointer. With reactive-aware libraries you could then say something like:
rectangle(minX, minY, maxX, maxY)
And a 32x32 box will be drawn around the mouse pointer and will track it wherever it moves.
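To make the pseudo-code above concrete, here is a tiny toy implementation of such "stay up to date" values in Python. It is my own throwaway sketch, not any particular FRP library.
class Source:
    # a value that changes "on its own" (e.g. the mouse position)
    def __init__(self, value):
        self.value = value
        self._subscribers = []
    def set(self, value):
        self.value = value
        for cell in self._subscribers:
            cell.recalc()

class Cell:
    # a value defined once in terms of other values; it tracks them from then on
    def __init__(self, compute, *deps):
        self._compute = compute
        self._deps = deps
        self._subscribers = []
        for dep in deps:
            dep._subscribers.append(self)
        self.recalc()
    def recalc(self):
        self.value = self._compute(*(d.value for d in self._deps))
        for cell in self._subscribers:
            cell.recalc()

mouse_x = Source(0)
min_x = Cell(lambda x: x - 16, mouse_x)
max_x = Cell(lambda x: x + 16, mouse_x)

mouse_x.set(100)
print(min_x.value, max_x.value)   # 84 116 -- the derived values tracked the change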
Here is a pretty good paper on functional reactive programming.
An easy way of reaching a first intuition about what it's like is to imagine your program is a spreadsheet and all of your variables are cells. If any of the cells in a spreadsheet change, any cells that refer to that cell change as well. It's just the same with FRP. Now imagine that some of the cells change on their own (or rather, are taken from the outside world): in a GUI situation, the position of the mouse would be a good example.
That necessarily misses out rather a lot. The metaphor breaks down pretty fast when you actually use a FRP system. For one, there are usually attempts to model discrete events as well (e.g. the mouse being clicked). I'm only putting this here to give you an idea what it's like.
To me it is about 2 different meanings of symbol =:
In math x = sin(t) means, that x is different name for sin(t). So writing x + y is the same thing as sin(t) + y. Functional reactive programming is like math in this respect: if you write x + y, it is computed with whatever the value of t is at the time it's used.
In C-like programming languages (imperative languages), x = sin(t) is an assignment: it means that x stores the value of sin(t) taken at the time of the assignment.
OK, from background knowledge and from reading the Wikipedia page to which you pointed, it appears that reactive programming is something like dataflow computing but with specific external "stimuli" triggering a set of nodes to fire and perform their computations.
This is pretty well suited to UI design, for example, in which touching a user interface control (say, the volume control on a music playing application) might need to update various display items and the actual volume of audio output. When you modify the volume (a slider, let's say) that would correspond to modifying the value associated with a node in a directed graph.
Various nodes having edges from that "volume value" node would automatically be triggered and any necessary computations and updates would naturally ripple through the application. The application "reacts" to the user stimulus. Functional reactive programming would just be the implementation of this idea in a functional language, or generally within a functional programming paradigm.
For more on "dataflow computing", search for those two words on Wikipedia or using your favorite search engine. The general idea is this: the program is a directed graph of nodes, each performing some simple computation. These nodes are connected to each other by graph links that provide the outputs of some nodes to the inputs of others.
When a node fires or performs its calculation, the nodes connected to its outputs have their corresponding inputs "triggered" or "marked". Any node having all inputs triggered/marked/available automatically fires. The graph might be implicit or explicit depending on exactly how reactive programming is implemented.
Nodes can be looked at as firing in parallel, but often they are executed serially or with limited parallelism (for example, there may be a few threads executing them). A famous example was the Manchester Dataflow Machine, which (IIRC) used a tagged data architecture to schedule execution of nodes in the graph through one or more execution units. Dataflow computing is fairly well suited to situations in which triggering computations asynchronously giving rise to cascades of computations works better than trying to have execution be governed by a clock (or clocks).
Reactive programming imports this "cascade of execution" idea and seems to think of the program in a dataflow-like fashion but with the proviso that some of the nodes are hooked to the "outside world" and the cascades of execution are triggered when these sensory-like nodes change. Program execution would then look like something analogous to a complex reflex arc. Between stimuli, the program may be basically sessile, or it may settle back into a basically sessile state.
"non-reactive" programming would be programming with a very different view of the flow of execution and relationship to external inputs. It's likely to be somewhat subjective, since people will likely be tempted to say anything that responds to external inputs "reacts" to them. But looking at the spirit of the thing, a program that polls an event queue at a fixed interval and dispatches any events found to functions (or threads) is less reactive (because it only attends to user input at a fixed interval). Again, it's the spirit of the thing here: one can imagine putting a polling implementation with a fast polling interval into a system at a very low level and program in a reactive fashion on top of it.
After reading many pages about FRP I finally came across this enlightening writing about FRP, it finally made me understand what FRP really is all about.
I quote below Heinrich Apfelmus (author of reactive banana).
What is the essence of functional reactive programming?
A common answer would be that “FRP is all about describing a system in
terms of time-varying functions instead of mutable state”, and that
would certainly not be wrong. This is the semantic viewpoint. But in
my opinion, the deeper, more satisfying answer is given by the
following purely syntactic criterion:
The essence of functional reactive programming is to specify the dynamic behavior of a value completely at the time of declaration.
For instance, take the example of a counter: you have two buttons
labelled “Up” and “Down” which can be used to increment or decrement
the counter. Imperatively, you would first specify an initial value
and then change it whenever a button is pressed; something like this:
counter := 0 -- initial value
on buttonUp = (counter := counter + 1) -- change it later
on buttonDown = (counter := counter - 1)
The point is that at the time of declaration, only the initial value
for the counter is specified; the dynamic behavior of counter is
implicit in the rest of the program text. In contrast, functional
reactive programming specifies the whole dynamic behavior at the time
of declaration, like this:
counter :: Behavior Int
counter = accumulate ($) 0
    (fmap (+1) eventUp
     `union` fmap (subtract 1) eventDown)
Whenever you want to understand the dynamics of counter, you only have
to look at its definition. Everything that can happen to it will
appear on the right-hand side. This is very much in contrast to the
imperative approach where subsequent declarations can change the
dynamic behavior of previously declared values.
So, in my understanding, an FRP program is a set of equations of the form x_i(t_j) = f_i(t_j, x_1(t_{j-1}), ..., x_n(t_{j-1})), where:
j is discrete: 1, 2, 3, 4...
f depends on t, so this incorporates the possibility of modelling external stimuli
all the state of the program is encapsulated in the variables x_i
The FRP library takes care of progressing time, in other words, taking j to j+1.
I explain these equations in much more detail in this video.
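A throwaway Python sketch of that picture (the names are made up): the "library" does nothing more than advance j to j+1 and re-evaluate every x_i from the state at the previous step.
def step(fs, state, t):
    # one tick: compute every x_i(t_j) from the x values at t_{j-1}
    return [f(t, *state) for f in fs]

fs = [lambda t, x1, x2: t % 3,        # x1 follows an "external stimulus"
      lambda t, x1, x2: x2 + x1]      # x2 accumulates the previous x1
state = [0, 0]
for j in range(1, 6):
    state = step(fs, state, j)
    print(j, state)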
EDIT:
About 2 years after the original answer, recently I came to the conclusion that FRP implementations have another important aspect. They need to (and usually do) solve an important practical problem: cache invalidation.
The equations for the x_i describe a dependency graph. When some x_i changes at time j, not all the other values at j+1 need to be updated, so not all the dependencies need to be recalculated, because some of them may be independent of x_i.
Furthermore, the x_i that do change can be incrementally updated. For example, let's consider a map operation f = g.map(_ + 1) in Scala, where f and g are Lists of Ints. Here f corresponds to x_i(t_j) and g to x_k(t_j). Now if I prepend an element to g, it would be wasteful to carry out the map operation for all the elements of g. Some FRP implementations (for example reflex-frp) aim to solve this problem, which is also known as incremental computing.
In other words, behaviours (the x_i) in FRP can be thought of as cached computations. It is the task of the FRP engine to efficiently invalidate and recompute these caches (the x_i) if some of the f_i change.
The paper Simply efficient functional reactivity by Conal Elliott (direct PDF, 233 KB) is a fairly good introduction. The corresponding library also works.
The paper is now superseded by another paper, Push-pull functional reactive programming (direct PDF, 286 KB).
Disclaimer: my answer is in the context of rx.js - a 'reactive programming' library for Javascript.
In functional programming, instead of iterating through each item of a collection, you apply higher order functions (HoFs) to the collection itself. So the idea behind FRP is that instead of processing each individual event, create a stream of events (implemented with an observable*) and apply HoFs to that instead. This way you can visualize the system as data pipelines connecting publishers to subscribers.
The major advantages of using an observable are:
i) it abstracts away state from your code, e.g., if you want the event handler to get fired only for every 'n'th event, or stop firing after the first 'n' events, or start firing only after the first 'n' events, you can just use the HoFs (filter, takeUntil, skip respectively) instead of setting, updating and checking counters.
ii) it improves code locality - if you have 5 different event handlers changing the state of a component, you can merge their observables and define a single event handler on the merged observable instead, effectively combining 5 event handlers into 1. This makes it very easy to reason about what events in your entire system can affect a component, since it's all present in a single handler.
An Observable is the dual of an Iterable.
An Iterable is a lazily consumed sequence - each item is pulled by the iterator whenever it wants to use it, and hence the enumeration is driven by the consumer.
An observable is a lazily produced sequence - each item is pushed to the observer whenever it is added to the sequence, and hence the enumeration is driven by the producer.
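Here is a toy push-based sketch of that idea in Python (not rx.js or any real Rx library; the names are illustrative), showing higher-order functions applied to the stream itself rather than per-event handler logic:
class Observable:
    def __init__(self, subscribe):
        self._subscribe = subscribe            # function(observer) -> None
    def subscribe(self, observer):
        self._subscribe(observer)
    def map(self, fn):
        return Observable(lambda obs: self.subscribe(lambda v: obs(fn(v))))
    def filter(self, pred):
        return Observable(lambda obs: self.subscribe(
            lambda v: obs(v) if pred(v) else None))
    def skip(self, n):
        def subscribe(obs):
            seen = [0]
            def on_value(v):
                seen[0] += 1
                if seen[0] > n:
                    obs(v)
            self.subscribe(on_value)
        return Observable(subscribe)

def from_list(values):
    # the producer pushes each item to the observer as it is "emitted"
    return Observable(lambda obs: [obs(v) for v in values])

clicks = from_list([1, 2, 3, 4, 5, 6])        # stand-in for a stream of events
clicks.skip(2).filter(lambda v: v % 2 == 0).subscribe(print)   # prints 4, then 6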
Dude, this is a freaking brilliant idea! Why didn't I find out about this back in 1998? Anyway, here's my interpretation of the Fran tutorial. Suggestions are most welcome, I am thinking about starting a game engine based on this.
import pygame
from pygame.surface import Surface
from pygame.sprite import Sprite, Group
from pygame.locals import *
from time import time as epoch_delta
from math import sin, pi
from copy import copy

pygame.init()
screen = pygame.display.set_mode((600, 400))
pygame.display.set_caption('Functional Reactive System Demo')

class Time:
    def __float__(self):
        return epoch_delta()
time = Time()

class Function:
    def __init__(self, var, func, phase = 0., scale = 1., offset = 0.):
        self.var = var
        self.func = func
        self.phase = phase
        self.scale = scale
        self.offset = offset
    def copy(self):
        return copy(self)
    def __float__(self):
        return self.func(float(self.var) + float(self.phase)) * float(self.scale) + float(self.offset)
    def __int__(self):
        return int(float(self))
    def __add__(self, n):
        result = self.copy()
        result.offset += n
        return result
    def __mul__(self, n):
        result = self.copy()
        result.scale *= n  # scale the function's amplitude by n
        return result
    def __inv__(self):
        result = self.copy()
        result.scale *= -1.
        return result
    def __abs__(self):
        return Function(self, abs)

def FuncTime(func, phase = 0., scale = 1., offset = 0.):
    global time
    return Function(time, func, phase, scale, offset)

def SinTime(phase = 0., scale = 1., offset = 0.):
    return FuncTime(sin, phase, scale, offset)
sin_time = SinTime()

def CosTime(phase = 0., scale = 1., offset = 0.):
    phase += pi / 2.
    return SinTime(phase, scale, offset)
cos_time = CosTime()

class Circle:
    def __init__(self, x, y, radius):
        self.x = x
        self.y = y
        self.radius = radius
    @property
    def size(self):
        return [self.radius * 2] * 2

circle = Circle(
    x = cos_time * 200 + 250,
    y = abs(sin_time) * 200 + 50,
    radius = 50)

class CircleView(Sprite):
    def __init__(self, model, color = (255, 0, 0)):
        Sprite.__init__(self)
        self.color = color
        self.model = model
        self.image = Surface([model.radius * 2] * 2).convert_alpha()
        self.rect = self.image.get_rect()
        pygame.draw.ellipse(self.image, self.color, self.rect)
    def update(self):
        self.rect[:] = int(self.model.x), int(self.model.y), self.model.radius * 2, self.model.radius * 2

circle_view = CircleView(circle)
sprites = Group(circle_view)

running = True
while running:
    for event in pygame.event.get():
        if event.type == QUIT:
            running = False
        if event.type == KEYDOWN and event.key == K_ESCAPE:
            running = False
    screen.fill((0, 0, 0))
    sprites.update()
    sprites.draw(screen)
    pygame.display.flip()
pygame.quit()
In short: If every component can be treated like a number, the whole system can be treated like a math equation, right?
Paul Hudak's book, The Haskell School of Expression, is not only a fine introduction to Haskell, but it also spends a fair amount of time on FRP. If you're a beginner with FRP, I highly recommend it to give you a sense of how FRP works.
There is also what looks like a new rewrite of this book (released 2011, updated 2014), The Haskell School of Music.
According to the previous answers, it seems that mathematically we simply think in a higher order. Instead of thinking of a value x having type X, we think of a function x: T → X, where T is the type of time, be it the natural numbers, the integers or the continuum. Now when we write y := x + 1 in a programming language, we actually mean the equation y(t) = x(t) + 1.
It acts like a spreadsheet, as noted, and is usually based on an event-driven framework.
As with all "paradigms", its newness is debatable.
From my experience of distributed flow networks of actors, it can easily fall prey to a general problem of state consistency across the network of nodes i.e. you end up with a lot of oscillation and trapping in strange loops.
This is hard to avoid as some semantics imply referential loops or broadcasting, and can be quite chaotic as the network of actors converges (or not) on some unpredictable state.
Similarly, some states may not be reached, despite having well-defined edges, because the global state steers away from the solution. 2+2 may or may not get to be 4 depending on when the 2's became 2, and whether they stayed that way. Spreadsheets have synchronous clocks and loop detection. Distributed actors generally don't.
All good fun :).
I found this nice video on the Clojure subreddit about FRP. It is pretty easy to understand even if you don't know Clojure.
Here's the video: http://www.youtube.com/watch?v=nket0K1RXU4
Here's the source the video refers to in the 2nd half: https://github.com/Cicayda/yolk-examples/blob/master/src/yolk_examples/client/autocomplete.cljs
This article by Andre Staltz is the best and clearest explanation I've seen so far.
Some quotes from the article:
Reactive programming is programming with asynchronous data streams.
On top of that, you are given an amazing toolbox of functions to combine, create and filter any of those streams.
The article also includes some fantastic diagrams of these streams (not reproduced here).
It is about mathematical data transformations over time (or ignoring time).
In code this means functional purity and declarative programming.
State bugs are a huge problem in the standard imperative paradigm. Various bits of code may change some shared state at different "times" in the program's execution. This is hard to deal with.
In FRP you describe (like in declarative programming) how data transforms from one state to another and what triggers it. This allows you to ignore time because your function is simply reacting to its inputs and using their current values to create a new one. This means that the state is contained in the graph (or tree) of transformation nodes and is functionally pure.
This massively reduces complexity and debugging time.
Think of the difference between A = B + C in math and A = B + C in a program.
In math you are describing a relationship that will never change. In a program, it says that "right now" A is B + C. But the next command might be B++, in which case A is no longer equal to B + C. In math or in declarative programming, A will always be equal to B + C no matter what point in time you ask.
So, by removing the complexities of shared state and of values changing over time, your program becomes much easier to reason about.
An EventStream is an EventStream + some transformation function.
A Behaviour is an EventStream + Some value in memory.
When the event fires, the value is updated by running the transformation function, and the value this produces is stored in the behaviour's memory.
Behaviours can be composed to produce new behaviours that are a transformation of N other behaviours. This composed value will recalculate as the input events (behaviours) fire.
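A minimal sketch of that description (throwaway names, not a real FRP library): the behaviour holds a value in memory and updates it by running its transformation whenever the event fires.
class EventStream:
    def __init__(self):
        self._listeners = []
    def subscribe(self, fn):
        self._listeners.append(fn)
    def fire(self, value):
        for fn in self._listeners:
            fn(value)

class Behaviour:
    def __init__(self, events, transform, initial):
        self.value = initial                  # the value kept in memory
        self._transform = transform
        events.subscribe(self._on_event)
    def _on_event(self, event_value):
        # the event fired: run the transformation and store the new value
        self.value = self._transform(self.value, event_value)

clicks = EventStream()
count = Behaviour(clicks, lambda acc, _event: acc + 1, 0)
clicks.fire("click")
clicks.fire("click")
print(count.value)    # 2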
"Since observers are stateless, we often need several of them to simulate a state machine as in the drag example. We have to save the state where it is accessible to all involved observers such as in the variable path above."
Quote from - Deprecating The Observer Pattern
http://infoscience.epfl.ch/record/148043/files/DeprecatingObserversTR2010.pdf
The short and clear explanation about Reactive Programming appears on Cyclejs - Reactive Programming, it uses simple and visual samples.
A [module/Component/object] is reactive means it is fully responsible
for managing its own state by reacting to external events.
What is the benefit of this approach? It is Inversion of Control,
mainly because [module/Component/object] is responsible for itself, improving encapsulation using private methods against public ones.
It is a good starting point, not a complete source of knowledge. From there you can jump to more complex and deeper papers.
Check out Rx, Reactive Extensions for .NET. They point out that with IEnumerable you are basically 'pulling' from a stream. Linq queries over IQueryable/IEnumerable are set operations that 'suck' the results out of a set. But with the same operators over IObservable you can write Linq queries that 'react'.
For example, you could write a Linq query like
(from m in MyObservableSetOfMouseMovements
 where m.X < 100 && m.Y < 100
 select new Point(m.X, m.Y)).
and with the Rx extensions, that's it: You have UI code that reacts to the incoming stream of mouse movements and draws whenever you're in the 100,100 box...
FRP is a combination of functional programming (a paradigm built upon the idea that everything is a function) and reactive programming (a paradigm built upon the idea that everything is a stream; the observer and observable philosophy). It is supposed to be the best of both worlds.
Check out Andre Staltz post on reactive programming to start with.

Detecting and fixing overflows

We have a particle detector hard-wired to use 16-bit and 8-bit buffers. Every now and then there are certain [predicted] peaks of particle flux passing through it; that's okay. What is not okay is that these fluxes usually reach magnitudes above the capacity of the buffers to store them; thus, overflows occur. On a chart, it looks like the flux suddenly drops and begins growing again. Can you propose a [mostly] accurate method of detecting the data points that suffer from an overflow?
P.S. The detector is physically inaccessible, so fixing it the 'right way' by replacing the buffers doesn't seem to be an option.
Update: Some clarifications as requested. We use python at the data processing facility; the technology used in the detector itself is pretty obscure (treat it as if it was developed by a completely unrelated third party), but it is definitely unsophisticated, i.e. not running a 'real' OS, just some low-level stuff to record the detector readings and to respond to remote commands like power cycle. Memory corruption and other problems are not an issue right now. The overflows occur simply because the designer of the detector used 16-bit buffers for counting the particle flux, and sometimes the flux exceeds 65535 particles per second.
Update 2: As several readers have pointed out, the intended solution would have something to do with analyzing the flux profile to detect sharp declines (e.g. by an order of magnitude) in an attempt to separate them from normal fluctuations. Another question arises: can restorations (points where the original flux drops back below the overflow level) be detected by simply running the correction program against the flux profile reversed along the x-axis?
int32[] unwrap(int16[] x)
{
    // this is pseudocode
    int32[] y = new int32[x.length];
    y[0] = x[0];
    for (i = 1:x.length-1)
    {
        y[i] = y[i-1] + sign_extend(x[i]-x[i-1]);
        // works fine as long as the "real" value of x[i] and x[i-1]
        // differ by less than 1/2 of the span of allowable values
        // of x's storage type (=32768 in the case of int16)
        // Otherwise there is ambiguity.
    }
    return y;
}

int32 sign_extend(int16 x)
{
    return (int32)x; // works properly in Java and in most C compilers
}

// exercise for the reader to write similar code to unwrap 8-bit arrays
// to a 16-bit or 32-bit array
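Since the data processing is done in Python anyway, an equivalent of the unwrap idea above might look like this (a sketch with made-up names; it assumes, as noted, that successive true values differ by less than half the counter span):
import numpy as np

def unwrap_counts(measured, span=1 << 16):
    x = np.asarray(measured, dtype=np.int64)
    d = np.diff(x)
    # fold each difference into [-span/2, span/2): a huge negative drop is
    # really a small positive step that wrapped around the 16-bit counter
    d = (d + span // 2) % span - span // 2
    return np.concatenate(([x[0]], x[0] + np.cumsum(d)))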
Of course, ideally you'd fix the detector software to max out at 65535 to prevent wraparound of the sort that is causing your grief. I understand that this isn't always possible, or at least isn't always possible to do quickly.
When the particle flux exceeds 65535, does it do so quickly, or does the flux gradually increase and then gradually decrease? This makes a difference in what algorithm you might use to detect this. For example, if the flux goes up slowly enough:
true flux    measurement
 5000          5000
10000         10000
30000         30000
50000         50000
70000          4464
90000         24464
60000         60000
30000         30000
10000         10000
then you'll tend to have a large negative drop at times when you have overflowed. A much larger negative drop than you'll have at any other time. This can serve as a signal that you've overflowed. To find the end of the overflow time period, you could look for a large jump to a value not too far from 65535.
All of this depends on the maximum true flux that is possible and on how rapidly the flux rises and falls. For example, is it possible to get more than 128k counts in one measurement period? Is it possible for one measurement to be 5000 and the next measurement to be 50000? If the data is not well-behaved enough, you may be able to make only statistical judgment about when you have overflowed.
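If you only need to flag the overflow regions rather than reconstruct the counts, here is a sketch of the "large negative drop / large jump back up" heuristic described above (the half-span threshold is an assumption you may want to tune to your flux dynamics):
import numpy as np

def flag_overflow_spans(measured, span=1 << 16):
    d = np.diff(np.asarray(measured, dtype=np.int64))
    starts = np.where(d < -span // 2)[0] + 1   # big drop: an overflow begins
    ends = np.where(d > span // 2)[0] + 1      # big jump back up: it ends
    return starts, ends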
Your question needs to provide more information about your implementation - what language/framework are you using?
Data overflows in software (which is what I think you're talking about) are bad practice and should be avoided. What you are seeing (strange data output) is only one possible side effect of data overflows, and it is merely the tip of the iceberg of the sorts of issues you can see.
You could quite easily experience more serious issues like memory corruption, which can cause programs to crash loudly, or worse, obscurely.
Is there any validation you can do to prevent the overflows from occurring in the first place?
I really don't think you can fix it without fixing the underlying buffers. How are you supposed to tell the difference between the sequences of values (0, 1, 2, 1, 0) and (0, 1, 65538, 1, 0)? You can't.
How about using an HMM where the hidden state is whether you are in an overflow and the emissions are observed particle flux?
The tricky part would be coming up with the probability models for the transitions (which will basically encode the time-scale of peaks) and for the emissions (which you can build if you know how the flux behaves and how overflow affects measurement). These are domain-specific questions, so there probably aren't ready-made solutions out there.
But once you have the model, everything else---fitting your data, quantifying uncertainty, simulation, etc.---is routine.
You can only do this if the actual jumps between successive values are much smaller than 65536. Otherwise, an overflow-induced valley artifact is indistinguishable from a real valley, you can only guess. You can try to match overflows to corresponding restorations, by simultaneously analysing a signal from the right and the left (assuming that there is a recognizable base line).
Other than that, all you can do is to adjust your experiment by repeating it with different original particle flows, so that real valleys will not move, but artifact ones move to the point of overflow.

Resources