FRP vs. State Machine w/ Lenses for Game Loop - functional-programming

I'm trying to understand the practical difference between an FRP graph and a state machine with lenses, specifically for something like a game loop where the entire state is redrawn every tick.
Using JavaScript syntax, the following implementations would both essentially work:
Option 1: State Machine w/ Lenses
// Using Sanctuary and partial.lenses (or Ramda) primitives.
// Each update takes the state, modifies it with a lens, and returns it.
let state = initialValues;
eventSource.addEventListener(update, () => {
  state = S.pipe([
    updateCharacter,
    updateBackground,
  ])(state); // the first call has the initial settings
  render(state);
});
Option 2: FRP
// Using Sodium primitives.
// It's possible this isn't the best way to structure it, feel free to advise.
const cCharacter = sUpdate.accum(initialCharacter, updateCharacter);
const cBackground = sUpdate.accum(initialBackground, updateBackground);
const cState = cCharacter.lift(cBackground, mergeGameObjects);
cState.listen(render);
I see that Option 1 allows any update to get or set data anywhere in the game state. However, all the cells/behaviors in Option 2 could be adjusted to be of type GameState, and then the same thing applies. If this were the case, then I'm really confused about the difference, since that would then just boil down to:
cGameState = sUpdate
  .accum(initialGameState, S.pipe(...updates))
  .listen(render)
And then they're really very equivalent...
Another way to achieve that goal would be to store all the Cells in some global reference, and then any other cell could sample them for reading. New updates could be propagated to communicate changes. That solution also feels quite similar to Option 1 at the end of the day.
Is there a way to structure the FRP graph in such a way that it offers clear advantages over the event-driven state machine, in this scenario?

I'm not quite sure what your question is, not least because you keep changing the second example in your explanatory text.
In any case, the key benefit of the FRP approach — as I see it — is the following: The game state depends on many things, but they are all listed explicitly on the right-hand side of the definition of cGameState.
In contrast, in the imperative style, you have a global variable state which may or may not be changed by code that is not shown in the snippet you just presented. For all I know, the next line could be
eventSource2.addEventListener(update, () => { state = state + 1; })
and the game state suddenly depends on a second event source, a fact that is not apparent from the snippet you showed. This cannot happen in the FRP example: All dependencies of cGameState are explicit on the right-hand side. (They may be very complicated, sure, but at least they are explicit.)
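To make that concrete: if a second event source ever needs to influence the game state in the FRP version, that dependency has to show up in the one place cGameState is defined. A hedged sketch in Sodium-style JavaScript (sUpdate2 is a hypothetical second stream, and the exact merge combinator may differ; the S.pipe(...updates) composition is reused from the question):
// Both sources are visible on the right-hand side of the definition;
// nothing outside this expression can change cGameState.
const cGameState = sUpdate
  .orElse(sUpdate2) // the hypothetical second source, merged explicitly
  .accum(initialGameState, (event, state) => S.pipe(...updates)(state));
cGameState.listen(render);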

Related

Checking AppState prior to dispatching Redux Action?

Say I have a Redux store that keeps track of an AppState composed of a single 'color' variable as a string.
const initialState = {
  color: 'red'
};
And an Action for updating this:
const SET_COLOR = 'SET_COLOR';
function setColor(color) {
  return { type: SET_COLOR, color };
}
And say I have some sort of input that allows the user to set the color to whatever they please. (How this is done is irrelevant)
let newColor = <got new color somehow>
Now let's say the user inputs 'red' (the same as the current state). Should I care whether newColor and the current color differ? I.e., should I first check the store to see if newColor is different from the old color, and only dispatch the setColor action if and only if the color is different? Or should I just dispatch the setColor action regardless of whether there's a difference?
If you do it correctly (preferably using a good immutable data type for your state, e.g. Immutable.js), then the new state returned by your reducer is equal to the previous state and the component will not re-render (provided you have a PureComponent, or shouldComponentUpdate returns false because the state hasn't changed). So dispatching a few extra actions is practically no extra burden on your app.
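To make that concrete, here is a minimal reducer sketch (the reducer name and shape are assumed, not taken from the question): when the incoming color equals the current one, it returns the existing state object, so reference-equality checks see no change and nothing re-renders.
function colorReducer(state = initialState, action) {
  switch (action.type) {
    case SET_COLOR:
      // Same color: return the very same object reference, so connected
      // components / PureComponents skip the re-render entirely.
      return state.color === action.color
        ? state
        : { ...state, color: action.color };
    default:
      return state;
  }
}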
In general, I would say do the easiest thing and just call setColor again. The reason is that it keeps your logic more straightforward: any time the user changes the color via an input field, your code dispatches the action. Now you only need to write one test case to verify this behavior. That may sound trivial, but 1) it adds up and 2) with filtering you would also have to test the case of rapidly switching back and forth between two colors to be sure your code works.
I would only filter out the setColor(newColor) dispatch if there were an explicit reason to do so, such as:
Performance profiling shows it is needed
The UI behavior needs to be different in these two scenarios, to let the user know they haven't changed anything
Changing the color has knock-on side effects on related data that are undesirable (for example, changing the color might reset the shape to a triangle)
Or similar. The point being, do the simple thing by default unless there's a reason not to.
Redux actions are designed to be cheap. Don't be afraid of dispatching. It's a similar philosophy to React. Render, render, render, and let the framework do the heavy lifting.
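As a sketch of that advice (assuming a plain DOM input element and a store created with Redux's createStore; how the input is wired up was declared irrelevant in the question), the handler just dispatches without consulting the current state:
colorInput.addEventListener('change', (event) => {
  // No pre-check against store.getState(); dispatching is cheap.
  store.dispatch(setColor(event.target.value));
});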

Programming: Detect the direction and stop of change of a number

Any language, just pseudocode.
I'm searching for an algorithm that detects the direction of change of a number, and when it stops changing. E.g.:
function detectChange(int number) {
  if number is rising return "rising"
  if number is dropping return "dropping"
  if number is unchanged return "unchanged"
}

main() {
  int number
  while(true) {
    // The read doesn't always happen
    if readObscure.readoccured() {
      // read the number from an obscure source
      number = readObscure()
      print(detectChange(number))
    }
  }
}
I've been working on an approach with a time delta, but with little success. One problem, for example, is that with the timing approach I always miss the last change. Maybe I could solve that too, but it's already pretty hacky.
So I'd be glad about a clean "textbook" solution, preferably without using time, just logic. If there's none without time, but still a clean solution, I'd appreciate that too.
The solution can be written in any "human readable" language (no Haskell please) or pseudocode, I don't care.
I should have mentioned that the readObscure() function may also return the same number over and over again, or may not return a number at all, in which case I want to assume that the number is "unchanged".
Let's also update this with some examples:
readObscure() returns the numbers 1,2,14,15,8,17,20
This should be "rising"
readObscure() returns the numbers 1,2,14,15,17,20,20,20
This should be "rising" and then "unchanged"
So the question also is how to define rising, unchanged, and dropping. I'd like someone who has maybe worked on these problems before to define it. The result should match a "human" judgment: I look at the numbers and can immediately tell whether they are rising or not.
I've been made aware of Rx (Reactive Extensions), but for my personal case this is using a sledgehammer to crack a nut.
Just make it so that whenever you add a value:
Take the current value and the last value and compute their delta.
Then add the delta to wherever you're holding the deltas.
If you want something to "fire" every time you "add a value", it's probably best to bind it to the container or some sort of callback/event-based mechanism/structure to ensure this. Boost.Signals2 (C++) is supposed to be a good way to handle this, but something as simple as creating an asynchronous thread of execution to compute the delta and then push your value to the back of the storage vector would be good enough.
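A minimal sketch of that delta idea in JavaScript (the names are mine, not from the question): compare each new reading with the previous one and report the sign of the difference; a repeated value yields "unchanged", and if no reading arrives, nothing is emitted at all.
let last = null; // previous reading, if any

function detectChange(number) {
  if (last === null) {          // first reading: nothing to compare against
    last = number;
    return "unchanged";
  }
  const delta = number - last;  // the sign of the delta gives the direction
  last = number;
  if (delta > 0) return "rising";
  if (delta < 0) return "dropping";
  return "unchanged";
}

// The second example sequence from the question:
// [1, 2, 14, 15, 17, 20, 20, 20].map(detectChange)
// -> ["unchanged", "rising", "rising", "rising", "rising", "rising",
//     "unchanged", "unchanged"]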

Databinding with a large number of values and getter methods?

Reading through Misko's excellent answer on databinding here: How does data binding work in AngularJS?, I am wondering how Angular does its dirty checking behind the scenes, because:
I'm creating an app that prints a large number of Car objects to the DOM, each Car looking something like this:
var Car = function(settings) {
  this.name = settings.name;
  // + many more properties...
};

Car.prototype = {
  calcPrice: function() { /* ... */ },
  // + many more methods...
};

$scope.cars = [/* lots of Cars */];
The linked answer above mentions a limit of around 2000 values that can be provided through databinding when printed in the DOM, and due to the large number of properties on each Car object, this number could very easily be exceeded in this app when looping through the cars array.
Say you end up having 2000+ values printed in the DOM through databinding, and one of these values updates. Does it affect Angular's dirty-checking performance that 2000 values are present, or does Angular somehow flag the values that change, so it only looks at the changed values when running its $digest()? In other words, does it matter that you have a lot of databound values, when only a very small number of these are likely to be updated after the initial print?
If it does matter (and since most of the values are read-only), is there some way to use the databinding syntax {{car.prop}} to get the value into the DOM once and then tell Angular not to bind to it anymore?
Would it make a difference to add getter methods to the Car object and provide its properties like this: {{car.getProp()}}?
I had the same kind of problem with an application I was working on. Having a huge data set is not a problem in itself; the problem comes from the bindings. ng-repeat in particular killed performance.
Part of the solution was replacing "dynamic" bindings with "static" bindings using this nice library: http://ngmodules.org/modules/abourget-angular.
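For intuition about why removing bindings helps: the cost of a digest is driven by the total number of watchers, not by how many values actually changed. A simplified, hypothetical sketch of a dirty-checking loop (not Angular's real implementation) shows why every extra {{...}} expression adds work to every cycle:
function digest(watchers) {
  let dirty;
  do {
    dirty = false;
    for (const w of watchers) {   // every binding is visited on every cycle...
      const value = w.get();      // re-evaluate the bound expression
      if (value !== w.last) {     // ...even if only one of them changed
        w.last = value;
        w.listener(value);        // update the DOM for this binding
        dirty = true;             // a change may ripple, so loop again
      }
    }
  } while (dirty);
}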

First-Class Citizen

The definition of first-class citizen found in the wiki article says:
An object is first-class when it
can be stored in variables and data structures
can be passed as a parameter to a subroutine
can be returned as the result of a subroutine
can be constructed at run-time
has intrinsic identity (independent of any given name)
Can someone please explain/elaborate on the 5th requirement? I feel that the article should have provided more detail as to what "intrinsic identity" is capturing.
Perhaps we could use functions in JavaScript and functions in C in our discussion to illustrate the 5th bullet.
I believe functions in C are second-class, whereas functions are first-class in JavaScript, because we can do something like the following in JavaScript:
var foo = function () { console.log("Hello world"); };
, which is not permitted in C.
Again, my question is really on the 5th bullet (requirement).
Intrinsic identity is pretty simple, conceptually. If a thing has it, its identity does not depend on something external to that thing. It can be aliased, referenced, renamed, what-have-you, but it still maintains whatever that "identity" is. People (most of them, anyway) have intrinsic identity. You are you, no matter what your name is, or where you live, or what physical transformations you may have suffered in life.
An electron, on the other hand, has no intrinsic identity. Perhaps introducing quantum mechanics here just confuses the issue, but I think it's a really fantastic example. There's no way to "tag" or "label" an electron such that we could tell the difference between it and a neighbor. If you replace one electron with another, there is absolutely no way to distinguish the old one from the new one.
Back to computers: an example of "intrinsic identity" might be the value returned by Object#hashCode() in Java, or whatever mechanism a JavaScript engine uses that permits this statement to be false:
{} === {} // false
but this to be true:
function foo () {}
var bar = foo;
var baz = bar;
baz === foo; // true
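The flip side of the aliasing example above: two function expressions with identical bodies are still two distinct values, because each carries its own identity independent of any name. A small sketch (same idea as the {} === {} case):
var f = function () { console.log("Hello world"); };
var g = function () { console.log("Hello world"); };

f === g;   // false: identical code, but two distinct identities
var h = f;
h === f;   // true: the identity travels with the value, not the name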

Law of Demeter and OOP confusion

I've been doing some reading recently and have encountered the Law of Demeter. Now some of what I've read makes perfect sense, e.g. the paperboy should never be able to rifle through a customer's pocket, grab the wallet and take the money out. The wallet is something the customer should have control of, not the paperboy.
What throws me about the law (maybe I'm just misunderstanding the whole thing) is that stringing properties together in a hierarchy of functionality/information can be so useful, e.g. .NET's HttpContext class.
Wouldn't code such as:
If DataTable.Columns.Count > 0 Then
    DataTable.Columns(0).Caption = "Something"
End If
Or
Dim strUserPlatform As String = HttpContext.Current.Request.Browser.Platform.ToString()
Or
If NewTerm.StartDate >= NewTerm.AcademicYear.StartDate And
   NewTerm.EndDate <= NewTerm.AcademicYear.EndDate Then
    ' Valid, subject to further tests.
Else
    ' Not valid.
End If
be breaking this law? I thought (perhaps mistakenly) the point of OOP was in part to provide access to related classes in a nice hierarchical structure.
I like, for example, the idea of referencing a utility toolkit that can be used by page classes to avoid repetitive tasks, such as sending emails and encapsulating useful string methods:
Dim strUserInput As String = "London, Paris, New York"
For Each strSearchTerm In Tools.StringManipulation.GetListOfString(strUserInput, ",")
    Dim ThisItem As New SearchTerm
    ThisItem.Text = strSearchTerm
Next
Any clarity would be great... at the moment I can't reconcile how the law seems to banish stringing properties and methods together... it seems strange to me that so much power should be disregarded. I'm pretty new to OOP, as some of you might have guessed, so please go easy :)
What the Law of Demeter (also the "Law of Demeter for Functions/Methods") wants to reduce by saying "only use one dot" is the amount of context a method has to assume about its arguments; assuming a lot of context increases the class's dependencies and makes it less testable.
It doesn't mean that you can't write any of the above examples, but it suggests that instead of giving your method the customer, which then accesses the wallet and retrieves the money from it:
function getPayment(Customer customer)
{
    Money payment = customer.leftpocket.getWallet().getPayment(100);
    ...
    // do stuff with the payment
}
you instead pass only what the paperboy needs to the method, and so reduce the method's dependencies where possible:
function getPayment(Money money)
{
    // do stuff with the payment
}
Your benefit is that you don't depend on the customer having the wallet in the left pocket; you just process the money the customer gives you. It's a decision you have to base on your individual case, though. Fewer dependencies make testing easier.
I think applying the Law of Demeter to individual classes is taking it a bit too far. I think a better application is to apply it to layers in your code. For example, your business logic layer shouldn't need to access anything about the HTTP context, and your data access layer shouldn't need to access anything in the presentation layer.
Yes, it's usually good practice to design the interface of your object so that you don't have to do tons of property chaining, but imagine the hideously complex interface you'd have if you tried to do that to the DataTable and HttpContext classes you gave as examples.
The law doesn't say that you shouldn't have access to any information at all in a class, but that you should only expose information in a way that can't easily be misused.
You cannot, for example, add columns to a data table by assigning to the Count property:
DataTable.Columns.Count = 42;
Instead you use the Add method of the Columns object, which lets you add a column in a way that ensures all the needed information about the column is there, and also that the data table gets set up with data for that column.
