OptaPlanner: Is there any intrinsic performance benefit in using Bi-/Tri-/QuadConstraintStream over UniConstraintStream with a home-grown tuple?

Is there any intrinsic performance benefit in using Bi-/Tri-/QuadConstraintStream over UniConstraintStream with a home-grown tuple?
Is there, for example, any caching or hashing that OptaPlanner can perform behind the scenes if I use a BiConstraintStream and "OptaPlanner tuples" instead of a UniConstraintStream with my own tuple?

That is hard to say with confidence; performance questions in Java are best answered with benchmarks. In the absence of a specific problem to benchmark, I'm going to use some common sense to perhaps approximate the correct answer.
There are really only two building blocks I can think of that can reduce stream cardinality: map(...) and groupBy(...). Likewise, there are two that can increase it: join(...) and groupBy(...).
(Side note: In the future, map(...) will probably be overloaded to also allow cardinality increases.)
Assuming you only ever decrease cardinality after you've done your joins, I don't think there would be a performance penalty for a one-time cardinality decrease at the end of the stream. If, on the other hand, you had a mix of cardinality-decreasing and cardinality-increasing operations, the perpetual creation and destruction of your custom tuple instances could considerably increase GC pressure and therefore carry a performance penalty.
But even in the simplest case of no cardinality increases at all, I think that re-mapping to a single tuple is still a bad idea, for several reasons:
The GC pressure argument doesn't really go away. Your tuple instances are still created and then thrown away, as they seem to be nothing but carrier objects.
Custom tuple instances also bring indirection. What used to be accessed as variable x will now be accessed as instance field tuple.x or via a method call tuple.getX(). This overhead is very small, but it is measurable if the operation is performed often enough.
I think that this indirection also results in code that is harder to read. But that may be just my personal preference.
Finally, and perhaps most importantly, what would even be the point of introducing this middleman data carrier? map(...) only exists so that a theoretical use case for penta-streams can be enabled via these carrier objects; other than that, this pattern should be entirely unnecessary. (And we have not yet seen anyone actually requiring penta streams.)
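To make that concrete, here is a rough sketch of the two styles. The Shift and Employee classes are hypothetical, and the exact penalize(...)/asConstraint(...) signatures vary between OptaPlanner versions, so treat this as illustrative rather than canonical:

import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.Joiners;

// Idiomatic BiConstraintStream: the engine carries the (shift, employee) pair.
Constraint withBiStream(ConstraintFactory factory) {
    return factory.forEach(Shift.class)
            .join(Employee.class,
                    Joiners.equal(Shift::getEmployeeId, Employee::getId))
            .penalize(HardSoftScore.ONE_SOFT)
            .asConstraint("bi stream");
}

// Home-grown tuple: map(...) collapses the pair into one carrier object.
record ShiftEmployee(Shift shift, Employee employee) {
}

Constraint withUniStreamAndTuple(ConstraintFactory factory) {
    return factory.forEach(Shift.class)
            .join(Employee.class,
                    Joiners.equal(Shift::getEmployeeId, Employee::getId))
            .map(ShiftEmployee::new) // one extra allocation per match, as discussed above
            .penalize(HardSoftScore.ONE_SOFT)
            .asConstraint("uni stream with home-grown tuple");
}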

Everything Lukas said. Benchmarks with JFR/JMC allocation profiles are needed for a trustworthy answer. My guesstimate:
No, currently there probably is no intrinsic performance benefit over a home-grown tuple. And if there is any, it's probably a rounding error. That might or might not change in future versions.
In fact, if you're using records for your home-grown tuples, those might even be faster (OptaPlanner can't do that until its minimum Java version is 17).
It's OK to use a home-grown tuple with a UniConstraintStream if you prefer that over a BiConstraintStream without one.
But you might lose ease of use, I suspect. That's one of the reasons the Bi-, Tri-, Quad- streams etc. were designed: easier expressiveness, so easier to read, so easier to maintain (especially when the business requirements change).

Related

What is the need for immutable/persistent data structures in Erlang

Each Erlang process maintains its own private address space. All communication happens via copying, without sharing (except for big binaries). If each process is processing one message at a time, with no concurrent access to its objects, I don't see why we need immutable/persistent data structures.
Erlang was initially implemented in Prolog, which doesn't really use mutable data structures either (though some dialects do). So it started off without them. This makes runtime implementation simpler and faster (garbage collection in particular).
So adding mutable data structures would require a lot of effort, could introduce bugs, and Erlang programmers are nearly by definition at least willing to live without them.
Many actually consider their absence to be a positive good: less concern about object identity, no need for defensive copying because you don't know whether some other piece of code is going to modify the data you passed (or might be changed later to modify it), etc.
This absence does mean that Erlang is pretty unusable in some domains (e.g. high performance scientific computing), at least as the main language. But again, this means that nobody in these domains is going to use Erlang in the first place and so there's no particular incentive to make it usable at the cost of making existing users unhappy.
I remember seeing a mailing list post by Joe Armstrong quite a long time ago (which I couldn't find with a quick search now) saying that he initially planned to add mutable variables when he'd need them... except he never quite did, and performance was good enough for everything he was using Erlang for.
It is indeed the case that in Erlang immutability does not solve any "shared state" problems, as immutable data are "process local".
From the functional programming language perspective, however, immutability offers a number of benefits, summarized adequately in this Quora answer:
The simplest definition of functional programming is that it's a programming paradigm where you are transforming immutable data with functions. The definition uses functions in the mathematical sense, where it's something that takes an input and produces an output.
OO + mutability tends to violate that definition, because when you want to change a piece of data it generally will not return the output, it will likely return void or unit, and when you call a method on the object, the object itself isn't input for the function.
As far as what advantages the paradigm has: composability, thread safety, being able to track what went wrong where better, the ability to sort of separate the data from the actual computation on it being done, etc.
How would this work?
factorial(1) -> 1;
factorial(X) ->
    X * factorial(X - 1).
If you run factorial(4), a single process will be running the same function. Each invocation has its own value of X; if the value of X lived in the scope of the process rather than the function, recursive functions wouldn't work. So first we need to understand scope. If you want to say that you don't see why data needs to be immutable within the scope of a single function/block, you would have a point, but it would be a headache to think about where data is immutable and where it isn't.
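The same point holds in any language with per-call parameter bindings; here is the factorial above as a quick Java sketch:

static long factorial(long x) {
    if (x == 1) {
        return 1;                  // base case, mirroring factorial(1) -> 1
    }
    return x * factorial(x - 1);   // each deeper call binds its own x; this frame's x never changes
}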

Getting large number of entities from datastore

Following this question, I am able to store a large number (>50k) of entities in the datastore. Now I want to access all of them in my application. I have to perform mathematical operations on them. It always times out. One way is to use the TaskQueue again, but it would be an asynchronous job. I need a way to access these 50k+ entities in my application and process them without timing out.
Part of the accepted answer to your original question may still apply, for example a manually scaled instance with 24h deadline. Or a VM instance. For a price, of course.
Some speedup may be achieved by using memcache.
Side note: depending on the size of your entities you may need to keep an eye on the instance memory usage as well.
Another possibility would be to switch to a faster instance class (and with more memory as well, but also with extra costs).
But all such improvements might still not be enough. The best approach would still be to give your entity data processing algorithm a deeper thought - to make it scalable.
I'm having a hard time imagining a computation so monolithic that it can't be broken into smaller pieces, each of which doesn't need all the data at once. I'm almost certain there has to be some way of using partial computations, maybe storing partial results, so that you can split the problem and allow it to be handled in smaller pieces across multiple requests.
As an extreme (academic) example think about CPUs doing pretty much any super-complex computation fundamentally with just sequences of simple, short operations on a small set of registers - it's all about how to orchestrate them.
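To sketch what that splitting might look like in practice (a Java sketch; fetchBatch, saveCheckpoint, compute and the Batch/Entity types are hypothetical placeholders, not a GAE API):

import java.util.List;

// Resumable, chunked aggregation: each request processes one batch and
// checkpoints a partial result, so no single request needs all 50k+ entities.
record Batch(List<Entity> entities, String nextCursor) {}

long processAll() {
    String cursor = null;   // or the cursor saved by a previous, timed-out run
    long partial = 0;       // or the partial result saved by a previous run
    do {
        Batch batch = fetchBatch(cursor, 500);  // hypothetical: one page of entities
        for (Entity e : batch.entities()) {
            partial += compute(e);              // your per-entity math
        }
        cursor = batch.nextCursor();            // null once everything is processed
        saveCheckpoint(cursor, partial);        // resume point for the next request
    } while (cursor != null);
    return partial;
}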
Here's a nice article describing a drastic reduction of the overall duration of a computation (no clue if it's anything like yours) by using a nice approach (also interesting because it's using the GAE Pipeline API).
If you post your code you might get some more specific advice.

How to identify that code is over-abstracted?

What measures should be used to identify that code is over-abstracted and very hard to understand, and what should be done to reduce over-abstraction?
"Simplicity over complexity, complexity over complicatedness"
So there's a benefit to abstracting something only if you are "de-leveling" complicatedness to complexity. Reasons to do that can vary: better modularity, better encapsulation, etc.
Identifying over-abstraction is a chicken-and-egg problem. In order to reduce over-abstraction you need to understand the actual reason behind the lines of code. That includes understanding the idea of the particular abstraction itself (as opposed to calling it over-abstracted because you don't understand it). And that's not enough: you need to know a better, simpler solution to prove that it's over-abstracted.
If you are looking for a tool that could do this in your place, look no more: only a mind can reliably judge that.
I will give an answer that will get a LOT of down votes!
If the code is written in an OO language, it is necessarily heavily over-abstracted. The purer the language, the worse the problem.
Abstraction should be used with great caution. If in doubt always use concrete data structures. (You can always abstract later, this is easier than de-abstraction :)
You must be very certain you have the right abstraction in your current context, and you must be very sure that concept will stand the test of change. Abstraction has a high price in performance of both the code and the coder.
Some weak tests for over-abstraction: if the data structure is a product type (a struct in C) and the programmer has written get and set methods for each field, they have utterly failed to provide any real abstraction, have disabled operators like C's increment for no purpose, and have simply not understood that the struct field names are already the abstract representation of a product. Duplicating and laming up the interface is not a good idea.
A good test for the product case is whether there exist any data invariants to maintain. For example, a pair of integers representing a rational number is almost sufficient: there's little need for any abstraction, because all pairs are valid except when the denominator is zero. However, for performance reasons one may choose to maintain an invariant; typically the denominator is required to be greater than zero, and the numerator and denominator are relatively prime. To ensure the invariant, the product representation is encapsulated: the initial value is protected by a constructor, and the methods are constrained to maintain the invariant.
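A minimal Java sketch of that rational-number example (the names are mine): the representation is private precisely because there are invariants worth defending; without them, getters and setters would add nothing.

public final class Rational {
    private final int num;
    private final int den;   // invariants: den > 0, gcd(|num|, den) == 1

    public Rational(int num, int den) {
        if (den == 0) {
            throw new ArithmeticException("zero denominator");
        }
        if (den < 0) {       // normalize the sign into the numerator
            num = -num;
            den = -den;
        }
        int g = gcd(Math.abs(num), den);
        this.num = num / g;  // reduce to lowest terms
        this.den = den / g;
    }

    private static int gcd(int a, int b) {
        return b == 0 ? a : gcd(b, a % b);
    }

    public int numerator()   { return num; }
    public int denominator() { return den; }
}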
To fix code I recommend these steps:
Document the representation invariants the abstraction is maintaining
Remove the abstraction (methods) if you can't find strong invariants
Rewrite the code that used the methods to access the data directly.
This procedure only works for low-level abstraction, i.e. abstraction of small values by classes.
Over-abstraction at a higher level is much harder to deal with. Ideally you'd refactor the code repeatedly, checking after each step that it continues to work. However this will be hard, and sometimes a major rewrite is required rather than a refinement. It's probably not worth it unless the abstraction is so far off base that it is not tenable to continue to maintain it.
Download Magento and have a look at the code, read some documents on it and have a look at their ERD: http://www.magentocommerce.com/wiki/_media/doc/magento---sample_database_diagram.png?cache=cache
I'm not joking, this is over-abstraction... trying to please everyone and cover every base is a terrible idea and makes life extremely difficult for everyone.
Personally I would say that "What is the ideal level of abstraction?" is a subjective question.
I don't like code that uses a new line for every atomic operation, but I also don't like 10 nested operations within one line.
I like the use of recursive functions, but I don't appreciate recursion for the sole sake of recursion.
I like generics, but I don't like (nested) generic functions that e.g. use different code for each specific type that's expected...
It is a matter of personal opinion as well as common sense. Does this answer your question?
I completely agree with what @ArnisLapsa wrote:
"Simplicity over complexity, complexity over complicatedness"
And that an abstraction is used to "de-level" those, from complicated to complex (and from complex to simpler).
Also, as stated by @MartinHemmings, a good abstraction is quite subjective, because we don't all think the same way. And actually our way of thinking changes with time, so something that someone finds simple might look complex to others, and may even become simpler with more experience. For example, a monadic operation is trivial for a functional programmer, but can be seriously confusing for others. Similarly, a design with mutable objects communicating with each other can feel natural to some and untrackable to others.
That being said, I would like to add a couple of indicators. Note that this applies to abstractions used in code-base, not "paradigm abstraction" such as everything-is-a-function, or everything-is-designed-as-objects. So:
To the people it concerns, the abstraction should be conceptually simpler than the alternatives, without looking at the implementation. If you find that thinking through all possible cases is simpler than reasoning with the abstraction, then the abstraction is not suitable (for you).
Its implementation should reason only about the abstraction, not about the specific cases it will be used for. As soon as the abstraction's implementation has parts made for specific cases, that indicates an "unfit" abstraction. And increasing generalization to cope with each new case is going the wrong way (and tends to fall into the next issue).
A very common indicator of over-abstraction I have found (and actually fell for) is an abstraction that represents more than what is needed, now. As much as possible, it should allow you to do exactly what is required, but nothing more. For example, say you're thinking of, or already have, a "2d point" abstraction for which you can define the operators you need. Then you have another need that could really be a "4d point" similar to the 2d one. Don't start using an "N-dimensional point" abstraction, especially thinking that you might need it later. Maybe you'll never have anything other than 2d and 4d (because the rest stays as "a good idea" in the backlog forever), but instead a requirement pops up to convert 4d points into pairs of 2d points, and that is going to be hard to generalize to n dimensions. So each abstraction can be checked to cover, and only cover, the actual needs. In my point example, the "n-dimensional" complexity is actually only used to cope with the 2d and 4d cases (and the 4d might not even be used that much).
Finally, from a more global point of view, a code base that has many unrelated abstractions is an indicator that the dev team tends to abstract every little issue, so probably many of those abstractions are, or have become, over-abstractions.

Immutable game object, basic functional programming question

I'm in the process of trying to 'learn more of' and 'learn lessons from' functional programming and the idea of immutability being good for concurrency, etc.
As a thought exercise I imagined a simple game where a Mario-esque character can run and jump around, with enemies that shoot at him...
Then I tried to imagine this being written functionally using immutable objects.
This raised some questions that puzzled me (being an Imperative OO programmer).
1) If my little guy at position x10,y100 moves right 1 unit, do I just re-instantiate him using his old values with a +1 to his x position (e.g. x11,y100)?
2) (If my first assumption is correct)
If my input thread moves the little guy right 1 unit and my enemy-AI thread shoots the little guy, and the enemy-AI thread resolves before the input thread, then my guy will lose health, then, upon the input thread resolving, gain it back and move right...
Does this mean I can't fire-and-forget my threads even with immutability?
Do I need to send my threads off to do their thing, then new() up the little guy synchronously when I have the results of both threaded operations? Or is there a simple 'functional' solution?
This is a slightly different threading problem than I face on a day-to-day basis.
Usually I have to decide whether or not I care about the order threads resolve in. In the above case I technically don't care whether he takes damage or moves first. But I do care if race conditions during instantiation cause one thread's data to be totally lost.
3) (Again, if my first assumption is correct) Does constantly instantiating new instances of an object (e.g. the Mario guy) have a horrible overhead that makes this a very serious/important design decision?
EDIT
Sorry for this additional edit, I wasn't sure what good practice is on here about follow-up questions...
4) If immutability is something I should strive for, even jumping through hoops to instantiate new versions of objects that have changed... and if I instantiate my guy every time he moves (only with a different position), don't I have exactly the same problems as I would if he was mutable? In as much as something that referenced him at one point in time is actually looking at old values? The more I dig into this, the more my head's spinning, as generating new versions of the same thing with differing values just seems like mutability via a hack. :¬?
I guess my question is: How should this work? and how is it beneficial over just mutating his position?
for(ever) // simplified game-loop update or "tick" method
{
    if(Keyboard.IsDown(Key.Right))
        guy = new Guy(guy){location = new Point(guy.Location.x + 1, guy.Location.y)};
}
Also confusing is: the above code means that guy is mutable! (Even if his properties are not.)
4.5) Is that at all possible with a totally immutable guy?
Thanks,
J.
A couple of comments on your points:
1) Yes, maybe. To reduce overhead, a practical design will probably end up sharing a lot of state between these instances. For example, perhaps your little guy has an "Equipment" structure which is also immutable. The new copy and the old copy can reference the same "Equipment" structure safely, since it's immutable; you only have to copy a reference, not the whole thing. This is a common advantage you only get thanks to immutability: if "Equipment" were mutable, you couldn't share the reference, since if it changed, your "old" version would change too. (See the sketch after these points.)
2) In a game, the most practical solution to this issue would probably be to have a global "clock" and have this sort of processing happen once, at a clock tick. Note that your exact scenario would still be a problem if you didn't write it in a functional style: Suppose H0 is the health at time T. If you passed H0 to a function which made a decision about health at time T, you took damage at time T+1, and then the function returned at time T+5, it might have made the wrong decision based on your current health.
3) In a language that encourages functional programming, object instantiation is often made as cheap as possible. I know that on the JVM, creating small objects on the heap is so fast that it's rarely a performance consideration in any practical situation at all, and in C# I've never encountered a situation where it was a concern either.
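To illustrate point 1), here is a minimal Java sketch (all names hypothetical): because Equipment is immutable, the "moved" copy can share it by reference instead of deep-copying it.

record Equipment(String weapon, String armor) {
}

record Guy(int x, int y, Equipment equipment) {
    Guy moveRight() {
        // Only the changed field is new; the Equipment instance is shared,
        // which is safe precisely because it can never change.
        return new Guy(x + 1, y, equipment);
    }
}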
If my little guy at position x10,y100 moves right 1 unit do I just re-instantiate him using his old values with a +1 to his x position (e.g x11,y100)?
Well, not necessarily. You could instantiate the guy once, and change its position during play. You may model this with agents. The guy is an agent, so is the AI, so is the render thread, so is the user.
When the AI shoots the guy, it sends it a message, when the user presses an arrow key that sends another message and so on.
let rec guyAgent (guy, position, health) =
    let messages = receiveMessages()
    let (newPosition, newHealth) = process(messages)
    sendMessage(renderer, (guy, newPosition, newHealth))
    guyAgent (guy, newPosition, newHealth)
"Everything" is immutable now (actually, under the hood the agent's dispatch queue probably does have some mutable state).
If immutability is something I should strive for and even jump through hoops of instantiating new versions of objects that have changed... And if I instantiate my guy every time he moves (only with a different position), don't I have exactly the same problems as I would if he was mutable?
Well, yes. Looping with mutable values and recursing with immutable ones are equivalent.
Edit:
For agents, the wiki is always helpful.
Luca Bolognese has an F# implementation of agents.
This book (called by some The Intelligent Agent Book), though targeting the AI applications (instead of having a SW engineering point of view) is excellent.
If everything in the global system state outside the current stack frame is immutable, then unless a thread gives another thread a reference to something on its stack (VERY DANGEROUS), there won't be any way for threads to affect each other. You could fire and forget, or simply not bother firing in the first place, and the effect would be the same.
Assuming there are some parts of the global state that are mutable, one useful pattern is:
Do
    Latch a mutable reference to an immutable object
    Generate a new object based upon the latched reference
Loop While CompareExchange fails.
The compare-exchange should update the mutable reference to the new object if it still points to the old one. This avoids the overhead of locking if there is no concurrent access, but may perform worse than locking if many threads are trying to update the same object and generating a new instance from the latched one is slow. One advantage of this approach is that there is no danger of deadlock, though in some situations livelock could occur.
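In Java terms, the pattern described above is an AtomicReference retry loop; a rough sketch (GameState and withPlayerMovedRight() are hypothetical immutable types):

import java.util.concurrent.atomic.AtomicReference;

AtomicReference<GameState> stateRef = new AtomicReference<>(initialState);

GameState latched;
GameState updated;
do {
    latched = stateRef.get();                   // latch the current immutable state
    updated = latched.withPlayerMovedRight();   // derive a new immutable object from it
} while (!stateRef.compareAndSet(latched, updated)); // retry if another thread won the race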
Another functional approach to this sort of problem is to take a step back and separate out the idea of state from the idea of your little guy.
Your state will include your little guy's position, as well as the position of your baddy and its shot, and then you have some functions that take some or all of the state and do things like generating the next state and drawing the screen.
The timing issues you're talking about when things you want to parallelize depend on each other are real problems that won't magically go away, although the solutions may be more or less convenient in different languages.
Several suggestions have already been made, and there are a variety of concurrency solutions. The central clock and agents would work, as would Software Transactional Memory, Mutexes or CSP (go style channels), and probably others. The best approach is going to depend on the specifics of the problem, and to a certain extent on personal taste.
As for the head-spinning, try not to get too caught up in whether a thing is changing or not. The point of immutability is not that things don't change, it's that you can create pure functions so that your program is easier to reason about.
For example, an OO program might have a drawing function that iterates over all the objects in a scene, and asks them all to draw themselves, where a functional program might have a function that takes a state and draws a frame.
The end result would be the same scene, but the way the logic and the state is organised is very different.
I, for one, find that it's much easier to work when you have all the data over here, in one big input lump, and all the drawing logic there, encapsulated in some functions. There are some pretty clear architectural wins too: serialization, testing, and swapping out front ends all get a lot easier with this sort of structure.
Not everything in your program should be immutable. A player's position is something you would expect to be mutable. His name, maybe not.
Immutability is good, but you should perhaps rethink your approach to use solutions designed for concurrency rather than simply making everything immutable. Consider this:
Thread AI gets copy of your position
You move three units to the left.
AI shoots you based on your old position, and hits... shouldn't happen!
Also, most gaming is done in "game ticks" - there's not much multithreading going on!

Advantages of stateless programming?

I've recently been learning about functional programming (specifically Haskell, but I've gone through tutorials on Lisp and Erlang as well). While I found the concepts very enlightening, I still don't see the practical side of the "no side effects" concept. What are the practical advantages of it? I'm trying to think in the functional mindset, but there are some situations that just seem overly complex without the ability to save state in an easy way (I don't consider Haskell's monads 'easy').
Is it worth continuing to learn Haskell (or another purely functional language) in-depth? Is functional or stateless programming actually more productive than procedural? Is it likely that I will continue to use Haskell or another functional language later, or should I learn it only for the understanding?
I care less about performance than productivity. So I'm mainly asking if I will be more productive in a functional language than a procedural/object-oriented/whatever.
Read Functional Programming in a Nutshell.
There are lots of advantages to stateless programming, not least of which is dramatically simpler multithreaded and concurrent code. To put it bluntly, mutable state is the enemy of multithreaded code. If values are immutable by default, programmers don't need to worry about one thread mutating the value of shared state between two threads, so it eliminates a whole class of multithreading bugs related to race conditions. Since there are no race conditions, there's no reason to use locks either, so immutability eliminates another whole class of bugs related to deadlocks as well.
That's the big reason why functional programming matters, and probably the best one for jumping on the functional programming train. There are also lots of other benefits, including simplified debugging (i.e. functions are pure and do not mutate state in other parts of an application), more terse and expressive code, less boilerplate code compared to languages which are heavily dependent on design patterns, and the compiler can more aggressively optimize your code.
The more pieces of your program are stateless, the more ways there are to put pieces together without having anything break. The power of the stateless paradigm lies not in statelessness (or purity) per se, but the ability it gives you to write powerful, reusable functions and combine them.
You can find a good tutorial with lots of examples in John Hughes's paper Why Functional Programming Matters (PDF).
You will be gobs more productive, especially if you pick a functional language that also has algebraic data types and pattern matching (Caml, SML, Haskell).
Many of the other answers have focused on the performance (parallelism) side of functional programming, which I believe is very important. However, you did specifically ask about productivity, as in, can you program the same thing faster in a functional paradigm than in an imperative paradigm.
I actually find (from personal experience) that programming in F# matches the way I think better, and so it's easier. I think that's the biggest difference. I've programmed in both F# and C#, and there's a lot less "fighting the language" in F#, which I love. You don't have to think about the details in F#. Here are a few examples of what I've found I really enjoy.
For example, even though F# is statically typed (all types are resolved at compile time), the type inference figures out what types you have, so you don't have to say it. And if it can't figure it out, it automatically makes your function/class/whatever generic. So you never have to write any generic whatever, it's all automatic. I find that means I'm spending more time thinking about the problem and less how to implement it. In fact, whenever I come back to C#, I find I really miss this type inference, you never realise how distracting it is until you don't need to do it anymore.
Also, in F#, instead of writing loops, you call functions. It's a subtle change, but significant, because you don't have to think about the loop construct anymore. For example, here's a piece of code which goes through and matches something (I can't remember what; it's from a Project Euler puzzle):
let matchingFactors =
    factors
    |> Seq.filter (fun x -> largestPalindrome % x = 0)
    |> Seq.map (fun x -> (x, largestPalindrome / x))
I realise that doing a filter then a map (that's a conversion of each element) in C# would be quite simple, but you have to think at a lower level. Particularly, you'd have to write the loop itself, and have your own explicit if statement, and those kinds of things. Since learning F#, I've realised I've found it easier to code in the functional way, where if you want to filter, you write "filter", and if you want to map, you write "map", instead of implementing each of the details.
I also love the |> operator, which I think separates F# from OCaml and possibly other functional languages. It's the pipe operator; it lets you "pipe" the output of one expression into the input of another expression. It makes the code follow how I think more. Like in the code snippet above, that's saying, "take the factors sequence, filter it, then map it." It's a very high level of thinking, which you don't get in an imperative programming language because you're so busy writing the loop and if statements. It's the one thing I miss the most whenever I go into another language.
So just in general, even though I can program in both C# and F#, I find it easier to use F# because you can think at a higher level. I would argue that because the smaller details are removed from functional programming (in F# at least), that I am more productive.
Edit: I saw in one of the comments that you asked for an example of "state" in a functional programming language. F# can be written imperatively, so here's a direct example of how you can have mutable state in F#:
let mutable x = 5
for i in 1..10 do
    x <- x + i
Consider all the difficult bugs you've spent a long time debugging.
Now, how many of those bugs were due to "unintended interactions" between two separate components of a program? (Nearly all threading bugs have this form: races involving writing shared data, deadlocks, ... Additionally, it is common to find libraries that have some unexpected effect on global state, or read/write the registry/environment, etc.) I would posit that at least 1 in 3 'hard bugs' fall into this category.
Now if you switch to stateless/immutable/pure programming, all those bugs go away. You are presented with some new challenges instead (e.g. when you do want different modules to interact with the environment), but in a language like Haskell, those interactions get explicitly reified into the type system, which means you can just look at the type of a function and reason about the type of interactions it can have with the rest of the program.
That's the big win from 'immutability' IMO. In an ideal world, we'd all design terrific APIs and even when things were mutable, effects would be local and well-documented and 'unexpected' interactions would be kept to a minimum. In the real world, there are lots of APIs that interact with global state in myriad ways, and these are the source of the most pernicious bugs. Aspiring to statelessness is aspiring to be rid of unintended/implicit/behind-the-scenes interactions among components.
One advantage of stateless functions is that they permit precalculation or caching of the function's return values. Even some C compilers allow you to explicitly mark functions as stateless to improve their optimisability. As many others have noted, stateless functions are much easier to parallelise.
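As a minimal illustration (a Java sketch, not from the answer above): memoizing a function like this is safe only because it is pure, so a cached value can never go stale.

import java.math.BigInteger;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

static final Map<Integer, BigInteger> CACHE = new ConcurrentHashMap<>();

static BigInteger fib(int n) {
    if (n < 2) {
        return BigInteger.valueOf(n);
    }
    BigInteger cached = CACHE.get(n);
    if (cached != null) {
        return cached;                    // safe only because fib(n) always returns the same value
    }
    BigInteger result = fib(n - 1).add(fib(n - 2));
    CACHE.put(n, result);
    return result;
}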
But efficiency is not the only concern. A pure function is easier to test and debug since anything that affects it is explicitly stated. And when programming in a functional language, one gets in the habit of making as few functions "dirty" (with I/O, etc.) as possible. Separating out the stateful stuff this way is a good way to design programs, even in not-so-functional languages.
Functional languages can take a while to "get", and it's difficult to explain to someone who hasn't gone through that process. But most people who persist long enough finally realise that the fuss is worth it, even if they don't end up using functional languages much.
Without state, it is very easy to automatically parallelize your code (as CPUs are made with more and more cores this is very important).
Stateless web applications are essential when you start having higher traffic.
There could be plenty of user data that you don't want to store on the client side, for security reasons for example. In that case you need to store it server-side. You could use the web application's default session, but if you have more than one instance of the application you will need to make sure that each user is always directed to the same instance.
Load balancers often have the ability to use 'sticky sessions', where the load balancer somehow knows which server to send the user's request to. This is not ideal though; for example, it means every time you restart your web application, all connected users will lose their session.
A better approach is to store the session behind the web servers in some sort of data store; these days there are loads of great NoSQL products available for this (Redis, Mongo, Elasticsearch, Memcached). This way the web servers are stateless, but you still have state server-side, and the availability of this state can be managed by choosing the right data store setup. These data stores usually have great redundancy, so it should almost always be possible to make changes to your web application and even the data store without impacting the users.
My understanding is that FP also has a huge impact on testing. Not having mutable state will often force you to supply more data to a function than you would have to for a class. There are tradeoffs, but think about how easy it would be to test a function like incrementNumberByN rather than a Counter class.
Object
describe("counter", () => {
  it("should increment the count by one when 'increment' invoked without argument", () => {
    const counter = new Counter(0)
    counter.increment()
    expect(counter.count).toBe(1)
  })
  it("should increment the count by n when 'increment' invoked with argument", () => {
    const counter = new Counter(0)
    counter.increment(2)
    expect(counter.count).toBe(2)
  })
})
functional
describe("incrementNumberBy(startingNumber, increment)", () => {
  it("should increment by 1 if n not supplied", () => {
    expect(incrementNumberBy(0)).toBe(1)
  })
  it("should increment by 1 if n = 1 supplied", () => {
    expect(incrementNumberBy(0, 1)).toBe(1)
  })
})
Since the function has no state and the data going in is more explicit, there are fewer things to focus on when you are trying to figure out why a test might be failing. In the tests for the counter we had to do:
const counter = new Counter(0)
counter.increment()
expect(counter.count).toBe(1)
Both of the first two lines contribute to the value of counter.count. In a simple example like this 1 vs 2 lines of potentially problematic code isn't a big deal, but when you deal with a more complex object you might be adding a ton of complexity to your testing as well.
In contrast, when you write a project in a functional language, it nudges you towards keeping fancy algorithms dependent on the data flowing in and out of a particular function, rather than being dependent on the state of your system.
Another way of looking at it would be illustrating the mindset for testing a system in each paradigm.
For functional programming: make sure function A works for given inputs, make sure function B works for given inputs, make sure function C works for given inputs.
For OOP: make sure Object A's method works given an input argument of X after doing Y and Z to the state of the object; make sure Object B's method works given an input argument of X after doing W and Y to the state of the object.
The advantages of stateless programming coincide with those of goto-free programming, only more so.
Though many descriptions of functional programming emphasize the lack of mutation, the lack of mutation also goes hand in hand with the lack of unconditional control transfers, such as loops. In functional programming languages, recursion, in particular tail recursion, replaces looping. Recursion eliminates both the unconditional control construct and the mutation of variables in the same stroke. The recursive call binds argument values to parameters, rather than assigning values.
To understand why this is advantageous, rather than turning to functional programming literature, we can consult the 1968 paper by Dijkstra, "Go To Statement Considered Harmful":
"The unbridled use of the go to statement has an immediate consequence that it becomes terribly hard to find a meaningful set of coordinates in which to describe the process progress."
Dijkstra's observations, however, still apply to structured programs which avoid go to, because statements like while, if and whatnot are just window dressing on go to! Without using go to, we can still find it impossible to find the coordinates in which to describe the process progress. Dijkstra neglected to observe that bridled go to still has all the same issues.
What this means is that at any given point in the execution of the program, it is not clear how we got there. When we run into a bug, we have to use backwards reasoning: how did we end up in this state? How did we branch into this point of the code? Often it is hard to follow: the trail goes back a few steps and then runs cold due to a vastness of possibilities.
Functional programming gives us the absolute coordinates. We can rely on analytical tools like mathematical induction to understand how the program arrived into a certain situation.
For example, to convince ourselves that a recursive function is correct, we can just verify its base cases, and then understand and check its inductive hypothesis.
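For instance, a Java sketch: verifying this recursive sum needs only the base case and the inductive step; there is no prior value of any variable to track.

static int sumTo(int n) {
    if (n == 0) {
        return 0;                 // base case: the sum of nothing is 0
    }
    return n + sumTo(n - 1);      // inductive step: trust sumTo(n - 1), then add n
}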
If the logic is written as a loop with mutating variables, we need a more complicated set of tools: breaking down the logic into steps with pre- and post-conditions, which we rewrite in terms of mathematics that refers to the prior and current values of variables and such. Yes, if the program uses only certain control structures, avoiding go to, then the analysis is somewhat easier. The tools are tailored to the structures: we have a recipe for how we analyze the correctness of an if, a while, and other structures.
However, by contrast, in a functional program there is no prior value of any variable to reason about; that whole class of problem has gone away.
Haskell and Prolog are good examples of languages which could be implemented as stateless programming languages. But unfortunately they are not, so far: both Prolog and Haskell currently have imperative implementations. (Some SMT solvers seem closer to stateless coding.)
This is why you are having a hard time seeing any benefits from these programming languages: because of the imperative implementations we get no performance or stability benefits. The lack of stateless infrastructure is the main reason you cannot feel the benefits of a stateless programming language: it is simply absent.
These are some benefits of pure stateless:
Task description is the program (compact code)
Stability due to the absence of state-dependent bugs (the majority of bugs)
Cacheable results (a given set of inputs always causes the same set of outputs)
Distributable computations
Rebaseable to quantum computations
Thin code for multiple overlapping clauses
Allows differentiable programming optimizations
Consistently applying code changes (adding logic breaks nothing written)
Optimized combinatorics (no need to bruteforce enumerations)
Stateless coding is about concentrating on the relations between data, which are then used for computing by deduction. Basically this is the next level of programming abstraction. It is much closer to natural language than any imperative programming language, because it allows describing relations instead of state-change sequences.
