Redux-Saga, a Redux side effect manager, is said to be deprecated, and no longer being maintained.
Yet over 1 million developers download the NPM package weekly, even though the last version of Redux-Saga, 1.1.3, was published almost 3 years ago.
What issues might I face if I keep on using Redux-Saga for the long term, even if it is no longer being maintained by its authors?
I'm a Redux maintainer.
Today, we specifically recommend against using sagas in almost all use cases!
To be clear: Sagas are a great power tool, like a chainsaw. If you really need that power, then having that tool is important. But most of the time, you don't need a chainsaw on a daily basis.
I actually just gave a talk on this specific topic:
Reactathon 2022: The Evolution of Redux Async Logic
In that talk I described different techniques for dealing with async logic and side effects in Redux apps, and gave our set of recommendations for what you should use today. I'll paste in the last slide here for reference:
Our Recommendations Today
What use case are you trying to solve?
Data Fetching
Use RTK Query as the default approach for data fetching and caching
If RTKQ doesn't fully fit for some reason, use createAsyncThunk
Only fall back to handwritten thunks if nothing else works
Don't use sagas or observables for data fetching!
Reacting to Actions / State Changes, Async Workflows
Use the RTK "listener" middleware as the default for responding to store updates and writing long-running async workflows
Only use sagas / observables in the very rare situation that listeners don't solve your use case well enough
Logic with State Access
Use thunks for complex sync and moderate async logic, including access to getState and dispatching multiple actions
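The thunk pattern recommended above is tiny at its core. As a rough sketch in plain JavaScript (illustrative only, not RTK's or redux-thunk's actual source), a thunk middleware simply intercepts function-valued actions and calls them with `dispatch` and `getState`:

```javascript
// Minimal sketch of the thunk pattern (illustrative, not the real redux-thunk
// source). A "thunk" is an action that is a function; the middleware calls it
// with dispatch and getState instead of passing it on to the reducers.
const thunkMiddleware = ({ dispatch, getState }) => (next) => (action) =>
  typeof action === 'function' ? action(dispatch, getState) : next(action);

// Tiny hand-rolled store, just enough to demonstrate the middleware.
function createStore(reducer, middleware) {
  let state = reducer(undefined, { type: '@@init' });
  const baseDispatch = (action) => { state = reducer(state, action); return action; };
  const store = { getState: () => state, dispatch: (a) => dispatchWrapped(a) };
  const dispatchWrapped = middleware(store)(baseDispatch);
  return store;
}

const counter = (state = 0, action) =>
  action.type === 'increment' ? state + 1 : state;

const store = createStore(counter, thunkMiddleware);

// A thunk with state access: read state and dispatch multiple actions.
const incrementIfBelow = (limit) => (dispatch, getState) => {
  while (getState() < limit) dispatch({ type: 'increment' });
};

store.dispatch(incrementIfBelow(3));
console.log(store.getState()); // 3
```

This is exactly the "logic with state access" case: the thunk can inspect `getState()` between dispatches, which plain action objects cannot do.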
Clearly I don't understand the big deal about "asynchronous" environments (such as NodeJS) versus "synchronous" ones.
Let's say you're trapped in a synchronous environment. Can't your main loop just say:
while(1) {
  events << check_for_stuff_from_the_outside_world();
  for e in events { e.process() }
}
What's wrong with doing that, how is that not an asynchronous environment, how are asynchronous environments different?
Yes, this is more or less what Node.js does, except that instead of check_for_stuff_from_the_outside_world(), it should really be check_for_stuff_from_the_outside_world_plus_follow_on_stuff_from_previous_events(); and all of your events must also be written in such a way that, instead of completing their processing, they simply do a chunk of their work and then call register_stuff_for_follow_up(follow_on_event). In other words, you actually have to write all of your code to interact with this event framework; it can't be done "transparently", with only the main loop having to worry about it.
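As an illustrative sketch of that chunked, callback-driven style (the names are hypothetical, modeled on the pseudocode above; this is not Node's actual internals), each event does one chunk of work and then registers a follow-on event instead of blocking until it finishes:

```javascript
// Illustrative sketch of the event-loop model described above (hypothetical
// names, not Node's real internals). Work is split into chunks; each chunk
// registers a follow-on callback rather than running to completion.
const eventQueue = [];

function registerStuffForFollowUp(followOnEvent) {
  eventQueue.push(followOnEvent);
}

// Each "event" does one chunk of work, then (maybe) registers the next chunk.
function makeCountdown(n, log) {
  return function event() {
    log.push(n);                       // one chunk of work
    if (n > 1) registerStuffForFollowUp(makeCountdown(n - 1, log));
  };
}

const log = [];
registerStuffForFollowUp(makeCountdown(3, log));

// The main loop: drain events until nothing is left. A real loop would also
// poll the outside world (sockets, timers) on each iteration instead of exiting.
while (eventQueue.length > 0) {
  const e = eventQueue.shift();
  e();
}

console.log(log); // [ 3, 2, 1 ]
```

Note that the countdown never monopolizes the loop: between its chunks, any other queued event would get a turn, which is the whole point of writing code "to interact with this event framework".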
That's a big part of why Node.js is JavaScript; most languages have pre-existing standard libraries (for I/O and so on) that aren't built on top of asynchronous frameworks. JavaScript is relatively unusual in expecting each hosting environment to supply a library that's appropriate for its own purposes (e.g., the "standard library" of browser JS might have almost nothing in common with the "standard library" of a command-line JS environment such as SpiderMonkey), which gave Node.js the flexibility to design libraries that worked together with its event loop.
Take a look at the example on the Wikipedia page:
https://en.wikipedia.org/wiki/Nodejs#Examples
Notice how the code is really focused on the functionality of the server - what it should do. Node.js basically says, "give me a function for what you want to do when stuff arrives from the network, and we'll call it when stuff arrives from the network," so you're relieved of having to write all the code to deal with managing network connections, etc.
If you've ever written network code by hand, you know that you end up writing the same stuff over and over again, but it's also non-trivial code (in both size and complexity) if you're trying to make it professional quality, robust, highly performant, and scalable... (This is the hidden complexity of check_for_stuff_from_the_outside_world() that everyone keeps referring to.) So Node.js takes the responsibility for doing all of that for you (including handling the HTTP protocol, if you're using HTTP) and you only need to write your server logic.
So it's not that asynchronous is better, per se. It just happens to be the natural model to fit the functionality they're providing.
You'll see the asynchronous model come up in a lot of other places too: event-based programming (which is used in a lot of GUI stuff), RPC servers (e.g., Thrift), REST servers, just to name a few... and of course, asynchronous I/O. ;)
I'm trying to figure out when using user stories is appropriate. Always or not?
For an example, think about a team starting to work on something from scratch, say a movie ticket reservation service. It's easy to come up with user stories for the functionality, like:
"As an end-user I want to be able to browse the movies running in theater X" and so on.
But before those can be implemented, the system needs to be designed: Architecture must be designed, database must be designed, technologies chosen for the GUI and business logic.
How should these tasks appear in the backlog? Should they be user stories as well? If so, how do they comply with the INVEST mnemonic? They don't alone deliver anything for the end-user, but nevertheless they are needed before any feature can be implemented.
But before those can be implemented, the system needs to be designed: Architecture must be designed, database must be designed, technologies chosen for the GUI and business logic.
I don't really agree with that. Since a story is a feature that cuts through almost every layer of your architecture, implementing the story builds up the architecture at the same time. Check out Alistair Cockburn's Walking Skeleton definition.
About the question
Most stories you can define as "As a user...". As a feature, a story may include UI work as well; to make that clear, you can split the story into subtasks.
Some work is hard to present as INVEST user stories, though: for instance bugs, technical debt, and so on. These can still be represented as stories of a special type (bugs, tech stories). You can't show them at the demo, but you can mention them.
(...) before those can be implemented, the system needs to be designed: Architecture must be designed, database must be designed, technologies chosen for the GUI and business logic. (...)
Not exactly. For example, you don't need the entire database designed before implementing the functionality for a sprint, a specific release, or whatever given time frame. What you may need is some common ground.
This is where one of Agile's beauties lies (versus waterfall): welcoming change.
Now, answering your question: realize that the role in a user story is not necessarily a role of the end customer. It could be your developers, your sysadmins, etc. As such:
AS A server administrator,
I WANT to upgrade our webserver
SO THAT it will handle better the memory consumption
So you could try to convince your P.O. to add or prioritize a user story (or several) in the backlog for building up some ground for future development. But, again, when creating such stories, remember the Agile value of responding to change.
P.S.
It's also important to keep the Product Backlog clear and accessible, and to ensure proper interaction between the P.O. and the Development Team. This should be guided by the Scrum Master.
This way the team can give better feedback and warn the P.O., from a technical perspective, about how stories affect each other and why story X should be done before story Y.
I know how to measure module implementation using LOC, SLOC or any other metrics.
But I would like to know: is it possible to "measure" several different APIs in order to find "the best one"?
For example:
Nucleus RTOS:
STATUS NU_Create_Semaphore(NU_SEMAPHORE *semaphore, CHAR *name, UNSIGNED initial_count, OPTION suspend_type);
Posix:
int sem_init(sem_t *sem, int pshared, unsigned int value);
For example, we can state that creating a semaphore in Nucleus RTOS will use more stack than the Posix variant. So can we conclude that Posix is the better API in this case, if we use "size of stack" as the measure?
Or is this analysis just stupid?
And if the above is not stupid, then I am wondering more: it is "easy" to compare APIs which cover the same functions (create semaphore, create threads, etc.), but how do you measure APIs which provide the same functionality where the functions are not "equal"?
I can imagine a test which implements the same functionality using different APIs.
After comparing several such tests by different metrics (memory consumption, LOC, SLOC, etc.), can I conclude that one API is better than the other?
TIA
Find the APIs that provide you with the capabilities/functions you need.
From those, use the one that is the simplest.
Long term, simplicity and maintainability are far more important than performance, especially if this API is not from an app-local library but a remote service.
It really depends on your judging criteria.
The best option is to list all the available APIs for your required functionality,
then fix a judging criteria based on your requirements.
It can be time complexity (order of the function), space complexity, ease of use, your understanding of the API, reusability in other modules, or whatever your application requires; based on that, make a judgement.
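As a toy illustration of the criteria-based comparison described above (all scores, weights, and criteria here are invented placeholders, not real measurements), the judgement can be made systematic with a simple weighted score:

```javascript
// Toy weighted-scoring sketch for comparing APIs. All numbers below are
// invented placeholders; plug in your own measurements and priorities.
const weights = { easeOfUse: 0.4, familiarity: 0.3, reusability: 0.3 };

const candidates = {
  'POSIX sem_init':              { easeOfUse: 8, familiarity: 9, reusability: 7 },
  'Nucleus NU_Create_Semaphore': { easeOfUse: 6, familiarity: 4, reusability: 5 },
};

// Weighted sum of a candidate's ratings over all criteria.
function score(ratings) {
  return Object.entries(weights)
    .reduce((sum, [criterion, w]) => sum + w * ratings[criterion], 0);
}

// Rank candidates from best to worst under the chosen weights.
const ranked = Object.entries(candidates)
  .map(([name, ratings]) => [name, score(ratings)])
  .sort((a, b) => b[1] - a[1]);

console.log(ranked[0][0]); // the highest-scoring API under these weights
```

The ranking is only as good as the weights, which is the point of the answer above: "best" depends entirely on which criteria matter for your application.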
We use Scrum in our development, and we often create tasks/tickets for developers. I want to find a way to record them, but I'm torn between two ways of doing so: one is to write them on whiteboards, the other is to write them in an Agile project management tool (Pivotal Tracker). I think they duplicate each other, so which is better?
It depends who cares about the tasks.
In teams very new to Scrum, devs can split stories in to tasks to get a better idea of estimates, collaborate on work, etc. For this reason, whatever the devs prefer should be the way forward. Usually a dev will prefer to put tasks on a card, or a whiteboard, or something close to the workspace, but some devs do prefer electronic systems. I find the act of moving a card or writing on a board gives a sense of commitment to a task or story, so I prefer this.
Sometimes the PM prefers to have the tasks so that he can see if a story is 65% done, etc.
Every single time I've seen this it ends up with the PM telling the devs off for not finishing their stories when they said they would, or saying, "It was 85% done yesterday! How can you not have finished it?" This happens a lot with new teams, where devs often prefer to do the easy bits first, or they don't know how to integrate their work with others' yet.
The thing is, there is no value whatsoever in the tasks! It's only possible to get useful feedback by delivering the stories, even if they don't represent completed features but just slices through the system. The tasks themselves are only valuable for the iteration until the stories are completed, so no historic record is needed. PMs who value the tasks often end up with part-done stories and nothing to release or showcase.
For this reason, I would try not to duplicate the tasks for my recording efforts, but just to let the devs make the tasks themselves and put them wherever they want to. It's easy enough to count tasks manually for a burn-down.
I'd have to disagree with the previous answer that there isn't any value in the tasks. I myself prefer electronic methods such as:
- Calendars: not only do they say what needs to be done, but also when and how long it might take.
- Task lists: just like the traditional to-do list.
- Scope items: turning the items in the scope spreadsheet into deliverables.
Having physical tasks on cards (tried that) or on the whiteboard in the LLP (did that for a while) is technically better, because you're always able to get to the information quickly. However, if your development team is distributed, especially when the PM is in another part of the world, you're going to end up having to duplicate data electronically. The tasks themselves add value to the development house in that they provide good historical data about how long certain tasks take. This information is extremely valuable in building the scope matrix of future projects, and as such affects the costing and delivery time. As a side benefit, you'll be able to see by historical trend which asset (i.e. developer) is able to perform and at what efficiency. E.g. if you give a developer a database task and they were inefficient, then you'll know next time that database tasks should either be given to someone else, or that during the downtime between projects, said asset should spend time upgrading their database skills.
So important is historical task recording that sometimes clients will ask to see the tasks and how long they took as verification of "the bill". When clients are charged by the development house's hourly rate for work, they want accountability for every hour (or part thereof) spent. We used to fill out these sheets with the tasks and the durations to send along with the invoice to the client; and sometimes they would question it.
In Scrum, it is obvious that we could produce a demo after each sprint.
I don't know how to produce demos in Kanban, since it doesn't have the sprint concept (I may be wrong).
Would you please enlighten me regarding how to make releases in Kanban?
Thanks for help and time.
When we were implementing Kanban at my last job, the releases went one of three ways:
Release every two weeks on a schedule.
If enough sticky notes end up in the "done" bucket on the board to merit an out-of-cycle release, notify the business unit that we're releasing so we can prevent getting too out of sync.
The business unit requires an out-of-cycle release for a specific feature or set of features that are needed immediately.
It was pretty open-ended, really.
Kanban says how to manage the flow of work and limit work in progress; it doesn't say anything about the frequency of releases as such. However, it is quite demanding in that it requires a working, integrated version of the product to be kept at all times, with new features added as soon as they are considered complete (done, the last column on the board).
A concept that is frequently used is a "cadence": a regular interval at which this "ready product" is taken and actually deployed to the live system/shipped.
However, I think that one concept that is very clear in Scrum may also help here. Scrum explicitly calls for a "shippable product increment" (conforming to the definition of DONE) at the end of each sprint. Whether to actually ship it / deploy it is out of scope of the development process, because it is ultimately a business decision. The same, I think, applies to Kanban: a ready, integrated product is available at all times, and whether to actually use it is a business decision which is outside the scope of the development process and its management.
There is no single definition. Usually in Kanban we add MMFs (Minimal Marketable Features) which, by definition, means that every feature should add value to the customer, thus you should be able to release every feature independently.
This doesn't mean you have to release each feature separately, so you will find a whole range of approaches (David mentions a few of them). I find it a common case that Kanban teams release more often than they would if they followed one of the time-boxed approaches.
Demos in Kanban are optional but if the client is willing to have them you can demo features as you deploy even if you release every feature independently. In theory every feature should add value so this approach should work well.
We make a demo a condition of moving a feature from "Testing" to "Ready for Release". So it's feature-by-feature rather than sprint-by-sprint, and the nature of the feature will determine the nature of the demo. The greater the business involvement during development the less of an issue this becomes anyway.
You can try adding a sign-off step to your DoD where you may arrange a quick demo. But the difference is that it will be a one-to-one demo, whereas in the Scrum sprint review the demo is for all the attendees.
Regarding the release cycle, it's already covered in the previous answers. I would like to add one more point: you may set a limit for yet-to-be-released items. For example, if you have 10 MMFs on the board ready to be released, the release process can be kicked off then and there.
This method may also help you track throughput.