After searching the Internet for a long time, I have not found whether it is possible to add default options to this library, as in axios. Do I have to pass the same options to fetchBaseQuery every time I create an API?
As you should create only one API in your application in almost all cases, there is really no need for "defaults". The fact that you are creating a fetchBaseQuery is already that default.
You should only create multiple APIs if those are completely independent datasets. So if you had one API that queried against a Facebook service and another one that queried against a cake recipe service, that's fine. But if it's the same database and you create one API for authors and one for books, you are not using it as intended.
This is also mentioned at about 5 different places in the RTK Query docs, for example quoting the Quick Start Tutorial:
Typically, you should only have one API slice per base URL that your application needs to communicate with. For example, if your site fetches data from both /api/posts and /api/users, you would have a single API slice with /api/ as the base URL, and separate endpoint definitions for posts and users. This allows you to effectively take advantage of automated re-fetching by defining tag relationships across endpoints.
For maintainability purposes, you may wish to split up endpoint definitions across multiple files, while still maintaining a single API slice which includes all of these endpoints. See code splitting for how you can use the injectEndpoints property to inject API endpoints from other files into a single API slice definition.
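To make that concrete, here is a minimal sketch of what the single "default" looks like in practice - the base URL, header, and reducerPath values below are placeholders, not anything from the question:

```ts
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

// One API slice per application: the fetchBaseQuery passed here *is* the
// shared default for every endpoint, injected or not.
export const api = createApi({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({
    baseUrl: '/api/',                             // placeholder base URL
    prepareHeaders: (headers) => {
      headers.set('Accept', 'application/json');  // shared header "default"
      return headers;
    },
  }),
  // Start empty and inject endpoints from feature files via api.injectEndpoints(...)
  endpoints: () => ({}),
});
```

Every endpoint later injected with injectEndpoints picks up that same baseQuery, so the shared options only ever need to be written once.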
I’m redesigning the REST API for a small SaaS I built. Currently there’s a route /entries that doesn’t require any authentication. However, if the client authenticates with sufficient privileges, the server will send additional information (ex: the account associated with each entry).
The main problem I see with this is that a client attempting to request protected data with insufficient privileges will still receive a 200 response, but without the expected data, instead of a 401 Unauthorized.
The alternatives I came up with are:
Split the endpoint into two endpoints, ex /entries and /admin/entries. The problem with this approach is that there are now two different endpoints for essentially the same resource. However, it has the advantage of being easy to document with OpenAPI. (Additionally, it allows for the addition of a /entries/:id/account endpoint.)
Accept a query parameter ?admin=true. This option is harder to document. On the other hand, it avoids having multiple URIs for a single entry.
Is there a standard way to structure something like this?
Related question: Different RESTful representations of the same resource
The alternatives I came up with are
Note that, as far as HTTP/REST are concerned, your two alternatives are the same: in both cases you are introducing a new resource.
The fact that in one case you use path segments to distinguish the two identifiers and in the other case you are using the query part doesn't change the fact that you have two resources.
Having two resources with the same information is fine - imagine two web pages built from the same information.
It's a trade off - the HTTP application isn't going to know that these resources have common information, and so won't know that invalidating one cached resource should also invalidate the other. So just like with web pages, you can get into situations where the representations that you have in your cache aren't consistent with each other.
Sometimes, the right answer is to use links between different resources - have "the" information in one place, and everywhere else has links that allow you to find that one place. Again, trade-offs.
HTTP isn't an infinitely flexible application protocol. It's really good at transferring documents over a network, especially at "web scale".
There have been attempts at using Link headers to trigger invalidation of other cached resources, but as far as I have been able to tell, none of them has made it past the proposal stage.
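As a concrete illustration of the "links" approach above, here is a rough sketch under a few assumptions (Express-style handlers; isAdmin and lookupAccount are hypothetical placeholders): the public resource stays identical for every client and points at a protected sub-resource, which is the one place that can honestly answer 401.

```ts
import express from 'express';

const app = express();

// Hypothetical stand-ins for the real auth check and data access.
const isAdmin = (req: express.Request) => req.headers['x-role'] === 'admin';
const lookupAccount = (entryId: string) => ({ owner: 'someone@example.com' });

// Public resource: identical representation for every client, with a link
// to the protected sub-resource instead of conditionally embedded fields.
app.get('/entries/:id', (req, res) => {
  res.json({
    id: req.params.id,
    title: 'example entry',
    links: { account: `/entries/${req.params.id}/account` },
  });
});

// Protected resource: clients without sufficient privileges get an explicit
// 401 here, rather than a 200 with silently missing data.
app.get('/entries/:id/account', (req, res) => {
  if (!isAdmin(req)) {
    res.status(401).json({ error: 'authentication required' });
    return;
  }
  res.json({ entryId: req.params.id, account: lookupAccount(req.params.id) });
});

app.listen(3000);
```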
I am considering whether I need to store my Firebase DocumentReferences in my ngrx store. I'd prefer not to duplicate network requests, but I'd also like to avoid the complexity of storing and retrieving DocRefs from the store.
My assumption would be that @angular/fire would keep a reference to these DocumentRefs somewhere behind the scenes - like an RxJS shareReplay() - but I have not read anything about it. That being said, there is mention of it being:
ngrx friendly - Integrate with ngrx using AngularFire's action based APIs. LINK
There is a fireship tutorial for putting docs in the store, but I'm not sure if it is even necessary.
I'm not entirely certain what you're asking here, but it sounds like your concern is when two parts of your application are listening to the same location in the database, via two different reference objects. This is not a problem. The SDK will not duplicate the data sent across the connection for each different reference object, if they point to the same location. You can be sure that bandwidth use will be minimized to only what's necessary to satisfy all the references being listened to at any given moment.
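As a small illustration of that point, here is a sketch using the modular Firebase Web SDK (the project config and document path are placeholders): two separate DocumentReferences to the same location, each with its own listener, still result in a single listen on the wire.

```ts
import { initializeApp } from 'firebase/app';
import { doc, getFirestore, onSnapshot } from 'firebase/firestore';

// Placeholder config - the interesting part is the two listeners below.
const app = initializeApp({ projectId: 'demo-project' });
const db = getFirestore(app);

// Two independent DocumentReference objects pointing at the same path.
const refA = doc(db, 'entries/abc');
const refB = doc(db, 'entries/abc');

// Both callbacks fire on every change, but the SDK multiplexes them onto a
// single listen for that location, so the document isn't downloaded twice.
onSnapshot(refA, (snap) => console.log('component A', snap.data()));
onSnapshot(refB, (snap) => console.log('component B', snap.data()));
```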
I have a REST API that will be facilitating CRUD from multiple databases. These databases all represent the same data for different locations within the organization (i.e. we have 20 or so implementations of a software package and we want to read from all of the supporting databases via one API).
I was wondering what the "Best Practice" would be for facilitating what database to access resources from?
For example, right now in my request headers I have a custom "X-" header that would represent the database id. Unfortunately, this sort of thing feels a bit like a workaround.
I was thinking of a few other options:
I could bake the Database Id into the URI (/:db_id/resource/...)
I could modify the Accept Header like someone would with an API version
I could split up the API to be one service per database
Would one of the aforementioned options be considered "better" than the others, and if not what is considered the "best" option for this sort of architecture?
I am, at the moment, using ASP.NET Web API 2.
These databases all represent the same data for different locations within the organization
I think this is the key to your answer - you don't want to expose internal implementation details (like database IDs) outside your API. What if you consolidate, or change your internal implementation one day?
However, this sentence reveals a distinction that is meaningful to the business - the location.
So - I'd make the location part of the URI:
/api/location/{locationId}/resource...
Then map the locationId internally to a database ID. LocationId could also be a name, or a code, or something unique that would be meaningful to the API client.
Then - if you later consolidate multiple locations to the same database or otherwise change your internal implementation, the clients don't have to change.
In addition, whoever is configuring the client applications can do so thinking about something meaningful to the business - the location they are interested in.
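A rough sketch of that mapping, assuming an Express-style API and a hypothetical lookup table (the location names, database IDs, and queryOrders helper are all invented for illustration):

```ts
import express from 'express';

// Hypothetical mapping from a business-meaningful location to an internal
// database id - clients never see the database id itself.
const locationToDb: Record<string, string> = {
  london: 'db_07',
  'new-york': 'db_12',
};

// Placeholder for whatever data-access layer actually hits the database.
const queryOrders = (dbId: string) => [{ id: 1, source: dbId }];

const app = express();

app.get('/api/location/:locationId/orders', (req, res) => {
  const dbId = locationToDb[req.params.locationId];
  if (!dbId) {
    res.status(404).json({ error: 'unknown location' });
    return;
  }
  res.json(queryOrders(dbId));
});

app.listen(3000);
```

Consolidating two locations onto one database later just means pointing two keys at the same value; the URIs the clients use don't change.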
We are using a microservices architecture where top-level services expose REST APIs to the end user and backend services do the work of querying the database.
When we get 1 user request, we make ~30k requests to the backend service. We are using RxJava for the top service, so all 30k requests get executed in parallel.
We are using haproxy to distribute the load between backend services.
However, when we get 3-5 user requests, we start getting network connection exceptions: No Route to Host exceptions, socket connection exceptions.
What are the best practices for this kind of use case?
Well, you ended up with the classical microservice mayhem. It's completely irrelevant what technologies you employ - the problem lies in the way you applied the concept of microservices!
It is natural in this architecture that services call each other (preferably, that should happen asynchronously!). Since I know only a little about your service APIs, I'll have to make some assumptions about what went wrong in your backend:
I assume that a user makes a request to one service. This service will now (obviously synchronously) query another service and receive these 30k records you described. Since you probably have to know more about these records you now have to make another request per record to a third service/endpoint to aggregate all the information your frontend requires!
This shows me that you probably got the whole thing with bounded contexts wrong! So much for the analytical part. Now to the solution:
Your API should return all the information along with the query that enumerates the items! Sometimes that can seem like a contradiction to the kind of isolation and authority over data/state that the microservices pattern specifies - but it is not feasible to isolate data/state in one service only, because that leads to the problem you currently have: all other services HAVE to query that data every time to be able to return correct data to the frontend! However, it is possible to duplicate it as long as the authority over the data/state is clear!
Let me illustrate that with an example: Let's assume you have a classical shop system. Articles are grouped. Now you would probably write two microservices - one that handles articles and one that handles groups! And you would be right to do so! You might have already decided that the group-service will hold the relation to the articles assigned to a group! Now if the frontend wants to show all items in a group - what happens? The group-service receives the request and returns 30,000 article numbers in a beautiful JSON array that the frontend receives. This is where it all goes south: the frontend now has to query the article-service for every article it received from the group-service! And you're screwed!
Now there are multiple ways to solve this problem. One is (as previously mentioned) to duplicate article information in the group-service: every time an article is assigned to a group using the group-service, it has to read all the information for that article from the article-service and store it, so that it can return it with the get-me-all-the-articles-in-group-x query. This is fairly simple, but keep in mind that you will need to update this information when it changes in the article-service, or you'll be serving stale data from the group-service. Event sourcing can be a very powerful tool in this use case and I suggest you read up on it! You can also use simple messages sent from one service (in this case the article-service) to a message bus of your preference and make the group-service listen and react to these messages.
Another very simple, quick-and-dirty solution could be to provide a new REST endpoint on the article-service that takes an array of article IDs and returns the information for all of them, which would be much quicker. This could probably solve your problem very quickly.
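A sketch of what such a batch endpoint could look like (Express-style; the route, payload shape, and findArticle helper are invented for illustration):

```ts
import express from 'express';

const app = express();
app.use(express.json());

// Placeholder for the article service's real data access.
const findArticle = (id: string) => ({ id, name: `article ${id}` });

// One request for N articles instead of N requests for one article each,
// so the number of cross-service calls stays constant per user request.
app.post('/articles/batch', (req, res) => {
  const ids: string[] = req.body?.ids ?? [];
  res.json(ids.map((id) => findArticle(id)));
});

app.listen(3000);
```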
A good rule of thumb in a backend with microservices is to aspire to a constant number of these cross-service calls, meaning the number of calls that go across service boundaries should never be directly related to the amount of data that was requested! We closely monitor which service calls are made because of a given request that comes through our API, to keep track of which services call which other services and where our performance bottlenecks will arise or have been caused. Whenever we detect that a service makes many calls to other services (there is no fixed threshold, but every time I see >4 I start asking questions!) we investigate why and how this could be fixed! There are some great metrics tools out there that can help you with tracing requests across service boundaries!
Let me know if this was helpful or not, and whatever solution you implemented!
So I have an application I am working on at work that a few hundred clients are running. We are working on a brand spanking new ASP.NET MVC 3 app for it, and I am working on the routes for this app.
I posted recently about a solution I came up with for dynamic routes, and it works fine on the few entries I have in a SQL Express DB. Essentially, it creates routes for every entry that I have in this DB.
So, my question is: if I were to implement this on an enterprise application, would the creation of several hundred, if not thousands, of routes in my application have any negative consequences?
Concerning the dynamic route table, there is a recommendation which you seem to follow already:
Use named routes. Named routes are an optional feature of routing. The names only apply to URL generation - they are never used for matching incoming URLs. When you specify a name when generating a URL we will only try to match that one route. This means that even if the named route you specified is the 100th route in the route table we'll jump straight to it and try to match.
Besides the number of customers / routes, you should also consider the estimated number of requests per day (which you should be more worried about, IMHO), and take into account the scalability of your web server (worker threads, hardware, ...) accordingly.
If your clients use their own domains with your application, use a custom IRouteConstraint in your routes to check the request domain and match only those routes. This also protects the routing from collisions.
So the best way to handle both tasks - routing requests and building links - is to use cached routing.
You can inherit from and extend the default MVC Route class.
To speed up link building: override GetVirtualPath to calculate a hash from the RouteData values and use it to put and get URL values to and from a cache.
To speed up routing: override GetRouteData to use cached RouteData, keyed by a hash of the URL.
This solution may require more memory, but in most cases you have a limited set of URLs on your pages.
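Stripped of the ASP.NET specifics, the idea is just memoizing both directions of the lookup by a key - a language-neutral sketch (resolveRoute and buildUrl are hypothetical stand-ins for GetRouteData and GetVirtualPath, and the RouteData shape is invented):

```ts
// Hypothetical route resolution and URL building, memoized by key.
type RouteData = { controller: string; action: string; id?: string };

const routeCache = new Map<string, RouteData>(); // url -> route data
const urlCache = new Map<string, string>();      // route-data key -> url

function resolveRoute(url: string, lookup: (u: string) => RouteData): RouteData {
  const cached = routeCache.get(url);
  if (cached) return cached;          // skip the expensive per-request lookup
  const data = lookup(url);
  routeCache.set(url, data);
  return data;
}

function buildUrl(data: RouteData, generate: (d: RouteData) => string): string {
  const key = `${data.controller}/${data.action}/${data.id ?? ''}`;
  const cached = urlCache.get(key);
  if (cached) return cached;          // reuse previously generated links
  const url = generate(data);
  urlCache.set(key, url);
  return url;
}
```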
Your question is slightly unclear. By "dynamic route" do you mean you go to the DB tier on a request to resolve the route, or do you query against your DB to create the source file for the route table?
In the first case, performance should be constant. (The overhead of checking the DB will not change with the number of users you have.) So you should see any performance effects right away.
In the second case I expect the routing code will be slower if it has that many items to check -- but it is easy to test.
There is definitely a performance hit once you start to get over a certain threshold of routes. I don't have any hard benchmarks on this, but I have redesigned a few poorly performing sites now.
The more you can use the same route for many different URLs with parameters, the better.
Just from observation, it seems that once you get close to 1k routes is when performance really starts bottoming out.