Background: we ran into an inconvenient issue in our ASP.NET site where, in one scenario, the browser fired a huge number of requests at the server. Since the communication runs over HTTP/2 and uses its concurrent request mechanism, the ASP.NET request queue limit per session (*) was exceeded, whereupon the requests get terminated and a nasty 500 is returned from the server.
(*) yes, because reasons, this limit exists: https://learn.microsoft.com/en-us/dotnet/framework/migration-guide/retargeting/4.6.2-4.7.1#throttle-concurrent-requests-per-session
One way to handle this is of course better lazy-loading behaviour, to prevent the huge number of requests being fired at once (this has been done for this specific scenario).
However, another way, and what we see as a more general solution for making this session request queue limit cooperate better with concurrent requests, is to tweak the queue limit by bumping the aspnet:RequestQueueLimitPerSession setting, as discussed in the thread "The request queue limit of the session is exceeded".
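For reference, in classic ASP.NET (full framework) this is an appSettings entry in web.config, along these lines (the value shown is purely illustrative, not a recommendation):
<appSettings>
  <add key="aspnet:RequestQueueLimitPerSession" value="500" />
</appSettings>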
However, we are unsure how this setting operates in ASP.NET Core projects. We can't seem to find any information whatsoever regarding this, or regarding how this request queue limit is supposed to work with concurrent requests in general. Does anyone have experience with, and some information about, this setting in ASP.NET Core?
I'm currently using Axon Framework alongside Axon Server for my event-based microservices application. Recently I've encountered the need to wait for a saga to fully complete and return its execution result (success or error) to the client. I've solved it by registering a subscription query before dispatching the command that triggers the saga; the call then blocks waiting for updates dispatched from the saga and returns the result to the client.
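Roughly, the flow looks like this (a minimal sketch; the query/command classes, the single-update assumption and the timeout are illustrative):
// imports assumed: org.axonframework.queryhandling.QueryGateway and SubscriptionQueryResult,
// org.axonframework.commandhandling.gateway.CommandGateway,
// org.axonframework.messaging.responsetypes.ResponseTypes
SubscriptionQueryResult<SagaResult, SagaResult> subscription =
        queryGateway.subscriptionQuery(
                new SagaStatusQuery(sagaId),
                ResponseTypes.instanceOf(SagaResult.class),
                ResponseTypes.instanceOf(SagaResult.class));

commandGateway.send(new StartSagaCommand(sagaId));

return subscription.updates()
        .next()                                     // first status update emitted by the saga
        .doFinally(signal -> subscription.close())  // close the subscription either way
        .block(java.time.Duration.ofSeconds(30));   // illustrative timeout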
Now, that worked a treat for reporting the saga completion status to the client, but I've stumbled upon another, seemingly connected, problem. Every time a client queries our system's API, we perform an existence check on the client's account, and we do that by dispatching the corresponding query before we perform any business logic. Since I introduced the subscription query, when the client receives the response about the saga completion status they immediately send us a query for an updated list of certain entities, but the query that checks for the account's existence fails with org.axonframework.queryhandling.NoHandlerForQueryException: No handler for query: ..., which is returned by Axon Server upon sending, despite the fact that there definitely is a handler registered for it and it handled exactly the same query during the client's previous request. This started to happen after I added the inner subscription query mechanism to the equation.
This error disappears if we repeat the exact same query a bit later or put a delay of a couple of hundred milliseconds between the calls, but that's certainly not a solution: if our clients start to send loads of requests simultaneously, what will happen to the account-checking query? Are we unable to process some types of query while the subscription is not closed? I close the subscription in doFinally of the Mono returned from the SubscriptionQueryResult, but is there a chance that it doesn't actually get closed in Axon Server before the next query arrives? Or, which I think is closer to the truth, do I need to somehow tune the query-handling capacity of Axon Server? The documentation is rather concise on this topic, IMHO, especially concerning queries as opposed to events/commands.
From the sound of it, you've hit a bug, @Ivan. I assume you are on Axon Server SE? It wouldn't hurt to open an issue there describing the problem at hand.
As far as I know, the Subscription Query does not impact the capability to send other queries. So, it is not like a Query Handler suddenly unregisters once a subscription query is active. Or, it definitely shouldn't.
By the way, would you be able to share the versions of Axon Framework and Axon Server you are using? That helps with deducing the issue but also helps others that might hit the same problem.
A customer of ours is very insistent that they expect this "from any API" (meaning they don't want to pay for the changes). I seem to have trouble finding clear information on this, though.
Say we have an API that creates an appointment for a calendar. Server-side, everything was successful and the data is committed to the database. The API tries to send the HTTP 201 (Created) response, but something goes wrong there: the client ignores the response, the connection dropped, ...
They want our API to undo the database changes in that particular situation.
The question is not how to do this, but rather whether this is something most APIs do. Is this standard behavior? Or is something similar standard, such as refusing duplicate create requests?
The difficult part, of course, is actually knowing whether the API failed to send the response, and as far as the crux of the question is concerned, this is not a behavior that is usually implemented. If the user willingly submits the data, you can go ahead and store it. If the response doesn't come back properly due to a timeout (you are not responsible for the user "ignoring" the response), then the client-side code can refresh on failure and load fresh data, and the user can delete the submitted data themselves (given you provide an endpoint for that).
Depending on the database, it is possible to make all database changes of an API reversible. For example, with SQL, you use SQL transactions with commit, rollback and savepoints. There is most likely a similar mechanism available for NoSQL.
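For instance, with a SQL database accessed over JDBC, the pattern is roughly the following (a minimal sketch; table, column and variable names are illustrative):
// Insert an appointment; commit only if everything succeeded, roll back otherwise.
void createAppointment(java.sql.Connection con, long calendarId, java.sql.Timestamp startsAt)
        throws java.sql.SQLException {
    con.setAutoCommit(false);                     // start an explicit transaction
    try (java.sql.PreparedStatement ps = con.prepareStatement(
            "INSERT INTO appointment (calendar_id, starts_at) VALUES (?, ?)")) {
        ps.setLong(1, calendarId);
        ps.setTimestamp(2, startsAt);
        ps.executeUpdate();
        con.commit();                             // make the change permanent
    } catch (java.sql.SQLException e) {
        con.rollback();                           // undo everything done in this transaction
        throw e;
    }
}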
I have a WCF service with a complex OperationContract that has to execute atomically, i.e. either the entire operation succeeds or it fails. The WCF service is hosted on IIS in an ASP.NET application. The operation runs a sequence of SQL commands inside a transaction. During tests I found that with concurrent access by 4-5 users, at least one user gets a "Transaction Deadlock" error.
I then looked at the serviceThrottling settings which I had set to
<serviceThrottling maxConcurrentCalls ="5" maxConcurrentInstances ="50" maxConcurrentSessions ="5" />
and changed it to
<serviceThrottling maxConcurrentCalls ="1" maxConcurrentInstances ="1" maxConcurrentSessions ="1" />
I have turned off sessions since I don't need them in the service contract, so I don't know whether maxConcurrentSessions has any effect at all:
<ServiceContract([Namespace]:="http://www.8343kf393.com", SessionMode:=SessionMode.NotAllowed)>
This way I was queuing up the requests so that they are processed serially instead of concurrently. While the transaction issue went away, the processing time increased, which was expected.
I was wondering:
Whether serviceThrottling is the only way to resolve this issue?
How can I set serviceThrottling such that the service will accept many requests at the same time but process them one at a time?
Is setting InstanceContextMode = InstanceContextMode.PerCall relevant here, given that the application is an ASP.NET application, which is itself multithreaded?
Well, I think you're going about this the wrong way, trying to solve a database deadlock with WCF throttling.
You should try to understand why your database operations cause a deadlock, and try to avoid it (maybe by using locking hints).
A singleton will do what you ask, but that isn't very scalable.
It is relevant, but I think you get my drift: solve the deadlock in the database, not in WCF.
If it's SQL Server that you are using, there's a great tool to analyze deadlocks (and a lot more) called SQL Profiler. It's also a fairly well documented topic in SQL Server Books Online.
Your changes caused the WCF service to function as a singleton instance. That fixed your database concurrency issue but it only pushed the process blocking into the client.
I'd recommend using a different approach to remove the client blocking penalty. You should consider having this service, or at least a new service that you extract that operation into, use a netMsmqBinding (a good overview is here). This means the client will never be blocked, and it guarantees delivery of the request to the service. The trade-off is that there can be no immediate response to the request; you'll need to add another operation to poll for completion status and to retrieve any expected results. It does require more work to spin up an MSMQ-based service, but the reliability is usually worth the effort.
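The endpoint switch is mostly configuration; a minimal sketch (service, contract and queue names are illustrative, and the queue typically has to exist as a transactional private MSMQ queue):
<system.serviceModel>
  <services>
    <service name="MyApp.OrderService">
      <endpoint address="net.msmq://localhost/private/OrderServiceQueue"
                binding="netMsmqBinding"
                contract="MyApp.IOrderService" />
    </service>
  </services>
</system.serviceModel>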
What is the REST protocol and how does it differ from the HTTP protocol?
REST is a design style for protocols. It was developed by Roy Fielding in his PhD dissertation, where he formalised the approach behind HTTP/1.0, identified what worked well with it, and then used this more structured understanding to influence the design of HTTP/1.1. So, while it was after-the-fact in a lot of ways, REST is the design style behind HTTP.
Fielding's dissertation can be found at http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm and is very much worth reading, and also very readable. PhD dissertations can be pretty hard going, but this one is wonderfully well described and readable even to those of us without a comparable level of Computer Science. It helps that REST itself is pretty simple; it's one of those things that are obvious after someone else has come up with it. (For that matter, it also encapsulates in one simple style a lot of things that older web developers learnt the hard way themselves, which made reading it a major "a ha!" moment for many.)
Application-level protocols other than HTTP can also use REST, but HTTP is the classic example.
Because HTTP uses REST, all uses of HTTP are using a REST system. The description of a web application or service as RESTful or non-RESTful relates to whether it takes advantage of REST or works against it.
The classic example of a RESTful system is a "plain" website without cookies (cookies aren't always counter to REST, but they can be): client state is changed by the user clicking a link which loads another page, or doing GET form queries which bring back results. POST form queries can change both server and client state (the server does something on the basis of the POST, and then sends a hypertext document describing the new state). URIs describe resources, but the entity (document) describing a resource may differ according to the content-type or language preferred by the user. Finally, it has always been possible for browsers to update the page itself through PUT and DELETE, though this has never been very common and is, if anything, less so now.
The classic example of a non-RESTful system using HTTP is something which treats HTTP as if it was a transport protocol, and with every request sends a POST of data to the same URI which is then acted upon in an RPC-like manner, possibly with the connection itself having shared state.
A RESTful computer-readable system (i.e. not a website in a browser, but something used programmatically) would obtain information about the resources concerned by GETting a URI, which would return a document (e.g. in XML, but not necessarily) describing the state of the resource, including URIs to related resources (hypermedia, therefore); it would change their state by PUTting entities describing the new state or by DELETEing them, and have other actions performed by POSTing.
Key advantages are:
Scalability: The lack of shared state makes for a much more scalable system. This was demonstrated to me massively when I removed all use of session state from a heavily hit website; while I was expecting it to give a bit of extra performance, even a long-time anti-session advocate like myself was blown away by the massive gain from removing what had been pretty slim use of sessions (it wasn't even why I had been removing them!).
Simplicity: There are a few different ways in which REST is simpler than more RPC-like models; in particular, there are only a few "verbs" that are ever possible, and each type of resource can be reasoned about in reasonable isolation from the others.
Lightweight Entities: More RPC-like models tend to end up with a lot of data in the entities sent both ways just to reflect the RPC-like model. This isn't needed with REST. Indeed, sometimes a simple plain-text document is all that is really needed in a given case, in which case that's all we would need to send (though this would be an "end-result" case only, since plain text doesn't link to related resources). Another classic example is a request to obtain an image file: RPC-like models generally have to wrap it in another format, and perhaps encode it in some way to let it sit within the parent format (e.g. if the RPC-like model uses XML, the image will need to be base-64 encoded or similar to fit into valid XML). A RESTful model would just transmit the file the same way it does to a browser.
Human-Readable Results: Not necessarily so, but it is often easy to build a RESTful web service where the results are relatively easy to read, which aids debugging and development no end. I've even built one where an XSLT stylesheet meant that the entire thing could be used by humans as a (relatively crude) website, though it wasn't primarily for human use (essentially, the XSLT served as a client to present it to users; it wasn't even in the spec, just done to make my own development easier!).
Looser binding between server and client: This makes later development, and changes in how the system is hosted, easier. Indeed, if you keep to the hypertext model, you can change the entire structure, including moving from a single host to multiple hosts for different services, without changing client code at all.
Caching: For the GET operations where the client obtains information about the state of a resource, standard HTTP caching mechanisms allow either a statement that the resource won't meaningfully change until a certain date at the earliest (no need to query at all until then) or a statement that it hasn't changed since the last query (send a couple of hundred bytes of headers saying this rather than several kilobytes of data; see the conditional-GET sketch after this list). The improvement in performance can be immense, in some cases big enough to move something from being impractical to use to the point where performance is no longer a concern.
Availability of toolkits: Because it works at a relatively simple level, if you have a web server you can build a RESTful system's server, and if you have any sort of HTTP client API (XHR in browser JavaScript, HttpWebRequest in .NET, etc.) you can build a RESTful system's client.
Resilience: In particular, the lack of shared state means that a client can die and come back into use without the server knowing, and even the server can die and come back into use without the client knowing. Obviously communications during that period will fail, but once the server is back online things can just continue as they were. This also really simplifies the use of web farms for redundancy and performance: each server acts like it's the only server there is, and it doesn't matter that it's actually only dealing with a fraction of the requests from a given client.
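To make the caching point concrete, here is a minimal Java sketch of a conditional GET; the URL and the stored ETag variable are illustrative. If the resource hasn't changed, the server answers 304 Not Modified with no body.
void checkCustomer(String previouslySeenEtag) throws java.io.IOException {
    java.net.HttpURLConnection conn = (java.net.HttpURLConnection)
            new java.net.URL("http://www.example.com/customers/1").openConnection();
    conn.setRequestProperty("If-None-Match", previouslySeenEtag);
    if (conn.getResponseCode() == java.net.HttpURLConnection.HTTP_NOT_MODIFIED) {
        // 304: nothing changed since last time, reuse the locally cached representation
    } else {
        // read the new representation and remember conn.getHeaderField("ETag") for next time
    }
}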
REST is an approach that leverages the HTTP protocol, and is not an alternative to it.
http://en.wikipedia.org/wiki/Representational_State_Transfer
Data is uniquely referenced by URL and can be acted upon using HTTP operations (GET, PUT, POST, DELETE, etc.). A wide variety of MIME types are supported for the message/response, but XML and JSON are the most common.
For example, to read data about a customer you could use an HTTP GET operation with the URL http://www.example.com/customers/1. If you want to delete that customer, simply use the HTTP DELETE operation with the same URL.
The Java code below demonstrates how to make a REST call over the HTTP protocol:
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.xml.bind.JAXBContext;

// GET the customer resource as XML and unmarshal it into a Customer object
String uri = "http://www.example.com/customers/1";
URL url = new URL(uri);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("GET");
connection.setRequestProperty("Accept", "application/xml");

JAXBContext jc = JAXBContext.newInstance(Customer.class);
InputStream xml = connection.getInputStream();
Customer customer = (Customer) jc.createUnmarshaller().unmarshal(xml);
connection.disconnect();
For a Java (JAX-RS) example see:
http://bdoughan.blogspot.com/2010/08/creating-restful-web-service-part-45.html
REST is not a protocol, it is a generalized architecture for describing a stateless, caching client-server distributed-media platform. A REST architecture can be implemented using a number of different communication protocols, though HTTP is by far the most common.
REST is not a protocol, it is a way of exposing your application, mostly done over HTTP.
For example, you want to expose an API of your application that does getClientById.
instead of creating a URL
yourapi.com/getClientById?id=4
you can do
yourapi.com/clients/id/4
Since you are using the GET method, it means that you want to GET data.
You take advantage of the HTTP methods: GET/DELETE/PUT.
yourapi.com/clients/id/4 can also handle delete: if you send a DELETE method instead of GET, it means that you want to delete the record.
All the answers are good.
I hereby add a detailed description of REST and how it uses HTTP.
REST = Representational State Transfer
REST is a set of rules that, when followed, enable you to build a distributed application that has a specific set of desirable constraints.
It is stateless, which means that ideally no client state (such as a session) should have to be maintained on the server between requests.
It is the responsibility of the client to pass its context to the server with each request, and the server can then use that context to process the client's request. For example, a session maintained by the server is identified by a session identifier passed by the client.
Advantages of Statelessness:
Web services can treat each method call separately.
Web services need not maintain the client's previous interactions.
This in turn simplifies application design.
HTTP is itself a stateless protocol, unlike TCP, and thus RESTful web services work seamlessly with the HTTP protocol.
Disadvantages of Statelessness:
Extra data, in the form of headers, needs to be added to every request to preserve the client's state.
For security, we may need to add header information to every request.
HTTP Methods supported by REST:
GET: /string/someotherstring:
It is idempotent (multiple identical calls have the same effect as a single call) and should ideally return the same results every time a call is made.
PUT:
Like GET, it is idempotent; it is used to update (or fully replace) resources (a sketch of such a call follows this list of methods).
POST: should contain a URL and a body.
Used for creating resources. Multiple calls should ideally return different results and will create multiple resources.
DELETE:
Used to delete resources on the server.
HEAD:
The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response. The meta information contained in the HTTP headers in response to a HEAD request SHOULD be identical to the information sent in response to a GET request.
OPTIONS:
This method allows the client to determine the options and/or requirements associated with a resource, or the capabilities of a server, without implying a resource action or initiating a resource retrieval.
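Here is the PUT sketch referred to above, a minimal Java example; the URL and payload are illustrative. Repeating the exact same call leaves the resource in the same state, which is what makes PUT idempotent.
void updateCustomer(String xmlBody) throws java.io.IOException {
    // PUT the full new representation of the resource at a known URL
    java.net.HttpURLConnection conn = (java.net.HttpURLConnection)
            new java.net.URL("http://www.example.com/customers/1").openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);                                      // we are sending a request body
    conn.setRequestProperty("Content-Type", "application/xml");
    conn.getOutputStream().write(xmlBody.getBytes(java.nio.charset.StandardCharsets.UTF_8));
    int status = conn.getResponseCode();                         // typically 200 OK or 204 No Content
    conn.disconnect();
}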
HTTP Responses
See the full list of HTTP status codes for all the responses.
Here are a few important ones:
200 - OK
3XX - Redirection: additional action or information is needed from the client (e.g. URL redirection)
400 - Bad request
401 - Unauthorized: authentication is required to access the resource
403 - Forbidden
The request was valid, but the server is refusing action. The user might not have the necessary permissions for a resource, or may need an account of some sort.
404 - Not Found
The requested resource could not be found but may be available in the future. Subsequent requests by the client are permissible.
405 - Method Not Allowed
A request method is not supported for the requested resource; for example, a GET request on a form that requires data to be presented via POST, or a PUT request on a read-only resource.
500 - Internal Server Error
502 - Bad Gateway