Here's a very small sample Razor Page:
@page
@model IndexModel
@{
    ViewData["Title"] = "Home page";
}
<h1>
    @DateTime.Now.ToString()
</h1>

// The page model:
public class IndexModel : PageModel
{
    private readonly ILogger<IndexModel> _logger;

    public IndexModel(ILogger<IndexModel> logger)
    {
        _logger = logger;
    }

    public void OnGet()
    {
    }
}
If I use this code, the time updates every 30 seconds, which is what I intend:
<cache expires-after="TimeSpan.FromSeconds(30)">
    <h1>
        @DateTime.Now.ToString()
    </h1>
</cache>
However, adding the ResponseCache attribute to the model doesn't do this:
[ResponseCache(Duration = 30)]
public class IndexModel : PageModel
After doing some research, it seems the attribute only sends the appropriate headers to the client, asking it to cache the content. How can I store the entire response in memory so that, when the user requests a specific page, the server just sends the cached response and skips the process of computing the result again?
Also, with the <cache> tag helper, I couldn't find a way to invalidate the cached entry. One scenario for me would be to cache every single page in memory for 30 days and, if I change something in the admin panel, invalidate the cache for that specific item so the next request produces a fresh result. I used to do this in ASP.NET MVC 3+ but couldn't find any way to achieve the same result in ASP.NET Core 3.1.
From your question it seems you might need to roll your own version.
How can I store the entire response in memory so when the user asks
for the specific page, the server just sends the cached response and
eliminate the process of computing the result again?
See the source of
https://github.com/dotnet/aspnetcore/blob/master/src/Mvc/Mvc.Core/src/ResponseCacheAttribute.cs
And
https://github.com/aspnet/Mvc/blob/d8c6c4ab34e1368c1b071a01fcdcb9e8cc12e110/src/Microsoft.AspNetCore.Mvc.Core/Internal/ResponseCacheFilter.cs
It seems that it sets headers only.
You may implement your own version of caching, like the one described here:
https://www.devtrends.co.uk/blog/custom-response-caching-in-asp.net-core-with-cache-invalidation
See the CachedPage class in the example above.
Edit: No, it doesn't work (I cannot delete this answer).
https://github.com/dotnet/AspNetCore.Docs/tree/master/aspnetcore/performance/caching/middleware/samples/3.x/ResponseCachingMiddleware
Browsers often add a cache control header on reload that prevent the
middleware from serving a cached page.
Therefore, it doesn't work, and I don't see how this could be of any use besides machine-to-machine requests using specially crafted requests.
Use a developer tool that permits setting the request headers
explicitly, such as Fiddler or Postman.
I haven't tested it yet, but it looks like the "Response Caching Middleware" could be the answer.
https://learn.microsoft.com/en-us/aspnet/core/performance/caching/response?view=aspnetcore-3.1
For server-side caching that follows the HTTP 1.1 Caching
specification, use Response Caching Middleware. The middleware can use
the ResponseCacheAttribute properties to influence server-side caching
behavior.
Response Caching Middleware in ASP.NET Core:
https://learn.microsoft.com/en-us/aspnet/core/performance/caching/middleware?view=aspnetcore-3.1
The middleware determines when responses are cacheable, stores
responses, and serves responses from cache.
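For the original Razor Pages scenario, a minimal sketch of wiring up the middleware in ASP.NET Core 3.1 might look like the following (this assumes the default Startup-based template and is not taken verbatim from the linked docs):

// Startup.cs -- sketch only, assuming the default ASP.NET Core 3.1 template.
public void ConfigureServices(IServiceCollection services)
{
    services.AddResponseCaching();   // registers the in-memory response cache store
    services.AddRazorPages();
}

public void Configure(IApplicationBuilder app)
{
    app.UseRouting();
    app.UseResponseCaching();        // runs before the endpoint so it can serve/store cached responses
    app.UseEndpoints(endpoints => endpoints.MapRazorPages());
}

// Index.cshtml.cs -- the same attribute from the question now also influences the middleware.
[ResponseCache(Duration = 30)]
public class IndexModel : PageModel
{
    public void OnGet() { }
}

The attribute emits Cache-Control: public, max-age=30, which is what allows the middleware to store the response on the server and replay it for later requests.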
You won't always get a cached response, though.
The middleware respects the rules of the HTTP 1.1 Caching
specification. The rules require a cache to honor a valid
Cache-Control header sent by the client. Under the specification, a
client can make requests with a no-cache header value and force the
server to generate a new response for every request. Currently,
there's no developer control over this caching behavior when using the
middleware because the middleware adheres to the official caching
specification.
In addition
Caching should only be enabled for content that doesn't change based
on a user's identity or whether a user is signed in.
Related
I am creating an ASP.NET Core 6.0 Web API.
This is my controller
public virtual IActionResult GetStatus([FromHeader][Required()] string authorization, [FromRoute][Required] string requestId)
{ }
The problem is that the endpoint does not accept a header parameter named authorization. If I change the name to something else, it works fine, but when I use the name authorization it does not work. I added a middleware and can see that the request headers do not contain authorization.
My organization's specification requires the name to be authorization, so I cannot use any other name here.
Is there any way a Web API endpoint can accept a parameter named "authorization"?
I tried creating a custom AuthenticationHandler, it does not solve the problem.
Is there any way a Web API endpoint can accept a parameter named
"authorization"
Reason 1:
Well, the cause of your issue is fairly clear. You may have heard of reserved keywords: in C# there are compiler-reserved keywords such as static, class, and public, and we cannot use a reserved keyword as a variable name. Header names work in a similar way.
If you look at the System.Net.Http.Headers namespace, you will see that Authorization is a well-known, reserved header for HttpClient requests, and it is not wise to repurpose that reserved name at all:
client.DefaultRequestHeaders.Authorization
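For instance, on the client side this header is exposed as a strongly typed property rather than a free-form string (a small illustrative sketch; client is assumed to be an existing HttpClient and token an existing bearer token):

// Illustration only: Authorization is a well-known, typed header on HttpClient.
client.DefaultRequestHeaders.Authorization =
    new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token);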
Reason 2:
Likewise, if you have attached Swagger in your Program.cs, you can no longer use authorization, Content-Type, or Accept as self-defined parameters; even if you declare them, they will be skipped. Postman, however, does allow you to send them. You can check the official Swagger reference here.
(The original answer illustrated this with screenshots of an example Swagger request and Postman request, omitted here.)
Note:
Apparently, if you omit Swagger it might work. Nonetheless, if you are using authorization and authentication within your application, you might face unexpected issues if you still stick to authorization as your custom parameter name.
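If all you need is the raw value, one possible workaround (a sketch only; the route below is made up) is to skip model binding for that header and read it from Request.Headers inside the action instead:

// Sketch: read the Authorization header directly instead of binding it as a parameter.
[HttpGet("status/{requestId}")]   // hypothetical route, adjust to your API
public IActionResult GetStatus([FromRoute][Required] string requestId)
{
    // Request.Headers is available on ControllerBase; "Authorization" is the raw header name.
    if (!Request.Headers.TryGetValue("Authorization", out var authorization))
        return Unauthorized();

    // ... validate the value and build the real response here ...
    return Ok(new { requestId, hasAuthorization = !string.IsNullOrEmpty(authorization) });
}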
I have the following REST endpoint, and I would like to send a cookie along with my ResponseEntity. However, after successfully sending the response, the cookie is nowhere to be found.
@RequestMapping(value = "myPath", method = RequestMethod.POST)
public ResponseEntity<?> createToken(HttpServletResponse response) {
    final String token = "a1b2c3d4e";

    Cookie cookie = new Cookie("token", token);
    response.addCookie(cookie);

    // Return the token
    return ResponseEntity.ok(new MyCustomResponse(token));
}
MyCustomResponse
class MyCustomResponse {
    private final String token;

    public MyCustomResponse(String token) {
        this.token = token;
    }
}
I have also tried creating the ResponseEntity manually and setting the cookie in the headers with the "Set-Cookie" header, but it's the same thing: no cookie.
Edit: I have confirmed that the Set-Cookie header is in fact present in the response; however, it is not actually being stored in the browser. I am using a static web app running in WebStorm to access my REST endpoint running on a different port. This web app is just using jQuery's $.ajax method to call the REST endpoint, nothing fancy. When I run the web app in WebStorm I am able to see the cookie it creates in my browser, so that confirms my browser is allowing cookie storage.
I figured it out. The web application I am using to access my REST API is running on a different local port than the REST API. This causes the AJAX request to fail the CORS requirements, so the cookie doesn't actually get set.
I found the solution here What can cause a cookie not to be set on the client?
Edit: I should add that it was adding the xhrFields snippet to jQuery's $.ajax method that fixed it for me; the other parts weren't necessary.
(posting the answer below in case it gets deleted)
I think I found the solution. Since, during development, my server is at "localhost:30002" and my web app at "localhost:8003", they are considered different hosts regarding CORS. Therefore, all my requests to the server are covered by CORS security rules, especially requests with credentials. "Credentials" include cookies, as noted on that link, so the returned cookie was not accepted because I did not pass
xhrFields: {
withCredentials: true
}
to jQuery's $.ajax function. I also have to pass that option to subsequent CORS requests in order to send the cookie.
I added the header Access-Control-Allow-Credentials: true on the server side and changed the Access-Control-Allow-Origin header from wildcard to http://localhost:8003 (port number is significant!). That solution now works for me and the cookie gets stored.
I have a JavaScript app that sends requests to a REST API; the responses from the server have cache headers (like ETag, Cache-Control, Expires). Is caching of responses in the browser automatic, or must the app implement some mechanism to save the data?
An AJAX request is no different from a normal request - it's a GET/POST/HEAD/whatever request being sent by the browser, and it is handled as such. This is confirmed here:
The HTTP and Cache sub-systems of modern browsers are at a much lower level than Ajax’s XMLHttpRequest object. At this level, the browser doesn’t know or care about Ajax requests. It simply obeys the normal HTTP caching rules based on the response headers returned from the server.
As per the jQuery documentation, caches can also be invalidated in at least one usual way (appending a query string):
cache (default: true, false for dataType 'script' and 'jsonp')
Type: Boolean
If set to false, it will force requested pages not to be cached by the browser. Note: Setting cache to false will only work correctly with HEAD and GET requests. It works by appending "_={timestamp}" to the GET parameters. The parameter is not needed for other types of requests, except in IE8 when a POST is made to a URL that has already been requested by a GET.
So in short, given the same headers, AJAX responses are cached the same way as other requests.
The browser automatically handles caching of resources. What you seem to be asking about is the actual response from the server.
You will need to set that up yourself in your application. You can do so both on the front end and the back end.
Most JS frameworks have cache control implemented, for example:
jQuery
$.ajaxSetup({
// Disable caching of AJAX responses
cache: false
});
AngularJS
$http.defaults.cache = false;
etc.
On the back end it really depends on what language you are using, what server engine, etc.
Check out Memcached, for example:
http://memcached.org/
As with anything in web development there are odd things here and there; for example, some IE versions automatically cache requests, and you have to add a unique id to the URL to prevent that.
From https://developer.mozilla.org/en-US/docs/AJAX/Getting_Started :
Note 2: If you do not set the header Cache-Control: no-cache, the browser will cache the response and never re-submit the request, making debugging "challenging." You can also append an always-different additional GET parameter, like the timestamp or a random number (see bypassing the cache).
The browser should handle the cache automatically.
Check this article; there is only a clear-cache method for JavaScript:
https://developer.chrome.com/extensions/browsingData
If the server sends the response with any of the cache headers, browsers should respect it; there is no difference between resources and AJAX requests.
You can also specify cache headers in your AJAX calls to make them bypass the cache and fetch the whole response from the server.
Most modern browsers support browser caching driven by the cache and expires headers:
http://www.arlocarreon.com/blog/http/http-requests-and-your-browsers-cache/
There are two aspects of an HTTP request that can qualify it for being cached:
- Specific HTTP cache and expires headers
- A unique URL
Interesting read:
http://www.mobify.com/blog/beginners-guide-to-http-cache-headers/
I am trying to implement a REST web service with WCF that supports both caching and Conditional GETs.
I implemented basic caching following the instructions in MSDN: Caching Support for WCF Web HTTP Services. That means adding an [AspNetCacheProfile("MyOutputCacheProfile")] attribute to each of my web methods and adding appropriate entries to web.config. That seems to work correctly: cached responses are returned when identical arguments are passed to the web methods.
Then I added support for Conditional GET by calculating an ETag value and setting that on the response like this:
WebOperationContext.Current.OutgoingResponse.SetETag(myETag);
That sorta works: I can see the ETag header in the response the first time I call the web method.
But here's the problem: The next time I invoke that web method with the same arguments, a cached response is returned, and the cached response does not include the ETag header. (If I wait until cache expiration, or disable caching entirely, then the ETag headers are returned properly.)
So, is there any way get the cached responses to include that ETag value?
Update: After some more study and experimentation, I find that doing this causes the ETag header to be included in all cached responses:
HttpContext.Current.Response.Cache.SetETag(myETag);
If I call that, then I don't need to call the associated WebOperationContext...SetETag() operation to make everything work.
Is this the Right Way to do this?
Correct me if I am wrong, but RESTful services are closer to HTTP, and the HTTP caching specification says:
The goal of caching in HTTP/1.1 is to eliminate the need to send
requests in many cases, and to eliminate the need to send full
responses in many other cases. The former reduces the number of
network round-trips required for many operations; we use an
"expiration" mechanism for this purpose (see section 13.2). The latter
reduces network bandwidth requirements; we use a "validation"
mechanism for this purpose (see section 13.3).
ASP.NET caching does not fall into either of these categories (neither expiration nor validation). The caching is done only on the web server, and IIS, instead of executing the method, sends the stored response. Somehow it does not fit the RESTful model.
To implement caching, we should add Cache-Control headers and an ETag to the response headers and then handle the conditional GET. Please consult this excellent article.
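To make the "handle the conditional GET" part concrete, a rough sketch in a WCF WebHttp service might look like this (MyResource, LoadResource, and ComputeEtag are made-up placeholders; only the WebOperationContext calls are real APIs):

// Sketch of a conditional GET: reply 304 when the client's ETag still matches.
public MyResource GetResource(string id)
{
    var resource = LoadResource(id);          // placeholder data access
    string etag = ComputeEtag(resource);      // placeholder ETag calculation

    var context = WebOperationContext.Current;

    // Throws a WebFaultException with 304 Not Modified if the incoming
    // If-None-Match header matches the supplied ETag.
    context.IncomingRequest.CheckConditionalRetrieve(etag);

    // Otherwise return the full representation with the ETag attached.
    context.OutgoingResponse.SetETag(etag);
    return resource;
}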
The HttpRequest class defines two properties:
HttpMethod:
Gets the HTTP data transfer method (such as GET, POST, or HEAD) used by the client.
public string HttpMethod { get; }
The HTTP data transfer method used by the client.
and RequestType:
Gets or sets the HTTP data transfer method (GET or POST) used by the client.
public string RequestType { get; set; }
A string representing the HTTP invocation type sent by the client.
What is the difference between these two properties? When would I want to use one over the other? Which is the proper one to inspect to see what data transfer method was used by the client?
The documentation indicates that HttpMethod will return whatever verb was used:
such as GET, POST, or HEAD
while the documentation on RequestType seems to indicate only one of two possible values:
GET or POST
I tested with a random sampling of verbs, and both properties seem to support all verbs, and both return the same values:
Testing:
Client Used HttpMethod RequestType
GET GET GET
POST POST POST
HEAD HEAD HEAD
CONNECT CONNECT CONNECT
MKCOL MKCOL MKCOL
PUT PUT PUT
FOOTEST FOOTEST FOOTEST
What is the difference between:
HttpRequest.HttpMethod
HttpRequest.RequestType
and when should I use one over the other?
Reflector shows that RequestType calls HttpMethod internally, so you're ever so slightly better off calling HttpMethod. Actually, I think the real reason RequestType exists is for backwards compatibility with classic ASP.
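In practice that just means branching on HttpMethod when you need the verb, e.g. (trivial sketch):

// Classic ASP.NET (System.Web): prefer HttpMethod when branching on the verb.
if (string.Equals(Request.HttpMethod, "POST", StringComparison.OrdinalIgnoreCase))
{
    // handle the posted form
}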
You can check the article below:
Request methods:
(Figure from the article: an HTTP request made using telnet, with the request, response headers, and response body highlighted.)
HTTP defines eight methods (sometimes referred to as "verbs") indicating the desired action to be performed on the identified resource. What this resource represents, whether pre-existing data or data that is generated dynamically, depends on the implementation of the server. Often, the resource corresponds to a file or the output of an executable residing on the server.
HEAD
Asks for the response identical to the one that would correspond to a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.
GET
Requests a representation of the specified resource. Note that GET should not be used for operations that cause side-effects, such as using it for taking actions in web applications. One reason for this is that GET may be used arbitrarily by robots or crawlers, which should not need to consider the side effects that a request should cause. See safe methods below.
POST
Submits data to be processed (e.g., from an HTML form) to the identified resource. The data is included in the body of the request. This may result in the creation of a new resource, the update of existing resources, or both.
PUT
Uploads a representation of the specified resource.
DELETE
Deletes the specified resource.
TRACE
Echoes back the received request, so that a client can see what intermediate servers are adding or changing in the request.
OPTIONS
Returns the HTTP methods that the server supports for specified URL. This can be used to check the functionality of a web server by requesting '*' instead of a specific resource.
CONNECT
Converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted communication (HTTPS) through an unencrypted HTTP proxy.[5]
PATCH
Is used to apply partial modifications to a resource.[6]
HTTP servers are required to implement at least the GET and HEAD methods[7] and, whenever possible, also the OPTIONS method.
Safe methods
Some methods (for example, HEAD, GET, OPTIONS and TRACE) are defined as safe, which means they are intended only for information retrieval and should not change the state of the server. In other words, they should not have side effects, beyond relatively harmless effects such as logging, caching, the serving of banner advertisements or incrementing a web counter. Making arbitrary GET requests without regard to the context of the application's state should therefore be considered safe.
By contrast, methods such as POST, PUT and DELETE are intended for actions which may cause side effects either on the server, or external side effects such as financial transactions or transmission of email. Such methods are therefore not usually used by conforming web robots or web crawlers, which tend to make requests without regard to context or consequences.
Despite the prescribed safety of GET requests, in practice their handling by the server is not technically limited in any way, and careless or deliberate programming can just as easily (or more easily, due to lack of user agent precautions) cause non-trivial changes on the server. This is discouraged, because it can cause problems for Web caching, search engines and other automated agents, which can make unintended changes on the server.