How to add the ASP.NET authorization cookie to a request header? - asp.net

Can somebody tell me how to add the ASP.NET authorization cookie to a request header (in MVC 3)?
Cookie:
Cookie: __RequestVerificationToken_Lw__=8+WZPGAaKtgkIPfbBovP1ZRP2qQKE3u67ueltnzcoCPH0nN1tUHdtgUorjlweUvn+zTJhkFeRuMShCOrbyHR5Xi3DOL4HCspXuVEOsWIr4Ape+l5MYPiFsQ6Lnw8LstqNjceWW9EaV24eA0mVxq2xTG18h/INNKLB8cRUiEn9DI=; .ASPXAUTH=C64A69436A8FC4A6DF5BC222982030C3CCF9E43FBCE335A47173236B4BA4B1CE762CBE6C9E9FDBB035D46C8F36228A61117F22DD55CF787D5E23A728F68B49DDF1A5D70FF3D33C8D16B06FC81894201E86DF93754B6021C9031CB4FBC5236DED952FB7244CE3217B659325A0614763B2E123002E5291EE8D8CEA7B2D7441F3EBB8176A71CDD6FEF3E545CDF46858174451D38890861664A55AF681A36C0B7CF1
Snippet of actual request:
Accept: text/*
Content-Type: multipart/form-data; boundary=----------KM7Ij5Ij5ei4gL6KM7ae0cH2Ef1ae0
User-Agent: Shockwave Flash
Host: localhost:82
Content-Length: 36874
Connection: Keep-Alive
Pragma: no-cache
Cookie: __RequestVerificationToken_Lw__=8+WZPGAaKtgkIPfbBovP1ZRP2qQKE3u67ueltnzcoCPH0nN1tUHdtgUorjlweUvn+zTJhkFeRuMShCOrbyHR5Xi3DOL4HCspXuVEOsWIr4Ape+l5MYPiFsQ6Lnw8LstqNjceWW9EaV24eA0mVxq2xTG18h/INNKLB8cRUiEn9DI=; .ASPXAUTH=C64A69436A8FC4A6DF5BC222982030C3CCF9E43FBCE335A47173236B4BA4B1CE762CBE6C9E9FDBB035D46C8F36228A61117F22DD55CF787D5E23A728F68B49DDF1A5D70FF3D33C8D16B06FC81894201E86DF93754B6021C9031CB4FBC5236DED952FB7244CE3217B659325A0614763B2E123002E5291EE8D8CEA7B2D7441F3EBB8176A71CDD6FEF3E545CDF46858174451D38890861664A55AF681A36C0B7CF1
------------KM7Ij5Ij5ei4gL6KM7ae0cH2Ef1ae0
<more stuff here...>

You don't. The browser sends the cookie automatically with every request once it has been set.
You set the cookie using the FormsAuthentication class. Typically, this looks something like this:
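// e.g. in the [HttpPost] LogOn action, after the user's credentials have been validated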
FormsAuthentication.SetAuthCookie(model.UserName, false /*createPersistentCookie*/);

Related

Background images in CSS are not getting cached

I have some React code that renders content dynamically via React.createElement, so CSS is applied via a style object. Elements in that dynamic output can have a background image pointing to a public AWS S3 bucket.
It seems that every time my components re-render, the background images are fetched again from S3, which delays the page render. I have the S3 Cache-Control metadata set on all the objects. Here are the request and response headers for a background-image load.
Response headers:
Accept-Ranges: bytes
Cache-Control: public, max-age=604800
Content-Length: 52532
Content-Type: application/octet-stream
Date: Sun, 06 Feb 2022 05:57:32 GMT
ETag: "f29655808a5f80627d9ea7f44058a5e3"
Last-Modified: Sun, 06 Feb 2022 05:55:10 GMT
Server: AmazonS3
x-amz-meta-filetype: IMAGE
Request headers:
Accept: image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9,hi;q=0.8
Cache-Control: no-cache
Connection: keep-alive
Host: <bucket-name>.s3.amazonaws.com
Pragma: no-cache
Referer: https://<my-domain>.com/
sec-ch-ua: " Not;A Brand";v="99", "Google Chrome";v="97", "Chromium";v="97"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Linux"
Sec-Fetch-Dest: image
Sec-Fetch-Mode: no-cors
Sec-Fetch-Site: cross-site
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36
I can see in the Network tab that the images are being loaded multiple times, and it also shows data being transferred every time. What am I doing wrong here? Can someone please help me find the root cause? Thanks.
Could it be a forgotten "Disable cache" option selected in the Network tab of the dev tools? Because it seems the server responds with the correct cache headers.
If the images are optimized and not huge, using a base64 data URL is a good solution.
// Fetch an image URL and convert it to a base64 data URL
const getBase64FromUrl = async (url) => {
  const data = await fetch(url);
  const blob = await data.blob();
  return new Promise((resolve) => {
    const reader = new FileReader();
    reader.readAsDataURL(blob);
    reader.onloadend = () => {
      const base64data = reader.result; // e.g. "data:image/png;base64,..."
      resolve(base64data);
    };
  });
};

// The helper is async, so its result must be awaited
const image = await getBase64FromUrl('the image url');
While creating the element you can then use:
background-image: `url(${image})`;
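Since getBase64FromUrl is asynchronous, one way to wire it into a component is to resolve the data URL in an effect and only apply it once it is available. A minimal sketch, assuming a React function component; the Card component name and imageUrl prop are illustrative, and getBase64FromUrl is the helper above:
import { useEffect, useState } from 'react';

function Card({ imageUrl }) {
  const [background, setBackground] = useState(null);

  useEffect(() => {
    // Fetch once per imageUrl and keep the data URL in state,
    // so re-renders reuse it instead of hitting S3 again.
    getBase64FromUrl(imageUrl).then(setBackground);
  }, [imageUrl]);

  return (
    <div
      style={{
        // React inline styles use camelCase property names
        backgroundImage: background ? `url(${background})` : 'none',
      }}
    >
      {/* card content */}
    </div>
  );
}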
In addition, it is rare to serve assets from S3 directly; you should probably put CloudFront in front of the bucket as a proxy to:
reduce GET requests
reduce bandwidth charges
cache at the CDN
get better control of the cache headers
hide your real S3 URL
You are probably seeing a network request because you are sending the Cache-Control: no-cache header in your request.
As seen here:
The no-cache response directive indicates that the response can be stored in caches, but the response must be validated with the origin server before each reuse, even when the cache is disconnected from the origin server.
Cache-Control: no-cache
If you want caches to always check for content updates while reusing stored content, no-cache is the directive to use. It does this by requiring caches to revalidate each request with the origin server.
Note that no-cache does not mean "don't cache". no-cache allows caches to store a response but requires them to revalidate it before reuse. If the sense of "don't cache" that you want is actually "don't store", then no-store is the directive to use.
See here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control#response_directives
Here is what a full request for a cached asset (served from S3, in a background: url() context) looks like in my network tab when the asset returns 304 Not Modified from the revalidation request.
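In raw header form, such a revalidation exchange looks roughly like this (the request path is illustrative; the ETag and date are taken from the headers quoted above):
GET /background.png HTTP/1.1
Host: <bucket-name>.s3.amazonaws.com
If-None-Match: "f29655808a5f80627d9ea7f44058a5e3"
If-Modified-Since: Sun, 06 Feb 2022 05:55:10 GMT

HTTP/1.1 304 Not Modified
ETag: "f29655808a5f80627d9ea7f44058a5e3"
Cache-Control: public, max-age=604800
The 304 response carries no body, so only headers cross the wire and the browser reuses the copy it already has.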

CORS on Google Cloud - Authorization header in the OPTIONS request via Axios

I have a CORS-enabled Spring Boot API that runs on Google Cloud Run and a Vue.js front end that runs on Firebase and uses Axios to make the calls to the back end.
The problem is that when the front end wants to access the back end (browser --> Google Cloud), it fails with:
Access to XMLHttpRequest at 'https://<backend>' from origin 'https://<frontend>' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
If I access a LOCAL back end from the LOCAL front end (also in the browser), it works: the error above is not shown in the browser console and I get the data.
If I make the OPTIONS or GET call from Postman to the Google Cloud back end, it works.
I noticed that with Postman I need to include the Authorization header in the OPTIONS request, to send the Bearer token to Google, to make it work. The browser does not send any Authorization header in the OPTIONS call, even if I add withCredentials: true to the Axios config like this:
const response = await axios({
  method: 'post',
  withCredentials: true,
  url: 'https://<backend>',
  headers: {
    'Authorization': 'Bearer ' + gCloudToken
  },
  data: {
    // data...
  }
});
Isn't that a security problem, to send the token in the header? I mean, everyone can see the headers and then fake a call to the server.
Can anybody show how to send the Authorization header in the OPTIONS call via Axios, or tell me how to handle this problem correctly?
UPDATE 1:
The request from the browser looks like this:
OPTIONS /path/to/api HTTP/2
Host: <backend>-ew.a.run.app
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Access-Control-Request-Method: POST
Access-Control-Request-Headers: authorization,content-type
Referer: https://frontend.web.app/
Origin: https://frontend.web.app
Connection: keep-alive
And this is the response:
HTTP/2 403 Forbidden
date: Tue, 07 Jul 2020 23:57:27 GMT
content-type: text/html; charset=UTF-8
server: Google Frontend
content-length: 320
alt-svc: h3-29=":443"; ma=2592000,h3-27=":443"; ma=2592000,h3-25=":443"; ma=2592000,h3-T050=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
X-Firefox-Spdy: h2
As you can see, no CORS headers (e.g. access-control-allow-origin) are present.
The cause of this issue is that, by not allowing unauthenticated calls, the CORS preflight requests are always rejected with a 403 error; the browser never sends credentials on the preflight OPTIONS request, so Cloud Run's IAM check fails before your application code is ever reached.
There is already a feature request for Cloud Run to support CORS together with authentication.
The workaround I see so far is to allow unauthenticated calls on the Cloud Run service and implement the authentication in your own code; however, this can have security disadvantages.
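If you go that route, a sketch of how to allow unauthenticated invocations on an existing service with the gcloud CLI (SERVICE_NAME and the region are placeholders):
gcloud run services add-iam-policy-binding SERVICE_NAME \
  --region=europe-west1 \
  --member="allUsers" \
  --role="roles/run.invoker"
Your application then has to validate the Bearer token itself on the real POST, while letting the unauthenticated OPTIONS preflight through.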

CalDAV client's REPORT method not requesting the new event data

I am currently working on a CalDAV synchronization server layer for my calendar application. I am able to answer all the initial requests made by the calendar client, but I am currently stuck on the REPORT method.
When a PROPFIND is done on the calendar, the client asks for the CTag and sync-token. I answer this query by providing a CTag and a sync-token (currently, to mock the server, I dynamically generate these values and serve them to the client).
In the next query, the requested method is a REPORT on the calendar, as shown below:
Request from client:
REPORT URI /users/admin%40a.de/calendar/ PROTOCOL HTTP/1.1
----------------------------------------
Accept-encoding gzip, deflate
Accept */*
Connection keep-alive
Prefer return=minimal
Host **************
Brief t
User-agent Mac+OS+X/10.10.5 (14F27) CalendarAgent/316.1
Depth 1
Authorization Basic YWRtaW5AYS5kZTpwYXNz
Accept-language en-us
Content-type text/xml
Content-length 260
Request body:
<?xml version="1.0" encoding="UTF-8"?>
<A:sync-collection xmlns:A="DAV:">
  <A:sync-token>http://calserver.org/ns/sync-token/1</A:sync-token>
  <A:sync-level>1</A:sync-level>
  <A:prop>
    <A:getcontenttype/>
    <A:getetag/>
  </A:prop>
</A:sync-collection>
Response from the server:
Response headers:
HTTP/1.1 207 Multi-Status
Content-type: text/calendar; charset=UTF-8
Connection: keep-alive
Date: Thu, 17 Dec 2015 19:35:40 GMT
Transfer-encoding: chunked
Response body:
<?xml version="1.0" encoding="UTF-8"?>
<D:multistatus xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav" xmlns:E="urn:ietf:params:xml:ns:carddav">
  <D:response>
    <D:propstat>
      <D:href>/calendar/2601ddd19c1001.ics</D:href>
      <D:prop>
        <D:getcontenttype>text/calendar</D:getcontenttype>
        <D:getetag>"334411222s12"</D:getetag>
      </D:prop>
      <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
  </D:response>
</D:multistatus>
Now my question is: in the server response I answer the REPORT by indicating that a new event has been created (by providing a new ETag value), but the event data is never requested by the client.
How and when should I serve the calendar data of the new event, and what would the request from the client look like?
Content-type should not be text/calendar, it should be text/xml.
The ETag needs to be surrounded by double quotes.
I'm pretty sure it's a bad idea to use a + in the URI the way you did. If you want to encode a space, use %20 instead, but it's probably even better to avoid any sort of special encoding entirely.
The response to a sync-collection REPORT must also contain the current sync-token in the response body; see https://www.rfc-editor.org/rfc/rfc6578#section-6.4
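A minimal sketch of what a corrected response body could look like, with DAV:sync-token as a direct child of DAV:multistatus and DAV:href as a direct child of DAV:response, as in the RFC's examples (the token value is illustrative):
<?xml version="1.0" encoding="UTF-8"?>
<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/calendar/2601ddd19c1001.ics</D:href>
    <D:propstat>
      <D:prop>
        <D:getcontenttype>text/calendar</D:getcontenttype>
        <D:getetag>"334411222s12"</D:getetag>
      </D:prop>
      <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
  </D:response>
  <D:sync-token>http://calserver.org/ns/sync-token/2</D:sync-token>
</D:multistatus>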
Rather confused by the multiple edits, but the Content-type in the response header should definitely be text/xml:
Content-type: text/xml; charset=UTF-8

What are the POST parameters in this POST request?

I have a page that I can get with a GET request.
The response to that request is an HTML page that has a button called Search.
When I click that button, a POST request is fired and its response is appended to the page. In other words, clicking the button doesn't give me a completely new page; it adds new content to that page.
I tried to use the Live HTTP Headers Firefox extension to read the request, in order to see the parameters that are being sent in the POST request. This is what I get:
POST /plugins/ad/buy.php?q=used+cars+dubai HTTP/1.1
Host: www.autodealer.ae
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:28.0) Gecko/20100101 Firefox/28.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://www.autodealer.ae/plugins/ad/buy.php?q=used+cars+dubai
Cookie: PHPSESSID=f2072a947619ef2d61b552f38e163d02; __utma=154876456.960352407.1397595567.1397595567.1397598041.2; __utmc=154876456; __utmz=154876456.1397595567.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __gads=ID=7a5bda3c29913b41:T=1397595570:S=ALNI_MZg2J44DRK3D1j8CX4FpZWFHWIzuw; __utmb=154876456.1.10.1397598041
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 131
vehcategory=All&make=All&model=&platename=&pricefrom=%3C500&priceto=All&city=All&sort=postdate&results_listing=1MONTH&Search=Search
I have read many questions on Stack Overflow and I learned that the POST parameters exist in the request body. In my situation the content type is application/x-www-form-urlencoded, so the POST parameters should look like a query string.
My Question
Where are the POST parameters in the above request? I can only see the cookies.
HTTP GET parameters are in the query string.
An HTTP POST has them inside the content body, so it looks as though they are:
Content-Length: 131
vehcategory=All&make=All&model=&platename=&pricefrom=%3C500&priceto=All&city=All&sort=postdate&results_listing=1MONTH&Search=Search
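Since the body is application/x-www-form-urlencoded, it decodes exactly like a query string. A small sketch using the standard URLSearchParams API (the body string is copied from the request above):
const body =
  'vehcategory=All&make=All&model=&platename=&pricefrom=%3C500&priceto=All' +
  '&city=All&sort=postdate&results_listing=1MONTH&Search=Search';

// URLSearchParams percent-decodes each name/value pair
const params = new URLSearchParams(body);
for (const [name, value] of params) {
  console.log(name, '=', value); // e.g. "pricefrom = <500"
}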

uploadify: HTTP request sent to backend

I have a custom web server which parses HTTP requests, including multipart form data.
In the following case, things are good:
<FORM method = "POST" action = "#" enctype multipart form-data multiple>
<INPUT type = "file"/>
<INPUT type = "submit"/>
</FORM>
The above HTML sends a request which looks like:
POST / HTTP/1.1
Host: 192.168.1.2:8888
User-Agent: Shockwave Flash
Connection: Keep-Alive
Cache-Control: no-cache
Accept: text/*
Content-Length: 884
Content-Type: multipart/form-data; boundary=----------------------------1193f8fd031d
img 1
----------------------------1193f8fd031d
img 2
----------------------------1193f8fd031d--
When I use uploadify as per its docs, I do not get a similar HTTP request at the backend.
How can I view the HTTP request which uploadify sends?
On the server you can install Wireshark, or try using Fiddler on the client side, to view the HTTP communication.
