I inherited a complex AWS system from someone and I have practically no AWS experience. I'm reading documentation and doing training, but there's one thing I can't figure out: when someone hits a page served by CloudFront, are they able to make changes that affect the origin server?
I would have thought "no, they're just static pages", but I'm seeing evidence to the contrary. We have some WordPress installs, and I think the users are hitting CloudFront when they log in through the admin panel remotely, yet they're still able to make changes and publish content. I also at one point cached admin-ajax.php without allowing OPTIONS, PUT, PATCH, POST, and DELETE requests, thinking it didn't matter because our front-end site doesn't use Ajax. This broke the admin panel, which requires Ajax, even when logged in directly to the origin server and bypassing CloudFront.
The cached page would render in the browser, but when the user "makes changes" the browser sends a separate HTTP GET or POST request from the one that requested the page. That "changes" request will not be served from the CloudFront cache; it will be forwarded to the origin server.
Note that you will need to configure CloudFront appropriately to respect the server's cache headers, treat query parameters as part of the cache key, etc., in order for this to work properly. You can also configure a CloudFront behavior for a specific path, like /admin/*, to prevent caching of the pages at that path, among other things.
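For example, a cache behavior along these lines (a minimal CloudFormation sketch; the /wp-admin/* path pattern, origin id, and TTL values are illustrative assumptions, not your actual setup) passes everything under the admin path straight through to the origin:

```yaml
# Hypothetical excerpt from an AWS::CloudFront::Distribution DistributionConfig.
# "wordpress-origin" is a placeholder for your real origin id.
CacheBehaviors:
  - PathPattern: "/wp-admin/*"
    TargetOriginId: wordpress-origin
    ViewerProtocolPolicy: redirect-to-https
    # Allow the full method set so POST/PUT/PATCH/DELETE reach WordPress.
    AllowedMethods: [GET, HEAD, OPTIONS, PUT, PATCH, POST, DELETE]
    # Forward cookies, query strings, and headers so logins and sessions work...
    ForwardedValues:
      QueryString: true
      Cookies:
        Forward: all
      Headers: ["*"]
    # ...and disable caching entirely for this path.
    MinTTL: 0
    DefaultTTL: 0
    MaxTTL: 0
```

The default behavior can then stay aggressive about caching while anything under the admin path behaves as if CloudFront weren't there.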
If I set up a simple web server online (eg nginx), and generate a very large random string (such that it is unguessable), and host that endpoint on my domain, eg
example.com/<very-large-random-string>
would I be safe in say, hosting a webapp at that endpoint with no authentication to store my personal information (like a scratch-pad or notes kind of thing)?
I know Google Docs does this; is there anything special one has to do (again, e.g. for nginx) to prevent someone from getting a list of all available pages?
I guess what I'm asking is: is there any way for a malicious actor to find out about the existence of such a page, preferably irrespective of which web server I use?
I'd be pretty alarmed if my online bank started using this system, but it should give you a basic level of security. Bear in mind that this is security through obscurity, which is rather frowned upon and will immediately turn into no security whatsoever the moment someone discovers the hidden URL.
To prevent this from happening, you will need to take a few precautions:
Install an SSL certificate on your server, and always access the URL via https, never via http (otherwise the URL path will be sent in plain view, visible to everyone along the way).
Make sure your secure document contains no outgoing links. This includes not only hyperlinks (<a href="...">) but also embedded images, stylesheets, scripts, media files and so on. Otherwise the URL will be leaked to other domains via the Referer request headers.*1
(A bit of a no-brainer, but) make sure there are also no inbound links to this page. Although they aren't so common now, web hosts used to generate automatic "web stats" pages showing the traffic to each web domain. Some content management systems generate a site map automatically. This would be just as bad.
Disable directory browsing on your server. In other words, make sure that someone who visits the directory level above your hidden directory isn't presented with a list of subdirectories.
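Since you asked about nginx specifically: it does not generate directory listings unless you explicitly enable them, but a minimal sketch to make that last point explicit (the server name and paths are placeholders) would be:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    root /var/www/example;

    location / {
        # Never generate a listing for directories without an index file.
        autoindex off;
    }
}
```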
Bear in mind that the URL will always be visible in the address bar and browser history, and possibly in other places like the browser's cookie jar. Your browser will probably also offer the rest of the URL via auto-complete when someone types the domain into the address bar.
*1: Actually, your browser will only send a Referer header when you access other https pages, but still...
I've set up Edge Caching to cache HTML content. It works perfectly well when resources are hit either by a browser or via Curl. In both cases, the first request warms the cache and the second request is served directly from Cloudflare.
Through my logs, however, I've noticed that crawlers such as Bing, Yahoo, and Google do not appear to warm the cache.
When I visit URLs previously hit by a crawler in the browser or via curl, that subsequent request hits my origin server as well (according to my server logs).
Is this a matter of plan size (regular vs. Enterprise), bad configuration, or does Cloudflare special-case crawler user agents?
CloudFlare's cache is per edge location: a crawler warms only the data center its request happens to hit. If your site isn't visited from the locations Google would ordinarily crawl from, the content might not be warm in the CloudFlare cache that your own browser or curl request gets routed to.
You might see some benefit from setting a higher Edge Cache Expire TTL in CloudFlare, depending on how frequently search engines crawl your site; in order to do this you need to use CloudFlare's Page Rules.
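As a rough sketch of automating that via CloudFlare's v4 Page Rules API (the zone ID, API token, URL pattern, and 24-hour TTL below are all placeholders, not recommendations):

```php
<?php
// Hypothetical example: create a Page Rule that caches everything under
// example.com/* at the edge for 24 hours.
$zoneId  = 'YOUR_ZONE_ID';
$payload = json_encode([
    'targets' => [[
        'target'     => 'url',
        'constraint' => ['operator' => 'matches', 'value' => 'example.com/*'],
    ]],
    'actions' => [
        ['id' => 'cache_level',    'value' => 'cache_everything'],
        ['id' => 'edge_cache_ttl', 'value' => 86400],
    ],
    'status' => 'active',
]);

$ch = curl_init("https://api.cloudflare.com/client/v4/zones/$zoneId/pagerules");
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $payload,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => [
        'Authorization: Bearer YOUR_API_TOKEN',
        'Content-Type: application/json',
    ],
]);
echo curl_exec($ch);
```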
If you want something more bespoke, it's probably best you reach out to CloudFlare's Enterprise Sales team.
I was using Fiddler to see, out in the field, how web sites use cookies in their login systems. Although I have some HTTP knowledge, I'm just learning about cookies and how they are used within sites.
Initially I assumed that when submitting the form I'd see no cookies sent, and that the response would contain some cookie info that would then be saved by the browser.
In fact, just the opposite seems to be the case. It is the request that's sending in info, and the server returns nothing.
When fiddling about with the issue, I noticed that even with a browser cleared of cookies, the client seems to always be sending a RequestVerificationToken to the server, even when just looking around without being signed in.
Why is this so?
Thanks
Cookies are set by the server with the Set-Cookie HTTP response header, and they can also be set through JavaScript.
A cookie has a path. If the path of a cookie matches the path of the document that is being requested, then the browser will include all such cookies in the Cookie HTTP request header.
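A minimal PHP sketch of both directions (the cookie name and value are just for illustration):

```php
<?php
// Server response: ask the browser to store a cookie for the whole site.
// This emits a "Set-Cookie: theme=blue; Path=/; Secure; HttpOnly" header.
setcookie('theme', 'blue', [
    'path'     => '/',   // matched against the path of future requests
    'secure'   => true,  // only ever sent over HTTPS
    'httponly' => true,  // not readable from JavaScript
]);

// A later request: the browser sends the cookie back in the Cookie request
// header, and PHP exposes it via $_COOKIE.
$theme = $_COOKIE['theme'] ?? 'default';
echo 'Your theme is: ' . htmlspecialchars($theme);
```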
You must make sure to be careful when setting or modifying cookies in order to avoid CSRF (cross-site request forgery) attacks against your users. As such, it might be useful to include a hidden, unique secret within your login forms, and verify that secret prior to setting any cookies. Alternatively, you can simply check that the HTTP Referer header matches your site. Otherwise, a malicious site can copy your form fields, recreate a login form to your site on their own site, and call form.submit(), effectively logging out your user, or performing a brute-force attack on your site through unsuspecting users who happen to be visiting the malicious website.
The RequestVerificationToken that you mention is not part of the HTTP cookie mechanism itself; it sounds like an implementation detail that sites written in a specific framework (the name matches ASP.NET's anti-forgery token) use to protect their cookie-setting pages against exactly this kind of cross-site request forgery.
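For illustration, the generic version of that hidden-secret pattern in PHP might look like this (the session key and form field names are assumptions, not any standard):

```php
<?php
session_start();

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // Reject the login attempt unless the hidden secret round-tripped intact.
    if (!hash_equals($_SESSION['csrf_token'] ?? '', $_POST['csrf_token'] ?? '')) {
        http_response_code(403);
        exit('Cross-site request rejected.');
    }
    // ... safe to check credentials and set cookies here ...
}

// When rendering the form: generate a one-time secret and remember it.
$_SESSION['csrf_token'] = $_SESSION['csrf_token'] ?? bin2hex(random_bytes(32));
?>
<form method="post" action="/login.php">
  <input type="hidden" name="csrf_token"
         value="<?= htmlspecialchars($_SESSION['csrf_token']) ?>">
  <input name="username">
  <input type="password" name="password">
  <button>Log in</button>
</form>
```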
When you hit a page on a website, the response (the page that you landed on) usually contains instructions from the server, in the HTTP response headers, to set some cookies.
Websites may use these to track information about your behavior or to save your preferences, short term or long term.
A website may do so on your first visit to any page, or on your visit to one particular page.
The browser would then send all cookies that have been set with every subsequent request to that domain.
Think about it: HTTP is stateless. You landed on the home page and clicked to set your background to blue. Then you went to a gallery page. The next request goes to the server, but the server has no idea about your background color preference.
Now, if the request contained a cookie telling the server about your preference, the website could serve you the right preference.
Now, this is one way. Another way is a session. Think of cookies as information stored on the client side. But what if the server needs to store some temporary information about you on the server side, information that is maybe too sensitive to be exposed in cookies, which live on the client and are easily intercepted?
Now you would ask: but HTTP is stateless. Correct. But the server can keep information about you in a map whose key is the session ID. That session ID is set on the client side as a cookie, or resent with every request as a parameter. Now the server only receives the key, but it can look up information about you, like whether you are logged in successfully, what your role in the system is, and so on.
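A minimal PHP sketch of that lookup (the 'bg' preference key is just an example):

```php
<?php
// session_start() reads the session-ID cookie the browser sent (or sets a
// new one) and uses it as the key into the server-side session store.
session_start();

// Home page: remember a preference on the server side.
if (isset($_GET['bg'])) {
    $_SESSION['bg'] = $_GET['bg'];   // e.g. /?bg=blue
}

// Gallery page (a later, otherwise stateless request): the session ID in
// the cookie lets the server look the preference back up.
$background = $_SESSION['bg'] ?? 'white';
echo '<body style="background: ' . htmlspecialchars($background) . '">';
```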
Wow, that's a lot of text, but I hope it helped. If not, feel free to ask more.
I would like to create a web application with the admin/checkout sections being secured. Assuming I have SSL set up for subdomain.mydomain.com, I would like to make sure that all that top-secret stuff ;) like checkout pages and the admin section is transferred securely. Would it be OK to structure my application as below?
subdomain.mydomain.com
    adminSectionFolder
        adminPage1.php
        adminPage2.php
    checkoutPagesFolder
        checkoutPage1.php
        checkoutPage2.php
        checkoutPage3.php
    homepage.php
    loginPage.php
    someOtherPage.php
    someNonSecureFolder
        nonSecurePage1.php
        nonSecurePage2.php
        nonSecurePage3.php
    imagesFolder
        image1.jpg
        image2.jpg
        image3.jpg
Users would access my web application via http, as there is no need for SSL for the homepage and similar pages. Checkout/admin pages would have to be accessed via https though (I would ensure that via .htaccess redirects). I would also like to have a login form on every page of the site, including non-secure pages. Now my questions are:
1. If I have a form on a non-secure page, e.g. http://subdomain.mydomain.com/homepage.php, and that form sends data to https://subdomain.mydomain.com/loginPage.php, is the data sent encrypted as if it were sent from https://subdomain.mydomain.com/homepage.php? I do realize users will not see the padlock, but the browser should still encrypt it, right?
EDIT: my apologies.. above in bold I originally typed http but meant https, my bad
2. If on a secure page, loginPage.php (or any other page accessed via https, for that matter), I created a session, a session ID would be assigned, and in the case of my web app something like the username of the logged-in user would be stored. Would I be able to access these session variables from http://subdomain.mydomain.com/homepage.php to, for example, display a greeting message? If the session ID is stored in cookies then I assume that would be trouble, but could someone clarify how it should be done? It seems important to have the username and password sent over SSL.
3. Related to the above question, I think... would it actually make any sense to have the login secured via SSL, so the username/password are transferred securely, and then have the session ID transferred with no SSL? I mean, wouldn't it really be the same if someone caught the username and password being transferred as if they caught the session ID? Please let me know if I make sense here, because it feels like I'm missing something important.
EDIT: I came up with an idea, but again, please let me know if it would work. Given the above, and assuming that sharing a session between http and https is only as secure as logging a user in via plain http (not https), I guess on all non-secure pages, like the homepage etc., I could check whether the user is already logged in, and if so redirect from PHP to the https version of the same page. So the user fills in the login form on homepage.php; over SSL the details are sent to the backend, probably https://.../homepage.php. A script on http://.../someOtherPage.php would always check whether the session exists, and if so redirect the user to the https version of that page, https://.../someOtherPage.php. Would that work?
4. To avoid the browser popping up the message "this page contains non-secure items...", should my links to CSS, images and other assets, e.g. in the case of https://subdomain.mydomain.com/checkoutPage1.php, be absolute, like "/images/image1.jpg", or relative, like "../images/image1.jpg"? I guess one of those has to work :)
Wow, that's a long post; thanks for your patience if you got this far, and thanks for any answers :) Oh yeah, and I use PHP/Apache on shared hosting.
If the SSL termination is on the web server itself, then you'll probably need to configure separate document roots for the secure and non-secure parts. While you could specify that these both reference the same physical directory, you're going to get tied in knots switching between the parts. Similarly, if your SSL termination is before the web server, you've got no systematic separation of the secure and non-secure parts.
It's a lot tidier to separate out the secure and non-secure parts into separate trees. Note that if you have non-SSL content on a secure page, the users will get warning messages.
Regarding your specific questions:
1. NO. Whether data is encrypted depends on where it is going to, not where it is coming from.
2. YES, but only if you do not set the 'secure' flag on the session cookie. Note that if you follow my recommendations above, you also need to ensure that the cookie path is set to '/' (see the sketch after this list).
3. The page which processes the username and password must be secure. If not, you are exposing your clients' authentication details (most people use the same password for all the sites they visit), and anyone running a network sniffer or proxy would have access to them. Your EDIT left me a bit confused, though. SSL is computationally expensive and slow, so you want to minimise its use, but you need to balance this against your users' perception of security: don't keep switching between SSL and non-SSL, and although it is perfectly secure for users to enter their details on a page served over non-SSL which submits to an SSL page, the users may not understand this distinction.
4. See the first part of my answer above.
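As a minimal PHP sketch of points 2 and 3 combined (the session key names and redirect logic are illustrative assumptions, including the asker's EDIT idea of pushing logged-in users onto https):

```php
<?php
// Run this before session_start() on every page. Path '/' makes the session
// cookie visible to both the secure and non-secure parts of the site, and
// 'secure' => false deliberately allows it over plain http -- with the
// caveat from point 3 that the session ID is then sniffable.
session_set_cookie_params(['path' => '/', 'secure' => false, 'httponly' => true]);
session_start();

$isHttps = !empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off';

// Once someone is logged in, keep them on https.
if (!empty($_SESSION['username']) && !$isHttps) {
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
    exit;
}

echo empty($_SESSION['username'])
    ? 'Welcome, guest.'
    : 'Hello, ' . htmlspecialchars($_SESSION['username']) . '!';
```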
C.
I have a directory with my media files, and I need them not to be displayed on other sites.
The server doesn't support .htaccess, because it uses nginx.
How can I enable hotlink protection for my files?
Thank you.
The easiest way would be to check the Referer header in the HTTP request. Basically, if that header does not contain a URL from your site, then this could be hotlinking.
This has the following problems:
The Referer header can be forged -> hotlinking still works.
Not all user agents necessarily send the Referer header -> a legitimate user might not get the content.
You could also set a cookie when the user is browsing your site, and check for the existence of that cookie when the user accesses the streaming content.
If you decide to go the referrer route: the details may be dated, but Igor gives an example of referrer mapping for image hotlink protection that might be useful here: http://nginx.org/pipermail/nginx/2007-June/001082.html
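For a more current nginx, a minimal sketch using the stock referer module would look something like this (the file extensions and example.com domain are placeholders):

```nginx
# Inside your server { } block: deny media requests whose Referer
# points at another site.
location ~* \.(jpe?g|png|gif|mp4|flv)$ {
    # "none" allows direct visits (no Referer header at all);
    # "blocked" allows Referers stripped by proxies or firewalls.
    valid_referers none blocked server_names *.example.com;

    if ($invalid_referer) {
        return 403;
    }
}
```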
If you are using memcached, you could also store client IP addresses for a time and only serve your streaming media if an unexpired client IP is found in the cache. The client IP gets cached during normal browsing, ensuring that the person viewing your streaming content has also recently been visiting your site.
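A rough PHP sketch of that idea, assuming the Memcached extension and media requests routed through a front controller (the key prefix, media.php name, and 300-second window are all arbitrary assumptions):

```php
<?php
// Pair a normal-page snippet that records the visitor with a media
// controller that checks for the record.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$key = 'recent_visitor_' . $_SERVER['REMOTE_ADDR'];

if (basename($_SERVER['SCRIPT_NAME']) === 'media.php') {
    // Media request: only serve if this IP browsed the site recently.
    if (!$cache->get($key)) {
        http_response_code(403);
        exit('Hotlinking not allowed.');
    }
    // ... stream the file here ...
} else {
    // Ordinary page view: remember this IP for five minutes.
    $cache->set($key, time(), 300);
}
```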
On my HostGator site, they used nginx as a proxy in front of Apache (nginx + Apache); maybe that will help you. Also, if you have access to the logs and you see a lot of traffic coming from one IP, I would investigate, and if it points to a site, block that other web server. PHP's file_get_contents doesn't get stopped by .htaccess or anything else I know of besides blocking the IP.