If browsers use HTTP to connect to a server, and in any web application hitting a URL produces a request that is received by a controller mapped to that URL, can we say browsers are also REST clients?
That would depend entirely on what you use as a browser, but generally no: a browser lacks meaningful tooling to probe a RESTful server out of the box, and comes with features that a REST client application would not otherwise need, so it would not be considered a REST client. A browser might be considered a more generic HTTP client, but even that does not fully describe the problem domain of a browser (rendering, scripting, etc.). Even if you build a web interface that probes a REST service by submitting forms, that does not make the browser a REST client; rather, your website/web application would be the REST client application.
Yes,
the protocol the browser uses to communicate with the web server is, at least initially, clearly a RESTful protocol.
Nothing more is necessary.
But it can get a bit more complicated.
The browser can fetch application code (JavaScript) in a RESTful way (e.g. via GET) and execute that code, which can in turn communicate RESTfully (e.g. via Ajax), as in the sketch below.
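As a minimal sketch (with a hypothetical URL), this is the kind of code a browser might fetch with a GET and then execute, and which then issues further RESTful requests itself:

```typescript
// Script code the browser itself retrieved via GET; once running, it can
// issue further RESTful requests. The API URL is hypothetical.
async function loadUser(id: number): Promise<unknown> {
  const res = await fetch(`https://api.example.com/users/${id}`, {
    method: "GET",
    headers: { Accept: "application/json" },
  });
  if (!res.ok) throw new Error(`GET failed with status ${res.status}`);
  return res.json(); // the resource representation
}

loadUser(1).then(user => console.log(user));
```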
There is a web application and a web service, hosted on separate web servers. The web application is a consumer of the web service.
The question is: on which web server should the HTTP security headers (e.g. Strict-Transport-Security, X-XSS-Protection, X-Frame-Options, X-Content-Type-Options, etc.) be set?
On the web service's server, on the web application's (the consumer's) server, or on both? Which is reasonable?
If you understand what each of them does, you will be able to tell which is needed where.
Strict-Transport-Security is about ensuring that clients only use HTTPS (and not plain HTTP) to access content. This of course requires a compliant client. All modern browsers comply, and hopefully some other clients also honor this header. Even if they don't, you should just send it from both services and web apps.
X-XSS-Protection explicitly enables some cross-site scripting protection in browsers (practically, this means not running JavaScript on a page when the request contained that same JavaScript, to prevent some reflected XSS). It is still considered good practice to send this for web apps, though such filtering became the default in browsers (and newer browsers have since deprecated the header and removed the filter entirely), and it does not prevent more advanced XSS at all (the app itself needs to be correctly implemented). For backend services that only serve text/json and not text/html this is irrelevant, and it is also irrelevant if the client is not a browser (but, for example, a web app). You can still send it from services too; it won't do any harm.
X-Frame-Options mitigates clickjacking, among some other more niche attacks. It basically prevents the browser from opening the page in a frame. If the client is not a browser this doesn't make a lot of sense; however, it might have implications for data leaks if used together with CORS headers. So again, you can just send this from services too, but it is not strictly necessary in the base case.
X-Content-Type-Options: nosniff mitigates an attack that used to be performed against older versions of Internet Explorer (and maybe some other browsers as well), where the browser would sniff file contents and act on the guessed content type instead of the declared one, especially during file downloads. This is probably no longer feasible with modern browsers, but the best practice is still to send it. It has no effect for non-browser clients, but does no harm either.
So in short, most of these are only relevant for web applications, and not for backend services that do not serve HTML. However, you can and probably should still send these from services too; they will simply do nothing in most cases, and might help if, for example, an attacker somehow makes a user open a service response in a browser.
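As a rough sketch of the "send them from both" advice, assuming an Express-style server (the header values are common defaults, not mandates):

```typescript
import express from "express";

const app = express();

// Send the discussed headers from both the web app and the service;
// where they don't apply they simply do nothing.
app.use((_req, res, next) => {
  res.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
  res.setHeader("X-XSS-Protection", "1; mode=block");
  res.setHeader("X-Frame-Options", "DENY");
  res.setHeader("X-Content-Type-Options", "nosniff");
  next();
});

app.get("/api/data", (_req, res) => res.json({ ok: true }));

app.listen(3000);
```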
I'm developing an app that needs to get info from a third-party API. I've been developing it as a web application with Vue.js. For the requests I tried axios, jQuery and the Fetch API, but I'm having trouble with the preflight requests: it seems the API is not handling the OPTIONS requests properly, and it throws a 405 error. (I made a GET request to the same URL through Postman and it worked normally, and I also edited an OPTIONS request in the Firefox network panel to become a GET request and it returned a 200 status.)
Now I'm thinking of abandoning the idea of a web application and building it as a desktop application instead, but I need to know whether preflight requests are default behavior in that kind of app too.
Thanks for your attention!
No, CORS preflight requests are made by browsers, and are necessary due to the browser security model. They would not be used by a desktop application.
You can easily test this with curl, Postman, etc. It sounds like you tried this, but the details you've described are off: don't change anything to GET. Use the actual request you're trying to make, but issue it outside the browser context, as in the sketch below. If the API responds appropriately, then it should work in a desktop application.
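For example, a quick replay outside the browser could look like this (Node 18+ ships a global fetch; the URL and headers are placeholders for your actual request):

```typescript
// Replay the real request outside the browser, where no preflight is sent.
async function main() {
  const res = await fetch("https://thirdparty.example.com/resource", {
    method: "GET",
    headers: { Accept: "application/json" },
  });
  console.log(res.status);       // should match what Postman showed (200)
  console.log(await res.text()); // the body your app would receive
}

main();
```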
I know this question may be too generic, so to narrow it down, here is a brief description:
I'm planning to drop the ASP.NET UpdatePanel and move to Ajax via jQuery. I am afraid that, because of the plain, client-side nature of JavaScript (and consequently of jQuery code), anyone looking at my web page's source can see the URLs of the web services I'm calling and what is being passed to them.
When using the UpdatePanel for these kinds of operations, I'm sure the web service calls are made server-side, and I have no concerns about information on calls to sensitive web services being exposed publicly; but now that I'm planning to use Ajax via jQuery, it worries me a lot.
Are my concerns reasonable, and if so, what are the best solutions for avoiding the threat of web-service-calling information being exposed?
Clarification: by UpdatePanel I mean a chain of techniques including ASP.NET AJAX, code-behind, and server-side DLLs performing async server-side operations, as opposed to jQuery Ajax, which requires web services for interacting with the server.
There is no way on the internet to protect your web services all the time just by hiding the URL. Also, when your UpdatePanel makes the web service call from the server, I'm not sure you are exploiting the true power of AJAX.
One way to secure your web service is to use authentication on the web service side: for example, requiring an authentication key to be sent every time the resource is accessed. This is very common; many public web services protect themselves with auth keys, OpenID implementations being one example. If you do not want to change the web service logic, I think the jQuery way of doing AJAX is not a secure option.
Here's a thought: you can have two levels of web service, one open to all, which you use from jQuery. From that web service, on the server side, call the other, secured web service. You can also restrict its incoming requests to specific machine IPs (see the sketch below).
That way, nobody other than your own server can access the web service, which is kept securely behind the firewall. It is similar to what we do when connecting to a database server from an application server.
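As a rough sketch of that IP restriction, assuming an Express-style internal service (the allowed address is a hypothetical internal IP of your web server):

```typescript
import express from "express";

const app = express();

// Only the web server's internal address may call this service.
const ALLOWED_IPS = new Set(["10.0.0.4"]); // hypothetical web server IP

app.use((req, res, next) => {
  if (!ALLOWED_IPS.has(req.ip ?? "")) return res.sendStatus(403);
  next();
});

app.get("/internal/data", (_req, res) => res.json({ ok: true }));

app.listen(8080);
```

In practice you would usually enforce this at the firewall as well; the application-level check is just a cheap second layer.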
Let me know if this helps.
I'm going to state the problems my answer is hoping to solve:
Assuming you host your services on a machine other than the web server, the problem is that you give potential attackers the name/address of those machines.
Attackers can write scripts/bots to scrape your data.
Attackers can focus on your web services and try to hack them/gain access to your network.
Attackers can try to perform a DoS/DDoS on your web services.
The solution I've used in the past is to create a lightweight proxy on the web server, so that all AJAX calls simply point back to the current domain. When a call comes in, it is routed to the appropriate web service, which is hosted somewhere internally on the network (see the sketch after the list below).
It creates one additional hop on the network, but it also has these benefits:
It hides the actual IP of the machine hosting your services.
You can easily lock down that one web server and monitor unusual activity. If you see a spike in activity, you can potentially shut down the web services. (If you use a different machine, you'd have to monitor two boxes. Not a huge problem, but easier to monitor just one.)
You can easily put a distributed caching layer in the proxy. This protects you from load/denial of service (DoS) attacks and obviously supports normal web service traffic.
You can hide the authentication at the proxy level. The public calls won't betray your authentication scheme; otherwise an attacker can see whatever tokens, keys or secrets you use. Putting a proxy on the web server hides that information. The data will still flow through, but again you can monitor it.
The real benefit in my opinion is that it reduces the surface area of your application which narrows what an attacker can do.
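A minimal sketch of such a proxy, assuming an Express web server and an internal service at a hypothetical address (the token header illustrates hiding authentication at the proxy level):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Internal service address; hypothetical, and never exposed to clients.
const INTERNAL_SERVICE = "http://10.0.0.5:8080";

// All AJAX calls point back at the current domain under /api/...;
// the proxy forwards them to the internal web service.
app.all("/api/*", async (req, res) => {
  const targetUrl = INTERNAL_SERVICE + req.originalUrl.replace(/^\/api/, "");
  try {
    const upstream = await fetch(targetUrl, {
      method: req.method,
      headers: {
        "Content-Type": "application/json",
        // Authentication is added here, server-side, so the secret never
        // appears in the page source. SERVICE_TOKEN is hypothetical.
        Authorization: `Bearer ${process.env.SERVICE_TOKEN}`,
      },
      body: ["GET", "HEAD"].includes(req.method)
        ? undefined
        : JSON.stringify(req.body),
    });
    res.status(upstream.status).send(await upstream.text());
  } catch {
    res.status(502).send("Upstream service unavailable");
  }
});

app.listen(3000);
```

This is also the natural place to hang the monitoring and caching mentioned above, since every call funnels through one process.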
Since you refer to ASP.NET, know that its ViewState can easily be decoded. There is no foolproof way to protect your code (let alone the URLs it calls).
If your web services are called with parameters that could allow unrestricted and dangerous actions, then you'd better start using some users/roles/rights management.
If you're worried about man-in-the-middle attacks, your best option is to use HTTPS.
I'm very new to web services (please note, not WCF but the old-fashioned .asmx files).
Now, I may be likening this too much to ports, but if I expose a port on my web-facing server then it is exposed to attacks as well as to my own use; there are tools which can scan to see what ports are open.
Is the same true of a web service? Don't get me wrong: I know each service should be coded well enough that nothing malicious can happen, or kept obscure enough that a calling class doesn't know the 'contract' to invoke it, but that's not the question (and I guess port flooding could still occur?). If I put up a few web services on a server, is there a tool/program which can detect them (by name)?
Yes. A web service is basically a web page that takes arguments and responds with a formatted result that can be read more easily by a program (technically both are the result of an HTTP request and response; there are other mechanisms as well, but the typical one is the HTTP protocol).
If you type the link to your web service into a browser, you will see that you are presented with an interface that allows you to "execute" its services.
Therefore you need the same security as with a web page, meaning login or a check of credentials, tokens, signing, encryption and so forth (preferably over an SSL/TLS connection).
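As a minimal sketch of such a credential check, assuming an Express-style endpoint and a hypothetical shared API key:

```typescript
import express from "express";

const app = express();

// Hypothetical shared secret, e.g. supplied via an environment variable.
const EXPECTED_KEY = process.env.API_KEY;

// Treat the service exactly like a page that requires login:
// no valid credential, no data.
app.get("/service/getData", (req, res) => {
  if (!EXPECTED_KEY || req.get("X-Api-Key") !== EXPECTED_KEY) {
    return res.sendStatus(401);
  }
  res.json({ data: "only for authenticated callers" });
});

app.listen(3000);
```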
I have a Flex frontend connecting via RemoteObject to Zend Framework's Zend Amf. This is my only means to transport data between client layer (Flex) and the application and persistence layers (LAMP with Zend Framework).
Some ways I can address security are as follows:
I can address TLS by using mx.messaging.channels.SecureAMFChannel in my services-config.xml file and ensuring the Flash player is loaded in an HTTPS wrapper and is in fact using HTTPS, since the AMF protocol is layered on top of HTTP.
RemoteObject has a setCredentials method with which I can pass AMF authentication headers to protect user-related data. Assuming TLS is actually secure, I can expose methods on the endpoint after authenticating the user.
I can protect against cross-site scripting and other Flash vulnerabilities with a properly set-up crossdomain.xml.
The question I have is: how do I protect my endpoint against another AMF consumer? For instance, if there were an AMF consumer other than my Flex client (not Flash, so not bound by crossdomain.xml and the Flash sandbox security) that knew my endpoint, what would stop it from using the methods the endpoint exposes?
As far as I know, I essentially need a way to authenticate my Flex application itself against my Zend Amf endpoint. After AMF consumer authentication, I have some of the security mechanisms mentioned above to protect certain pieces of data (like user authentication). I cannot embed some sort of authentication mechanism in my Flex SWF, because the SWF is vulnerable to decompilation (the SWF cannot be trusted). While sensitive data is protected via user authentication, the unprotected data is hardly public, yet as far as I can tell it is totally open for public consumption.
You cannot prevent anyone from sending arbitrary HTTP requests to your endpoint. If your Flex application authenticates users against the server, and the server only serves sensitive data when the request carries proper credentials / session IDs, everything is fine. What you cannot do is authenticate the user and only store within the client that the user is authenticated. Since HTTP is a stateless protocol, the server must be able to authorize each request individually, as sketched below. It's the same with "regular" websites and AJAX.
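A minimal sketch of that per-request authorization, assuming an Express server with the express-session middleware (the login logic is elided and the endpoint names are hypothetical):

```typescript
import express from "express";
import session from "express-session";

const app = express();
app.use(session({ secret: "change-me", resave: false, saveUninitialized: false }));

// Hypothetical login endpoint: the *server* records that the session
// is authenticated; nothing is trusted from the client alone.
app.post("/login", (req, res) => {
  // ...verify credentials here...
  (req.session as any).userId = 42;
  res.sendStatus(204);
});

// Every sensitive request is authorized individually on the server.
app.get("/api/secret", (req, res) => {
  if (!(req.session as any).userId) return res.sendStatus(401);
  res.json({ secret: "only for authenticated sessions" });
});

app.listen(3000);
```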
An AMF server cannot know who called it unless some sort of authentication is provided. Any HTTP request that Flex sends could be emulated by non-Flex means, and, as you correctly noted, any embedded key could be extracted. So there's no generic solution for this, though you could probably work something out by giving your client certificates for the HTTPS connection and making the server check them.
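A sketch of the client-certificate idea, using Node's https module for illustration rather than the Zend stack (all file paths are hypothetical):

```typescript
import https from "https";
import fs from "fs";

// Mutual TLS: the server demands a certificate from the client and
// rejects connections whose certificate wasn't signed by our CA.
const server = https.createServer(
  {
    key: fs.readFileSync("server-key.pem"),
    cert: fs.readFileSync("server-cert.pem"),
    ca: fs.readFileSync("client-ca.pem"), // CA that issued client certs
    requestCert: true,        // ask the client for a certificate
    rejectUnauthorized: true, // refuse clients without a valid one
  },
  (_req, res) => {
    // Only clients presenting a valid certificate ever reach this handler.
    res.end("hello, authenticated AMF consumer");
  }
);

server.listen(8443);
```

Distributing the client certificate inside a SWF has the same decompilation caveat you mentioned, so this raises the bar rather than eliminating the problem.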