How to call a JSON service in a secure manner in ASP.NET

Hello, I have certain APIs which I am getting from service providers. The keys contain a secured ID and password that we need to send with every API request through JSON.
Presently I am using:
$.ajax({
    url: "http://api",
    dataType: 'jsonp',
    data: { 'UserName': 'abce', 'Password': 'Password' },
    success: function (results) {
        console.log(results);
    }
});
So is there any way to avoid exposing those credentials in the JSON request? I am creating the application in ASP.NET. Can you suggest what we can do to encrypt or hide them?

No, there is no way if you make the call from JavaScript. One possibility is to have a server-side script on your domain which acts as a bridge. You would then send the AJAX request to your script, which in turn delegates the call to the remote service. You don't need JSONP in this case.
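A minimal sketch of that bridge, assuming an ASP.NET generic handler; the handler name, the appSettings keys and the remote URL are illustrative, not something from the original answer:

using System.Configuration;
using System.Net;
using System.Web;

// ApiProxy.ashx - forwards requests to the provider so the credentials never leave the server.
public class ApiProxyHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Credentials come from web.config, not from the browser.
        string userName = ConfigurationManager.AppSettings["ApiUserName"];
        string password = ConfigurationManager.AppSettings["ApiPassword"];

        string remoteUrl = "http://api?UserName=" + HttpUtility.UrlEncode(userName)
                         + "&Password=" + HttpUtility.UrlEncode(password);

        using (var client = new WebClient())
        {
            // Server-to-server call; relay the JSON straight back to the page.
            string json = client.DownloadString(remoteUrl);
            context.Response.ContentType = "application/json";
            context.Response.Write(json);
        }
    }

    public bool IsReusable { get { return true; } }
}

The page would then call ApiProxy.ashx with an ordinary same-domain $.ajax request (dataType: 'json'), and the UserName/Password values never appear in client-side code.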

No, there's no way to do that.
You can hide the information in transit if you go over HTTPS (which gives you an encrypted tunnel). This prevents eavesdropping, but not a man-in-the-middle if the attacker is the one providing the SSL endpoint.
I suggest you use sessions plus HttpOnly cookies, which makes sense in this case. Even if the session is captured, the identity can't be hijacked. Put the communication over HTTPS and you have done what you can.
[If the API is provided by a 3rd party, then you have no chance at all.]
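As a small illustration of the sessions + HttpOnly cookie advice, here is a sketch of issuing a cookie that script cannot read and that is only sent over HTTPS; the cookie name, value and lifetime are placeholders:

using System;
using System.Web;

public static class SecureSessionCookie
{
    public static void Issue(HttpResponse response, string sessionToken)
    {
        var cookie = new HttpCookie("MySession", sessionToken)
        {
            HttpOnly = true,  // not readable from JavaScript (document.cookie)
            Secure = true,    // only transmitted over HTTPS
            Expires = DateTime.Now.AddMinutes(30)
        };
        response.Cookies.Add(cookie);
    }
}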

Related

Implementing SSE (Server Sent Events) security

I am a little new to SSE (server-sent events) implementation.
What I am trying to do is maintain a security check before connecting to SSE URLs.
For example, I have an SSE URL which the clients will connect to through EventSource:
new EventSource("http://my.example.com/deviceData");
So, not every client should be able to connect to it. I have to restrict it to some clients. How can I do that?
A code sample will be really helpful.
If the restriction is by IP, your server-side script can look at the request headers and reject based on that. (It could also reject based on any of the other headers, but most of them can be forged, e.g. User-Agent.)
If you are after users logging in, you should use cookies. The simplest way is to have a login form on my.example.com that validates the user and sends back a cookie. That cookie will then be sent to your SSE script, which can use its contents to validate the user. (If using this approach, you may also want to use HTTPS URLs: make sure the login form and the SSE script are both on HTTPS, in that case.)
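A rough sketch of that cookie check for an ASP.NET backend, assuming the /deviceData endpoint is implemented as a generic handler; the cookie name and validation logic are illustrative assumptions:

using System.Web;

// DeviceData.ashx - only streams events to clients that present a valid cookie.
public class DeviceDataHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Reject clients that did not present the cookie issued by the login form.
        HttpCookie authCookie = context.Request.Cookies["AuthToken"];
        if (authCookie == null || !IsValidToken(authCookie.Value))
        {
            context.Response.StatusCode = 403;
            return;
        }

        // Stream server-sent events to the validated client.
        context.Response.ContentType = "text/event-stream";
        context.Response.CacheControl = "no-cache";
        context.Response.Write("data: " + GetDeviceData() + "\n\n");
        context.Response.Flush();
    }

    private static bool IsValidToken(string token)
    {
        // Placeholder: look the token up in your session store or database.
        return !string.IsNullOrEmpty(token);
    }

    private static string GetDeviceData()
    {
        // Placeholder for whatever payload /deviceData actually emits.
        return "{\"status\":\"ok\"}";
    }

    public bool IsReusable { get { return false; } }
}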

Filter response and store something in memcached using nginx+Lua

I have a backend which generates three JWT tokens - a reference token, an access token and a refresh token. The reference token stores a reference to the access token, which is used to access the API, and the refresh token is used to reissue the access token when it times out. The problem is I do not want to pass the access token to the client, but want to use nginx to store it in memcached. So, my whole task is to filter the response from the backend, which currently looks as simple as:
{"reference_token":"...","access_token":"...","refresh_token":"..."}
Nginx should filter this response, get access token from this response and store it in memcached. Finally, it should return to the client a new response:
{"reference_token":"...","refresh_token":"..."}
As you can see, there should be no access_token any more. The access token is something I am trying to secure: not to show it to the client or even pass it there. What I do not know is the best approach to implement this, and which Lua block I should use for this task. I know about body_filter_by_lua, but the documentation says:
Note that the following API functions are currently disabled within this context due to the limitations in NGINX output filter's current implementation
So, it seems like body filtering is rather limited and I'm not even sure if it is possible to call memcached API inside this block. So, how can I implement my task in real world? At least, what Lua (openresty) tricks should I use to approach this task?
You may issue a subrequest (e.g., ngx.location.capture) to your backend within your content handler, for example.
You can then filter the body as you want and use lua-resty-memcached, which uses the cosocket API.
The drawback of this approach is that you end up with a fully buffered proxy.

allow cross-domain requests to ASP.NET ScriptService

I've got a ASP.NET Webservice up and running using the [ScriptService] Attribute. From what I've read from this article:
http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx
ASP.NET by default does not allow JSONP requests (injected into the DOM via script tags) in order to deny cross-domain requests. It does so by taking 2 measures:
1) only accept POST requests (script injection via script tags always does GET)
2) deny requests sending an HTTP Content-Type header other than "application/json" (which browsers will not send).
I am familiar with the cross-domain issues, I know what JSONP is, and I fully understand why ASP.NET is restricted in that way by default.
But now I have my webservice, which is a public one and should be open to everybody. So I explicitly need to enable cross-domain requests via JavaScript to my webservice, so that external websites can retrieve data from it using jQuery and the like.
I've already covered step (1) to allow requests via GET by modifying the ScriptMethod attribute this way: [ScriptMethod(UseHttpGet=true)]. I've checked with jQuery; GET requests now work (on the same domain). But how do I fix point (2)?
I know about the Allow-Origin-* headers some browsers support, but AFAIK it's not a standard yet, and I don't want to force my users / customers to modify their HTTP headers to use my webservice.
To sum it up: I need the good practice for enabling cross-domain requests to a ScriptService for public webservices via JSON. I mean, there MUST be a way to have a public webservice; that is what most webservices are about?
Using legacy ASMX services for something like this seems like a lost cause. Try WCF, which due to its extensible nature can very easily be JSONP enabled. So if you are asking for best practices, WCF is the technology you should be building web services on for the .NET platform.
Or, if you really can't afford to migrate to .NET 3.5 at the moment, you could also write a custom HTTP handler (.ashx) to do the job.
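A minimal sketch of such a handler, wrapping the JSON payload in the callback supplied by the caller (JSONP); the handler name, the callback parameter and the payload are illustrative:

using System.Web;

// JsonpHandler.ashx - serves the same data as JSON or JSONP.
public class JsonpHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // JSONP requests arrive as GET with a ?callback=... parameter.
        string callback = context.Request.QueryString["callback"];
        string json = "{\"message\":\"hello\"}"; // build the real payload here

        if (!string.IsNullOrEmpty(callback))
        {
            // Wrap the JSON in the callback so the injected script tag executes it.
            context.Response.ContentType = "application/javascript";
            context.Response.Write(callback + "(" + json + ");");
        }
        else
        {
            // Plain JSON for same-domain callers.
            context.Response.ContentType = "application/json";
            context.Response.Write(json);
        }
    }

    public bool IsReusable { get { return true; } }
}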
The jQuery ajax() function does have a 'crossDomain' property.
Pasted from jQuery.ajax():
crossDomain (added 1.5)
Default: false for same-domain requests, true for cross-domain requests
If you wish to force a crossDomain request (such as JSONP) on the same domain, set the value of crossDomain to true. This allows, for example, server-side redirection to another domain.

Call an ASMX webservice in javascript without resetting ASP.NET authentication timers

Quick summary
Pinging back to a webservice in AJAX from the client keeps the user's session alive; I don't want this to happen.
More extensive summary
For a website we're developing, we need the client to ping back (in js) to a webservice (ASMX) on the server (IIS 7.5). This is to let the server know that the user is still on the site and hasn't browsed away to another site.
This is because our customer wants to let users lock records, but if they browse away to other sites, to let other people take over those locked records. Perhaps the distinction between the client being on the site but inactive and being on another site seems unimportant, but that's kinda irrelevant; I don't get to write the UI spec, I just have to make it work.
My problem is this: the ping stops the user from being timed out on the server through the standard forms authentication timeout mechanism. Not surprising, as there is that 30-second ping in the background keeping the session alive. Even though we want to know if the user is still on the site, we want the normal forms authentication timeout mechanism to be honoured.
I thought I might be able to fix this by removing the ASP.NET_SessionId and .ASPXAUTH cookies in the XMLHttpRequest that is the server ping, but I can't figure out how to do this.
This is how my web service & method are defined:
[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[ScriptService]
public class PingWS : WebService
{
    [WebMethod]
    public void SessionActive(string sessionID)
    {
        // does stuff here
    }
}
This is how I'm calling it in js (the request is over HTTPS):
$.ajax({
    type: "POST",
    url: "PingWS.asmx/SessionActive",
    contentType: "application/json; charset=utf-8",
    data: '{"sessionID":"' + aspSessionID + '"}',
    beforeSend: function (xhr) {
        xhr.setRequestHeader('Cookie', '');
        xhr.setRequestHeader('Cookie', 'ASP.NET_SessionId=aaa; .ASPXAUTH=bbb;');
    },
    dataType: "json"
});
I was trying with setRequestHeader, but that just appends to the header rather than overwriting it, and IIS is happy to ignore the junk I added.
I'm thinking maybe I should be trying to do this at the server end, somehow taking PingWS.asmx out of the loop so that it doesn't keep the session alive, but I'm not sure how to do this.
Although the title of the question is focused on clearing the cookie in the header, I'd be super happy if anyone points out that I'm being really stupid and there is actually a much better way of doing what I'm trying to do.
I'm thinking at this stage maybe I need to add something to the webmethod that tracks how long this particular page has been inactive, and use that knowledge to time out manually. That actually sounds pretty easy, so I think I'll do that for now. I'm still convinced that there must be an easy way to do what I originally wanted to do, though.
Update
I'm thinking I'm pretty screwed in terms of cookie manipulation here, as both the .ASPXAUTH and ASP.NET_SessionId cookies are HttpOnly, which means the browser takes them out of your hands; you can't access them via document.cookie. So I would say that leaves me with:
Updating my SessionActive webmethod to track each request so I can tell how long the user has been sitting idle on a page, and time out if need be
Marking the .asmx page somehow on the server end so that it's taken out of the normal authentication/session tracking flow
I know how to do 1, so I'll start there, but 2 seems much cleaner to me.
You could restrict the authentication cookie to a given Path:
<authentication mode="Forms">
<forms path="/admin" />
</authentication>
This way the authentication cookie will only be sent to the /admin portion of your site and if you put your web service at the root you can ping it with AJAX without sending the cookie.
Another option is to simply host this webservice into a separate virtual directory.
As you're finding, you're out of luck with client-side cookie manipulation (or client manipulation of any kind). Manipulating headers might be possible, but you'd have to intercept the traffic very early in the pipeline; I don't even know if it would be possible in the service itself, but a traffic manager like Zeus could do it. I don't believe it's possible to configure the session engine in such a way as to ignore a given combination of endpoint and client request, and although it should be possible to replace the entire session engine, that would be undocumented and extremely time consuming, I'd think.
Basically, you need to manipulate the traffic before it touches the service or you're not going to resolve this. Session was not designed to be variant.
There is a server side answer. Basically you disable sliding timeouts on forms authentication, then manually slide the timeout yourself, and skip it for the ping.
Pretty easy for me as every page uses the same root Site.Master and all pages inherit from the same base class.
Summed up here:
http://picometric.blogspot.com/2009/04/manual-sliding-expiration.html
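A rough sketch of that approach, assuming slidingExpiration="false" on the forms authentication element and that every normal page inherits from a base class like this one (the class name and the 30-minute window are assumptions). The ping service simply never runs this code, so it never extends the timeout:

using System;
using System.Web;
using System.Web.Security;
using System.Web.UI;

public class SlidingAuthPage : Page
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        SlideAuthenticationTicket();
    }

    private void SlideAuthenticationTicket()
    {
        HttpCookie authCookie = Request.Cookies[FormsAuthentication.FormsCookieName];
        if (authCookie == null) return;

        FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(authCookie.Value);
        if (ticket == null || ticket.Expired) return;

        // Re-issue the ticket with a fresh expiration - the "manual slide".
        var newTicket = new FormsAuthenticationTicket(
            ticket.Version, ticket.Name, ticket.IssueDate,
            DateTime.Now.AddMinutes(30), ticket.IsPersistent, ticket.UserData);

        authCookie.Value = FormsAuthentication.Encrypt(newTicket);
        authCookie.HttpOnly = true;
        Response.Cookies.Set(authCookie);
    }
}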

How to secure my generic handler calls?

I am creating a MySpace application, and for some database entries I am using generic handlers which I have hosted on another website. From my MySpace application I use AJAX calls to those handlers to perform the activities that I want. I want to know how I can make these AJAX calls secure. I mean, I want to be sure that the handlers are only being called by the MySpace app and not by entering the URL into the browser, etc. Any ideas?
You can secure your generic web handler with a UrlReferrer check, e.g.:
if (context.Request.UrlReferrer == null)
{
    context.Response.Write("Invalid Request");
    return;
}
In addition, when UrlReferrer is not null, you can check that the domain name matches your expected site, e.g.:
if (context.Request.UrlReferrer.ToString().IndexOf("http://www.tyamjoli.com") != -1)
{
    // Valid request
}
This is 100% impossible. Everyone has access to your JavaScript and can modify it however they want. They can use Tamper Data to view all the requests the browser makes and drop/modify/replay them.
I don't know much about MySpace apps, but is there a server component to them? If so, you could first request a "token" from the app, which would be the encrypted action plus some arbitrary timeout, say 3 seconds. The token is then passed to the generic handler, which decrypts it and checks the timeout. If valid, the decrypted action is performed.
Outside factors such as network latency and unsynchronized clocks could keep some actions from being performed. This should hamper simple replay attacks but is still vulnerable to a scripted attack.
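A hypothetical sketch of that token scheme: the server component issues a short-lived encrypted token and the generic handler validates it before acting. MachineKey.Protect/Unprotect is used purely for illustration here, and the purpose string and the freshness window are assumptions:

using System;
using System.Text;
using System.Web.Security;

public static class ActionToken
{
    private const string Purpose = "handler-action-token";

    // Called by the app's server component to create a token for an action.
    public static string Issue(string action)
    {
        string payload = action + "|" + DateTime.UtcNow.Ticks;
        byte[] protectedBytes = MachineKey.Protect(Encoding.UTF8.GetBytes(payload), Purpose);
        return Convert.ToBase64String(protectedBytes);
    }

    // Called by the generic handler: returns the action if the token is fresh, else null.
    public static string Validate(string token, TimeSpan maxAge)
    {
        try
        {
            byte[] bytes = MachineKey.Unprotect(Convert.FromBase64String(token), Purpose);
            string[] parts = Encoding.UTF8.GetString(bytes).Split('|');
            var issuedAt = new DateTime(long.Parse(parts[1]), DateTimeKind.Utc);

            if (DateTime.UtcNow - issuedAt > maxAge)
                return null; // expired, e.g. older than 3 seconds

            return parts[0]; // the decrypted action
        }
        catch
        {
            return null; // tampered or malformed token
        }
    }
}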
