When I set the domain in the dashboard to xxxxx.com (where xxxxx is my domain), I am still unable to make API calls from that domain.
The error I get when I attempt to make an API call is:
"Unauthorized. The request is not from an authorized source."
There is no issue with the code itself, as it was working before I set a specific domain. I've also ensured that the domain typed into the dashboard is correct.
Any help would be appreciated; I will provide more information if needed.
It takes a bit of time for the changes to take effect. Please wait a little while, then try again. If it still doesn't work, please report back.
Alternatively, note that you also need to add any subdomains you may be using. For example, if you add "here.com", you would also have to add "www.here.com" and "developer.here.com" if you want API calls to be accepted from those subdomains.
Recently I have been working with Shodan to gather information for my recon report. When I search for my target domain, it just says "Note: No results found". How is that possible? The website is up and running, but it's behind CloudFlare. I still wonder why it doesn't give me any results at all. What might be the reason?
It shows no results on Censys either; it just says "Your query returned no results." Why is it not showing me any results? Does it mean the domain is virtual? But even if it is virtual, how can it show no information at all?
This means the target domain could be blocking Shodan and Censys. I would try looking for an SSL cert in the Censys Certificates Dataset with the target name on it.
I'm wondering if anyone else has had this issue with Azure Front Door and the Azure Web Application Firewall and has a solution.
The WAF is blocking simple GET requests to our ASP.NET web application. The rule that is being triggered is DefaultRuleSet-1.0-SQLI-942440 SQL Comment Sequence Detected.
The only place I can find an SQL comment sequence is in the .AspNet.ApplicationCookie, as per this truncated example: RZI5CL3Uk8cJjmX3B8S-q0ou--OO--bctU5sx8FhazvyvfAH7wH. If I remove the two dashes '--' in the cookie value, the request successfully gets through the firewall. As soon as I add them back, the request gets blocked by the same firewall rule.
It seems that I have two options: disable the rule (or change it from Block to Log), which I don't want to do, or change the .AspNet.ApplicationCookie value to ensure that it does not contain any text that would trigger a firewall rule. The cookie is generated by the Microsoft.Owin.Security.Cookies library and I'm not sure if I can change how it is generated.
I ran into the same problem as well.
If you look at the cookie value RZI5CL3Uk8cJjmX3B8S-q0ou--OO--bctU5sx8FhazvyvfAH7wH, it contains two dashes (--), which form the potentially dangerous SQL comment sequence that can comment out the rest of a SQL statement. An attacker could run their own command instead of yours after commenting out your query.
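If you want to see for yourself why the rule fires, here is a quick sketch using only a simplified fragment of the comment-sequence pattern (an illustration, not the exact expression the managed rule uses):

using System;
using System.Text.RegularExpressions;

class SqlCommentRuleDemo
{
    static void Main()
    {
        // Simplified fragment of the "SQL comment sequence" check:
        // two dashes followed by another dash or by whitespace.
        var commentSequence = new Regex(@"--[^-]*?-|--[\s\r\n\v\f]");

        var blockedCookie = "RZI5CL3Uk8cJjmX3B8S-q0ou--OO--bctU5sx8FhazvyvfAH7wH";
        var allowedCookie = blockedCookie.Replace("--", "");

        Console.WriteLine(commentSequence.IsMatch(blockedCookie));  // True  -> request blocked
        Console.WriteLine(commentSequence.IsMatch(allowedCookie));  // False -> request allowed
    }
}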
But obviously this cookie value will never be run as a query on the SQL side, and we are sure about that. So we can create rule exclusions so that the WAF skips this check for specific request parts.
Go to your WAF > click Managed Rules on the left blade > click Manage exclusions at the top > click Add.
In your case, adding this exclusion would be fine:
Match variable: Request cookie name
Operator: Starts With
Selector: .AspNet.ApplicationCookie
However, I use ASP.NET Core 3.1 with ASP.NET Core Identity, and I encountered other issues as well, such as __RequestVerificationToken.
Here is my full list of exclusions. I hope it helps.
PS: I think there is a glitch at the moment. If you have an IP restriction on your environment, such as UAT, these exclusions cause the Web Application Firewall to bypass the IP restriction, and your UAT site becomes open to the public even if you still have a custom IP restriction rule on your WAF.
I ran into something similar and blogged about it here: Front Door incomplete first request.
To test this I created a web application and put it behind the Front Door service. In that test application I iterate over all the properties of the HttpContext.HttpRequest and print them out. As far as I can see right now, there are two properties that have differences between a direct request and a request through Front Door. Both the AcceptTypes and the UserLanguages property are empty for Front Door requests, while they are absolutely filled in when directly accessing the test application.
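A stripped-down version of that diagnostic code might look something like this (a sketch, not the exact code from the blog post; it just dumps the two properties that differed):

using System.Web.Mvc;

public class DiagnosticsController : Controller
{
    // Dumps the two request properties that differed between a direct request
    // and a request coming through Front Door.
    public ActionResult RequestInfo()
    {
        var acceptTypes   = Request.AcceptTypes   ?? new string[0];
        var userLanguages = Request.UserLanguages ?? new string[0];

        return Content(
            "AcceptTypes: "   + string.Join(", ", acceptTypes) + "\n" +
            "UserLanguages: " + string.Join(", ", userLanguages),
            "text/plain");
    }
}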
I’m not quite sure what the reason is for the first Front Door request to be different from a direct request. Is it a bug? Is it intentional and if so, why? Or is it because Front Door is developed using a framework that doesn’t support these properties, having them be empty when being forwarded?
Unfortunately I didn't find a solution to the issue, but to answer the question if anyone else is experiencing this: I did experience something similar.
It seems that the cookie got corrupted. I was comparing the fields that existed before against a healthy cookie, and my guess is that somewhere in the content of the field it is being interpreted as a truncated SQL statement and is triggering the rule. I still have to determine whether this is true and/or what caused it.
I ran into this issue, but the token was being passed through via the request query rather than via a cookie. In case it might help someone: for the specified host I had to allow it via a custom rule doing a regex match on the RequestUri, using the following regex (taken from the original managed rule):
:\/\\\\*!?|\\\\*\/|[';]--|--[\\\\s\\\\r\\\\n\\\\v\\\\f]|--[^-]*?-|[^\\u0026-]#.*?[\\\\s\\\\r\\\\n\\\\v\\\\f]|;?\\\\x00
Is it possible to create a page in ASP.NET that allows access only from a defined IP address? My goal is to add a page "test" (not linked from my website), and I want to define a rule so that only a specified IP address can access it.
How can I implement this through ASP.NET?
You could try putting the page(s) in a separate folder and password-protecting it, then give the password to your user so they can access the content. You could go as far as password-protecting each file. This helps if your website is password protected or has a login.
You could also create a sub-domain for that user specifically.
These are just a few. I'm sure you'll get better suggestions here on SO!
You could go for a programmatic solution. However, I would use IIS functions to block the access: less code, easier to configure, and no hassle on your development/test environment.
Assumption: you are using IIS since it is ASP.NET, but other web servers should have similar solutions.
You can add IP restrictions to the directory (meaning you would have to put your page in a separate directory). Example here: http://www.therealtimeweb.com/index.cfm/2012/10/18/iis7-restrict-by-ip
Obviously there are a lot of other and arguably better ways to grant access to a page if what you really want is for a specific "user" or "group" to have access, but assuming that you really want the access control to be based on IP, the answer may still depend on peripheral concerns such as what web server you are using. IIS, for example, has some features for IP-based security that you could check out.
Assuming though that you really, really want to check IPs and that you want to do it in code, you would find information about the calling environment in the Request of the current HttpContext, i.e. context.Request.UserHostAddress.
If you want to reject calls based on this information, you should probably do that as early as possible. In the HttpApplication.BeginRequest event you could check if the call is targeted for the page in question and reject the request if the UserHostAddress is not to your liking.
If you prefer to make this control in the actual page, do it in some early page event.
To manage the acceptable IP(s), rather than hard coding them into your checking code, I suggest you work with a ConfigurationSection or similar. Your checking code could be something similar to:
// authorizedIpConfiguration holds the comma-separated list of allowed IPs from your configuration.
var authorizedIps = authorizedIpConfiguration
    .Split(',')
    .Select(ipString => ipString.Trim())
    .ToList();
var isValid = authorizedIps.Any()
    && authorizedIps.Contains(context.Request.UserHostAddress);
If the check fails, you should alter the response accordingly, i.e. at least set its status code to 401 (http://en.wikipedia.org/wiki/List_of_HTTP_status_codes).
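Putting the pieces together, a minimal Global.asax sketch could look like this (the "/test" path and the "AuthorizedIps" appSettings key are placeholder names; use your own ConfigurationSection if you prefer):

using System;
using System.Configuration;
using System.Linq;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Only guard the restricted page; "/test" is a placeholder path.
        if (!Request.Path.StartsWith("/test", StringComparison.OrdinalIgnoreCase))
            return;

        // "AuthorizedIps" is an assumed appSettings key holding a comma-separated list.
        var authorizedIps = (ConfigurationManager.AppSettings["AuthorizedIps"] ?? string.Empty)
            .Split(',')
            .Select(ip => ip.Trim())
            .Where(ip => ip.Length > 0)
            .ToList();

        if (!authorizedIps.Contains(Request.UserHostAddress))
        {
            // Reject early: no content, just a status code.
            Response.StatusCode = 401;
            Response.SuppressContent = true;
            CompleteRequest();
        }
    }
}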
NB: There are a lot of things to consider when implementing security features, and the general recommendation would probably stand as "don't do it" - it's so easy to falter. Try to use well-proven concepts and standard implementations if possible. The above examples should not in themselves be considered to provide a "secure" solution, as there are generally speaking many ways that restricted data can leak from your solution.
EDIT: From your comment to the answer given by nocturns2, it seems you want to restrict access to the local computer? If so, there is a much easier and cleaner solution: just check the Request.IsLocal property. It will return true only for requests originating from the local computer; see HttpRequest.IsLocal Property.
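In that case the check in the page itself could be as small as this (a sketch for a Web Forms page; the status code and handling are up to you):

protected void Page_Load(object sender, EventArgs e)
{
    // Only answer requests that originate from the local machine.
    if (!Request.IsLocal)
    {
        Response.StatusCode = 401;
        Response.SuppressContent = true;
        Context.ApplicationInstance.CompleteRequest();
        return;
    }
    // ... render the debug output ...
}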
(Also, you should really make sure that this "debug page" is not at all published when deploying your solution. If you manage that properly and securely, then perhaps you do not even need the access check any more. If you want debugging options in a "live" environment, you should probably look to HttpContext.Current.Trace or some other logging functionality.)
Well, I finally had to create an account here. Been using this for years and have often found my answer here, but not this time.
Well, I actually have found a lot of people with similar problems, but none of their solutions have helped me.
I have started on a new MVC3 project, so it's quite simple so far. I've made a handful before, so I kinda know what I'm doing (but not quite, obviously, why else be here ;-)
My problem is apparently a fairly common one: A request starts a new Session, even though the user already has one.
The most frustrating part of this is, it works perfectly on my hosted service, but is broken on localhost.
I have done a number of things to solve this:
There is no underscore in my computer's name.
The Session contains custom data (the error only occurs after user has logged in).
I have added the following to web.config:
<httpProtocol>
  <customHeaders>
    <clear />
    <add name="Access-Control-Allow-Origin" value="*" />
  </customHeaders>
</httpProtocol>
and this too:
<modules runAllManagedModulesForAllRequests="false" />
With InProc session state, I have tried 'cookieless' set to both true and false.
My hosts file contains nothing about localhost.
hm. Looking at this list I'm sure I've left some out. Some on purpose too, as they were hopeless (yes, even more than the above), and born from desperation.
As mentioned this is particularly unnerving as it works on my host - could there be some configuration settings I need to tweak on the dev server (VS2010)?
I've been working from the premise that the issue is due to cross-domain security (it thinks I'm coming from another domain).
The fail happens on this request:
url: 'http://localhost:50396/moody/changeBuilding/' + elem.selectedIndex,
It's part of the options array I use with the jQuery.ajax function.
I change the domain when uploading to the host, but only the part localhost:port, everything else in the application is identical.
I've been banging my head against this for 2 days now, and will miss my exam :-(
I'm determined to bury this 6 feet under, though.
I would be very grateful for any and all suggestions!
I change the domain when uploading to the host, but only the part localhost:port, everything else in the application is identical.
Reading the above, I imagine the session cookie isn't being sent because you're changing domains.
Let's sit back and think about how sessions work. Basically ASP.NET contains a collection of sessions and their data. When each request comes in, ASP.NET must map that request to an existing session OR create a new session for them.
So how does ASP.NET know which session belongs to each incoming request? Or know that it needs to create a new session? The only way to know this is if the request contains some information, a 'key', which tells ASP.NET which session to give the request... or, in the absence of this 'key', to create a new session.
How does the request send this 'key'? Through cookies.
Therefore, if you change the domain, the cookie isn't going to be sent... and so ASP.NET will create a new session for the request.
Have you tried using something like Fiddler to make sure that the session cookie is being sent in the AJAX request? It should be sent if the domain is the same, but it's worth checking.
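If you don't have Fiddler handy, a quick sketch in Global.asax.cs can do the same kind of check (ASP.NET_SessionId is the default session cookie name):

using System;
using System.Web;

public class MvcApplication : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // If the session cookie isn't sent with the request,
        // ASP.NET will create a brand-new session for it.
        var sessionCookie = Request.Cookies["ASP.NET_SessionId"];
        System.Diagnostics.Debug.WriteLine(
            "Host: " + Request.Url.Host +
            " | session cookie: " + (sessionCookie != null ? sessionCookie.Value : "<not sent>"));
    }
}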
Edit: This SO post on changing ports is worth reading too.
Edit: Given the new information in Charlino's comments (and the sterling detective work carried out therein) if the problem is only on your local dev machine then the easiest way to work around your localhost/127.0.0.1 issue is by manually changing the browser url from 127.0.0.1:50396 to localhost:50396, logging in again to get the new cookie, then you are good to go.
I'm using the Organization Service URI to upload documents to our SharePoint site from notes and attachments. I'm using the code found here, and all is working apart from where I set the organizationURI. I get the error "metadata contains a reference that cannot be resolved". I have tried retyping the link and everything else I can think of, but I always get this error.
The strange thing is that this was working just fine a couple of days ago, but when I tried it the next morning it refused to work, and now it won't do anything at all. Before the error I have now, I was getting an error saying that the URI scheme is not valid. I don't know what could have caused this to stop working, but I've tried everything I can think of and need some help.
Thanks
EDIT: The error message has changed to "A proxy type with the name account has been defined by another assembly". I'm still not sure what it means, but I'm hoping this might be easier to fix.
I'm not sure if this is the actual fix for this problem, but I tried this and it seemed to work. So either it is the answer, or I was just lucky and something else changed too, but anyway...
What I did was change the way I was connecting to the organization service. Before, I was using user credentials, the organization URI and the home realm URI together to get the OrganizationServiceProxy, in the form of OrganizationServiceProxy orgService = new OrganizationServiceProxy(organizationUri, homeRealmUri, cred, null);.
Now I'm using a longer method: first I set up the discovery service with the user credentials, then together with them I create the discovery service proxy, which is then authenticated. Then I simply use a RetrieveOrganizationRequest / Response to get the organization service, which I can then use in place of the original.
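Roughly, it looks like this (a sketch against the CRM 2011 SDK assemblies; the discovery URL, credentials and organization unique name are placeholders for your own environment):

using System;
using System.ServiceModel.Description;
using Microsoft.Xrm.Sdk.Client;
using Microsoft.Xrm.Sdk.Discovery;

static class CrmConnectionSketch
{
    public static OrganizationServiceProxy Connect()
    {
        // Placeholder credentials and discovery endpoint.
        var credentials = new ClientCredentials();
        credentials.UserName.UserName = "user@example.com";
        credentials.UserName.Password = "password";

        var discoveryUri = new Uri("https://disco.example.com/XRMServices/2011/Discovery.svc");

        using (var discovery = new DiscoveryServiceProxy(discoveryUri, null, credentials, null))
        {
            discovery.Authenticate();

            // Ask the discovery service where the organization service lives.
            var response = (RetrieveOrganizationResponse)discovery.Execute(
                new RetrieveOrganizationRequest { UniqueName = "myorganization" });

            var organizationUri = new Uri(
                response.Detail.Endpoints[EndpointType.OrganizationService]);

            // Use this proxy in place of the one built directly from the
            // organization URI and home realm URI.
            return new OrganizationServiceProxy(organizationUri, null, credentials, null);
        }
    }
}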
Hope that makes sense to people, but if anyone wants I can put up the full code showing exactly what I did.