Security Sandbox Violation: Lack of Policy File Permissions - apache-flex

I'm using as3httpclientlib to post data to my web service, but I'm continually
getting the following security violation. Does anyone know how to resolve this?
My crossdomain.xml file is below the security violation notice.
NOTE: I'm using Apache to proxy requests to the web service, so the target URL/port and the URL/port serving the applet are the same -- i.e. http://192.168.100.101. Also, the crossdomain.xml file is located in the root of the web app that serves the applet rather than the web service; however, since the requests are proxied, the URL for the file is http://192.168.100.101/crossdomain.xml
* Security Sandbox Violation *
Connection to 192.168.100.101:80 halted - not permitted from http://192.168.100.101/com-web/flex/ComUi.swf
Error: Request for resource at xmlsocket://192.168.100.101:80 by requestor from http://192.168.100.101/com-web/flex/ComUi.swf is denied due to lack of policy file permissions.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
    <allow-http-request-headers-from domain="*" headers="*" secure="false" />
    <allow-access-from domain="*" to-ports="80, 8080" />
</cross-domain-policy>
Thanks.

Did you try to debug it with Wireshark, to see if the app sends the request on port 843 and if the server sends back the policy response over the socket? It wasn't totally clear from your post whether you already use a server app to serve the socket policy file; if not, you should. Either way, the link below should help.
If you need more info about how things work, you can check out this
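For reference, when Flash Player opens a socket (including xmlsocket:// connections like the one in the error), it first asks port 843 on the target host for a socket master policy. This is served over raw TCP rather than HTTP, and the response must be terminated with a NUL byte. A minimal socket policy, reusing the ports from the HTTP policy above, might look like:

    <?xml version="1.0"?>
    <cross-domain-policy>
        <allow-access-from domain="*" to-ports="80,8080" />
    </cross-domain-policy>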

Related

How can I switch an existing Azure web-role from http over to https

I have a working Azure web role which I've been using over an http endpoint. I'm now trying to switch it over to https but struggling mightily with what I thought would be a simple operation. (I'll include a few tips here for future readers to address issues I've already come across).
I have created (for now) a self-signed certificate using the powershell commands documented by Microsoft here and uploaded it to the azure portal. I'm aware that 3rd parties won't be able to consume the API while it has a self-signed certificate but my plan is to use the following for local client testing before purchasing a 'proper' certificate.
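// Trust any server certificate -- acceptable only for local testing with a self-signed cert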
ServicePointManager.ServerCertificateValidationCallback += (o, c, ch, er) => true;
Tip: you need to upload the .pfx file and then supply the password you used in the powershell script. Don't be confused by suggestions to create a .cer file, which is for completely different purposes.
I then followed the flow documented for configuring azure cloud services here although many of these operations are now done directly through visual studio rather than by hand-editing files.
In the main 'cloud service' project under the role I wanted to modify:
I imported the newly created certificate. Tip: the design of the dialog used to add the thumbprint makes it very easy to incorrectly select the developer certificate that is already installed on your machine (by visual studio?). Click 'more options' to get to _your_ certificate and then check the displayed thumbprint matches that shown in the Azure portal in the certificates section.
Under 'endpoints' I added a new https endpoint. Tip: use the standard https port 443, NOT the 'default' port of 8080, otherwise you will get no response from your service at all.
In the web.config of the service itself, I changed the endpoint binding for the service so that the name element matched the new endpoint (a sketch of this kind of change appears below).
I then published the cloud project to Azure (using Visual Studio).
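For anyone following along, the change described in that web.config step usually ends up shaped something like the sketch below (every name here is a placeholder). One detail that matters for the errors further down: a webHttpBinding endpoint only registers the https scheme when its security mode is Transport.

    <system.serviceModel>
      <bindings>
        <webHttpBinding>
          <binding name="HttpsWebBinding">
            <!-- Transport security makes this binding match an https base address -->
            <security mode="Transport" />
          </binding>
        </webHttpBinding>
      </bindings>
      <services>
        <service name="MyNamespace.MyService">
          <endpoint address="" binding="webHttpBinding"
                    bindingConfiguration="HttpsWebBinding"
                    contract="MyNamespace.IMyService" />
        </service>
      </services>
    </system.serviceModel>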
At this point, I'm not seeing the results I expected. The service is still available on http but is not available on https. When I try to browse for it on https (includeExceptionDetailInFaults is set to true) I get:
HTTP error 404 "The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable"
I interpret this as meaning that the https endpoint is available but the service itself is bound to http rather than https despite my changes to web.config.
I have verified that the publish step really is uploading the new configuration by modifying some of the returned content. (Remember this is still available on http.)
I have tried removing the 'obsolete' http endpoint but this just results in a different error:
"Could not find a base address that matches scheme http for the endpoint with binding WebHttpBinding. Registered base address schemes are [https]"
I'm sure I must be missing something simple here. Can anyone suggest what it is or tips for further trouble-shooting? There are a number of stack-overflow answers that relate to websites and suggest that IIS settings need to be tweaked but I don't see how this applies to a web-role where I don't have direct control of the server.
Edit Following Gaurav's suggestion I repeated the process using a (self-signed) certificate for our own domain rather than cloudapp.net then tried to access the service via this domain. I still see the same results; i.e. the service is available via http but not https.
Edit2 Information from csdef file... is the double reference to "Endpoint1" suspicious?
<Sites>
  <Site name="Web">
    <Bindings>
      <Binding name="Endpoint1" endpointName="HttpsEndpoint" />
      <Binding name="Endpoint1" endpointName="HttpEndpoint" />
    </Bindings>
  </Site>
</Sites>
<Endpoints>
  <InputEndpoint name="HttpsEndpoint" protocol="https" port="443" certificate="backend" />
  <InputEndpoint name="HttpEndpoint" protocol="http" port="80" />
</Endpoints>
<Certificates>
  <Certificate name="backend" storeLocation="LocalMachine" storeName="My" />
</Certificates>
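If the duplicate binding name is indeed the problem (each Binding within a Site should normally get a unique name), a corrected Bindings section might look like this, with arbitrary names:

    <Bindings>
      <Binding name="HttpsBinding" endpointName="HttpsEndpoint" />
      <Binding name="HttpBinding" endpointName="HttpEndpoint" />
    </Bindings>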

crossdomain.xml issue causing "send failed" error

In our environment we have both a primary and a contingency server to ensure some continuity of operations in the event one site goes down. During testing of the contingency server we discovered that a Flex-based application will not allow users to log in and returns an "Auth error: send failed". When monitoring network activity I see the following:
It looks to me like the cross domain policy is causing the issue as I do not see anything similar when I load the Flex application in our primary environment.
Here is the contents of the crossdomain.xml file found in our web root.
<?xml version="1.0"?>
<cross-domain-policy>
    <allow-access-from domain="*" />
</cross-domain-policy>
What am I missing? Is the cross-domain issue a red herring, with something else going on that I'm not seeing? Any suggestions on what to do?
EDIT: Not sure it matters but to be clear we are using HTTPS in both environments and when accessing the contingency server I use the fully qualified machine name as the DNS is set up to point to the primary server.
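One way to rule the policy file in or out (a guess, since the captured network activity isn't shown above) is to temporarily serve a more explicit master policy such as the sketch below; because both environments use HTTPS, secure="true" preserves the HTTPS-only behavior of the current file:

    <?xml version="1.0"?>
    <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
    <cross-domain-policy>
        <site-control permitted-cross-domain-policies="master-only" />
        <allow-access-from domain="*" secure="true" />
        <allow-http-request-headers-from domain="*" headers="*" />
    </cross-domain-policy>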

How to force soap header authentication for my scenario?

The problem: I need to connect to a SOAP web service (generated from Java code) using an ASP.NET client, written in C# with MS Visual Studio 2013.
Try 1, The usual way:
I have added a web service reference using the WSDL and assigned the credentials like:
Credentials.Username.Username = "test";
Credentials.Password.Password = "test";
When executing, the following exception is being encountered:
The login information is missing!
Try 2:
I have searched for similar problems like:
how-to-go-from-wsdl-soap-request-envelope-in-c-sharp
Dynamic-Proxy-Creation-Using-C-Emit
c# - Client to send SOAP request and received response
I chose to generate a proxy class using the wsdl tool and then add the header attribute, but I found the following note from Microsoft:
Note: If the Web service defines the member variables representing the SOAP headers of type SoapHeader or SoapUnknownHeader instead of a class deriving from SoapHeader, a proxy class will not have any information about that SOAP header.
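For what it's worth, the wsdl.exe route usually means hand-editing the generated proxy: define a header class deriving from SoapHeader, add a public field of that type to the proxy, and decorate the method with [SoapHeader]. A rough sketch, where only the userName/password element names come from the request template shown later; everything else is assumed:

    using System.Web.Services.Protocols;

    // Mirrors the <credentials> header in the request template below;
    // the class and field names are assumptions.
    public class CredentialsHeader : SoapHeader
    {
        public string userName;
        public string password;
    }

    public class ServiceProxy : SoapHttpClientProtocol
    {
        public CredentialsHeader Credentials;   // set by the caller before invoking

        // "someRequest" stands in for the real operation name.
        [SoapHeader("Credentials")]
        [SoapDocumentMethod]
        public object someRequest(object body)
        {
            return Invoke("someRequest", new object[] { body })[0];
        }
    }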
Try 3:
I have tried to change the service model in the client web.config:
<bindings>
  <basicHttpBinding>
    <binding name="CallingCardServicePortBinding">
      <security mode="TransportWithMessageCredential">
        <message clientCredentialType="UserName" />
      </security>
    </binding>
  </basicHttpBinding>
</bindings>
Then I added the credentials as in the first try, but the following error appears:
MustUnderstand headers:[{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}Security] are not understood
So now I don't know what to do!
I have no control over the web service, and I need to build a client that understands it.
Help, please!
The Soap Request template is the following:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ser="...">
  <soapenv:Header>
    <credentials>
      <userName>someUserName</userName>
      <password>somePassword</password>
    </credentials>
  </soapenv:Header>
  <soapenv:Body>
    <ser:someRequest>
      .......
      .......
      .......
    </ser:someRequest>
  </soapenv:Body>
</soapenv:Envelope>
If the destination web service uses authentication, then plain ASMX won't do, since it is not aware of authentication, encryption, etc. You have two options:
Use Microsoft WSE: http://www.microsoft.com/en-us/download/details.aspx?id=14089
This is nothing but an extension of ASMX which makes it security/encryption aware (among some other features). Technically, you'll be adding a reference to the WSE DLL, and your SOAP proxy will extend from the WSE SOAP client instead of the System one.
Once you do that, the proxy class will have additional username/password properties that you can use to authenticate properly.
Set the properties and inspect the outgoing request using Fiddler. If the header is not what you want (because of namespaces etc.), you can write a custom outgoing message inspector and modify the SOAP request as needed.
The other option (preferred) is to use WCF.
ASMX and WSE are older than WCF; WCF tries to bring all the web service nuances under one roof. If you add a WCF service reference, it (svcutil.exe) will automatically create the proxy class and the right bindings for you (mostly custom).
Once you do that, try setting the user name and password.
If that doesn't work (I have frequently struggled to generate the right SOAP header for remote Java-based services that require username/password authentication), you can define a static header chunk in the web.config/app.config that will be sent as part of every request.
e.g.
<client>
  <endpoint>
    <headers>
      <credentials>
        <userName>someUserName</userName>
        <password>somePassword</password>
      </credentials>
    </headers>
  </endpoint>
</client>
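And if the static header alone isn't enough and you need the message-inspector route mentioned earlier, a minimal WCF sketch follows; the header shape copies the question's template, while the namespace URI, class names, and wiring are all assumptions:

    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Description;
    using System.ServiceModel.Dispatcher;
    using System.Xml;

    // Writes the <credentials> header from the question's template into a request.
    class CredentialsHeader : MessageHeader
    {
        public override string Name { get { return "credentials"; } }
        // The template shows no namespace; WCF wants one, so this URI is an assumption.
        public override string Namespace { get { return "http://example.com/credentials"; } }

        protected override void OnWriteHeaderContents(XmlDictionaryWriter writer, MessageVersion version)
        {
            writer.WriteElementString("userName", "someUserName");
            writer.WriteElementString("password", "somePassword");
        }
    }

    // Adds the header to every outgoing request.
    class CredentialsInspector : IClientMessageInspector
    {
        public object BeforeSendRequest(ref Message request, IClientChannel channel)
        {
            request.Headers.Add(new CredentialsHeader());
            return null;
        }

        public void AfterReceiveReply(ref Message reply, object correlationState) { }
    }

    // Hooks the inspector into the client runtime.
    class CredentialsBehavior : IEndpointBehavior
    {
        public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
        {
            clientRuntime.MessageInspectors.Add(new CredentialsInspector());
        }

        public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection parameters) { }
        public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher dispatcher) { }
        public void Validate(ServiceEndpoint endpoint) { }
    }

    // Usage on a generated proxy: client.Endpoint.Behaviors.Add(new CredentialsBehavior());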

Can't access database on the remote server

I am a newbie, so bear with me. I have a SQL Server database located on a remote server. In Visual Studio 2010, I was able to create an entity data model (which contained user credentials, so it could read the remote database schema) and a simple WCF service. On localhost, I was able to fetch and retrieve data. But after I published the ASP.NET project, I noticed that I can't query the same database. Both the database and the application files are now on the same server.
What could be the reason why the local environment can query the remote server but the deployed app can't? Do I need to reconfigure the data model or something else?
Check the connection string in the web.config for the following:
- user id and password: are they filled in?
- data source: does it point to the server?
Also check the error messages: what message does the WCF service return?
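For reference, an Entity Framework connection string in web.config is shaped roughly like this (every value below is a placeholder); the server name and credentials live inside the inner provider connection string:

    <connectionStrings>
      <add name="MyEntities"
           providerName="System.Data.EntityClient"
           connectionString="metadata=res://*/Model.csdl|res://*/Model.ssdl|res://*/Model.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=REMOTE-SERVER;Initial Catalog=MyDb;User ID=appUser;Password=secret&quot;" />
    </connectionStrings>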
Silverlight cross-domain access file:
Create a new text file, name it clientaccesspolicy.xml, and put it in the root of your web project.
File content:
(This allows all and everyone; it must be tweaked, of course :D)
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>

Flash Security Error Accessing URL with crossdomain.xml

I recently deployed a Flash application to a server and am now experiencing errors when making HTTPService requests. I have put what I believe to be the most permissive crossdomain.xml possible in the wwwroot folder, and still get the errors.
Interestingly enough, the error only seems to occur when the request is made from a direct user interaction (i.e. a button click). The application makes other requests that are initiated by other means (e.g. creationComplete), and they seem to work as expected.
Anyone see anything wrong with the crossdomain.xml, or have any other suggestions?
ERROR MESSAGE
[RPC Fault faultString="Security error accessing url" faultCode="Channel.Security.Error" faultDetail="Destination: DefaultHTTP"]
at mx.rpc::AbstractInvoker/http://www.adobe.com/2006/flex/mx/internal::faultHandler()
at mx.rpc::Responder/fault()
at mx.rpc::AsyncRequest/fault()
at DirectHTTPMessageResponder/securityErrorHandler()
at flash.events::EventDispatcher/dispatchEventFunction()
at flash.events::EventDispatcher/dispatchEvent()
at flash.net::URLLoader/redirectEvent()
<!DOCTYPE cross-domain-policy SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <site-control permitted-cross-domain-policies="all" />
  <allow-access-from domain="*" secure="false" />
  <allow-http-request-headers-from domain="*" headers="*" secure="false" />
</cross-domain-policy>
You need to be careful with those crossdomain policy files because they can open up some serious security holes. You should never use a * policy on a site that uses cookie or basic auth, and you should never put a * policy on an intranet server.
The easiest way to avoid those security problems and make things work is to make sure that the URL the SWF is loaded from and the URL the requests are being made to have the same protocol, hostname, and port (if specified). If they are different, then you should look into using a proxy so that they are the same. BlazeDS or Apache can easily be set up as a proxy.
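As a rough illustration of the Apache option (the hostname and paths here are made up), a mod_proxy stanza that puts the backend under the same origin as the SWF could look like:

    # Requires mod_proxy and mod_proxy_http to be enabled
    ProxyPass        /service http://backend.internal:8080/service
    ProxyPassReverse /service http://backend.internal:8080/service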
Try this: open the config file with Notepad and replace
http://servername/arcgis/rest/services/BaseMap/MapServer/#
with
http://serverip/arcgis/rest/services/BaseMap/MapServer/#
