Wrong protocol for crossdomain.xml in Flex app

I've changed the protocol for my Flex app from https to http, but Flash Player still tries to download crossdomain.xml over https, though using the http port.
The app is accessed at http://domain01:8080/flex, yet it requests https:..samedomain..:8080/crossdomain.xml (at https:..samedomain..no_port/flex it works fine).
Does anyone have any idea why?
Thanks a lot,
Daniel

No direct answer, as I haven't tried this scenario of specifying a non-default port, but here are a couple of pieces of info that might lead you to one:
http://learn.adobe.com/wiki/download/attachments/64389123/CrossDomain_PolicyFile_Specification.pdf?version=1
This might be of interest:
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM
  "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <allow-access-from domain="*.example.com" to-ports="507,516-523"/>
</cross-domain-policy>
Or this: as of Flash Player 10.0.12.0, the default for the site-control element's permitted-cross-domain-policies attribute is "master-only" for non-socket policy files.
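If that default is what's biting you, the place to loosen it is the master policy file at the server root. A minimal sketch (the domain is hypothetical) of a master policy that permits policy files elsewhere on the server:
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM
  "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <!-- the Flash Player 10 default for this attribute is "master-only" -->
  <site-control permitted-cross-domain-policies="all"/>
  <allow-access-from domain="*.example.com"/>
</cross-domain-policy>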
Maybe try an older version of Flash Player to see whether something in the changes from 9 to 10 is causing the issue; if so, finding the change in the change logs might be easier. Or perhaps it's a bug in the new version.
Good luck
Shaun

Flex (at least 3.5, AFAIK) gets an identity crisis when you change the port and use HTTPS... the security model depends on the port. I don't know the exact reason for the problem, but my solution was to load the crossdomain file in the app explicitly:
import flash.system.Security;
Security.loadPolicyFile('https://mydomain:port/crossdomain.xml');

When you run into crossdomain issues, it's worth remembering that, by using the Security class, you can always take explicit control over which crossdomain.xml file is loaded (in fact, the policy file can have any name you want). The default behavior of loading the policy file from the root of a server can often be too restrictive when dealing with more complex, real-world cases (with load balancing or reverse proxies, for instance).
Try using:
Security.loadPolicyFile(<URI to the policy file goes here>);
The ASDocs for Security.loadPolicyFile explain it quite well.
By taking control of how policies are loaded, you gain more freedom and take a lot of the guesswork out of what can otherwise be a painful, frustrating experience. Flash Player allows you to load multiple policy files, which is handy if you need to integrate with more than one service layer (e.g. one host over HTTPS and another over HTTP).
Good luck,
Taylor

Related

Hosting static content on different domain from webservices, how to avoid cross-domain?

We've recently been working on a fairly modern web app and are ready to begin deploying it for alpha/beta and getting some real-world experience with it.
We have ASP.NET-based web services (Web API) and a JavaScript front-end which is 100% client-side MVC using Backbone.
We have purchased our domain name, and for the sake of this question our deployment looks like this:
webservices.mydomain.com (Webservices)
mydomain.com (JavaScript front-end)
If the JavaScript attempts to talk to the web services on the subdomain, we blow up with cross-domain issues. I've played around with CORS but am not satisfied with the cross-browser support, so I'm counting it out as an option.
On our development PCs we have used an IIS reverse proxy to forward all requests for mydomain.com/webservices to webservices.mydomain.com, which solves all our problems as the browser thinks everything is on the same domain.
So my question is: in a public deployment, how is this issue most commonly solved? Is a reverse proxy the right way to do it? If so, are there any hosted services that offer a reverse proxy for this situation? Are there better ways of deploying this?
I want to use the CloudFront CDN as all our servers/services are hosted with Amazon, but I'm really struggling to find info on whether a CDN can support this type of setup.
Thanks
What you are trying to do is a cross-subdomain call, not an entirely cross-domain one.
There are tricks for that: http://www.tomhoppe.com/index.php/2008/03/cross-sub-domain-javascript-ajax-iframe-etc/
As to how this issue is most commonly solved, my answer is: it is commonly AVOIDED. In the real world you would set up your domains so that you don't need such workarounds just to get your application running, or you would set up a proxy server to forward the calls for you. JSONP is also a hack-ish solution.
To allow this web service to be called from script using ASP.NET AJAX, add the following line to the first web service's code-behind:
[System.Web.Script.Services.ScriptService]
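For context, a minimal sketch of where that attribute sits; the service name and method are hypothetical:
using System.Web.Services;
using System.Web.Script.Services;

// [ScriptService] lets ASP.NET AJAX generate a JavaScript proxy for the
// ASMX service and accept JSON-formatted requests from the browser.
[WebService(Namespace = "http://webservices.mydomain.com/")]
[ScriptService]
public class OrderService : WebService
{
    [WebMethod]
    public string GetStatus(int orderId)
    {
        // Real logic would go here; this is just a placeholder.
        return "Order " + orderId + " is pending";
    }
}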
You can simply use JSONP for AJAX requests; then cross-domain is not an issue.
If an AJAX request returns some HTML, it can be escaped into a JSON string.
The second option is a little bit awkward, though.
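To make the JSONP suggestion concrete, here is a minimal server-side sketch in ASP.NET; the handler name and payload are made up. JSONP works because <script> tags are exempt from the same-origin policy, so the server returns executable JavaScript that calls a function the page has already defined:
using System.Web;

public class StatusJsonpHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // e.g. GET http://webservices.mydomain.com/status.axd?callback=onStatus
        string callback = context.Request.QueryString["callback"] ?? "callback";
        string json = "{\"status\":\"ok\"}"; // a real service would serialize its result here

        // Returning callbackName(json) makes the browser invoke the page's function.
        context.Response.ContentType = "application/javascript";
        context.Response.Write(callback + "(" + json + ");");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}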
You have two or three layers to wire up here.
In the web service's code-behind class, add this attribute (VB syntax): <System.Web.Script.Services.ScriptService()> _
Maybe you need to add this inside the system.web node of your web.config:
<webServices>
  <protocols>
    <add name="AnyHttpSoap"/>
    <add name="HttpPost"/>
    <add name="HttpGet"/>
  </protocols>
</webServices>
In the client-side interface:
- Add a web reference to the service on the subdomain (e.g. webservices.mydomain.com/svc.asmx); Visual Studio makes the "proxy class" for you.
- Add the functionality in the master page's/page's/control's code-behind.
- Simply call these functions from the client side.
You can use AJAX functionality with the ScriptManager or use another system like jQuery.
If your main website is compiled against .NET 3.5 or older, you need to add a reference to the System.Web.Extensions namespace and declare it in your web.config file.
If you have the bandwidth (network I/O and CPU) to handle this, a reverse proxy is an excellent solution. A good reverse proxy will even cache static calls to help mitigate the network delay introduced by the proxy.
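For reference, the reverse-proxy setup the question describes on the development PCs can be expressed as an IIS URL Rewrite rule. A minimal sketch, assuming the URL Rewrite module and Application Request Routing (with proxying enabled) are installed, and using the question's host names:
<system.webServer>
  <rewrite>
    <rules>
      <!-- Forward /webservices/* to the subdomain so the browser only ever sees mydomain.com -->
      <rule name="ProxyToWebservices" stopProcessing="true">
        <match url="^webservices/(.*)" />
        <action type="Rewrite" url="http://webservices.mydomain.com/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>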
The other option is to set up the proper cross-domain policy files and/or headers. Doing this with some cloud providers can be hard or even impossible. I recently ran into issues with font files and IE not being happy with cross-domain calls. We could not get the cloud storage provider we were using to set the correct headers, so we hosted the fonts locally rather than deal with a reverse proxy.
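If you do control the server, the headers route is only a few lines. A minimal sketch in Global.asax.cs, assuming the front end lives at http://mydomain.com; the origin, methods, and headers listed are illustrative, not a complete CORS implementation:
using System;
using System.Web;

public class Global : HttpApplication
{
    // Emits CORS headers by hand so that scripts served from mydomain.com
    // may call this application on webservices.mydomain.com.
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        Context.Response.AddHeader("Access-Control-Allow-Origin", "http://mydomain.com");

        // Answer CORS preflight (OPTIONS) requests directly.
        if (Context.Request.HttpMethod == "OPTIONS")
        {
            Context.Response.AddHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE");
            Context.Response.AddHeader("Access-Control-Allow-Headers", "Content-Type");
            Context.Response.End();
        }
    }
}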
easyXDM is a cross-domain JavaScript library that may be worth exploring. It makes use of standards when the browser supports them, and abstracts away the various hacks required when the browser doesn't. From easyXDM.net:
easyXDM is a Javascript library that enables you as a developer to easily work around the limitation set in place by the Same Origin Policy, in turn making it easy to communicate and expose javascript API’s across domain boundaries.
At the core easyXDM provides a transport stack capable of passing string based messages between two windows, a consumer (the main document) and a provider (a document included using an iframe). It does this by using one of several available techniques, always selecting the most efficient one for the current browser. For all implementations the transport stack offers bi-directionality, reliability, queueing and sender-verification.
One of the goals of easyXDM is to support all browsers that are in common use, and to provide the same features for all. One of the strategies for reaching this is to follow defined standards, plus using feature detection to assure the use of the most efficient one.
To quote easyXDM's author:
...sites like LinkedIn, Twitter and Disqus as well as applications run by Nokia and others have built their applications on top of the messaging framework provided by easyXDM.
So easyXDM is clearly not some poxy hack, but I admit it's a big dependency to take on your project.
The current state of the web is that if you want to push the envelope, you have to use feature detection and polyfills, or simply force your users to upgrade to an HTML5 browser. If that makes you squirm, you're not alone, but polyfills are a kind of temporary evil needed to get from where the web is to where we'd like it to be.
See also this SO question.

Can one rely exclusively on Adobe for Flex/Flash RSL's (to avoid using own bandwidth)?

If you have limited server resources and expect a lot of traffic to a Flash site, is there a way of NOT having to serve Run-Time Shared Libraries yourself, instead relying on Adobe to do it for you?
For example, if you want to make sure "framework_4.0.0.14159.swz" is always fetched from "fpdownload.adobe.com" and not from your own server, what modifications should be made to the config section:
<runtime-shared-library-path>
  <path-element>/opt/flex4/frameworks/libs/framework.swc</path-element>
  <rsl-url>http://fpdownload.adobe.com/pub/swz/flex/4.0.0.14159/framework_4.0.0.14159.swz</rsl-url>
  <policy-file-url>http://fpdownload.adobe.com/pub/swz/crossdomain.xml</policy-file-url>
  <rsl-url>framework_4.0.0.14159.swz</rsl-url>
  <policy-file-url></policy-file-url>
</runtime-shared-library-path>
...
<static-link-runtime-shared-libraries>false</static-link-runtime-shared-libraries>
Also, is there any reason this might be a bad idea?
Please note: I am using the command-line compiler, mxmlc.exe (not Flex Builder).
UPDATE:
I guess my issue is more about the errors I get at run time than about the question above. In fact, the reason Flash tries to download from my server in the first place is that the Adobe download fails (see the error messages in my comment). I am therefore going to accept the answer below and, if I don't succeed in solving the problem, I might open another question.
From Using the framework RSLs:
Note: You can point to the SWZ files that are hosted on the Adobe web site, rather than deploy your own SWZ files as RSLs. In this case, view the default entries for the RSLs in the flex-config.xml file to see how to link to them.
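Reading the question's config through that lens, the Adobe-only setup would presumably keep just the fpdownload entries and drop the local failover pair. A sketch, untested, assuming each additional rsl-url acts as a failover tried in order:
<runtime-shared-library-path>
  <path-element>/opt/flex4/frameworks/libs/framework.swc</path-element>
  <!-- Only the Adobe-hosted RSL; no local framework_4.0.0.14159.swz failover -->
  <rsl-url>http://fpdownload.adobe.com/pub/swz/flex/4.0.0.14159/framework_4.0.0.14159.swz</rsl-url>
  <policy-file-url>http://fpdownload.adobe.com/pub/swz/crossdomain.xml</policy-file-url>
</runtime-shared-library-path>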
You can, but you never should - adobe.com does go down sometimes, or the client may be allowed access to your site but not Adobe's (because of a corporate firewall, for instance).

Is it commonplace/appropriate for third party components to make undocumented use of the filesystem?

I have been using two third-party components for PDF document generation (in .NET, but I think this is a platform-independent topic). I will leave the companies' names out of it for now, but I will say they are not extremely well-known vendors.
I have found that both products make undocumented use of the filesystem (i.e. putting temp files on disk). This has created a problem for me in my ASP.NET web application, as I now have to identify the file locations and set permissions on them as appropriate. Since my web application is set up for impersonation using Windows authentication, this essentially means I have to assign write permissions to a few file locations on my web server.
Not that big a deal once I figured out why the components were failing, but... I see this as a maintenance issue. What happens when we upgrade our servers to an OS that changes one of the temporary file locations? What happens if the vendor decides to change the temporary file location? Our application will "break" without a single change to our code. Relatedly, if we have to stand this application up on a "fresh" machine (regardless of environment), we have to know about this issue and set permissions appropriately.
Unfortunately, the components do not provide a way to make this temporary file path configurable, which would at least make what is going on under the covers more explicit.
This isn't really a question that I need answered, but more of a kick-off for a conversation about whether what these component vendors are doing is appropriate, how it should be documented/communicated to users, etc.
Thoughts? Opinions? Comments?
First, I'd ask whether these PDF generation tools are designed to be run within ASP.NET apps. Do they make claims that this is something they support? If so, then they should provide documentation on how they use the file system and what permissions they need.
If not, then you're probably using an inappropriate tool set. I've been there and done that. I worked on a project where a "well known address lookup tool" was used, but the version we used was designed for desktop apps. As such, it wasn't written to cope with hundreds of requests - many of them simultaneous - and it caused all sorts of hard-to-repro errors.
Commonplace? Yes. Appropriate? Usually not.
Temp files are one of the appropriate uses, IMHO, as long as the component uses the proper %TEMP% folder or, even better, the built-in Path.GetTempPath/Path.GetTempFileName functions.
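To show what "doing it properly" looks like, a minimal sketch of the framework calls just mentioned (the file contents are, of course, made up):
using System;
using System.IO;

class TempFileDemo
{
    static void Main()
    {
        // Resolves the per-user temp folder, e.g. C:\Users\<user>\AppData\Local\Temp\
        string tempFolder = Path.GetTempPath();
        Console.WriteLine("Temp folder: " + tempFolder);

        // Creates a uniquely named zero-byte file in that folder and returns its path.
        string tempFile = Path.GetTempFileName();
        File.WriteAllText(tempFile, "scratch data");

        // A well-behaved component also cleans up after itself.
        File.Delete(tempFile);
    }
}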
In an ideal world, every third-party component would come with a Code Access Security description, listing in detail what is needed (and for what purpose), but CAS is possibly one of the most-ignored features of .NET...
Writing temporary files would not be considered outside the normal functioning of any piece of software. Unless it is writing temp files to a really bizarre place, this seems more likely to be something they never thought to document than something done to cause you trouble. I would simply contact the vendor, explain what you are doing, and ask if they can provide documentation.
Also, Martin makes a good point about whether it is an app that should run under ASP.NET or a desktop app.

Single ASP.NET site with multiple instances & web.configs

We have a legacy ASP.NET-powered site running on an IIS server. The site was developed by a central team and is used by multiple customers; each customer, however, has their own copy of the site's aspx files plus a web.config file. This is causing problems, as changes made by well-meaning support engineers to the copies of the source aspx files are not being folded back into the central source, so our code base is diverging. Our current folder structure looks something like:
OurApp/Source aspx & default web.config
Customer1/Source aspx & web.config
Customer2/Source aspx & web.config
Customer3/Source aspx & web.config
Customer4/Source aspx & web.config
...
This is something I'd like to change so that each customer has just a customised web.config file and all the customers share a common set of source files. So something like:
OurApp/Source aspx & default web.config
Customer1/web.config
Customer2/web.config
Customer3/web.config
Customer4/web.config
...
So my question is: how do I set this up? I'm new to ASP.NET and IIS, as I usually use PHP and Apache at home, but we use ASP.NET and IIS here at work.
Source control is used, and I intend to retrain the support engineers, but is there any way to avoid having multiple copies of the source aspx files? I hate that sort of duplication!
If you're dead set on the single app instance, you can accomplish what you're after using a custom ConfigurationSection in your single web.config. For the basics, see:
http://haacked.com/archive/2007/03/12/custom-configuration-sections-in-3-easy-steps.aspx
http://msdn.microsoft.com/en-us/library/2tw134k3.aspx
Example XML might be:
<YourCustomConfigSection>
  <Customers>
    <Customer Name="Customer1" SomeSetting="A" Another="1" />
    <Customer Name="Customer2" SomeSetting="B" Another="2" />
    <Customer Name="Customer3" SomeSetting="C" Another="3" />
  </Customers>
</YourCustomConfigSection>
Now, in your ConfigurationSection properties, expose Name, SomeSetting, and Another. When a property is accessed or set, use a condition (the request domain, or something else that uniquely identifies the customer) to decide which Customer entry to use (a sketch of the classes follows below).
With the proper implementation, the app developers don't need to be aware of what's going on behind the scenes. They just use CustomSettings.Settings.SomeSetting and don't worry about which Customer is accessing the app.
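A minimal sketch of the classes behind that XML; the names mirror the hypothetical example above, and the section would still need registering under <configSections>:
using System.Configuration;

// Maps <YourCustomConfigSection><Customers><Customer .../> from web.config.
public class CustomersSection : ConfigurationSection
{
    [ConfigurationProperty("Customers")]
    public CustomerCollection Customers
    {
        get { return (CustomerCollection)this["Customers"]; }
    }
}

[ConfigurationCollection(typeof(CustomerElement), AddItemName = "Customer")]
public class CustomerCollection : ConfigurationElementCollection
{
    protected override ConfigurationElement CreateNewElement()
    {
        return new CustomerElement();
    }

    protected override object GetElementKey(ConfigurationElement element)
    {
        return ((CustomerElement)element).Name;
    }

    public CustomerElement this[string name]
    {
        get { return (CustomerElement)BaseGet(name); }
    }
}

public class CustomerElement : ConfigurationElement
{
    [ConfigurationProperty("Name", IsKey = true, IsRequired = true)]
    public string Name { get { return (string)this["Name"]; } }

    [ConfigurationProperty("SomeSetting")]
    public string SomeSetting { get { return (string)this["SomeSetting"]; } }

    [ConfigurationProperty("Another")]
    public string Another { get { return (string)this["Another"]; } }
}

// Usage: map the request's domain to a customer Name however you see fit, then:
//   var section = (CustomersSection)ConfigurationManager.GetSection("YourCustomConfigSection");
//   var customer = section.Customers["Customer1"];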
I know it might seem annoying, but the duplication is actually a good thing. The problem here is with your process, not with the way the systems are set up.
Keeping the sites separate is actually a good thing. Whilst it looks like "duplication", it's actually not; it's separation. Changes being made to production code by your support engineers should be actively discouraged.
You should be looking at changing your process to change once, deploy everywhere. This will make everything a lot easier for you in the long run.
To actually answer your question: no, you can't do it. The reason is that web.config isn't designed to store user-level settings; it's designed to store per-application-instance settings. In your case, you need an application instance per user, which means separate config files.
For your system to work, you need to be able to tell the application up front which config file to use, which isn't possible without some sort of input from the user.
Use an external source control application and keep rolling out updates as required.
It isn't really a good idea to let your live site be updated by support engineers in real time anyway.
Depending on what is actually in the web.config, and which settings differ between customers, you could opt to use a single web.config and store the other customer-specific configuration options in a database or some other custom XML/text file. As long as the customer-specific settings in the web.config don't affect how IIS operates, and you are just using it to store values, this solution might work out well for you.
Thank you all again for your answers. After reading through them and having a think, what I think I will do is leave the multiple instances alone for now and try to improve our update process first. Then I will develop a new version of the application that keeps the user-configuration information in the database layer and picks the user based on the request domain or URL, as someone suggested. That way I can have a single application instance cleanly supporting multiple different client configurations.
Most of the client configuration data is really presentation- or data-source-related - nothing complicated. I think we ended up with multiple application instances mostly because the original programmer hadn't been expecting multiple customers and didn't design for that, so when someone came along later and added a second customer they just duplicated the application, which is wasteful as each instance is about 99.99% identical to the original.
I am implementing this as we speak.
In the main web.config, I have one item per installation. It points me toward the custom config file I built for each client (and toward the custom master page, CSS, images, etc.).
Using WebConfigurationManager.OpenWebConfiguration, I open the web.configs in their subdirectories. I determine which one to use via System.Web.HttpContext.Current.Request.Url.OriginalString, working out which URL called me. Based on that URL, I know which web.config to use.
From that point forward the clients all use the same codebase. They have their own databases too.
The idea of having to update 30-40 installations when we make an update scares me to death. We do not want to support 30-40 codebases, so there won't be customization beyond the master page, CSS, and images.
I wrote a custom class library that knows how to switch to the proper web.config and read the custom section I built with all our settings; a sketch of the idea is below.
The only issue I have now is the FormsAuthentication cookie. I need to be able to switch that as well; unfortunately, the property for its name is read-only.
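A minimal sketch of the switching idea described in this answer; the convention of mapping the request's subdomain to a client subdirectory is a hypothetical stand-in for whatever mapping the main web.config actually stores:
using System.Configuration;
using System.Web;
using System.Web.Configuration;

public static class ClientConfig
{
    // Opens the web.config belonging to the client that the current request's
    // URL identifies, e.g. customer1.mydomain.com -> the /Customer1 subdirectory.
    public static Configuration OpenForCurrentRequest()
    {
        string host = HttpContext.Current.Request.Url.Host;
        string clientFolder = "/" + host.Split('.')[0];

        // OpenWebConfiguration takes a virtual path within the site.
        return WebConfigurationManager.OpenWebConfiguration(clientFolder);
    }
}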
If I understand correctly, it sounds like you have multiple deployments (one for each client) where the only difference is the web.config, right?
First off, although I don't know your unique situation, I would generally urge you to stay with separate installs. It usually allows much more flexibility. Off the top of my head: are you ever going to have customizations, or different clients running different versions? Are you sure? The easiest way to stay flexible here is to keep going with separate installs.
In my opinion, it isn't ugly at all if your practices are aligned properly. Based on some things you mentioned, you have trouble in that area - obviously, possible source control buy-in/training issues. But you are aware of that. I would also take a hard look at your deployment procedures and so on. I have a feeling you might have further issues in that area, and I mean absolutely no offense.
That said, let's say you want to move forward with this.
You didn't say whether all the clients share a single common database, but I'm thinking not, since designing that type of system is often not worth the extra complexity (which can be severe in systems of any size), so people often opt to keep them separate.
What that means is that you have to store your connection string somewhere. Usually that would be web.config... so that seems to break our plan.
Really, the apparent elegance of this approach is almost always wildly offset by the challenges it introduces. If I thought about it hard enough, I could maybe find a way around this by introducing another database that intelligently manages connection strings, or by delving into keeping all your login info directly in web.config (which is possible but... not ideal). However, my gut says the work will be wasted, because some day you will end up going back to how you're doing it now.
Also: changing code directly in production is obviously not the best practice here. But if you are on a monolithic shared platform with any amount of traffic, that can never ever ever happen. Food for thought.
Let me know if I'm missing something!

"Bootstrapping" a remote swf into the application SecurityDomain (actionscript3)

My Flash (AS3/AIR) application currently uses a slightly unusual architecture (for a Flash app) to provide particular base classes for loaded content at runtime. The external content is published with 'stub' base classes, which are eclipsed by the 'real' base classes when it is loaded at runtime. I've heard this referred to by Adobe as bootstrapping (pdf), and it has been working very well for me until now. It's not unlike a DLL architecture, I believe, although I'm not qualified to say for sure.
Until now, the external content I have been loading has come from within the same SecurityDomain (the same sandbox), which allows me to easily load the content into a child ApplicationDomain. Unfortunately, as far as I can tell, ApplicationDomains that span SecurityDomains cannot be related - that is, I cannot make an AppDom of one SecurityDom the child of an AppDom from another SecurityDom.
But now I need to load this external content from outside my application sandbox. There are plenty of ways to achieve communication across SecurityDomains - although most of them are very limited, AIR's sandboxBridge API is probably the most powerful. Unfortunately, none of these communication methods allows me to achieve this bootstrapping architecture.
I notice that the LoaderContext object has a securityDomain property, but Flash security prohibits 'local' SWFs from touching it (it throws a SecurityError or similar).
Flex's SWFLoader has a trustContent property that looks promising, but I'm inclined to assume it has the same restrictions as setting the securityDomain in the Loader's LoaderContext.
I suspect I'll have to redesign (which won't be easy), but I thought I'd just check here that I've not missed something in my research.
So ... any ideas or pearls of wisdom? I'd especially freaking love it if someone from Adobe who works on the Security model could gimme a definitive "yes/no it can/can't be done"...
Thanks in advance!
Addendum: I've since decided to re-design the architecture so that the bootstrapping all happens on the external domain. My question still stands, however, out of curiosity.
Technically speaking, wouldn't it be possible for your AIR application to simply save the external SWFs inside your application directory and load them from there, so that they live in the same security sandbox?
However, there are some really obvious reasons why this would be Bad Karma, so it seems like any solution necessarily raises the question of whether trying to put local application content and remote untrusted content into the same app domain is the right architectural approach..?
At the time of writing, I determined that you cannot load an ApplicationDomain into your own SecurityDomain if it is from another domain, even with AIR.
By design, I guess.
