I tried to deploy my otherwise working Flex app on a web server (Tomcat 6). It threw a Channel.Security.Error. After some research, I became aware that a Flash movie loaded from flash_movie_domain will not be able to load resources from any other domain. Some suggested adding a crossdomain.xml. However, the crossdomain.xml route doesn't quite make sense to me.
In this case, I am loading resources from a third party web site. My understanding is that I need this third party website to include a crossdomain.xml in their root directory in order for my app to function. The third party web service is provided as is; I will not be able to change what's given. Since the third party is providing public access, it already explicitly gives permission to the general public. Adding a crossdomain.xml to their root seems like a redundant act?
At the end of the day, I need to figure out a way to access the third party web service from a flash movie loaded from my domain. Thanks.
It sounds like you already have your answer.
This third party web site will need to add a crossdomain.xml file that allows the Flash Player to access data from their domain.
I'm unclear how this third party web site has provided you permission to access their data. But the Flash Player is placed in a sandbox by the browser. The crossdomain.xml file gives permission for the Flash Player to move out of its sandbox for the purpose of accessing the remote domain.
There is nothing redundant about saying something can be done and providing the technical tools to help make it happen.
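For reference, a minimal crossdomain.xml placed at the third party's web root would look something like this (just a sketch; which domains they allow is their call):

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <!-- Allow SWFs served from any domain to read data from this server;
       a stricter policy would list specific domains instead of "*" -->
  <allow-access-from domain="*" />
</cross-domain-policy>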
Your alternative is to not access the site from Flash. You might be able to use an intermediary proxy to retrieve the data and send it back to Flex. But it depends on the type of data.
We're developing a plugin for ASP.NET web apps that can, ideally, be dropped into the folder that makes up the public facing part of our client's web site, i.e. a foo.aspx file that becomes publicly accessible at, say, /foo.aspx (and can thus be accessed by our browser component written in JS and loaded into the client's website).
However, we need something we're not really sure how to do idiomatically, which is the ability to pass a few configuration parameters to foo.aspx in a way that, preferably, reuses whatever configuration mechanism our client is already using (we're assuming a sufficiently recent version of IIS), or at least something that's considered standard and can be applied to any IIS deployment.
Is the IIS metabase something to look at? We need a way to pass an SSL client certificate file (or a path to it) to foo.aspx so that it can talk back to our HTTP REST API over a secure channel. It's also not entirely obvious where a good, standard disk location to drop the certificate file itself would be.
I created a component in Flex which auto completes a couple of text inputs as users type an entry. When running the application from Flex, everything works fine. However, after I compile the application and load it, the auto complete does not work. Here is some background information.
Created in Adobe Flash Builder 4.5.
Web Application is running on an internal network.
The service which the auto complete uses is an external service.
The internal server which hosts the web application can load the URL of the external service just fine.
I am not sure if this is a permissions issue or what. Any insight would be appreciated.
I had similar problems receiving data from a web service. If the crossdomain file is not where it is supposed to be (webservice.domain.com/crossdomain.xml), you will receive a 404 error, so it does not sound like that is your issue. If your crossdomain file does not contain the right tags, however, it won't throw an HTTP error, but it won't work either.
It won't work properly by default if you are going from an HTTP server (where your application resides) to an HTTPS server (where your service is). Allowing that is typically bad security practice, but if you decide it is OK, you can use secure="false" on the allow-access-from tag.
Also, you may need to include both the allow-access-from tag and the allow-http-request-headers-from tag to get the data you are looking for.
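Putting that together, a wide-open policy file using both tags (and the secure="false" attribute mentioned above) might look like this sketch:

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <!-- secure="false" lets an HTTP-hosted SWF talk to this HTTPS server;
       weigh the security trade-off before shipping it -->
  <allow-access-from domain="*" secure="false" />
  <allow-http-request-headers-from domain="*" headers="*" secure="false" />
</cross-domain-policy>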
Here is the crossdomain policy file specification from Adobe; it is a good resource for figuring out which attributes are required for each tag: http://www.adobe.com/devnet/articles/crossdomain_policy_file_spec.html.
Good luck!
I have a Flex app that is hosted on my server. It runs off an AMFPHP + MySQL stack.
I have had inquiries from potential clients who want to "white label" the product. Part of this means that they would want it to appear that the app is running off their server. So their clients would login at www.theirsite.com instead of www.mysite.com.
I obviously don't want to give them the actual app... but are there ways of letting their server redirect to mine without the user actually knowing?
I don't have extensive experience with Flex, but I've worked a lot with Flash. Several ideas come to mind:
Create a wrapper: a SWF that loads your app. With the proper security settings, it should work. Check out this article on crossdomain.xml and the specs for Security.allowDomain() (there's a sketch of such a policy file after this list).
You could simply embed the SWF from your site, similar to a YouTube video.
If that is not satisfactory and your hosting services allow it, you can create a DNS A record or a CNAME record for a subdomain on their site, but you would be handling all the traffic that page gets.
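For the wrapper idea, here is a sketch of the crossdomain.xml you would serve from www.mysite.com to grant the client's domain access (domain names taken from the question; the loaded SWF may also need a Security.allowDomain() call, as the linked specs describe):

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <!-- Let the wrapper SWF on the client's domain load data from yours -->
  <allow-access-from domain="*.theirsite.com" />
</cross-domain-policy>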
Probably the easiest way would be for them to alter their DNS record to make a CNAME entry (alias) that is a subdomain of theirsite.com:
yourapp.theirsite.com => www.mysite.com
Often, out of sheer desperation, I will end up enabling "Everyone" access on a folder that a web app is accessing (perhaps for file creation, reading, etc.) because I can't figure out which user account to grant access to.
Obviously, this is a very bad thing to do.
Is there a way to determine what account IIS is using at that exact moment to access folders (and perhaps other resources like SQL Server, etc)?
Are there logs I can look at that will tell me? Or perhaps some other way?
I usually use Windows Auth without impersonation. Not sure if that information is relevant.
Another, more general approach would be to use a tool like Process Monitor and add a path filter for anything that starts with the root of the website (i.e. c:\inetpub\wwwroot). You then have to add Username as a column by right-clicking on the column headers, but once you do that, the w3wp.exe process should show up whenever you try to access the website, and it will show which user account is being used. This technique should work for all file access permission issues.
If you don't use impersonation, the application pool identity is used in most cases, but accessing SQL Server and UNC files is handled slightly differently.
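For context, impersonation is a web.config setting; with Windows auth and no impersonation (as in the question), the relevant lines look like this sketch:

<configuration>
  <system.web>
    <!-- Windows authentication without impersonation: file system
         access runs as the application pool identity, not the end user -->
    <authentication mode="Windows" />
    <identity impersonate="false" />
  </system.web>
</configuration>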
This MSDN article has all the information in one place, but you really need to set aside a lot of time to digest every detail:
http://msdn.microsoft.com/en-us/library/ms998351.aspx
Use Sysinternals Process Monitor to see what is actually happening.
http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx
I am trying to get a grasp on how to handle updates to a live, functioning ASP.NET (2.0 or greater) Application while there are users on the site.
For example, suppose SO is an ASP.NET Web Application project. The project code compiles down to the single .DLL in the BIN folder. Now, there are constantly users on SO, so what would happen to users' actions/sessions if you would use the Visual Studio .NET "Publish" feature (or just FTP everything again manually) while they are using the site?
Would creating an ASP.NET Web Site, instead, alleviate any problems that may or may not exist with the scenario above? I am beginning to develop a web site as a user-driven Web Application, and I want to make sure that my inexperience with this would not potentially annoy the [potentially] many users that I [want to] have 24/7.
EDIT: Sorry, I should have put this in a more exact context. Assume that this site is being hosted by a web hosting service with monthly fees. I won't be managing the server itself, just what the web host allows as a user of their services.
I create two Web sites in IIS. One is the production Web site, and the other is a static Web site with an HttpHandler that sends all requests to a single static "We're updating" HTML page served with an HTTP 503 Service Unavailable. Typically the update Web site is turned off. When it's time to update, we stop the production Web site, start the update Web site, and now we can fiddle with the production Web site all we want without worrying about DLLs being locked or worker processes needing to be spun down.
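As a rough sketch of the update site's plumbing, its web.config could route every request to a catch-all handler (Maintenance.OfflineHandler is a hypothetical IHttpHandler that writes the static page with a 503 status):

<configuration>
  <system.web>
    <httpHandlers>
      <!-- Send every request to a handler that serves the static
           "We're updating" page with HTTP 503 Service Unavailable;
           Maintenance.OfflineHandler is a hypothetical type -->
      <add verb="*" path="*" type="Maintenance.OfflineHandler, Maintenance" />
    </httpHandlers>
  </system.web>
</configuration>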
I started doing this because
App_Offline.htm really does not work well in Web Gardens, which we use.
App_Offline.htm serves its page as 404, which is bad if you're down for a meaningful period of time.
We can start the upgraded production Web site with modified settings (only listening on localhost), where we can do a last-minute acceptance/verification that everything is working before we flip the switch, turning off the update Web site and re-enabling the production Web site.
Things this does not solve include
Any maintenance that requires a restart of the server--you still have downtime where no page is served.
Any maintenance that diddles with the .NET runtime, like upgrading to the latest service pack.
Other approaches I've seen include
Having two servers. Have the load balancer send all requests to one server while you upgrade the other; then rinse and repeat. Most of us don't have this luxury.
Creating multiple bin directories, like bin-1.0.0.0 and bin-1.1.0.0 and telling ASP.NET which bin directory to use in the web.config file. (One advantage of this is that reverting to a previous binary is just editing a config file. A disadvantage is that it's harder to revert resources that don't end up in your binaries, like templates and images and such.) I don't remember how this actually worked--I think the application did some late assembly loading in its Global.asax based on its own web.config section (since you touched the web.config, the app had restarted, so it was okay).
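I don't know if this is exactly what that application did, but .NET does have a standard knob for pointing assembly probing at a versioned folder, something like this sketch in web.config:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <!-- Probe the versioned folder for assemblies; switching versions
           is a one-attribute edit (which also restarts the app) -->
      <probing privatePath="bin-1.1.0.0" />
    </assemblyBinding>
  </runtime>
</configuration>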
If you find a better way, let me know!
Changing to the ASP.NET web site model won't have any effect, as the recycle will still happen. Some changes that trigger it for sure: web.config, global.asax, app_code.
After the recycle, users will still be logged in, because ASP.NET will just revalidate the authentication ticket. That is, given you use a fixed machine key; otherwise it will change on each recycle. Fixing the key is something you want to do anyway, as other stuff can break if the key changes across requests, e.g. viewstate validation and embedded resources (decryption of the URL fails).
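A minimal sketch of fixing the machine key in web.config (the key values are placeholders; generate your own):

<configuration>
  <system.web>
    <!-- A fixed machine key keeps auth tickets and viewstate valid
         across recycles; the bracketed values are placeholders -->
    <machineKey validationKey="[128 hex chars, placeholder]"
                decryptionKey="[48 hex chars, placeholder]"
                validation="SHA1"
                decryption="AES" />
  </system.web>
</configuration>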
If you can put the session out of process, like in SQL Server, you will avoid losing it. If you can't, your code will have to account for that. There are plenty of scenarios where you can avoid using session, and others where you can wrap it and re-retrieve the info if the session was cleared. This should leave you with a handful of specific cases that you know can give trouble to the users; for those, you do some of the things others have already suggested.
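And a sketch of moving session state out of process into SQL Server (the server name is a placeholder, and the ASPState database has to be set up separately with aspnet_regsql):

<configuration>
  <system.web>
    <!-- Session survives app-domain recycles because it lives in
         SQL Server rather than in the worker process -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="data source=YourSqlServer;Integrated Security=SSPI" />
  </system.web>
</configuration>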
One solution could be to deploy your application into a load balanced environment (web farm).
When deploying a new version you would use the load balancer to redirect requests to the server you are not deploying to.
App_offline.htm is a great solution for this, I think.
On SO we see an "application currently unavailable" page when a deployment begins.
I am not sure how SO handles it, but we usually put up a holding page, so whatever the user has done (adding a question or answering one) does not get updated. As soon as they submit something, they see a holding page asking them to try again after some time.
And if I am the user, I usually press the back button to make sure what I entered is saved in the browser history so that I can post it later.
Some sites I work on are in a clustered environment, so I take one server offline and inform the load balancer that it will not be available; once I make sure the new version is working fine, I make it live. I do the same thing for the next server.
Do we have any other option?
It is not a technical solution, but set up a scheduled maintenance window. You can announce it in advance, giving your user base fair warning that the application may not be available during that time frame.