Using stack behind proxy - pem

I work behind a proxy that uses a .pem certificate. How would I create a global configuration for stack to use? Can you give an example?

This is not currently supported. I've opened an issue to track a discussion of the feature: https://github.com/commercialhaskell/stack/issues/1922
I think we'd really rather rely on the OS's support for certs / proxies. Is that not working out?
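Until there is first-class support, a workaround in the spirit of "rely on the OS" is to trust the proxy's certificate at the OS level and point stack at the proxy via the standard environment variables, which stack's HTTP client honors. A sketch for Debian/Ubuntu, where the file name, host, and port are placeholders for your own setup:

```shell
# Trust the corporate proxy's CA certificate system-wide
# (proxy-ca.pem and proxy.example.com:3128 are placeholders).
sudo cp proxy-ca.pem /usr/local/share/ca-certificates/proxy-ca.crt
sudo update-ca-certificates

# Point stack (and most other tools) at the proxy via standard env vars.
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128

stack setup   # downloads should now go through the proxy
```

On macOS the equivalent first step is adding the certificate to the system keychain via Keychain Access.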

Related

Is Shiny Server the same as Nginx?

We were doing some research on the configuration of Shiny Server when we noticed that the config syntax is virtually the same as Nginx's. Can anyone confirm this? If it's true, we'll plan a different stack architecture.
Shiny Server and Shiny Server Pro are not based on Nginx; they use Node.js to provide their web server functionality. Please see the corresponding answer in the RStudio FAQ.
I just had a bit of a poke around their GitHub repository and it looks to be pretty custom code; they're just reusing the syntax/variable naming of Nginx.
Config parsing seems to be done in lib/router/config-router.js, where you can find references to things like 'log_dir'.
I'd therefore probably put some sort of proxy between it and the internet (if that's your plan).
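If you do put a proxy in front of it, a minimal nginx sketch might look like the following; the hostname is a placeholder, and 3838 is Shiny Server's default listening port:

```nginx
server {
    listen 80;
    server_name shiny.example.com;   # placeholder hostname

    location / {
        proxy_pass http://127.0.0.1:3838;   # Shiny Server's default port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Shiny uses SockJS/WebSockets, so connection-upgrade
        # headers are needed for the app to work through the proxy.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```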

How do you set up a caching proxy on OS X?

When doing web development, there are times external resources are referenced in a web page (e.g. Google Fonts). I would like to cache some of these calls on my MacBook, but not cache the code I'm working on.
The goal is speed of development and a workaround when working on slow networks (e.g. 3G via a hotspot).
I came across Squid proxy but have not been able to configure it at all. I'm open to other suggestions to achieve this goal. Any ideas?
I'm using SquidMan: http://squidman.net
It comes with a nice user interface.
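Under the hood SquidMan just manages a squid.conf, so the goal above can be sketched with a couple of Squid directives; the domains are examples, and `refresh_pattern` takes a regex, a minimum age in minutes, a percentage, and a maximum age in minutes:

```
# Cache objects from third-party font/CDN hosts aggressively.
refresh_pattern -i fonts\.googleapis\.com 1440 100% 10080
refresh_pattern -i fonts\.gstatic\.com    1440 100% 10080

# Never cache local development traffic.
acl localdev dstdomain localhost 127.0.0.1
cache deny localdev
```

Then point your browser's (or the system's) HTTP proxy setting at 127.0.0.1 on whatever port SquidMan is configured to listen on.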

Hosting static content on different domain from webservices, how to avoid cross-domain?

We've recently been working on a fairly modern web app and are ready to begin deploying it for alpha/beta and getting some real-world experience with it.
We have ASP.NET-based web services (Web API) and a JavaScript front-end which is 100% client-side MVC using Backbone.
We have purchased our domain name, and for the sake of this question our deployment looks like this:
webservices.mydomain.com (Webservices)
mydomain.com (JavaScript front-end)
If the JavaScript attempts to talk to the web services on the sub-domain we blow up with cross-domain issues. I've played around with CORS but am not satisfied with the cross-browser support, so I'm counting it out as an option.
On our development PCs we have used an IIS reverse proxy to forward all requests to mydomain.com/webservices to webservices.mydomain.com, which solves all our problems as the browser thinks everything is on the same domain.
So my question is: in a public deployment, how is this issue most commonly solved? Is a reverse proxy the right way to do it? If so, are there any hosted services that offer a reverse proxy for this situation? Are there better ways of deploying this?
I want to use CloudFront CDN as all our servers/services are hosted with Amazon, I'm really struggling to find info on if a CDN can support this type of setup though.
Thanks
What you are trying to do is cross-subdomain calls, not entirely cross-domain.
There are tricks for that: http://www.tomhoppe.com/index.php/2008/03/cross-sub-domain-javascript-ajax-iframe-etc/
You asked how this issue is most commonly solved. My answer: it is commonly AVOIDED. In the real world you would set up your domains so that you don't need such workarounds just to get your application running, or you would set up a proxy server to forward the calls for you. JSONP is also a hack-ish solution.
To allow this web service to be called from script, using ASP.NET AJAX, add the following line to the first web service's code-behind:
[System.Web.Script.Services.ScriptService]
You can simply use JSONP for AJAX requests then cross-domain is not an issue.
If AJAX requests return some HTML, it can be escaped into a JSON string.
The second one is a little bit awkward, though.
You have two or three layers to wire up.
In the web service code-behind class, add this attribute: <System.Web.Script.Services.ScriptService()> _
You may also need to add this in the system.web node of your web.config:
<webServices>
  <protocols>
    <add name="AnyHttpSoap"/>
    <add name="HttpPost"/>
    <add name="HttpGet"/>
  </protocols>
</webServices>
On the client side:
- Add a web reference to the service on the subdomain (e.g. webservices.mydomain.com/svc.asmx); Visual Studio makes the "proxy class".
- Add functionality in the master page's/page's/control's code-behind.
- Simply call these functions from the client side.
You can use AJAX functionality with the ScriptManager or use another system like jQuery.
If your main website is compiled against .NET 3.5 or older, you need to add a reference to the System.Web.Extensions assembly and declare it in your web.config file.
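A sketch of that declaration for a .NET 3.5 site; the version and public key token shown are the standard ones for System.Web.Extensions 3.5:

```xml
<system.web>
  <compilation debug="false">
    <assemblies>
      <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
    </assemblies>
  </compilation>
</system.web>
```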
If you have the bandwidth (network I/O and CPU) to handle this, a reverse proxy is an excellent solution. A good reverse proxy will even cache static calls to help mitigate the network delay introduced by the proxy.
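That idea can be sketched in nginx, mirroring the IIS rewrite used in development: requests to /webservices are proxied to the API host while everything looks same-origin to the browser. Hostnames and paths below are placeholders:

```nginx
server {
    listen 80;
    server_name mydomain.com;

    # Static JavaScript front-end.
    location / {
        root /var/www/frontend;   # placeholder path
    }

    # Same-origin path that is really the API on another host.
    location /webservices/ {
        proxy_pass http://webservices.mydomain.com/;
        proxy_set_header Host webservices.mydomain.com;
    }
}
```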
The other option is to set up the proper cross-domain policy files and/or headers. Doing this with some cloud providers can be hard or even impossible. I recently ran into issues with font files and IE not being happy with cross-domain calls. We could not get the cloud storage provider we were using to set the correct headers, so we hosted the fonts locally rather than deal with a reverse proxy.
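For the font-file case specifically, when you do control the server, the fix is a CORS response header. In IIS 7+ that can be set in web.config; the wildcard below is illustrative, and you would normally lock it down to your own domain:

```xml
<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="Access-Control-Allow-Origin" value="*" />
    </customHeaders>
  </httpProtocol>
</system.webServer>
```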
easyXDM is a cross-domain JavaScript library that may be worth exploring. It makes use of standards when the browser supports them, and abstracts away the various hacks required when the browser doesn't support the standards. From easyXDM.net:
easyXDM is a Javascript library that enables you as a developer to
easily work around the limitation set in place by the Same Origin
Policy, in turn making it easy to communicate and expose javascript
API’s across domain boundaries.
At the core easyXDM provides a transport stack capable of passing
string based messages between two windows, a consumer (the main
document) and a provider (a document included using an iframe). It
does this by using one of several available techniques, always
selecting the most efficient one for the current browser. For all
implementations the transport stack offers bi-directionality,
reliability, queueing and sender-verification.
One of the goals of easyXDM is to support all browsers that are in
common use, and to provide the same features for all. One of the
strategies for reaching this is to follow defined standards, plus
using feature detection to assure the use of the most efficient one.
To quote easy XDM's author:
...sites like LinkedIn, Twitter and Disqus as well as applications run
by Nokia and others have built their applications on top of the
messaging framework provided by easyXDM.
So easyXDM is clearly not some poxy hack, but I admit it's a big dependency to take on in your project.
The current state of the web is that if you want to push the envelope, you have to use feature detection and polyfills, or simply force your users to upgrade to an HTML5 browser. If that makes you squirm, you're not alone, but polyfills are a kind of temporary evil needed to get from where the web is to where we'd like it to be.
See also this SO question.

IIS 7 replication of web site

Is there any way to replicate a website across different servers for load balancing, so that some requests are served from one server and others from another?
Did you at least Google for an answer? This subject is well documented all over the web. For a start, check out HTTP Load Balancing using Application Request Routing on iis.net (excellent source for anything related to IIS anyway). The blog post IIS7 Load Balancing & Routing Module Now Available! on MSDN also contains a lot of useful links. Instead of ARR you can also use pretty much any kind of load balancer (e.g. HAProxy).
To make the same content available to all servers in your farm you can simply use a Windows based file server or any kind of NAS with SMB file sharing. IIS allows you to specify the credentials that will be used when connecting to the file share.
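As a sketch, pointing a site's root at a UNC share with stored credentials can be done with appcmd; the share path, site name, and account below are placeholders:

```shell
REM Point the site's root at the shared content and supply the
REM credentials IIS should use when connecting to the share.
%windir%\system32\inetsrv\appcmd set vdir "Default Web Site/" ^
    -physicalPath:\\fileserver\wwwroot ^
    -userName:DOMAIN\iis_share_user -password:secret
```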
Yeah, there are heaps of solutions for doing this. The one that Stack Overflow uses is called HAProxy: http://haproxy.1wt.eu/
Typically what you need is a form of reverse proxy that can pipe your requests to the various servers it knows about.
As an aside - Interesting link about SO architecture. http://highscalability.com/blog/2011/3/3/stack-overflow-architecture-update-now-at-95-million-page-vi.html
Yes, there is Web Farm Framework http://www.iis.net/download/WebFarmFramework

Wrong protocol for crossdomain.xml in Flex app

I've changed the protocol for my Flex app from https to http, and Flash Player still wants to download the crossdomain.xml using https, though with the port for http.
The app is accessed at http://domain01:8080/flex and it wants to get https:..samedomain..:8080/crossdomain.xml (at https:..samedomain..no_port/flex it works fine).
Anyone any idea why?
Thanks a lot,
Daniel
No direct answer, as I haven't tried this scenario of specifying a non-default port, but here are a couple of pieces of info that might lead you to an answer:
http://learn.adobe.com/wiki/download/attachments/64389123/CrossDomain_PolicyFile_Specification.pdf?version=1
This might be of interest:
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM
  "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <allow-access-from domain="*.example.com" to-ports="507,516-523"/>
</cross-domain-policy>
or this:
As of Flash Player 10,0,12,0, site-control's permitted-cross-domain-policies default for non-socket policy files is "master-only".
Maybe try an older version of Flash Player to see if something in the changes from 9 to 10 is causing the issue; then finding the change in the changelogs might be easier. Or perhaps it's a bug in the new version.
Good luck
Shaun
Flex (at least 3.5, AFAIK) gets a bit of an identity crisis when you change the port and use HTTPS. The security model depends on the port. I do not know the exact reason for the problem, but my solution was to load the crossdomain file in your app explicitly:
Security.loadPolicyFile('https://mydomain:port/crossdomain.xml');
When you run into crossdomain issues, it's worth remembering that by using the Security class, you can always take explicit control over what crossdomain.xml file is loaded (in fact, the policy file can have any name you want). The default behavior of loading the policy file from the root of a server can often be too restrictive when dealing with more complex, real-world cases (with load-balancing or reverse proxies, for instance).
Try using:
Security.loadPolicyFile(<URI to the policy file goes here>);
The ASDocs are here and explain it quite well.
By taking control of how policies are loaded, you can gain more freedom and take a lot of the guesswork out of what can otherwise be a painful, frustrating experience. The Flash Player allows you to load multiple policy files which is handy if you need to integrate with more than one service layer (e.g. on one host through HTTPS and another through HTTP).
Good luck,
Taylor