I'm trying to use Meteor from behind a proxy. I've tried setting the proxy environment variables as the docs describe, but it hasn't helped:
SET HTTP_PROXY=http://user:password@1.2.3.4:5678
SET HTTPS_PROXY=http://user:password@1.2.3.4:5678
meteor update
Instead, I want to bypass the proxy for the specific URLs that Meteor needs. So far I have identified atmospherejs.com and registry.npmjs.org (not sure if Meteor uses the latter directly, but we are also working with Node separately).
Are there any other URLs that Meteor will need?
Here are some:
docs.meteor.com
s3-1.amazonaws.com
activity.meteor.com
warehouse.meteor.com
registry.npmjs.org
packages.meteor.com
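Rather than whitelisting each URL in the proxy itself, a NO_PROXY exclusion list covering those hosts may be simpler (on Windows: SET NO_PROXY=...). Whether Meteor honors NO_PROXY is an assumption to verify; curl and npm do:

```shell
# Sketch: exclude the Meteor/npm hosts from proxying.
# NO_PROXY support varies by tool; treat this as an assumption to test.
NO_PROXY="atmospherejs.com,packages.meteor.com,warehouse.meteor.com,activity.meteor.com,registry.npmjs.org,s3-1.amazonaws.com,docs.meteor.com"
export NO_PROXY
echo "$NO_PROXY"
```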
I can't imagine that it's a specific domain causing your problem, though.
I have an Ubuntu VM running Docker, with nopCommerce 4.30 and nginx. I wanted to add an authentication plugin, but I found out that there is a known issue (https://github.com/nopSolutions/nopCommerce/issues/5584) that prevents these plugins from working behind a reverse proxy. I am not able to update to 4.50 (the version where the issue is fixed) or to make changes to the current 4.30 image, other than changing configs like web.config and appsettings.json.
I need a way to fix this return_url address issue or some way to work around it.
One idea I wanted to try is using nginx to replace http with https in the request, but I do not know how to do that or whether some sort of check would prevent it.
Update: The nginx https replacement did not work; there seems to be some kind of anti-tampering built in.
The HttpsRequirementAttribute action filter shows how nopCommerce handles the HTTP-to-HTTPS redirect. Look at this location:
src/Presentation/Nop.Web.Framework/Mvc/Filters/HttpsRequirementAttribute.cs
There you will see the switch statement for HTTP to HTTPS. We commented out this code and handled the HTTP-to-HTTPS redirect from the load balancer and web.config instead.
Also, in appsettings.json, we changed the settings to something like below:
"Hosting": {
  "UseHttpClusterHttps": false,
  "UseHttpXForwardedProto": false,
  "ForwardedHttpHeader": ""
},
Hope this information helps you dig deeper into the issue.
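If patching 4.30 is not an option, the standard reverse-proxy fix is to have nginx pass the original scheme to the app via the X-Forwarded-Proto header and to set "UseHttpXForwardedProto": true in appsettings.json. A minimal nginx sketch (server name, certificate lines, and upstream port are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name shop.example.com;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        # Tell nopCommerce the client connected over https, so
        # generated URLs such as return_url keep the https scheme
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```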
Problem: CSS was being applied to the site, but after switching to a reverse proxy and adding a security cert, while changing nothing else, the CSS no longer loads.
Details: Initially the website was using keter only with no security cert or reverse proxy. The site worked fine as intended. The yesod-devel command correctly renders the site. Once compiled, the styling does not appear on the final site. Before the switch to reverse proxy, everything worked as desired. No other changes were made except to config files related to the reverse proxy and security cert.
Dependencies: The dependencies are listed in this gist: https://gist.github.com/xave/9cdf396c1918c129aff927ab8999d456.
Workflow: The main dev machine is macOS. The server is Ubuntu. The workflow is to develop and preview on macOS, then to compile and deploy on Ubuntu.
Thoughts: All the CSS is on the page upon inspection; it just isn't applying. This is true in multiple browsers and even for people who had never visited the site before (so it is not a caching issue).
Any help would be appreciated and please let me know if you need additional information.
I have a Ghost site at https://msclouddeveloper.com
The navigation links, however, point to an azurewebsites.net address, even though the Ghost CMS shows the correct custom domain (msclouddeveloper.com).
The websiteUrl appsetting is also set correctly.
What could be wrong? I've already restarted the app service.
Don't know if I missed this when reading the setup guide for custom domains, but you need to add two AppSettings:
websiteUrl http://www.msclouddeveloper.com
websiteUrlSSL https://www.msclouddeveloper.com
Then restart the App service.
Another possible issue is the environment setup you have enabled. See this documentation for more info: https://docs.ghost.org/docs/config
For custom domain SSL links to work, you need to have the production environment enabled. I believe you set this in the file iis-node.yaml, but it was already correct in my case.
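For comparison, in a self-hosted Ghost install the same setting is the url key in config.production.json; the Azure websiteUrl/websiteUrlSSL AppSettings above map onto it (mapping details are my assumption, so verify against the Ghost config docs):

```json
{
  "url": "https://www.msclouddeveloper.com"
}
```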
I want to simply POST and PUT image files to a server, test.com, using apache2. This should result in the image file being stored at the desired location, say /srv/web/images/.
What would be a working vhost configuration? Are there any modules that need to be activated? I am using apache2 on an Ubuntu 12.04 server.
I think by default this is disabled in Apache for obvious reasons; nobody should be able to write to a server in the default configuration.
I want to simply secure it with http auth ("user1" / "pass1").
All the documentation and questions I find deal with PHP, but I thought this should be possible using simply a REST URI and apache2, without PHP, CGI, or a C program.
(Note: I am interested in a solution without WebDAV, though I am not sure whether the WebDAV module is what provides HTTP PUT, which would be OK. And this question has nothing to do with forms or browsers; as an example, the upload could use curl.)
Update: I found this message:
http://mail-archives.apache.org/mod_mbox/ant-ivy-user/201004.mbox/%3C4BBCA487.8000401@nitido.com%3E
It seems there once was a module, mod_put, which is no longer in the Ubuntu repos, and that WebDAV has the PUT and DELETE functionality built in.
If so, I still don't understand how to write the right vhost configuration for simple file PUT operations.
I think nowadays you configure Apache with WebDAV for this kind of thing.
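A vhost sketch of that WebDAV approach (Apache 2.2-style syntax as on Ubuntu 12.04; paths, domain, and credentials are placeholders). Enable the modules first with a2enmod dav dav_fs auth_basic:

```apache
<VirtualHost *:80>
    ServerName test.com
    DocumentRoot /srv/web

    <Directory /srv/web/images>
        # WebDAV is what provides PUT (and DELETE) over plain HTTP
        Dav On
        AuthType Basic
        AuthName "images"
        # Created with: htpasswd -c /etc/apache2/dav.passwd user1
        AuthUserFile /etc/apache2/dav.passwd
        Require valid-user
    </Directory>
</VirtualHost>
```

An upload without any form or browser then looks like: curl -u user1:pass1 -T photo.jpg http://test.com/images/photo.jpg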
I am using node.js on my rackspace server to serve my various applications. (Using node-http-proxy).
However, I would like to start a WordPress blog. The only way to serve the blog is via Apache (or nginx).
Is there a way to serve my WordPress blog from a Node.js application itself?
You need some server running to execute the PHP. Node is JavaScript.
Whether that's Apache, nginx/php-fpm, or just php-fpm, you need something to actually run the WordPress code, then use the same proxying system you are using now.
One option is to continue to use WordPress as you normally do, but instead of writing templates that output HTML, you make them output JSON. With this minor trick, you have suddenly created your own API for your WordPress content. In contrast with modules that expose WordPress's complete set of methods, this creates your very specific output, tailored to your needs.
To consume your JSON output, you set up a small Node.js server that forwards each call directly to your WordPress installation, takes the (JSON) response, and merges it with your HTML using whatever JavaScript template engine you like. You also gain speed and control, since you can cache the JSON result pretty easily on the Node side.
I've written a blog post about this if you'd like to read more, and also created a Node.js Express middleware to help set up the Node side.
http://www.1001.io/improve-wordpress-with-nodejs/
You can try the express-php-fpm package.
It combines Express (a Node.js server) and a FastCGI gateway to serve PHP requests.
I found this Node module while searching for WordPress + Node:
https://github.com/scottgonzalez/node-wordpress
I haven't tried it myself, but if you know what you're doing you might want to give it a go.
I recently needed to get a server within an electron app to serve PHP.
I started with grunt-php by Sindre Sorhus. The main change I made was to remove the code that kills the server process when grunt is done, instead instantiating the PHP class from JS and calling the process as needed.
Ultimately, it was very easy to adapt grunt-php to enable PHP on a node.js server.
WordPress now has an "official" (to be precise: open source, under Automattic's GitHub repo) way to do this: wpcom.js. From that GitHub page:
Official JavaScript library for the WordPress.com REST API. Compatible with Node.js and web browsers.
The essence is to call the WordPress REST API from JS.