Today I updated my web server, and while I do have a backup, I would like to resolve the following issue.
After rebooting following the apt-get update and apt-get upgrade, NGINX started ignoring the virtual hosts.
The domains are still operational, but they are apparently all directed to /usr/share/nginx/www/html/index.php, which contains only 'phpinfo();' inside PHP tags.
The 'default' virtual-host file and the original conf settings are not present, only my customized nginx.conf, so I have already run out of possible causes I can think of.
After the upgrade it seems that some files in /etc/nginx/ were indeed changed. I replaced all the files in there with my backup and it works just fine again.
Not much of an answer, maybe, but if you have a backup, then you can fix the issue.
And it at least gives you a starting point for where to look if you want to fix the issue by hand.
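A minimal sketch of that compare-and-restore step, assuming the backup lives in /root/nginx-backup (that path is just an example):

# See which files the upgrade actually changed (backup path is hypothetical)
diff -ru /root/nginx-backup/ /etc/nginx/
# Restore the known-good configuration, test it, and reload
cp -a /root/nginx-backup/. /etc/nginx/
nginx -t && systemctl reload nginx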
I've been trying so hard to resolve this myself over the last week or so, but I'm banging my head against a brick wall now!
Long story short, I'm getting a CORS error, and I've been trying to fix it by putting the add_header 'Access-Control-Allow-Origin' '*' always; directive in every place I can think of, with commas and single quotes in various positions. My nginx install is a mainline release, 1.15.3, on CentOS 7.
I soon figured that this module must be installed by default and that adding the above line to the location / {} block in default.conf should be fine. I then added the extension to Firefox to make sure it worked (as a temporary measure) and found that I got a 500 error. So now (not sure why) I have the extension disabled and the above line added to default.conf, and I'm still getting the 500 error - which is good in a way! Now I want to install the headers-more module to get past the 500 error, as to my understanding the basic headers module cannot deal with 500 errors.
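For reference, this is roughly the kind of block I have been editing (a sketch, not my exact config; the server_name and root values are placeholders):

server {
    listen 80;
    server_name example.com;           # placeholder
    location / {
        root /usr/share/nginx/html;    # placeholder
        # 'always' tells nginx to add the header on error responses as well
        add_header 'Access-Control-Allow-Origin' '*' always;
    }
}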
My biggest problem at the moment is not being able to add this module to my nginx install. Looking online I see instructions for recompiling nginx with the module using the ./configure command (./configure --prefix=/opt/nginx --add-module=/path/to/headers-more-nginx-module), but I have no configure file in my nginx root directory (which is actually /etc/nginx). I have no idea how to handle this, really, and could do with some advice please.
I did run nginx -V to list the modules that were supposedly installed with my instance of nginx, but there was no mention of any headers module, so my assumption that the basic module is installed out of the box could also be wrong.
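For what it's worth, the instructions I found look roughly like this (as far as I can tell, the configure script belongs to the nginx source tarball, not to /etc/nginx; the versions and paths are just examples):

# Download the matching nginx source and the module (versions and paths are examples)
wget http://nginx.org/download/nginx-1.15.3.tar.gz
tar xzf nginx-1.15.3.tar.gz
git clone https://github.com/openresty/headers-more-nginx-module.git
cd nginx-1.15.3
# To keep the packaged layout, copy the arguments printed by `nginx -V`
# and append --add-module=...; the line below mirrors the guides' simpler example
./configure --prefix=/opt/nginx --add-module=../headers-more-nginx-module
make && make install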
I am writing a very basic RPM that does nothing more than drop a simple GUI onto a system. It requires nginx, drops some code into its html directory, and drops a conf file into its conf.d directory. Most of the time this will likely be run on a VM or fresh box with little else installed.
While testing my RPM I noticed that the nginx it installs fails out of the box. The problem is that its default.conf uses an IPv6 address instead of IPv4, and the machine does not have an IPv6 address set; I guarantee none of the machines this code is installed on will ever have IPv6 set.
The fix is very simple, but my question is about good protocol. I'm guessing it would usually be considered wrong to have my RPM modify the default.conf of the nginx package to fix the line causing the failure, but at the same time, if I don't, my RPM will not function out of the box without someone manually tweaking the configuration files. How 'wrong' is it to overwrite the default files if I'm mostly confident that I'll be installed on machines that don't have IPv6 addresses?
I'd check if you can drop something in conf.d to override the bad settings.
Otherwise...
Your %post can modify it with something like sed. Then put a flag file there indicating you did, so your %postun can try to clean up afterwards.
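Roughly like this, assuming the stock file path and using a saved copy of the original as the flag (the exact listen pattern and the .orig path are assumptions):

%post
# Comment out the IPv6-only listen line if it is present, keeping a pristine
# copy so the change can be undone on uninstall
if grep -q '^[[:space:]]*listen.*\[::\]' /etc/nginx/conf.d/default.conf 2>/dev/null; then
    cp -p /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.mygui.orig
    sed -i 's/^\([[:space:]]*listen[[:space:]]*\[::\].*\)/# \1/' /etc/nginx/conf.d/default.conf
fi

%postun
# $1 is 0 on a full erase; restore the original file if we saved one
if [ "$1" -eq 0 ] && [ -f /etc/nginx/conf.d/default.conf.mygui.orig ]; then
    mv /etc/nginx/conf.d/default.conf.mygui.orig /etc/nginx/conf.d/default.conf
fi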
I'm trying to serve a stock Jenkins installation (on an Amazon Linux AMI) at myjenkinsinstance:8080/jenkins (rather than myjenkinsinstance:8080), and then proxy this with e.g. Nginx (over HTTP).
This question has been 'answered' before, but the solution doesn't seem to be relevant anymore.
#admins I would prefer to comment on that thread (specifically this 'answer') rather than opening a duplicate, but I am not allowed to per my 'reputation' score (as my comment would not be a solution at all, but a further request for help).
From the closest thing to an answer I've seen:
Go to the Jenkins home directory (I have mine in C:\Jenkins)
Edit jenkins.xml
Add --prefix=/jenkins to the end of the arguments as shown below and restart the Jenkins service. All worked OK for me!
Example: <arguments>-Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -jar "%BASE%\jenkins.war" --httpPort=8080 --prefix=/jenkins</arguments>
Open the URL http://localhost:8080/jenkins; this should bring up the Jenkins home page
there is no 'jenkins.xml' in the $JENKINS_HOME directory, but there is a config.xml
there is no <arguments/> entry in the config.xml
there seems to be no other configuration for the initial installation
There's also a 'Jenkins Location > Jenkins URL' setting in the "Configure System" settings (myjenkinsinstance/configure), but modifying this seems to have no noticeable effect.
The end goal would be to automate this installation via e.g. CloudFormation (as part of the EC2's UserData).
Any suggestions would be greatly appreciated.
On your Linux system, you need to find the Jenkins default config file, located at
/etc/default/jenkins
and then add the following arguments according to your requirements. This is a rough idea.
JENKINS_ARGS="--webroot=/var/cache/jenkins/war --prefix=/jenkins
--httpPort=$HTTP_PORT --ajp13Port=$AJP_PORT"
This should most likely work. If it doesn't, please update your question with the arguments currently present. This works fine for Debian/Ubuntu.
Also, are you running Jenkins on a Windows machine or on Linux?
So my 'solution' was to use sed and insert some lines into /etc/nginx/nginx.conf and /etc/init.d/jenkins.
e.g.
sed -i '/^ location \/ {/aproxy_pass http://127.0.0.1:8080/;' /etc/nginx/nginx.conf
sed -i '/^PARAMS=/ s/"$/ --prefix=\/jenkins"/' /etc/init.d/jenkins
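The net effect on the stock nginx.conf is roughly this (paraphrased; the surrounding server block and server_name come from the default Amazon Linux config and may differ on your box):

server {
    listen       80;
    server_name  localhost;
    location / {
        # line inserted by the first sed command above
        proxy_pass http://127.0.0.1:8080/;
    }
}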
I highly doubt this is anything near a 'best practice', but it seems to work for now (what happens were I to update with yum, I'm not sure, but the plan is to back the instance with an Elastic File System, which hopefully will allow us to consider the Jenkins instance ephemeral anyway).
I am having an issue where I run meteor run in my project and it begins to install meteor-tool#1.4.0-1; once it reaches 100%, it says
Extracting meteor-tool#1.4.0-1...
but it never finishes. I uninstalled Meteor and reinstalled it, but I am having the same issue.
United States.
Windows 10.
This is a problem caused by the tar extractor provided by Git.
Find where the tar tool used by your system is located by running:
$ where tar
In my case, it is located in C:\Program Files\Git\usr\bin\tar.exe
Then locate the file and rename it to tar.exe.old
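If you prefer doing the rename from a console, it amounts to something like this (run it from an elevated Command Prompt, since the file sits under Program Files; the path is the one from my machine):

cd "C:\Program Files\Git\usr\bin"
ren tar.exe tar.exe.old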
It's done... try running Meteor again! >> $ meteor
Thank you for your response, Vasil. I actually was able to find a solution and I am no longer experiencing this problem.
It turned out there was a problem with the tar.exe file in Git, and by uninstalling Git and reinstalling the latest version, 2.10.0.windows.1, the problem seems to have gone away.
It seemed that no matter how long I left it, it stayed stuck at "Extracting meteor-tool...", but now that I have updated Git the problem has gone away.
Try adding the following to your local hosts file (C:\Windows\System32\Drivers\etc\hosts):
54.192.225.217 warehouse.meteor.com
Then run a meteor reset in your app directory (warning: this will wipe your local DB), then try starting your app again.
This works for me.
Link: https://forums.meteor.com/t/downloading-meteor-tool-1-4-0-1/27269/19?u=lucianopestana
I've got a problem when I try to install Symfony 2:
I extract the contents of the tgz file into the /var/www/html directory, then I go to http://127.0.0.1/symfony/web/config.php and it says that I need to change the permissions of app/cache/* and app/logs/*. The problem is that I have tried all the solutions in the docs.
Which solutions exactly did you try? Did you chmod those folders?
If your web server and command line user are different, that can also cause problems with those two folders. Check the docs for their umask solution and see if that helps.
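The docs' ACL-based fix addresses the same web-server-versus-CLI-user mismatch; run from the Symfony project root it looks roughly like this (the grep list of server user names follows the docs and may need adjusting for your setup):

# Detect the web server user, then grant both it and your CLI user write access via ACLs
HTTPDUSER=$(ps axo user,comm | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d' ' -f1)
sudo setfacl  -R -m "u:$HTTPDUSER:rwX" -m "u:$(whoami):rwX" app/cache app/logs
sudo setfacl -dR -m "u:$HTTPDUSER:rwX" -m "u:$(whoami):rwX" app/cache app/logs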
I had the same problem and SELinux was responsible.
To figure out whether SELinux is the source of your problem, temporarily switch it to permissive mode by typing setenforce Permissive on the command line.
If the script works, then you have to configure SELinux properly.
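A sketch of what configuring it properly means here, assuming the project lives under /var/www/html/symfony (adjust the paths to yours):

# Label cache/ and logs/ so the web server may write there, instead of disabling SELinux
semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/symfony/app/cache(/.*)?"
semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/symfony/app/logs(/.*)?"
restorecon -Rv /var/www/html/symfony/app/cache /var/www/html/symfony/app/logs
# Re-enable enforcing mode afterwards
setenforce Enforcing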