How can I get nginx fastcgi to run requests in parallel?

I'm trying to get nginx to serve more than one connection at a time, with a fastcgi backend.
This Stack Overflow answer might contain the answer, but it neglects to say where that option could be configured. All the options I see are in config files. Where would I put command line options like "-c 2"? It's not nginx -c; that's for the config file. I don't see any place that looks like it would take command line options.

OK, it looks like I don't need the above linked answer. The setting is
FCGI_CHILDREN
The reason I had a bit of trouble finding this is that this setting is not in nginx's config, it's in fcgiwrap's config. That is (on my machine) in /etc/init.d/fcgiwrap. Change FCGI_CHILDREN to something larger than 1:
FCGI_CHILDREN="5"
Just changing that to greater than one allowed me to run more than one request at a time.
The linked answer mentions
if you are using spawn-fcgi to launch fcgiwrap then you need to use -f "/usr/bin/fcgiwrap -c 5"
but I did not have to do that.
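For reference, here is a minimal sketch of the whole change, assuming a Debian-style init script at /etc/init.d/fcgiwrap and a service named fcgiwrap (both may differ on other distros):
# bump the number of fcgiwrap workers (path may differ on your system)
sudo sed -i 's/^FCGI_CHILDREN=.*/FCGI_CHILDREN="5"/' /etc/init.d/fcgiwrap
# restart so the extra worker processes are spawned
sudo service fcgiwrap restart
After restarting you should see several fcgiwrap processes, one per child.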

Related

Unable to change options for ngx_pagespeed

I was able to successfully add ngx_pagespeed to my Nginx server at Digital Ocean. I did an automated install per this: https://www.modpagespeed.com/doc/build_ngx_pagespeed_from_source
The module works - for example I can see it is automatically converting my .jpg images to .webp. Also, curl -I -p http://localhost| grep X-Page-Speed returns the X-Page-Speed: 1.13.35.2-0 header.
However, I’m not able to edit any options. When I try to run something like pagespeed rewrite_images on, or even pagespeed on, I get an error pagespeed: command not found.
Per documentation pagespeed should be the command for Nginx: https://modpagespeed.com/doc/configuration
I tried a couple of other commands:
whereis pagespeed returns pagespeed:
which pagespeed returns nothing.
As far as I know these should be returning the full path, something like /usr/bin/pagespeed
The problem was that for some reason I thought pagespeed flags were turned on/off via terminal commands. That assumption is wrong. It actually needs to be done with Nginx directives: add them to the nginx.conf file and restart the Nginx server.
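For anyone confused the same way, a minimal sketch of the directive approach, using documented ngx_pagespeed directives with illustrative values (the cache path is just an example):
# inside the http {} or server {} block of nginx.conf
pagespeed on;
pagespeed FileCachePath /var/ngx_pagespeed_cache;
pagespeed EnableFilters rewrite_images;
Then test the config and reload:
sudo nginx -t && sudo systemctl reload nginx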

How should my RPM deal with the fact that the nginx rpm needs config files modified to run?

I am writing a very basic RPM that does nothing more than drop a simple GUI onto a system. It requires nginx, drops some code into its html directory, and drops a conf file into its conf.d directory. Most of the time this will likely be run on a VM or a fresh box with little else installed.
While testing my RPM I noticed that the nginx it installs fails out of the box. The problem is that its default.conf uses an IPv6 address instead of IPv4, and the machine does not have an IPv6 address set; I guarantee none of the machines this code is installed on will ever have IPv6 set.
The fix is very simple, but my question is about good protocol. I'm guessing it would usually be considered wrong to have my RPM modify nginx's default.conf to fix the line causing the failure, but at the same time, if I don't, my RPM will not function out of the box without someone manually tweaking the configuration files. How 'wrong' is it to overwrite the default files if I'm mostly confident that I'll be installed on machines that don't have IPv6 addresses?
I'd check if you can drop something in conf.d to override the bad settings.
Otherwise...
Your %post can modify it with something like sed. Then leave a flag indicating that you did, so your %postun can try to clean up afterwards.
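A rough sketch of that approach in the spec file, with an assumed listen line and an assumed marker path (the exact contents of default.conf vary between nginx packages):
%post
# switch the IPv6-only listen line to IPv4 and leave a marker noting we changed it
if grep -q 'listen \[::\]:80' /etc/nginx/conf.d/default.conf 2>/dev/null; then
    sed -i 's/listen \[::\]:80/listen 80/' /etc/nginx/conf.d/default.conf
    touch /etc/nginx/conf.d/.mygui-modified-default
fi

%postun
# remove only the marker we created
rm -f /etc/nginx/conf.d/.mygui-modified-default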

Change Jenkins basepath

I'm trying to serve a stock Jenkins installation (on an Amazon Linux AMI) through myjenkinsinstance:8080/jenkins (rather than myjenkinsinstance:8080), and then proxy this with e.g. Nginx (over HTTP).
This question has been 'answered' before, but the solution doesn't seem to be relevant anymore.
#admins I would prefer to comment on that thread (specifically this 'answer') rather than opening a duplicate, but I am not allowed to, per my 'reputation' score (as my comment would not be a solution at all, but a further request for help).
From the closest thing to an answer I've seen:
Go to Jenkins Home Directory ( I have mine in C:\Jenkins)
Edit jenkins.xml
Add this --prefix=/jenkins to the end of the arguments as shown below and restart the Jenkins service. All worked OK for me!
Example: <arguments>-Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -jar "%BASE%\jenkins.war" --httpPort=8080 --prefix=/jenkins</arguments>
Open Url http://localhost:8080/jenkins this should bring up the home page of jenkins
In my installation, there is no 'jenkins.xml' in the $JENKINS_HOME directory, but there is a config.xml
there is no <arguments/> entry in the config.xml
there seems to be no other configuration for the initial installation
There's also a 'Jenkins Location > Jenkins URL' setting in the "Configure System" settings (myjenkinsinstance/configure), but modifying this seems to have no noticeable effect.
The end goal would be to automate this installation via e.g. CloudFormation (as part of the EC2's UserData).
Any suggestions would be greatly appreciated.
On your linux system, you need to find the jenkins default config file located at
/etc/default/jenkins
and then add the following arguments according to your requirements. This is a rough idea.
JENKINS_ARGS="--webroot=/var/cache/jenkins/war --prefix=/jenkins
--httpPort=$HTTP_PORT --ajp13Port=$AJP_PORT"
This should most likely work. If it doesn't, please update your question with the current arguments present. This works fine for Debian/Ubuntu.
Also, are you running Jenkins on a Windows machine or Linux?
So my 'solution' was to use sed and insert some lines into /etc/nginx/nginx.conf and /etc/init.d/jenkins.
e.g.
sed -i '/^ location \/ {/aproxy_pass http://127.0.0.1:8080/;' /etc/nginx/nginx.conf
sed -i '/^PARAMS=/ s/"$/ --prefix=\/jenkins"/' /etc/init.d/jenkins
I highly doubt this is anything near a 'best practice', but it seems to work for now (what happens were I to update with yum... I'm not sure, but the plan is to back the instance with an Elastic Filesystem, which hopefully will allow us to consider the jenkins instance ephemeral, anyway).
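For comparison, here is roughly what the nginx side looks like as a plain location block instead of the sed one-liner (the /jenkins prefix and port come from the question; the proxy headers are just the usual illustrative set):
location /jenkins/ {
    proxy_pass http://127.0.0.1:8080/jenkins/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
This assumes Jenkins itself was started with --prefix=/jenkins, so the path nginx forwards matches the path Jenkins expects.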

Automounting riofs in ubuntu

I have several buckets mounted using the awesome riofs and they work great; however, I'm at a loss trying to get them to mount after a reboot. I have tried entering the following in my /etc/fstab with no luck:
riofs#bucket-name /mnt/bucket-name fuse _netdev,allow_other,nonempty,config=/path/to/riofs.conf.xml 0 0
I have also tried adding a startup script to run the riofs commands to my rc.local file but that too fails to mount them.
Any ideas or recommendations?
Currently RioFS does not support fstab. In order to mount a remote bucket at startup, consider adding the corresponding command line to your startup script (rc.local, as you mentioned).
If for some reason it fails to start RioFS from the startup script, please feel free to contact the developers and/or file an issue report.
If you enter your access key and secret access key in the riofs config XML file, then you should be able to mount this via fstab or an init.d or rc.local script.
See this thread
EDIT:
I tested this myself and this is what I found. Even with the AWS access details specified in the config file, there is no automatic mounting at boot. But to access the filesystem, all one needs to do is issue mount /mount/point/in-fstab, and the fstab directive works and persists like a standard fstab-mounted filesystem.
So it seems the riofs system is not ready at the stage of the boot process when filesystems are mounted. That's the only logical reason I can find so far. This can be solved with an rc.local or init.d script that just issues a mount command (at worst).
But riofs does work well, even if the documentation seems sparse. It is certainly more reliable and less buggy than s3fs.
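A minimal sketch of that workaround, assuming the fstab entry from the question is in place and /mnt/bucket-name is the mount point (the logger line is just an illustrative way to record failures):
#!/bin/sh
# /etc/rc.local -- re-issue the mount once the system is far enough along for riofs;
# the mount options come from the fstab entry
mount /mnt/bucket-name || logger "riofs mount of /mnt/bucket-name failed"
exit 0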
Thanks all,
I was able to get them auto-mounting from rc.local with syntax similar to:
sudo riofs --uid=33 --gid=33 --fmode=0777 --dmode=0777 -o "allow_other" -c ~/.config/riofs/riofs.conf.xml Bucket-Name /mnt/mountpoint
Thanks again!

How to start Jetty from command line with different ports for http and https

I already use -Djetty.port=xxx to set the HTTP port on the command line, but I also need to specify a different port for HTTPS. I got some hints online about jetty.ssl.port and already tried -Djetty.ssl.port=yyy, but that did not work.
As to why I provide ports on the command line rather than in the config XML file: depending on some conditions, I need to start Jetty on certain ports.
I'm using Jetty 6.1-SNAPSHOT.
Ultimately I need something like: java -Djetty.port=XXX -Djetty.ssl.port=YYY -jar start.jar
Note: that is a really old version of Jetty; we are releasing milestones for Jetty 9 today, even.
Regardless, look in jetty.xml and you should see where there is a property defined for jetty.port. Just make a similar property for jetty.ssl.port or the like and then use that.
The jetty.xml file should be very easy to read, though thinking back you might need to look in the jetty-ssl.xml file instead.
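A rough sketch of what such a property could look like for a Jetty 6 SSL connector in jetty.xml or jetty-ssl.xml (the connector class matches Jetty 6; the keystore path and passwords are placeholder assumptions):
<Call name="addConnector">
  <Arg>
    <New class="org.mortbay.jetty.security.SslSocketConnector">
      <Set name="Port"><SystemProperty name="jetty.ssl.port" default="8443"/></Set>
      <Set name="Keystore"><SystemProperty name="jetty.home" default="."/>/etc/keystore</Set>
      <Set name="Password">changeit</Set>
      <Set name="KeyPassword">changeit</Set>
    </New>
  </Arg>
</Call>
With that in place, java -Djetty.port=XXX -Djetty.ssl.port=YYY -jar start.jar should pick up both ports.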
First do:
mvn package
To start the server with the default port (8080):
mvn jetty:run
To specify an alternative port (8090):
mvn jetty:run -Djetty.port=8090
To specify ports for multiple transfer protocols:
mvn jetty:run -Djetty.port=8090 -Djetty.ssl.port=8555
Simply execute the following from the project directory:
mvn -Djetty.port=8686 jetty:run
