Configuring plone.recipe.varnish in Plone 4

I am using plone.recipe.varnish 1.2.2 in my Plone application.
Below is a section of my buildout:
parts =
    ...
    instance
    paster
    varnish-build
    varnish
    plonesite
    ...

[varnish-build]
recipe = zc.recipe.cmmi
url = http://downloads.sourceforge.net/project/varnish/varnish/2.1.3/varnish-2.1.3.tar.gz

[varnish]
recipe = plone.recipe.varnish
daemon = ${buildout:parts-directory}/varnish-build/sbin/varnishd
bind = 127.0.0.1:8000
backends = 127.0.0.1:9000
cache-size = 1G
I cannot conclusively determine whether it works. My Plone application serves on port 9000, so I want to test whether Varnish really works by going to http://localhost:8000, but I get nothing. The browser says "Firefox can't establish a connection to the server at 127.0.0.1:8000."
Am I doing this wrong? I have followed the instructions provided here but have made no headway.
How does one really configure plone.recipe.varnish in Plone, and how do you actually test that it works on a local development machine?

The recipe does not start your varnish server. It only configures it for you.
Use something like supervisord to manage the process, or start it by hand with bin/varnish.
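For example, a minimal supervisord program stanza might look like this. It is only a sketch: /opt/plone is a placeholder for your buildout directory, and it assumes the generated bin/varnish script passes extra arguments (here -F, to keep varnishd in the foreground) through to the daemon.
[program:varnish]
; /opt/plone is a placeholder -- point this at your buildout directory
command = /opt/plone/bin/varnish -F
directory = /opt/plone
autostart = true
autorestart = true
redirect_stderr = true
Once the process is running (via supervisord or started by hand), curl -I http://127.0.0.1:8000/ should return your site with X-Varnish and Via response headers, which is a quick way to confirm requests are actually passing through Varnish.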

Related

How do I deploy Apache-Airflow via uWSGI and nginx?

I'm trying to deploy Airflow in a production environment on a server running nginx and uWSGI.
I've searched the web and found instructions on installing Airflow behind a reverse proxy, but those instructions only have nginx config examples. However, due to permissions, I can't change nginx.conf itself and have to solve it via uWSGI.
My folder structure is:
project_folder
|_airflow
| |_airflow.cfg
| |_webserver_config.py
| |_wsgi.py
|_env
|_start
|_stop
|_uwsgi.ini
My path/to/myproject/uwsgi.ini file is configured as follows:
[uwsgi]
master = True
http-socket = 127.0.0.1:9999
virtualenv = /path/to/myproject/env/
daemonize = /path/to/myproject/uwsgi.log
pidfile = /path/to/myproject/tmp/myapp.pid
workers = 2
threads = 2
# adjust the following to point to your project
wsgi-file = /path/to/myproject/airflow/wsgi.py
touch-reload = /path/to/myproject/airflow/wsgi.py
and currently the /path/to/myproject/airflow/wsgi.py looks as follows:
def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [b'Hello World!']
I'm assuming I have to somehow call the airflow flask app from the wsgi.py file (perhaps by also changing some reverse proxy fix configs, since I'm behind SSL), but I'm stuck; what do I have to configure?
Will this procedure then be identical for the workers and scheduler?
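For reference, this is roughly what I imagine wsgi.py would need to look like, assuming Airflow exposes its webserver Flask app factory under airflow.www.app (I haven't verified the exact import for my version):
# wsgi.py -- sketch only; assumes the webserver app factory lives in airflow.www.app
from airflow.www.app import cached_app

# cached_app() builds (or reuses) the Airflow webserver Flask app and returns it,
# so uWSGI can serve it as the WSGI callable named "application".
application = cached_app()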

Ubuntu + nginx - trying to install GeoIP module

I'm using Vagrant (VVV, actually) to run local WordPress installs. I want to test different behaviors for different geos on my local machine instead of uploading to the server every time, which is annoying.
So, I've tried to install the GeoIP nginx module on the local machine following this guide: https://piwik.org/faq/how-to/faq_166/ (and a bit more googling, but that doesn't matter at the moment).
When I run ./configure, the output includes:
checking for GeoIP library ... found
checking for GeoIP IPv6 support ... found
I've also set the .dat files in my conf file, and set the $_SERVER (fastcgi_param) parameters, so they show up when I print the $_SERVER var.
But those GeoIP vars are empty. I'm not sure about the reason, but two things are bothering me. First, when I run nginx -V in the terminal, the --with-http_geoip_module argument is missing. Second, could it actually work if REMOTE_ADDR (the IP) is not my real IP (192.168.1.50, for example)?
nginx is a bit strange to me, so sorry if something isn't exact.
--
Operating system - macOS, nginx version - 1.3.15, running with VVV (vagrant box)
If there is a reverse proxy in front of your nginx, use geoip_proxy to set the IPs whose X-Forwarded-For header can be trusted.
You can also use that without actually having a reverse proxy while you're developing. Add your local IP to the geoip_proxy list and set the X-Forwarded-For header to your public IP in your browser (use a plugin like Modify Headers).
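A sketch of the relevant http-block configuration, assuming the geoip module is compiled into your nginx; the .dat paths are placeholders and 192.168.1.50 stands in for your local machine:
http {
    # hypothetical database locations -- point these at your own .dat files
    geoip_country /usr/share/GeoIP/GeoIP.dat;
    geoip_city    /usr/share/GeoIP/GeoLiteCity.dat;

    # trust X-Forwarded-For coming from your own machine while developing
    geoip_proxy           192.168.1.50;
    geoip_proxy_recursive on;
}
With that in place, $geoip_country_code and friends are resolved from the forwarded address, which is what your fastcgi_param lines then hand to PHP.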

Gitlab ports 80 & 8080 taken by a separate Gitlab instance?

I have Gitlab 8.6 running on an Ubuntu 14.04 server that seems to have gotten messed up. I consistently get a 502 error when accessing the site. The server likely has not been restarted since installing Gitlab initially, and a power outage caused the server to reboot. Now, I cannot start/restart Gitlab due to what appears to be port conflicts.
I installed Gitlab via source, I don't have any custom port configurations, and am using NGINX. nginx -t shows that the configuration appears to be correct syntax-wise.
When I run netstat -tupln, I see that Unicorn and a GitLab instance are already running on :8080 and :80 respectively at boot. I suspect that a second instance of GitLab was installed, which runs at boot and causes port conflicts for the proper instance when I try to run it via service gitlab restart. I'm not even sure that's possible, but I can't seem to figure out where to go from here. Every time I run sudo gitlab-ctl reconfigure or service gitlab start, it fails and unicorn.stderr.log shows bind errors for the :8080 port. I tried moving the Unicorn service to :8081 as well, but I still receive the port binding error.
Does anyone know how I can detect whether there are multiple GitLab instances running, and whether there is a way to remove a duplicate one? Thank you!
EDIT: Here is what is in the /etc/gitlab/gitlab.rb file. Everything else is commented out.
## Url on which GitLab will be reachable
external_url 'http://my-gitlab-instance.domain.com'
EDIT 2: My /home/git/gitlab/ directory is mapped to https://gitlab.com/gitlab-org/gitlab-ce.git, and is on the 8-7-stable branch. gitlab-shell and gitlab-workhorse are on the correct versions according to https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/update/8.6-to-8.7.md
EDIT 3: I have gotten to a point where the Gitlab seems to self-check okay by removing the gitlab-ce package (https://gitlab.com/gitlab-org/omnibus-gitlab/issues/135), but the server returns a 404. NGINX, Unicorn, Sidekiq, and gitlab-workhorse all say that they're running. I see that unicorn.rb is listening on :8080, and nginx is listening on 0.0.0.0:80 and :::80. I guess now I'm troubleshooting this 404 and hopefully I will be back to my install-from-source.
What I have found is that there were two issues causing the errors I had.
First, I removed a "gitlab-ce" package that was installed, following the instructions here: https://gitlab.com/gitlab-org/omnibus-gitlab/issues/135. For some reason, when I restart the machine now I have to restart these services, in this order, for GitLab to run properly: redis-server, gitlab, nginx (see the commands below). However, GitLab does start responding properly after that.
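Concretely, the sequence that works for me after a reboot (using the stock init scripts from the source install) is:
sudo service redis-server restart
sudo service gitlab restart
sudo service nginx restart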
Second, the 404 error was due to a different server that was also listening on that IP address, causing a conflict.
I will likely move to using the omnibus package on a fresh, new server going forward, but at least the immediate issues appear resolved. Thanks for your help, SLY!

Xdebug with PHPStorm and a Docker container

Setup: Windows 10; Docker running with Boot2Docker on Hyper-V; PHPStorm 9
Webserver on the VM is Nginx. I've configured the xdebug.ini for php5-fpm as:
zend_extension=xdebug.so
xdebug.remote_enable=on
xdebug.remote_port=9000
xdebug.remote_connect_back=On
xdebug.remote_handler=dbgp
xdebug.profiler_enable=0
If I set a breakpoint and reload the page, I get an incoming connection from Xdebug in PhpStorm.
What puzzles me is that only one file is shown, not the entire project, which is much bigger. If I accept the connection I can debug the very first line, but it does not stop on my breakpoint, and a server entry is created for me.
What is very strange is that the host field of that entry is empty.
I had already added the server with the correct mapping, but it got ignored.
So how do I get Xdebug to stop on breakpoints?
Regarding "what is very strange is that the host is empty": PhpStorm requires this field to be filled, as it uses it to recognize which server entry (and therefore which path mappings) to use -- the IDE supports debugging the same code base running on different domains / remote servers.
In this particular case the server_name directive of your nginx configuration is empty. You can fix this by providing a value in the nginx config file.
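A minimal sketch of such a server block; myproject.local and the paths are placeholders, so match them to your own setup and to the server entry configured in PhpStorm:
server {
    listen 80;
    # a non-empty server_name is what ends up as the host PhpStorm sees
    server_name myproject.local;

    root /var/www/myproject;
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;                   # includes SERVER_NAME $server_name
        fastcgi_pass unix:/var/run/php5-fpm.sock; # adjust to your php-fpm socket/port
    }
}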

No incoming connection for PhpStorm with xdebug (nginx / php-fpm)

I figured I'd try using nginx instead of Apache and see how that works, and I'm up and running, but I cannot for the life of me figure out how to make PhpStorm capture the incoming Xdebug connection. It worked perfectly when I was running Apache.
Usually you'd get an "incoming connection" window in PhpStorm - this window is now conspicuously absent - and yes, I've read every single PhpStorm / Xdebug question on Stack Overflow and none of them has solved my issue.
Configuration:
OS: OSX Mavericks
PhpStorm version: 7.1
Xdebug version: 2.2.5
Note that I'm running nginx and PHP through php-fpm, which is working as expected. I've pointed PhpStorm at the same PHP interpreter that php-fpm runs, and it successfully finds Xdebug as the debugger.
Since php-fpm is running on port 9000, the same as Xdebug's default, I've changed the Xdebug port to 9900 and 9001 (tried both), made sure via phpinfo() that the server has picked up the updated php.ini, and checked that I've updated the Xdebug port in PhpStorm. I've also enabled "Start listening for debug connections" in PhpStorm.
Xdebug config from php.ini:
[xdebug]
zend_extension = /usr/local/Cellar/php55/5.5.14/lib/php/extensions/no-debug-non-zts-20121212/xdebug.so
xdebug.auto_trace=0
xdebug.default_enable=1
xdebug.idekey="PHPSTORM"
xdebug.profiler_enable=0
xdebug.profiler_enable_trigger=0
xdebug.profiler_output_dir="/tmp"
xdebug.remote_enable=on
xdebug.remote_handler=dbgp
xdebug.remote_host=localhost
xdebug.remote_mode=req
xdebug.remote_port=9001
As mentioned - xdebug is loaded when I load phpinfo() in the browser and I've set the correct port in PhpStorm.
Thanks for your help.
I cannot stress enough the importance of one of the remarks - "Don't forget to set the cookie for xdebug".
I had everything right and my debugger still wouldn't attach due to this.
One recommendation I can make is to install the Xdebug helper Chrome extension. Once you have it, start debugging from PhpStorm, navigate to the page you want to debug, and turn on debugging in the extension by clicking the "bug" icon in the address bar.
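If you'd rather not install an extension, standard Xdebug 2.x behaviour is to set the same session cookie when you append the start parameter to the URL, using the idekey from the php.ini above (the host and path here are placeholders, any page on the site will do):
http://your-site.local/index.php?XDEBUG_SESSION_START=PHPSTORM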
It seems there was one thing I missed when I changed the settings - stopping listening for debug connections and then starting it again. That seems to have fixed the issue...
Here is my case:
Make sure Xdebug is installed: <?php echo (extension_loaded('xdebug') ? '' : 'not '), 'loaded';
Make sure the port is not already in use. For example:
nginx server:9000 <-> php-fpm:9000
IDE Xdebug:9080 <-> php.ini xdebug.remote_port=9080
When xdebug.remote_host is configured, it is sometimes also necessary to add xdebug.remote_connect_back=0.
When PhpStorm complains "Can't listen to port, port 9000 is busy", it is usually because some other application is using the same port, for example via Docker expose or ports settings, or a process outside Docker.
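A sketch of the php.ini side of that layout; 9080 is just an example port and must match the Xdebug port configured in PhpStorm:
[xdebug]
; keep php-fpm on 9000 and move Xdebug to a free port
xdebug.remote_enable=1
xdebug.remote_host=localhost
xdebug.remote_port=9080
xdebug.remote_connect_back=0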
