Xdebug with PHPStorm and a Docker container - nginx

Setup: Windows 10; Docker running with Boot2Docker on Hyper-V; PhpStorm 9
Webserver on the VM is Nginx. I've configured the xdebug.ini for php5-fpm as:
zend_extension=xdebug.so
xdebug.remote_enable=on
xdebug.remote_port=9000
xdebug.remote_connect_back=On
xdebug.remote_handler=dbgp
xdebug.profiler_enable=0
If I set a breakpoint and reload the page I get an incoming connection from Xdebug in PHPStorm:
I wonder why only one file is shown and not the entire project, which is much bigger. If I accept the connection I can debug the very first line, but it does not stop on my breakpoint, and it creates a server entry which looks like:
What is very strange is that the host field is empty.
I already added the server with the correct mapping, but it gets ignored.
So how to get Xdebug to stop on breakpoints?

What is very strange is that the host field is empty.
PhpStorm requires this field to be filled in, as it uses it to recognize which server entry (and therefore which path mappings) to use -- the IDE supports debugging the same code base running on different domains / remote servers.
In this particular case the server_name parameter of your nginx configuration is empty. You can fix this by providing a value for it in the nginx config file.
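As a sketch, a server block with the name filled in could look roughly like this; the domain, root and fastcgi_pass target are placeholders for your own php5-fpm setup:

server {
    listen 80;
    server_name myproject.local;              # this value ends up as the host PhpStorm sees
    root /var/www/myproject;

    location ~ \.php$ {
        include fastcgi_params;               # passes SERVER_NAME ($server_name) on to PHP
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}

Reload nginx afterwards and make sure the PhpStorm server entry uses the same name.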

Mosquitto: Starting in local only mode

I have a virtual machine that is supposed to be the host, which can receive and send data. The first picture is the error I'm getting on my main machine (from which I'm trying to send data). The second picture is the mosquitto log on my virtual machine. I'm using the default config, which as far as I know shouldn't cause these problems, at least from what I have seen in other examples. I have very little understanding of how all of this works, so any help is appreciated.
What I have tried on the host machine:
Disabling Windows defender
Adding firewall rules for "mosquitto.exe"
Installing mosquitto on a linux machine
Starting with the release of Mosquitto version 2.0.0 (you are running v2.0.2), the default config will only bind to localhost, as a move to a more secure default posture.
If you want to be able to access the broker from other machines you will need to explicitly edit the config files to either add a new listener that binds to the external IP address (or 0.0.0.0) or add a bind entry for the default listener.
By default it will also only allow anonymous connections (without username/password) from localhost; to allow anonymous connections from remote clients, add:
allow_anonymous true
More details can be found in the 2.0 release notes here
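As a sketch, a minimal mosquitto.conf for remote access without authentication (only sensible on a trusted network) would be:

listener 1883 0.0.0.0
allow_anonymous true

Restart the broker with mosquitto -c /path/to/mosquitto.conf so the file is actually read.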
You have to run with
mosquitto -c mosquitto.conf
mosquitto.conf, which lives in the same folder as the executable (C:\Program Files\mosquitto etc.), has to include the following line:
listener 1883 ip_address_of_the_machine(192.168.1.1 etc.)
By default, the Mosquitto broker will only accept connections from clients on the local machine (the server hosting the broker).
Therefore, a custom configuration needs to be used with your instance of Mosquitto in order to accept connections from remote clients.
On your Windows machine, run a text editor as administrator and paste the following text:
listener 1883
allow_anonymous true
This creates a listener on port 1883 and allows anonymous connections. By default the number of connections is infinite. Save the file to "C:\Program Files\Mosquitto" using a file name with the ".conf" extension such as "your_conf_file.conf".
Open a terminal window and navigate to the mosquitto directory. Run the following command:
mosquitto -v -c your_conf_file.conf
where
-c : specify the broker config file.
-v : verbose mode - enable all logging types. This overrides
any logging options given in the config file.
I found I had to add not only bind_address ip_address but also allow_anonymous true before devices could connect successfully to MQTT. Of course I understand that a better option would be to set a user and password on each device, but that's a next step after everything actually works in the minimum configuration.
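In config-file form, that minimum setup is just the two lines below (the IP address is an example; use your broker machine's address):

bind_address 192.168.1.1
allow_anonymous true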
For those who use mosquitto with homebrew on Mac.
Adding these two lines to /opt/homebrew/Cellar/mosquitto/2.0.15/etc/mosquitto/mosquitto.conf fixed my issue.
allow_anonymous true
listener 1883
You can run it with the included 'no-auth' config file like so:
mosquitto -c /mosquitto-no-auth.conf
I had the same problem while running it inside docker container (generated with docker-compose).
In docker-compose.yml file this is done with:
command: mosquitto -c /mosquitto-no-auth.conf
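For reference, a minimal docker-compose.yml around that command might look like this (image tag and host port are assumptions, adjust to your setup):

services:
  mosquitto:
    image: eclipse-mosquitto:2
    command: mosquitto -c /mosquitto-no-auth.conf
    ports:
      - "1883:1883"

The /mosquitto-no-auth.conf file ships inside the official image, so no extra volume is needed for this test setup.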

Ubuntu + nginx - trying to install GeoIP module

I'm using vagrant (VVV actually) to run local wordpress installs. I want to test different behaviors for different GEOs on my local machine instead of uploading it every time to the server, which is annoying.
So, I've tried to install the GeoIP nginx module on the local machine following this guide: https://piwik.org/faq/how-to/faq_166/ (and a bit more googling, but it doesn't matter at the moment).
When I run ./configure, the following shows up:
checking for GeoIP library ... found
checking for GeoIP IPv6 support ... found
I've also set the .dat files in my conf file, and set the $_SERVER (fastcgi_param) parameters - so they are displayed when I print the $_SERVER var.
But those GeoIP vars are empty. I'm not sure about the reason, but two things are bothering me. First, when I run nginx -V in the terminal, the --with-http_geoip_module argument is missing. Second, could it actually work if the REMOTE_ADDR (IP) is not my real IP (192.168.1.50 for example)?
nginx is a bit strange to me, so sorry if something isn't exact..
--
Operating system - macOS, nginx version - 1.3.15, running with VVV (vagrant box)
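For reference, the kind of directives I mean look roughly like this (paths are examples, not my exact config):

http {
    geoip_country /usr/share/GeoIP/GeoIP.dat;
    geoip_city    /usr/share/GeoIP/GeoLiteCity.dat;
}

location ~ \.php$ {
    fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code;
    fastcgi_param GEOIP_CITY         $geoip_city;
}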
If there is a reverse proxy in front of your nginx, use geoip_proxy to define the addresses whose X-Forwarded-For header can be trusted.
You can also use this without actually having a reverse proxy while you're developing. Add your local IP to the geoip_proxy list and set the X-Forwarded-For header to your public IP in your browser (use a plugin like Modify Headers).
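A minimal sketch of that setup with the standard geoip module directives, using the 192.168.1.50 address from the question as the trusted source:

http {
    geoip_country         /usr/share/GeoIP/GeoIP.dat;
    geoip_proxy           192.168.1.50;    # trust X-Forwarded-For coming from this address
    geoip_proxy_recursive on;              # walk the whole X-Forwarded-For chain
}

With that in place, the $geoip_* variables are resolved from the forwarded address instead of REMOTE_ADDR.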

Hosting multiple meteor apps on one server

I have 2 meteor apps running on one Ubuntu server on DO. I have also set up nginx for serving them.
Config files:
sailsadria.conf : http://pastebin.com/eCicpNxK
ytp.klancir.work.conf : http://pastebin.com/cNKtA0dV
Now...
http://sailsadria.com/, which is on port 3000, works smoothly as expected, while http://ytp.klancir.work/ goes to the nginx root. On the other hand, http://ytp.klancir.work:3010 goes to the right app running on that port (but I suppose that any URL or IP I forward with the appended port will end up at the right location).
Symlinks are also set up
The domains are configured:
sailsadria: http://screencast.com/t/iqKUlQlDgj8
ytp.klancir.work: http://screencast.com/t/DJJdLfqna
I don't know how to set it up so that http://ytp.klancir.work/ goes directly to port 3010, in other words directly to the app...
The SOLUTION: sudo service nginx restart....
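For anyone hitting the same symptom: the working setup is essentially a server block along these lines (a sketch; the actual configs are in the pastebin links above), followed by the restart:

server {
    listen 80;
    server_name ytp.klancir.work;

    location / {
        proxy_pass http://127.0.0.1:3010;         # the port the meteor app listens on
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # keep meteor's websocket connection working
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}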

Docker restart not showing the desired effect

I have a small nginx based test application that I want to run inside a docker container. So I followed the example given here docker installation
So I have a folder named restartTest and it contains an index.html file that has a single line in it that says Docker Test 1. I mount this as my volume at runtime for the docker container. So the command I use is
docker run -dP -v /Users/Sachin/restartTest/:/usr/share/nginx/html --name engine2 nginx
And it runs fine. I use curl to verify that the volume has mounted properly and the application is running as desired. Now what I do is change the content of the index.html file (from my localhost) to Docker Test 2 and then restart the container. I execute the following command to verify that the content has indeed changed inside the docker container
docker exec engine2 cat /usr/share/nginx/html/index.html
And as expected, the file reads Docker Test 2. However, when I use the curl command to see if the webpage also reflects the change, I still get Docker Test 1 as the response. The index.html reflects the change, yet when I run the curl command or access the app from the browser, I still get the same result. I have tried the following, but to no avail.
Restart the service
Stop and start the container
Stop and start the boot2docker VM and docker daemon.
I have no clue as to why this is happening.
So I found this known bug with the VirtualBox VM that is used for running Docker on Mac.
This bug only shows up when content is shared between the host machine and the VirtualBox VM. Web servers like nginx, apache (and apparently vertx) have an optimisation: whenever we request a static file, the server uses sendfile to deliver it. The bug is that in the case of VirtualBox (in the scenario described above) we always get the first version of the file, no matter what we try. The workaround for nginx and apache is to turn sendfile off (a one-line sketch follows the steps below). For vertx, however, there is a hack that we use:
rename the file say login.html to login.html.moved (anything)
curl :/….../login.html (we won’t get anything)
rename the file back to its original name login.html.moved to login.html
Hard refresh the page (Command + Shift + R).
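For the nginx workaround mentioned above, the change is a single directive in the http, server or location block of your config, followed by a reload:

sendfile off;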
For further reading about this bug consult the following
Link1
Link2
Link3
Link4
I assume it is a caching problem. Did you try to set expires -1 in your index.html location configuration to disable caching for static files?
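As a sketch, that would be something like the following in the nginx config (the exact location block is an assumption about your setup):

location = /index.html {
    expires -1;    # emits an Expires header in the past plus Cache-Control: no-cache
}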

No incoming connection for PhpStorm with xdebug (nginx / php-fpm)

I figured I'd try using nginx instead of Apache and see how that works, and I'm up and running, but I cannot for the life of me figure out how to make PhpStorm capture the incoming xdebug connection. It worked perfectly when I was running Apache.
Usually, you'd get an "incoming connection" window in PhpStorm - this window is now conspicuously absent - and yes, I've read every single PhpStorm / Xdebug question on Stack Overflow and none of them has solved my issue.
Configuration:
OS: OSX Mavericks
PhpStorm version: 7.1
Xdebug version: 2.2.5
Note that I'm running nginx and PHP through php-fpm, which is working as expected. I've pointed PhpStorm at the same PHP executable that php-fpm is running, and it successfully finds Xdebug as the debugger.
Since php-fpm is running on port 9000, just like Xdebug, I changed the Xdebug port to 9900 and 9001 (tried both), checked via phpinfo() that the server has picked up the updated php.ini config, and made sure I updated the Xdebug port in PhpStorm. I've also enabled "Start listening for debug connections" in PhpStorm.
Xdebug config from php.ini:
[xdebug]
zend_extension = /usr/local/Cellar/php55/5.5.14/lib/php/extensions/no-debug-non-zts-20121212/xdebug.so
xdebug.auto_trace=0
xdebug.default_enable=1
xdebug.idekey="PHPSTORM"
xdebug.profiler_enable=0
xdebug.profiler_enable_trigger=0
xdebug.profiler_output_dir="/tmp"
xdebug.remote_enable=on
xdebug.remote_handler=dbgp
xdebug.remote_host=localhost
xdebug.remote_mode=req
xdebug.remote_port=9001
As mentioned - xdebug is loaded when I load phpinfo() in the browser and I've set the correct port in PhpStorm.
Thanks for your help.
I cannot stress enough the importance of one of the remarks - "Don't forget to set the cookie for xdebug".
I had everything right and my debugger still wouldn't attach due to this.
One recommendation I can make is to install the Xdebug helper Chrome extension. Once you have it, start the debug session from PhpStorm, navigate to the page you want to debug and turn on debugging in the extension by clicking the "bug" icon in the address bar.
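If you prefer not to install an extension, appending the Xdebug session parameter to the URL has the same effect as setting the cookie; host and page below are placeholders, and the IDE key matches the xdebug.idekey from the config above:

http://your-host/your-page.php?XDEBUG_SESSION_START=PHPSTORM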
It seems there was one thing I missed when I changed the settings - stopping listening for debug connections and then starting again. This seems to have fixed the issue...
Here is my case.
Make sure Xdebug is installed: <?php echo extension_loaded('xdebug') ? 'xdebug is loaded' : 'xdebug is not loaded';
Make sure the port is not already in use; a working example setup is:
nginx server:9000 <-> php-fpm:9000
ide_xdebug:9080 <-> php.ini_xdebug:9080
When you have configured xdebug.remote_host=, it is sometimes also necessary to add the option xdebug.remote_connect_back=0.
When PhpStorm complains 'Can't listen to port, port 9000 is busy', it is usually because some other application is using the same port, for example via a Docker expose or ports setting, or an application running outside Docker.
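To see which application is actually occupying the port, something like this helps (assuming macOS/Linux with lsof available):

lsof -nP -iTCP:9000 -sTCP:LISTEN

If php-fpm (or a Docker proxy) shows up there, move either it or the Xdebug/PhpStorm port, as described above.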
