I have a project opened via SSH in PhpStorm.
I'm trying to run a PHPUnit script with the debugger and I see this message:
Connection with 'Xdebug 2.6.0' was not established
In xdebug.log I see:
Log opened at 2018-07-11 08:52:42
I: Connecting to configured address/port: 46.170.254.34:9001.
W: Creating socket for '46.170.254.34:9001', poll success, but error: Operation now in progress (29).
E: Could not connect to client. :-(
Log closed at 2018-07-11 08:52:42
In the console I see:
ssh://xxxx@xxxx.pl:22/usr/bin/php -dxdebug.remote_enable=1 -dxdebug.remote_mode=req -dxdebug.remote_port=9001 -dxdebug.remote_host=11.11.11.11 /home/path/vendor/phpunit/phpunit/phpunit --bootstrap /home/path/vendor/autoload.php --configuration /home/path/phpunit.xml --filter "/(::testGettingMe)( .*)?$/" App\Tests\Functional\Context\Api\MeControllerTest /home/path/tests/Functional/Context/Api/MeControllerTest.php --teamcity
Any suggestions?
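The log above shows Xdebug trying to reach the IDE directly at a public IP, which is often blocked by NAT or a firewall. One common way to rule that out in an SSH setup (just a sketch; the user/host below are placeholders, and it assumes the IDE is listening on port 9001) is to tunnel the debugger connection back through SSH and point Xdebug at the server's own loopback:
# Forward the server's port 9001 back to the machine running PhpStorm:
ssh -R 9001:127.0.0.1:9001 user@xxxx.pl
# Then have Xdebug connect to the tunnel endpoint instead of a public IP:
php -dxdebug.remote_enable=1 -dxdebug.remote_host=127.0.0.1 -dxdebug.remote_port=9001 some-script.php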
I have the IROHA node running on my local Ubuntu machine with Docker and I am able to run all commands using the Docker shell.
I want to have the JS implementation of Iroha, so I ran the Dockerfile for gRPC, but it is not able to connect to IROHA.
Error:
WARN[1672] [core] grpc: addrConn.createTransport failed to connect to {dev.localdomain:50051 dev.localdomain:50051 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused". Reconnecting... system=system
(Screenshot: gRPC console)
I resolved the GrpcWebProxy issue by making some changes to the already provided solution; you can see it here:
https://github.com/AqeelKazmi/IrohaDockerServices
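For anyone hitting the same dial tcp 127.0.0.1:50051: connection refused error: it usually means the client container is resolving the Iroha endpoint to its own loopback rather than to the Iroha container. A rough sketch of one way to check this (the container name iroha, network name iroha-net, client image name, and IROHA_HOST variable are all hypothetical):
# Put both containers on one user-defined network so the name "iroha" resolves:
docker network create iroha-net
docker network connect iroha-net iroha
docker run --network iroha-net -e IROHA_HOST=iroha:50051 my-grpc-client
# Quick check from the host that the port is actually reachable:
nc -zv 127.0.0.1 50051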
The issue I've got is that PHPUnit does not function properly (something does happen) when just clicking "Run" (Shift+F10 on Windows) in PhpStorm.
First up, I followed these tutorials/setup guides:
https://blog.jetbrains.com/phpstorm/2016/11/docker-remote-interpreters/
https://blog.alejandrocelaya.com/2017/02/01/run-phpunit-tests-inside-docker-container-from-phpstorm/
https://stackoverflow.com/a/47578104
So now I've pretty much got a working setup, except it doesn't work:
Testing started at 15:21 ...
[docker://IMAGE_NAME:latest/]:php bin/.phpunit/phpunit-6.5/phpunit --configuration /var/www/html/phpunit.xml.dist --teamcity
PHPUnit 6.5.14 by Sebastian Bergmann and contributors.
Testing Project Test Suite
Fatal error: Uncaught PDOException: SQLSTATE[HYT00]: [unixODBC][Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired in /var/www/html/src/Legacy/Connection/MssqlConnection.php on line 178
PDOException: SQLSTATE[HYT00]: [unixODBC][Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired in /var/www/html/src/Legacy/Connection/MssqlConnection.php on line 178
Call Stack:
0.0003 393408 1. {main}() /var/www/html/bin/.phpunit/phpunit-6.5/phpunit:0
0.0571 923544 2. PHPUnit\TextUI\Command::main() /var/www/html/bin/.phpunit/phpunit-6.5/phpunit:17
0.0571 923656 3. Symfony\Bridge\PhpUnit\Legacy\CommandForV6->run() /var/www/html/bin/.phpunit/phpunit-6.5/src/TextUI/Command.php:148
0.2019 4269152 4. Symfony\Bridge\PhpUnit\Legacy\TestRunnerForV6->doRun() /var/www/html/bin/.phpunit/phpunit-6.5/src/TextUI/Command.php:195
0.2158 4697272 5. PHPUnit\Framework\TestSuite->run() /var/www/html/bin/.phpunit/phpunit-6.5/src/TextUI/TestRunner.php:545
0.2181 4702968 6. PHPUnit\Framework\TestResult->startTestSuite() /var/www/html/bin/.phpunit/phpunit-6.5/src/Framework/TestSuite.php:689
0.2233 4717824 7. App\Tests\Helper\DeleteDBOnceListener->startTestSuite() /var/www/html/bin/.phpunit/phpunit-6.5/src/Framework/TestResult.php:368
0.2270 4739216 8. App\Legacy\Connection\MssqlConnection->databaseExists() /var/www/html/tests/Helper/DeleteDBOnceListener.php:55
0.2270 4739216 9. App\Legacy\Connection\MssqlConnection->findDbFromDSN() /var/www/html/src/Legacy/Connection/MssqlConnection.php:38
0.2271 4740104 10. PDO->__construct() /var/www/html/src/Legacy/Connection/MssqlConnection.php:178
Process finished with exit code 255
Obviously this reads as: cannot connect to DB. But!
If I log into the Docker instance, and then run the command, it works! Command:
php bin/.phpunit/phpunit-6.5/phpunit --configuration /var/www/html/phpunit.xml.dist
Generates output:
user@hash:/var/www/html# php bin/.phpunit/phpunit-6.5/phpunit --configuration /var/www/html/phpunit.xml.dist
PHPUnit 6.5.14 by Sebastian Bergmann and contributors.
Testing Project Test Suite
Dropping current database...
.Creating database..
................................................................ 65 / 80 ( 81%)
............... 80 / 80 (100%)
Time: 3.25 minutes, Memory: 56.12MB
OK (80 tests, 336 assertions)
So why, when executing using "Run", does this fail when doing it from PhpStorm? Did I miss a setting?
Since you asked your question 3 years ago, you have probably answered it yourself by now. However, I will leave my answer for people who come here in the future.
The error HYT00 is a timeout error from the MSSQL client. It basically means that your client does not have access to the server (or that the server is responding very slowly).
If you configure PhpStorm to use the Docker container as an interpreter, it is doing exactly that: only using the PHP you have inside the Docker container as an interpreter, but not running your code inside the container's environment. You are still running it in your host environment, in your host network, etc.
Make sure you have access to the database from your host machine and that you are using a correct connection string while running the tests. In your test configuration you may have the internal hostnames of Docker (between Docker containers you may, for example, use container names as their hostnames). When running from the host machine you will need an accessible hostname, like localhost:1433 if you have the ports mapped to the host machine.
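A quick way to verify this (a sketch; the hostname, port mapping, and credentials are assumptions based on the defaults above) is to test the SQL Server port from wherever the tests actually run:
# Check that the SQL Server port is reachable (assumes 1433 is mapped to the host):
nc -zv localhost 1433
# Or test a full login with the SQL Server command-line tools, if installed:
sqlcmd -S localhost,1433 -U sa -P 'YourPassword' -Q 'SELECT 1'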
With this checked, the tests should execute correctly.
Just as an off-topic note: unless you are doing integration tests, it's not a good idea to connect to SQL in unit tests. Better to use mocks :)
I have an Airflow installation (on Kubernetes). My setup uses DaskExecutor. I also configured remote logging to S3. However, while a task is running I cannot see its log; I get this error instead:
*** Log file does not exist: /airflow/logs/dbt/run_dbt/2018-11-01T06:00:00+00:00/3.log
*** Fetching from: http://airflow-worker-74d75ccd98-6g9h5:8793/log/dbt/run_dbt/2018-11-01T06:00:00+00:00/3.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-74d75ccd98-6g9h5', port=8793): Max retries exceeded with url: /log/dbt/run_dbt/2018-11-01T06:00:00+00:00/3.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7d0668ae80>: Failed to establish a new connection: [Errno -2] Name or service not known',))
Once the task is done, the log is shown correctly.
I believe what Airflow is doing is:
- for finished tasks, read logs from S3
- for running tasks, connect to the executor's log server endpoint and show those
It looks like Airflow is using celery.worker_log_server_port to connect to my Dask executor to fetch logs from there.
How do I configure DaskExecutor to expose a log server endpoint?
My configuration (airflow.cfg):
[core]
remote_logging = True
remote_base_log_folder = s3://some-s3-path
executor = DaskExecutor
[dask]
cluster_address = 127.0.0.1:8786
[celery]
worker_log_server_port = 8793
What I verified:
- the log file exists and is being written to on the executor while the task is running
- ran netstat -tunlp in the executor container, but did not find any extra port exposed where logs could be served from
UPDATE
Have a look at the serve_logs Airflow CLI command; I believe it does exactly the same thing.
We solved the problem by simply starting a Python HTTP server on the worker.
Dockerfile:
# Expose the logs directory under the path the webserver requests (/log/...):
RUN mkdir -p $AIRFLOW_HOME/serve
RUN ln -s $AIRFLOW_HOME/logs $AIRFLOW_HOME/serve/log
worker.sh (run by Docker CMD):
#!/usr/bin/env bash
# Serve the logs on the port the Airflow webserver polls (8793 by default).
cd $AIRFLOW_HOME/serve
python3 -m http.server 8793 &
cd -
# Start the Dask worker, passing through any arguments.
dask-worker "$@"
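As a sanity check (the worker hostname and log path below are just the ones from the error message above), the webserver's fetch should now succeed:
# Request a running task's log directly from the worker's log server:
curl http://airflow-worker-74d75ccd98-6g9h5:8793/log/dbt/run_dbt/2018-11-01T06:00:00+00:00/3.log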
I am installing ICP 2.1.0.1 and I received an error at this task:
TASK [master: Waiting for MariaDB service to start]
msg: The MariaDB component failed to start.
After this message the installation completed with a failed status.
We are installing ICP with 3 masters, 3 proxies and 2 workers. We have 1 IP for the master VIP and 1 for the proxy VIP.
I tried the installation multiple times and every attempt failed with the same error.
In prior cases of that error, the correct DB admin password was not used, so check the DB user and password to resolve the issue.
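A quick way to test those credentials directly (the hostname and user below are placeholders) is with the MySQL client:
# Try logging in to MariaDB from a master node with the configured admin credentials:
mysql -h master1.example.com -P 3306 -u <db_admin_user> -p -e 'SELECT 1;'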
Would you validate whether each master host was able to access port 3306 on the other hosts?
If you run with .. install -vv | tee -a install-log.txt, do you get additional details as well?
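For the port check, something like this from each master node (hostnames are placeholders) would confirm connectivity:
# Verify that port 3306 on the other masters is reachable:
nc -zv master2.example.com 3306
nc -zv master3.example.com 3306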
The error was solved by following the steps below.
Check whether kubelet is running:
Log in to your master node.
Run the following command to check kubelet status:
systemctl status kubelet
If kubelet is not running, run the following command to get the logs:
journalctl -u kubelet &> kubelet.log
We found this error in kubelet.log:
Error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false.
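One standard fix implied by that message is to disable swap on the node (a sketch; adjust for your distro):
# Turn off swap immediately:
swapoff -a
# Keep it off across reboots by commenting out swap entries in /etc/fstab:
sed -i '/ swap / s/^/#/' /etc/fstab
# Then restart kubelet:
systemctl restart kubelet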
We found this troubleshooting guide at the first link below, and the solution in ICP issue 4651:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/troubleshoot/etcd_fails.html
https://github.ibm.com/IBMPrivateCloud/roadmap/issues/4651
I installed Cloudify 3.4 according to the Cloudify docs. When I installed the manager, I executed this:
# cfy bootstrap --install-plugins -p openstack-manager-blueprint.yaml -i openstack-manager-blueprint-inputs.yaml
An error occurred:
[ERROR] Workflow failed: Task failed 'fabric_plugin.tasks.run_script' -> Timed out trying to connect to 192.168.17.15 (tried 5 times)
I have already built an external network 192.168.17.0/24 and I have already installed:
cloudify_docker_plugin-1.3.2-py27-none-linux_x86_64-Ubuntu-trusty.wgn
cloudify_fabric_plugin-1.4.1-py27-none-linux_x86_64-centos-Core.wgn
cloudify_fabric_plugin-1.4.1-py27-none-linux_x86_64-redhat-Maipo.wgn
cloudify_host_pool_plugin-1.4-py27-none-linux_x86_64-centos-Core.wgn
cloudify_openstack_plugin-1.4-py27-none-linux_x86_64-redhat-Maipo.wgn
So, how do I solve this error? Thanks to everyone who can help!
It seems that you can't connect to the manager.
Please make sure that you have an SSH connection from the CLI to the manager.
Since you are bootstrapping an OpenStack manager, you should make sure you have an external (floating) IP if you are outside of OpenStack, or that the CLI machine is on the same network if you are on OpenStack.
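A minimal connectivity check from the CLI machine (the key path and user are assumptions; the IP is the one from the error):
# Verify basic SSH reachability to the manager host:
ssh -i ~/.ssh/manager-key.pem centos@192.168.17.15 'echo ok'
# If that times out, check routing/security groups for port 22 first:
nc -zv 192.168.17.15 22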