Karaf freezing with log at debug when connecting with client

With Karaf 4.1.3 and Karaf 4.1.4, when I connect with client.sh (with the log level set to DEBUG) and perform a log:tail, I get the following...
2018-01-05 13:38:00,348 | DEBUG | 0]-nio2-thread-1 | Nio2Session | 48 - org.apache.sshd.core - 1.6.0 | handleCompletedWriteCycle(Nio2Session[local=/127.0.0.1:8101, remote=/127.0.0.1:50379]) finished writing len=68
screams out onto the console, and unless I press Ctrl+C the whole Karaf container hangs, and in the server window I get the following message:
2018-01-05 13:37:58,819 RMI Scheduler(0) ERROR Recursive call to appender PaxOsgi
2018-01-05 13:37:58,819 RMI Scheduler(0) ERROR Recursive call to appender PaxOsgi
2018-01-05 13:37:58,819 RMI Scheduler(0) ERROR Recursive call to appender PaxOsgi
2018-01-05 13:37:58,819 RMI Scheduler(0) ERROR Recursive call to appender PaxOsgi
This is with a clean unzip, running Oracle JDK 1.8, on both Windows and Ubuntu.
Any ideas?

Confirmed as a bug; ticket raised:
https://issues.apache.org/jira/browse/KARAF-5559
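Until the fix is released, one possible workaround (my own suggestion, not from the ticket) is to keep the chatty SSH internals above DEBUG, so log:tail does not feed its own output back through the SSH session. From the Karaf console:

log:set INFO org.apache.sshd

or, equivalently, in etc/org.ops4j.pax.logging.cfg (Karaf 4.1.x uses Log4j2 syntax):

log4j2.logger.sshd.name = org.apache.sshd
log4j2.logger.sshd.level = INFO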

Related

Artifactory service fails to start upon Fedora 35 reboot

I have installed jfrog-artifactory-oss (v7.31.11-73111900.x86_64) on Fedora 35 and enabled it as a system service to start at boot. But whenever I boot up my OS, the server never starts properly; I always need to kill the PID of the running Artifactory process. If I then do sudo service artifactory restart, it brings the server up cleanly and everything is good. How can I avoid having to do this little dance? Is there something about OS boot-up that is throwing Artifactory off?
When the server is not running properly after boot-up, I see logs like these in console.log:
2022-01-27T08:35:38.383Z [shell] [INFO] [] [artifactoryManage.sh:69] [main] - Artifactory Tomcat already started
2022-01-27T08:35:43.084Z [jfac] [WARN] [d84d2d549b318495] [o.j.c.ExecutionUtils:165] [pool-9-thread-2] - Retry 900 Elapsed 7.56 minutes failed: Registration with router on URL http://localhost:8046 failed with error: UNAVAILABLE: io exception. Trying again
That shows that the server is not running properly, but it doesn't give a clear idea of what to try next. Any suggestions?
Two things to check:
1. How the artifactory.service file in the systemd directory is set up.
2. What error appears in the logs whenever the OS is rebooted; check all the logs.
Hint: from the warning shared, it seems the router service is not able to start when the OS is rebooted, so whenever the OS is rebooted and the issue comes up, check router-service.log for any errors/warnings. A few diagnostic commands are sketched below.
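For example (a sketch of those checks; the unit name and log path are assumed from a default RPM install and may differ on your system):

systemctl cat artifactory.service     # show the unit file systemd actually loaded
journalctl -u artifactory.service -b  # service output since the current boot
tail -n 100 /opt/jfrog/artifactory/var/log/router-service.log

If the logs suggest the router is racing the network at boot, a hypothetical (unverified here) fix is a systemd drop-in that delays the service until the network is up:

# /etc/systemd/system/artifactory.service.d/override.conf
[Unit]
After=network-online.target
Wants=network-online.target

followed by sudo systemctl daemon-reload and a reboot to test.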

Apache airflow celery worker server running in dev mode on production build

I have created a production Docker image using the provided breeze command-line tool. However, when I run the airflow worker command, I get the following message on the command line.
Breeze command:
./breeze build-image --production-image --python 3.7 --additional-extras=jdbc --additional-python-deps="pandas pymysql" --additional-runtime-apt-deps="default-jre-headless"
Can anyone help with how to move the worker off the development server?
airflow-worker_1 | Starting flask
airflow-worker_1 | * Serving Flask app "airflow.utils.serve_logs" (lazy loading)
airflow-worker_1 | * Environment: production
airflow-worker_1 | WARNING: This is a development server. Do not use it in a production deployment.
airflow-worker_1 | Use a production WSGI server instead.
airflow-worker_1 | * Debug mode: off
airflow-worker_1 | [2021-02-08 21:57:58,409] {_internal.py:113} INFO - * Running on http://0.0.0.0:8793/ (Press CTRL+C to quit)
Here is a discussion by an Airflow maintainer on GitHub: https://github.com/apache/airflow/discussions/18519
It's harmless. It's an internal server run by the executor to share logs with the webserver. It has already been corrected in main to use a 'production' setup (though it's not REALLY needed in this case, as the log "traffic" and characteristics are not production-webserver-like).
The fix will be released in Airflow 2.2 (~a month from now).
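In the meantime, the practical route is to rebuild the production image against Airflow 2.2+ once it is out, for example (the tag and flag here are assumptions; check what your breeze checkout supports):

docker pull apache/airflow:2.2.0

or, if your breeze version supports pinning the Airflow version:

./breeze build-image --production-image --python 3.7 --install-airflow-version 2.2.0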

PhpUnit not executing via PhpStorm remote interpreter in Docker

The issue I've got is that PHPUnit does not function properly (something does happen) when just clicking "Run" (Shift+F10 on Windows) in PhpStorm.
First up, I followed these tutorials/setup guides:
https://blog.jetbrains.com/phpstorm/2016/11/docker-remote-interpreters/
https://blog.alejandrocelaya.com/2017/02/01/run-phpunit-tests-inside-docker-container-from-phpstorm/
https://stackoverflow.com/a/47578104
So now I've pretty much got a working setup, apart from the fact that it doesn't work.
Testing started at 15:21 ...
[docker://IMAGE_NAME:latest/]:php bin/.phpunit/phpunit-6.5/phpunit --configuration /var/www/html/phpunit.xml.dist --teamcity
PHPUnit 6.5.14 by Sebastian Bergmann and contributors.
Testing Project Test Suite
Fatal error: Uncaught PDOException: SQLSTATE[HYT00]: [unixODBC][Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired in /var/www/html/src/Legacy/Connection/MssqlConnection.php on line 178
PDOException: SQLSTATE[HYT00]: [unixODBC][Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired in /var/www/html/src/Legacy/Connection/MssqlConnection.php on line 178
Call Stack:
0.0003 393408 1. {main}() /var/www/html/bin/.phpunit/phpunit-6.5/phpunit:0
0.0571 923544 2. PHPUnit\TextUI\Command::main() /var/www/html/bin/.phpunit/phpunit-6.5/phpunit:17
0.0571 923656 3. Symfony\Bridge\PhpUnit\Legacy\CommandForV6->run() /var/www/html/bin/.phpunit/phpunit-6.5/src/TextUI/Command.php:148
0.2019 4269152 4. Symfony\Bridge\PhpUnit\Legacy\TestRunnerForV6->doRun() /var/www/html/bin/.phpunit/phpunit-6.5/src/TextUI/Command.php:195
0.2158 4697272 5. PHPUnit\Framework\TestSuite->run() /var/www/html/bin/.phpunit/phpunit-6.5/src/TextUI/TestRunner.php:545
0.2181 4702968 6. PHPUnit\Framework\TestResult->startTestSuite() /var/www/html/bin/.phpunit/phpunit-6.5/src/Framework/TestSuite.php:689
0.2233 4717824 7. App\Tests\Helper\DeleteDBOnceListener->startTestSuite() /var/www/html/bin/.phpunit/phpunit-6.5/src/Framework/TestResult.php:368
0.2270 4739216 8. App\Legacy\Connection\MssqlConnection->databaseExists() /var/www/html/tests/Helper/DeleteDBOnceListener.php:55
0.2270 4739216 9. App\Legacy\Connection\MssqlConnection->findDbFromDSN() /var/www/html/src/Legacy/Connection/MssqlConnection.php:38
0.2271 4740104 10. PDO->__construct() /var/www/html/src/Legacy/Connection/MssqlConnection.php:178
Process finished with exit code 255
Obviously this reads as: cannot connect to DB. But!
If I log into the Docker container and then run the command, it works! Command:
php bin/.phpunit/phpunit-6.5/phpunit --configuration /var/www/html/phpunit.xml.dist
Generates output:
user#hash:/var/www/html# php bin/.phpunit/phpunit-6.5/phpunit --configuration /var/www/html/phpunit.xml.dist
PHPUnit 6.5.14 by Sebastian Bergmann and contributors.
Testing Project Test Suite
Dropping current database...
.Creating database..
................................................................ 65 / 80 ( 81%)
............... 80 / 80 (100%)
Time: 3.25 minutes, Memory: 56.12MB
OK (80 tests, 336 assertions)
So why, when executing using "Run", does this fail when doing it from PhpStorm? Did I miss a setting?
Since you asked your question 3 years ago, you have probably answered it yourself by now. However, I will leave my answer for people who come here in the future.
The error HYT00 is a timeout error from the MSSQL client. It basically means that your client does not have access to the server (or that the server is responding very slowly).
If you configure PhpStorm to use the Docker container as an interpreter, that is exactly what it does: it only uses the PHP you have inside the Docker container as the interpreter, but does not run your code inside of the container. You are still running it in your host environment, in your host network, etc.
Make sure you have access to the database from your host machine and that you are using a correct connection string while running the tests. Your test configuration may contain Docker-internal hostnames (between Docker containers you may, for example, use container names as hostnames). When running from the host machine you need a hostname that is reachable from the host, like localhost:1433 if you have the ports mapped to the host machine.
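For example, if the tests read the connection string from phpunit.xml.dist, a host-reachable variant might look like this (the env variable name and credentials are placeholders of mine, not taken from the question):

<php>
    <!-- hypothetical override: point the tests at the port mapped onto the host -->
    <env name="DATABASE_URL" value="sqlsrv://user:secret@localhost:1433/app_test"/>
</php>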
With this checked, the tests should execute correctly.
Just an off-topic note: unless you are doing integration tests, it's not a good idea to connect to SQL in unit tests. Better to use mocks :) (a tiny sketch below).
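For instance, a minimal sketch with PHPUnit 6.5's built-in mocks (the test class is hypothetical; MssqlConnection and databaseExists() come from the stack trace above):

use PHPUnit\Framework\TestCase;
use App\Legacy\Connection\MssqlConnection;

class SomeServiceTest extends TestCase
{
    public function testWorksWithoutARealDatabase()
    {
        // createMock() builds a test double, so no real PDO connection is opened
        $connection = $this->createMock(MssqlConnection::class);
        $connection->method('databaseExists')->willReturn(true);

        // inject $connection into the code under test instead of a live connection
    }
}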

Chef recipe unable to install nginx on RHEL 7.3

I am trying to use the package manager in Chef to install the nginx server, but every time I run the cookbook on my client it just says:
Recipe: nginx::default
* yum_package[nginx] action install[2017-03-11T06:16:01-05:00] INFO: Processing yum_package[nginx] action install (nginx::default line 11)
* No candidate version available for nginx
================================================================================
Error executing action `install` on resource 'yum_package[nginx]'
================================================================================
Chef::Exceptions::Package
-------------------------
No candidate version available for nginx
Resource Declaration:
---------------------
# In /var/chef/cache/cookbooks/nginx/recipes/default.rb
11: package "nginx" do
12: action :install
13: end
14:
Compiled Resource:
------------------
# Declared in /var/chef/cache/cookbooks/nginx/recipes/default.rb:11:in `from_file'
yum_package("nginx") do
package_name "nginx"
action [:install]
retries 0
retry_delay 2
default_guard_interpreter :default
declared_type :package
cookbook_name "nginx"
recipe_name "default"
flush_cache {:before=>false, :after=>false}
end
Platform:
---------
x86_64-linux
[2017-03-11T06:16:29-05:00] INFO: Running queued delayed notifications before re-raising exception
Running handlers:
[2017-03-11T06:16:29-05:00] ERROR: Running exception handlers
Running handlers complete
[2017-03-11T06:16:29-05:00] ERROR: Exception handlers complete
Chef Client failed. 1 resources updated in 05 minutes 44 seconds
I even tried to install the epel-release package, but that was denied with similar errors.
Any idea how we could install nginx with a Chef recipe?
I also tried using yum_package, but had no luck installing it:
yum_package "nginx" do
action :install
end
Thanks
This means there is no package named nginx in your repositories. If you log into the machine you want to provision (with kitchen login, for example), you can try searching for the nginx package.
The best way to install it, if it is not in your repositories, is either adding nginx's official repo with a Chef repository resource (like yum_repository for CentOS) or downloading the tarball with the Chef remote_file resource; a minimal sketch of the repo option follows below.
If you choose the tarball option, be sure to generate a sha256 of the tarball you download and add it to the remote_file resource, so that among other things you prevent Chef from re-downloading the file on every run.
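A sketch of the repo option (the baseurl is nginx.org's published RHEL 7 repo; adjust it for your platform version and verify the GPG key):

yum_repository 'nginx' do
  description 'NGINX official stable repo'
  baseurl 'http://nginx.org/packages/rhel/7/$basearch/'
  gpgkey 'https://nginx.org/keys/nginx_signing.key'
  gpgcheck true
  enabled true
  action :create
end

package 'nginx'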
-Edit-
As Szymon says, you can also use the Nginx cookbook for this and avoid writing any special recipe.
As discussed here, you can use the official NGINX Chef cookbook, or just install epel-release before installing NGINX:
if platform_family?('rhel')
  package 'epel-release'
end

if platform_family?('debian')
  apt_update 'update'
end

package 'nginx'

WebLogic OBIEE Scheduler Component Down

I have an OBIEE 11g installation on a Red Hat machine, but I'm having problems getting it to run. I can start WebLogic and its services, so I'm able to enter the WebLogic console and Enterprise Manager, but problems come when I try to start the OBIEE components with the opmnctl command.
The steps I’m performing are the following:
1) Start WebLogic
cd /home/Oracle/Middleware/user_projects/domains/bifoundation_domain/bin/
./startWebLogic.sh
2) Start NodeManager
cd /home/Oracle/Middleware/wlserver_10.3/server/bin/
./startNodeManager.sh
3) Start Managed WebLogic
cd /home/Oracle/Middleware/user_projects/domains/bifoundation_domain/bin/
./startManagedWebLogic.sh bi_server1
4) Set up OBIEE Components
cd /home/Oracle/Middleware/instances/instance1/bin/
./opmnctl startall
The result is:
opmnctl startall: starting opmn and all managed processes...
================================================================================
opmn id=JustiziaInf.mmmmm.mmmmm.9999
Response: 4 of 5 processes started.
ias-instance id=instance1
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ias-component/process-type/process-set:
coreapplication_obisch1/OracleBISchedulerComponent/coreapplication_obisch1/
Error
--> Process (index=1,uid=1064189424,pid=4396)
failed to start a managed process after the maximum retry limit
Log:
/home/Oracle/Middleware/instances/instance1/diagnostics/logs/OracleBISchedulerComponent/
coreapplication_obisch1/console~coreapplication_obisch1~1.log
5) Check the status of components
cd /home/Oracle/Middleware/instances/instance1/bin/
./opmnctl status
Processes in Instance: instance1
---------------------------------+--------------------+---------+---------
ias-component | process-type | pid | status
---------------------------------+--------------------+---------+---------
coreapplication_obiccs1 | OracleBIClusterCo~ | 8221 | Alive
coreapplication_obisch1 | OracleBIScheduler~ | N/A | Down
coreapplication_obijh1 | OracleBIJavaHostC~ | 8726 | Alive
coreapplication_obips1 | OracleBIPresentat~ | 6921 | Alive
coreapplication_obis1 | OracleBIServerCom~ | 7348 | Alive
Read the log file at /home/Oracle/Middleware/instances/instance1/diagnostics/logs/OracleBISchedulerComponent/coreapplication_obisch1/console~coreapplication_obisch1~1.log.
I would recommend trying the steps in the link below, as this is a common issue when upgrading OBIEE.
http://www.askjohnobiee.com/2012/11/fyi-opmnctl-failed-to-start-managed.html
Not sure what your log says, but try the steps below and check whether it works:
Log in as superuser:
cd $ORACLE_HOME/Apache/Apache/bin
chmod 6750 .apachectl
Log out, log back in as the ORACLE user, and run:
opmnctl startproc process-type=OracleBIScheduler
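If the scheduler still shows as Down after that, restarting just the failing component (rather than startall) tends to surface a cleaner error; a sketch using the component id from the status output above:

./opmnctl stopproc ias-component=coreapplication_obisch1
./opmnctl startproc ias-component=coreapplication_obisch1
./opmnctl status -l     # -l adds ports, uptime and memory detail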
