Cannot update Drupal 9 db and locale with Drush 10

I'm using Drush locally without any problems, but on my hosting the locale and database update commands fail. I have checked many things, with no luck. The strange part is that commands like config import/export, sql:dump, and drush status work fine, and there is a working SQL connection.
Full output:
php74 vendor/bin/drush locale:import pl ../translations/custom-translations.pl.po --type=customized --override=all --debug
[preflight] Config paths: /home/pathtomywebsite/vendor/drush/drush/drush.yml
[preflight] Alias paths: /home/pathtomywebsite/web/drush/sites,/home/pathtomywebsite/drush/sites
[preflight] Commandfile search paths: /home/pathtomywebsite/vendor/drush/drush/src
[debug] Bootstrap further to find locale:import [0.07 sec, 8.75 MB]
[debug] Trying to bootstrap as far as we can [0.07 sec, 8.75 MB]
[debug] Drush bootstrap phase: bootstrapDrupalRoot() [0.07 sec, 8.75 MB]
[debug] Change working directory to /home/pathtomywebsite/web [0.07 sec, 8.75 MB]
[debug] Initialized Drupal 9.2.0 root directory at /home/pathtomywebsite/web [0.07 sec, 8.75 MB]
[debug] Drush bootstrap phase: bootstrapDrupalSite() [0.07 sec, 9.08 MB]
[debug] Initialized Drupal site default at sites/default [0.08 sec, 9.31 MB]
[debug] Drush bootstrap phase: bootstrapDrupalConfiguration() [0.08 sec, 9.31 MB]
[debug] Add service modifier [0.08 sec, 9.49 MB]
[debug] Drush bootstrap phase: bootstrapDrupalDatabase() [0.08 sec, 9.96 MB]
[debug] Successfully connected to the Drupal database. [0.08 sec, 9.96 MB]
[debug] Drush bootstrap phase: bootstrapDrupalFull() [0.08 sec, 9.96 MB]
[debug] Start bootstrap of the Drupal Kernel. [0.08 sec, 9.96 MB]
[debug] Finished bootstrap of the Drupal Kernel. [0.15 sec, 16.23 MB]
[debug] Add a command: twig-tweak:validate [0.2 sec, 21.52 MB]
[debug] Add a command: twig-tweak:debug [0.2 sec, 21.52 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\config\ConfigCommands [0.22 sec, 23.4 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\config\ConfigExportCommands [0.22 sec, 23.43 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\config\ConfigImportCommands [0.22 sec, 23.44 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\BatchCommands [0.22 sec, 23.45 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\CliCommands [0.22 sec, 23.45 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\DrupalCommands [0.22 sec, 23.46 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\DeployHookCommands [0.22 sec, 23.47 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\EntityCommands [0.22 sec, 23.48 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\ImageCommands [0.22 sec, 23.49 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\JsonapiCommands [0.22 sec, 23.5 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\LanguageCommands [0.22 sec, 23.5 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\LocaleCommands [0.22 sec, 23.51 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\MessengerCommands [0.22 sec, 23.53 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\MigrateRunnerCommands [0.22 sec, 23.54 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\QueueCommands [0.22 sec, 23.59 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\RoleCommands [0.22 sec, 23.6 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\StateCommands [0.23 sec, 23.62 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\TwigCommands [0.23 sec, 23.64 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\UserCommands [0.23 sec, 23.64 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\ViewsCommands [0.23 sec, 23.69 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\WatchdogCommands [0.23 sec, 23.71 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\pm\PmCommands [0.23 sec, 23.74 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\pm\ThemeCommands [0.23 sec, 23.76 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\sql\SanitizeCommands [0.23 sec, 23.76 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\sql\SanitizeCommentsCommands [0.23 sec, 23.77 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\sql\SanitizeSessionsCommands [0.23 sec, 23.77 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\sql\SanitizeUserFieldsCommands [0.23 sec, 23.77 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\sql\SanitizeUserTableCommands [0.23 sec, 23.78 MB]
[debug] Add a commandfile class: Drupal\entity_reference_revisions\Commands\EntityReferenceRevisionsCommands [0.23 sec, 23.78 MB]
[debug] Add a commandfile class: Drupal\token\Commands\TokenCommands [0.23 sec, 23.79 MB]
[debug] Add a commandfile class: Drupal\pathauto\Commands\PathautoCommands [0.23 sec, 23.79 MB]
[debug] Done with bootstrap max in Application::bootstrapAndFind(): trying to find locale:import again. [0.23 sec, 23.8 MB]
[debug] Starting bootstrap to none [0.23 sec, 23.91 MB]
[debug] Drush bootstrap phase 0 [0.23 sec, 23.91 MB]
[debug] Try to validate bootstrap phase 0 [0.24 sec, 23.91 MB]
[info] Executing: /home/pathtomywebsite/vendor/drush/drush/drush batch-process 15 --uri=default --root=/home/pathtomywebsite/web [0.27 sec, 25.74 MB]
>
>
> Command batch-process was not found. Drush was unable to query the database. As a result, many commands are unavailable. Re-run your command with --debug to see relevant log messages.
>
>
In ProcessBase.php line 155:
[InvalidArgumentException]
Output is empty.
Exception trace:
at /home/pathtomywebsite/vendor/consolidation/site-process/src/ProcessBase.php:155
Consolidation\SiteProcess\ProcessBase->getOutputAsJson() at /home/pathtomywebsite/vendor/drush/drush/includes/batch.inc:157
_drush_backend_batch_process() at /home/pathtomywebsite/vendor/drush/drush/includes/batch.inc:80
drush_backend_batch_process() at /home/pathtomywebsite/vendor/drush/drush/src/Drupal/Commands/core/LocaleCommands.php:268
Drush\Drupal\Commands\core\LocaleCommands->import() at n/a:n/a
call_user_func_array() at /home/pathtomywebsite/vendor/consolidation/annotated-command/src/CommandProcessor.php:257
Consolidation\AnnotatedCommand\CommandProcessor->runCommandCallback() at /home/pathtomywebsite/vendor/consolidation/annotated-command/src/CommandProcessor.php:212
Consolidation\AnnotatedCommand\CommandProcessor->validateRunAndAlter() at /home/pathtomywebsite/vendor/consolidation/annotated-command/src/CommandProcessor.php:176
Consolidation\AnnotatedCommand\CommandProcessor->process() at /home/pathtomywebsite/vendor/consolidation/annotated-command/src/AnnotatedCommand.php:311
Consolidation\AnnotatedCommand\AnnotatedCommand->execute() at /home/pathtomywebsite/vendor/symfony/console/Command/Command.php:255
Symfony\Component\Console\Command\Command->run() at /home/pathtomywebsite/vendor/symfony/console/Application.php:1027
Symfony\Component\Console\Application->doRunCommand() at /home/pathtomywebsite/vendor/symfony/console/Application.php:273
Symfony\Component\Console\Application->doRun() at /home/pathtomywebsite/vendor/symfony/console/Application.php:149
Symfony\Component\Console\Application->run() at /home/pathtomywebsite/vendor/drush/drush/src/Runtime/Runtime.php:118
Drush\Runtime\Runtime->doRun() at /home/pathtomywebsite/vendor/drush/drush/src/Runtime/Runtime.php:48
Drush\Runtime\Runtime->run() at /home/pathtomywebsite/vendor/drush/drush/drush.php:72
require() at /home/pathtomywebsite/vendor/drush/drush/drush:4
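A diagnostic sketch (commands assembled from the [info] line above, not from the original report): re-running the spawned sub-command by hand shows whether the PHP binary it resolves is the culprit.
# as Drush spawns it (uses whatever `php` resolves to on PATH):
/home/pathtomywebsite/vendor/drush/drush/drush batch-process 15 --uri=default --root=/home/pathtomywebsite/web --debug
# forcing the PHP 7.4 interpreter:
php74 /home/pathtomywebsite/vendor/drush/drush/drush batch-process 15 --uri=default --root=/home/pathtomywebsite/web --debug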
Drush status:
php74 vendor/bin/drush status
Drupal version : 9.2.0
Site URI : http://default
DB driver : mysql
DB hostname : localhost
DB port : 3306
DB username : ****
DB name : ****
Database : Connected
Drupal bootstrap : Successful
Default theme : ttp
Admin theme : seven
PHP binary : /usr/local/php7.4/bin/php
PHP config : /usr/local/php7.4/php.ini
PHP OS : Linux
Drush script : /home/pathtomywebsite/vendor/drush/drush/drush
Drush version : 10.5.0
Drush temp : /tmp
Drush configs : /home/pathtomywebsite/vendor/drush/drush/drush.yml
Install profile : standard
Drupal root : /home/pathtomywebsite/web
Site path : sites/default
Files, Public : sites/default/files
Files, Temp : /tmp
drush sql:connect returns a working connection string for MySQL. I'm stuck; maybe someone has had a similar problem?
Debug output from the updatedb command:
php74 vendor/bin/drush updatedb
In Process.php line 266:
The command "/home/pathtomywebsite/vendor/drush/drush/drush updatedb:status --no-entity-updates --uri=default --root=/home/pathtomywebsite/web" failed.
Exit Code: 1 (General error)
Working directory:
Output:
================
Error Output:
================
In BootstrapHook.php line 32:
Bootstrap failed. Run your command with -vvv for more information.
user@server:~/somepath$ php74 vendor/bin/drush updatedb -vvv
[preflight] Config paths: /home/pathtomywebsite/vendor/drush/drush/drush.yml
[preflight] Alias paths: /home/pathtomywebsite/web/drush/sites,/home/pathtomywebsite/drush/sites
[preflight] Commandfile search paths: /home/pathtomywebsite/vendor/drush/drush/src
[debug] Starting bootstrap to full [0.06 sec, 8.78 MB]
[debug] Drush bootstrap phase 5 [0.06 sec, 8.78 MB]
[debug] Try to validate bootstrap phase 5 [0.06 sec, 8.78 MB]
[debug] Try to validate bootstrap phase 5 [0.06 sec, 8.78 MB]
[debug] Try to bootstrap at phase 5 [0.06 sec, 8.78 MB]
[debug] Drush bootstrap phase: bootstrapDrupalRoot() [0.06 sec, 8.78 MB]
[debug] Change working directory to /home/pathtomywebsite/web [0.06 sec, 8.78 MB]
[debug] Initialized Drupal 9.2.0 root directory at /home/pathtomywebsite/web [0.06 sec, 8.78 MB]
[debug] Try to validate bootstrap phase 5 [0.06 sec, 8.78 MB]
[debug] Try to bootstrap at phase 5 [0.06 sec, 9.17 MB]
[debug] Drush bootstrap phase: bootstrapDrupalSite() [0.06 sec, 9.17 MB]
[debug] Initialized Drupal site default at sites/default [0.06 sec, 9.34 MB]
[debug] Try to validate bootstrap phase 5 [0.06 sec, 9.34 MB]
[debug] Try to bootstrap at phase 5 [0.06 sec, 9.34 MB]
[debug] Drush bootstrap phase: bootstrapDrupalConfiguration() [0.06 sec, 9.34 MB]
[debug] Add service modifier [0.07 sec, 9.55 MB]
[debug] Try to validate bootstrap phase 5 [0.07 sec, 9.55 MB]
[debug] Try to bootstrap at phase 5 [0.07 sec, 10.06 MB]
[debug] Drush bootstrap phase: bootstrapDrupalDatabase() [0.07 sec, 10.06 MB]
[debug] Successfully connected to the Drupal database. [0.07 sec, 10.06 MB]
[debug] Try to validate bootstrap phase 5 [0.07 sec, 10.06 MB]
[debug] Try to bootstrap at phase 5 [0.07 sec, 10.06 MB]
[debug] Drush bootstrap phase: bootstrapDrupalFull() [0.07 sec, 10.06 MB]
[debug] Start bootstrap of the Drupal Kernel. [0.07 sec, 10.06 MB]
[info] entity_reference_revisions should have an extra.drush.services section in its composer.json. See http://docs.drush.org/en/10.x/commands/#specifying-the-services-file. [0.1 sec, 12.42 MB]
[debug] Found drush.services.yml for token Drush commands [0.1 sec, 12.57 MB]
[info] twig_tweak should have an extra.drush.services section in its composer.json. See http://docs.drush.org/en/10.x/commands/#specifying-the-services-file. [0.1 sec, 12.57 MB]
[debug] Found drush.services.yml for pathauto Drush commands [0.1 sec, 12.57 MB]
[debug] Get container builder [0.1 sec, 12.59 MB]
[debug] Service modifier alter. [0.11 sec, 12.69 MB]
[debug] process drush.console.services console.command [0.17 sec, 17.37 MB]
[debug] Found tagged service twig_tweak.validate [0.17 sec, 17.37 MB]
[debug] Found tagged service twig_tweak.debug [0.17 sec, 17.37 MB]
[debug] process drush.command.services drush.command [0.17 sec, 17.37 MB]
[debug] Found tagged service config.commands [0.17 sec, 17.37 MB]
[debug] Found tagged service config.export.commands [0.17 sec, 17.37 MB]
[debug] Found tagged service config.import.commands [0.17 sec, 17.37 MB]
[debug] Found tagged service batch.commands [0.17 sec, 17.37 MB]
[debug] Found tagged service cli.commands [0.17 sec, 17.37 MB]
[debug] Found tagged service drupal.commands [0.17 sec, 17.37 MB]
[debug] Found tagged service deploy_hook.commands [0.17 sec, 17.37 MB]
[debug] Found tagged service entity.commands [0.17 sec, 17.37 MB]
[debug] Found tagged service image.commands [0.17 sec, 17.37 MB]
[debug] Found tagged service jsonapi.commands [0.17 sec, 17.38 MB]
[debug] Found tagged service language.commands [0.17 sec, 17.38 MB]
[debug] Found tagged service locale.commands [0.17 sec, 17.38 MB]
[debug] Found tagged service messenger.commands [0.17 sec, 17.38 MB]
[debug] Found tagged service migrate_runner.commands [0.17 sec, 17.38 MB]
[debug] Found tagged service queue.commands [0.17 sec, 17.38 MB]
[debug] Found tagged service role.commands [0.17 sec, 17.38 MB]
[debug] Found tagged service state.commands [0.17 sec, 17.38 MB]
[debug] Found tagged service twig.commands [0.17 sec, 17.38 MB]
[debug] Found tagged service user.commands [0.17 sec, 17.38 MB]
[debug] Found tagged service views.commands [0.17 sec, 17.38 MB]
[debug] Found tagged service watchdog.commands [0.17 sec, 17.39 MB]
[debug] Found tagged service pm.commands [0.17 sec, 17.39 MB]
[debug] Found tagged service theme.commands [0.17 sec, 17.39 MB]
[debug] Found tagged service sanitize.commands [0.17 sec, 17.39 MB]
[debug] Found tagged service sanitize.comments.commands [0.17 sec, 17.39 MB]
[debug] Found tagged service sanitize.sessions.commands [0.17 sec, 17.39 MB]
[debug] Found tagged service sanitize.userfields.commands [0.17 sec, 17.39 MB]
[debug] Found tagged service sanitize.usertable.commands [0.17 sec, 17.39 MB]
[debug] Found tagged service entity_reference_revisions.commands [0.17 sec, 17.39 MB]
[debug] Found tagged service token.commands [0.17 sec, 17.39 MB]
[debug] Found tagged service pathauto.commands [0.17 sec, 17.39 MB]
[debug] process drush.command_info_alterer.services drush.command_info_alterer [0.17 sec, 17.39 MB]
[debug] process drush.generator.services drush.generator [0.17 sec, 17.39 MB]
[debug] Finished bootstrap of the Drupal Kernel. [0.3 sec, 26.24 MB]
[debug] Add a command: twig-tweak:validate [0.4 sec, 36.8 MB]
[debug] Add a command: twig-tweak:debug [0.4 sec, 36.8 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\config\ConfigCommands [0.42 sec, 38.48 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\config\ConfigExportCommands [0.42 sec, 38.52 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\config\ConfigImportCommands [0.42 sec, 38.52 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\BatchCommands [0.42 sec, 38.53 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\CliCommands [0.42 sec, 38.54 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\DrupalCommands [0.42 sec, 38.54 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\DeployHookCommands [0.42 sec, 38.56 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\EntityCommands [0.42 sec, 38.56 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\ImageCommands [0.42 sec, 38.57 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\JsonapiCommands [0.42 sec, 38.58 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\LanguageCommands [0.42 sec, 38.59 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\LocaleCommands [0.42 sec, 38.6 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\MessengerCommands [0.42 sec, 38.62 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\MigrateRunnerCommands [0.42 sec, 38.62 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\QueueCommands [0.43 sec, 38.67 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\RoleCommands [0.43 sec, 38.68 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\StateCommands [0.43 sec, 38.71 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\TwigCommands [0.43 sec, 38.72 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\UserCommands [0.43 sec, 38.73 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\ViewsCommands [0.43 sec, 38.77 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\core\WatchdogCommands [0.43 sec, 38.8 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\pm\PmCommands [0.43 sec, 38.83 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\pm\ThemeCommands [0.43 sec, 38.84 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\sql\SanitizeCommands [0.43 sec, 38.85 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\sql\SanitizeCommentsCommands [0.43 sec, 38.85 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\sql\SanitizeSessionsCommands [0.43 sec, 38.85 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\sql\SanitizeUserFieldsCommands [0.43 sec, 38.86 MB]
[debug] Add a commandfile class: Drush\Drupal\Commands\sql\SanitizeUserTableCommands [0.43 sec, 38.86 MB]
[debug] Add a commandfile class: Drupal\entity_reference_revisions\Commands\EntityReferenceRevisionsCommands [0.43 sec, 38.87 MB]
[debug] Add a commandfile class: Drupal\token\Commands\TokenCommands [0.43 sec, 38.87 MB]
[debug] Add a commandfile class: Drupal\pathauto\Commands\PathautoCommands [0.43 sec, 38.87 MB]
[info] Executing: /home/pathtomywebsite/vendor/drush/drush/drush updatedb:status --no-entity-updates --uri=default --root=/home/pathtomywebsite/web [0.56 sec, 41.87 MB]
In Process.php line 266:
[Symfony\Component\Process\Exception\ProcessFailedException]
The command "/home/pathtomywebsite/vendor/drush/drush/drush updatedb:status --no-entity-updates --uri=default --root=/home/pathtomywebsite/web" failed.
Exit Code: 1 (General error)
Working directory:
Output:
================
Error Output:
================
In BootstrapHook.php line 32:
Bootstrap failed. Run your command with -vvv for more information.
Exception trace:
at /home/pathtomywebsite/vendor/symfony/process/Process.php:266
Symfony\Component\Process\Process->mustRun() at /home/pathtomywebsite/vendor/drush/drush/src/Commands/core/UpdateDBCommands.php:67
Drush\Commands\core\UpdateDBCommands->updatedb() at n/a:n/a
call_user_func_array() at /home/pathtomywebsite/vendor/consolidation/annotated-command/src/CommandProcessor.php:257
Consolidation\AnnotatedCommand\CommandProcessor->runCommandCallback() at /home/pathtomywebsite/vendor/consolidation/annotated-command/src/CommandProcessor.php:212
Consolidation\AnnotatedCommand\CommandProcessor->validateRunAndAlter() at /home/pathtomywebsite/vendor/consolidation/annotated-command/src/CommandProcessor.php:176
Consolidation\AnnotatedCommand\CommandProcessor->process() at /home/pathtomywebsite/vendor/consolidation/annotated-command/src/AnnotatedCommand.php:311
Consolidation\AnnotatedCommand\AnnotatedCommand->execute() at /home/pathtomywebsite/vendor/symfony/console/Command/Command.php:255
Symfony\Component\Console\Command\Command->run() at /home/pathtomywebsite/vendor/symfony/console/Application.php:1027
Symfony\Component\Console\Application->doRunCommand() at /home/pathtomywebsite/vendor/symfony/console/Application.php:273
Symfony\Component\Console\Application->doRun() at /home/pathtomywebsite/vendor/symfony/console/Application.php:149
Symfony\Component\Console\Application->run() at /home/pathtomywebsite/vendor/drush/drush/src/Runtime/Runtime.php:118
Drush\Runtime\Runtime->doRun() at /home/pathtomywebsite/vendor/drush/drush/src/Runtime/Runtime.php:48
Drush\Runtime\Runtime->run() at /home/pathtomywebsite/vendor/drush/drush/drush.php:72
require() at /home/pathtomywebsite/vendor/drush/drush/drush:4

I had the same problem after updating the docker container with a newer PHP version.
I was able to track down the issue by comparing php.ini.
The difference was in variables_order setting.
Run the following command in the terminal to check the values:
php -i | grep variables_order
In my case the output was:
variables_order => GPCS => GPCS
After changing it to EGPCS, the script ran successfully.
To update the value, set variables_order to "EGPCS" in your php.ini file.
You can get the path of the config file by running in the terminal:
php --ini
It will list all included config files. Use the one at the top, labeled Loaded Configuration File; the setting must go in that file. In my case it was:
Loaded Configuration File: /usr/local/etc/php/php.ini
Find variables_order and make sure the line reads:
variables_order = "EGPCS"

In my case the problem was the PHP version on my hosting. To run with 7.4 I have to use the php74 CLI:
**php74** vendor/bin/drush updatedb
Drush will not work with a custom PHP binary name. Even after I modified this path in the Drush classes, I still got the same error. It's a known bug, reported a few times in the Drush GitHub issues.
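The status output above shows the 7.4 interpreter at /usr/local/php7.4/bin/php, and the spawned sub-process invokes vendor/drush/drush/drush directly, resolving PHP through its #!/usr/bin/env php shebang. A generic workaround sketch (an addition, not from the original answer): put a php shim ahead of the system binary on PATH so sub-processes also resolve to 7.4:
mkdir -p ~/bin
printf '#!/bin/sh\nexec /usr/local/php7.4/bin/php "$@"\n' > ~/bin/php
chmod +x ~/bin/php
export PATH="$HOME/bin:$PATH"   # add to your shell profile to make it permanent
php -v                          # should now report PHP 7.4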

Related

Fluentbit stops processing after a while

To push messages to the Elassandra server, we are using the Fluent Bit version listed below.
"fluent/fluent-bit" imageName
"1.6.10-debug" imageTag
As long as there are fresh messages in the log file, Fluent Bit pushes them to the Elassandra server, but eventually it pauses and stops processing.
The only workaround we have is to kill the Fluent Bit pod, but this only fixes the problem temporarily.
Could you please let us know what the possible issue could be and provide some pointers to fix it?
{"index":{"_index":"wso2am-uat-requests-2022.10","_type":"requests","_id":"f1bcb7ad-8b81-d749-e17a-e460218a4f74"}}
{"#timestamp":"2022-10-14T06:53:19.434Z","event":{"date":"2022-10-14 06:53:18","meta_clientType":"{\"correlationID\":\"99de4331-720e-4dd9-959c-6584494fb546\",\"keyType\":\"PRODUCTION\"}","applicationConsumerKey":"rRIFXR2yscFcuZ8cIlWtO1_tYT0a","applicationName":"SRE-PrometheusExporter_FM-MyCEVA","applicationId":"1690","applicationOwner":"svc_fm_sre#service.logistics.corp","apiContext":"/fm/myceva/usermaintenance/1.2.0","apiName":"MyCevaUserMaintenanceAPI","apiVersion":"1.2.0","apiResourcePath":"/usermaintenance/1.2.0/clientSettings?email=harold.boutan#protonmail.com","apiResourceTemplate":"/clientSettings","apiMethod":"GET","apiCreator":"PalliyaliS","apiCreatorTenantDomain":"carbon.super","apiTier":"Bronze","apiHostname":"apim-intgw-uat-west.kus.logistics.corp","username":"svc_fm_sre#service.logistics.corp#carbon.super","userTenantDomain":"carbon.super","userIp":"10.40.69.107","userAgent":"Go-http-client/1.1","requestTimestamp":1665730398703,"throttledOut":false,"responseTime":172,"serviceTime":27,"backendTime":145,"responseCacheHit":false,"responseSize":0,"protocol":"http--1","responseCode":200,"destination":"https://myceva-user-maintenance-stage.kus.logistics.corp/userMaintenance/v1","securityLatency":25,"throttlingLatency":0,"requestMedLat":0,"responseMedLat":0,"backendLatency":145,"otherLatency":0,"gatewayType":"SYNAPSE","label":"Synapse","properties":"{}"}}
{"index":{"_index":"wso2am-uat-requests-2022.10","_type":"requests","_id":"0f609734-607e-6c59-be20-f4918136dbe3"}}
{"#timestamp":"2022-10-14T06:53:19.434Z","event":{"date":"2022-10-14 06:53:18","meta_clientType":"{\"correlationID\":\"90841c12-b0af-466c-bacd-19a689f7cbda\",\"keyType\":\"PRODUCTION\"}","applicationConsumerKey":"qHAj5zfrF6CPfLWWlKb5u07oDz4a","applicationName":"FM-CBD-Backoffice","applicationId":"443","applicationOwner":"svc_backoffice#service.logistics.corp","apiContext":"/fm/myceva/backoffice/1.7.0","apiName":"MyCevaBackofficeAPI","apiVersion":"1.7.0","apiResourcePath":"/backoffice/1.7.0/getClientSettings?client_id=&gluu_user_principal=harold.boutan#protonmail.com","apiResourceTemplate":"/getClientSettings","apiMethod":"GET","apiCreator":"PalliyaliS","apiCreatorTenantDomain":"carbon.super","apiTier":"Bronze","apiHostname":"apim-intgw-uat-west.kus.logistics.corp","username":"svc_backoffice#service.logistics.corp#carbon.super","userTenantDomain":"carbon.super","userIp":"10.40.69.105","userAgent":"Java/11.0.16","requestTimestamp":1665730398814,"throttledOut":false,"responseTime":54,"serviceTime":28,"backendTime":26,"responseCacheHit":false,"responseSize":0,"protocol":"http--1","responseCode":200,"destination":"https://backoffice-stage.kus.logistics.corp/backoffice","securityLatency":27,"throttlingLatency":0,"requestMedLat":0,"responseMedLat":0,"backendLatency":26,"otherLatency":0,"gatewayType":"SYNAPSE","label":"Synapse","properties":"{}"}}
[2022/10/14 06:53:20] [debug] [upstream] KA connection #86 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 is now available
[2022/10/14 06:53:20] [debug] [task] destroy task=0x7fc962c576c0 (task_id=5)
[2022/10/14 06:53:24] [debug] [upstream] KA connection #83 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 is now available
[2022/10/14 06:53:24] [debug] [task] destroy task=0x7fc962c57300 (task_id=0)
[2022/10/14 06:53:24] [debug] [upstream] KA connection #81 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 is now available
[2022/10/14 06:53:24] [debug] [task] destroy task=0x7fc962c57c60 (task_id=8)
[2022/10/14 06:53:25] [debug] [upstream] KA connection #87 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 is now available
[2022/10/14 06:53:25] [debug] [task] destroy task=0x7fc962c57940 (task_id=6)
[2022/10/14 06:53:25] [debug] [upstream] KA connection #82 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 is now available
[2022/10/14 06:53:25] [debug] [task] destroy task=0x7fc962c57440 (task_id=1)
[2022/10/14 06:53:44] [debug] [input:tail:tail.0] scanning path /home/fluent/wso2am-analytics-3.2.0/events/alert.log
[2022/10/14 06:53:44] [debug] [input:tail:tail.0] cannot read info from: /home/fluent/wso2am-analytics-3.2.0/events/alert.log
[2022/10/14 06:53:44] [debug] [input:tail:tail.0] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/alert.log'
[2022/10/14 06:53:44] [debug] [input:tail:tail.1] scanning path /home/fluent/wso2am-analytics-3.2.0/events/throttled.log
[2022/10/14 06:53:44] [debug] [input:tail:tail.1] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/throttled.log, inode 1099532988751
[2022/10/14 06:53:44] [debug] [input:tail:tail.1] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/throttled.log'
[2022/10/14 06:53:44] [debug] [input:tail:tail.2] scanning path /home/fluent/wso2am-analytics-3.2.0/events/faults.log
[2022/10/14 06:53:44] [debug] [input:tail:tail.2] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/faults.log, inode 1099532988752
[2022/10/14 06:53:44] [debug] [input:tail:tail.2] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/faults.log'
[2022/10/14 06:53:44] [debug] [input:tail:tail.3] scanning path /home/fluent/wso2am-analytics-3.2.0/events/requests.log
[2022/10/14 06:53:44] [debug] [input:tail:tail.3] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/requests.log, inode 1099532988750
[2022/10/14 06:53:44] [debug] [input:tail:tail.3] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/requests.log'
[2022/10/14 06:53:48] [debug] [upstream] drop keepalive connection #85 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 (keepalive idle timeout)
[2022/10/14 06:53:48] [debug] [upstream] drop keepalive connection #80 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 (keepalive idle timeout)
[2022/10/14 06:53:48] [debug] [upstream] drop keepalive connection #84 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 (keepalive idle timeout)
[2022/10/14 06:53:48] [debug] [upstream] KA connection #85 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 has been disconnected by the remote service
[2022/10/14 06:53:48] [debug] [upstream] KA connection #80 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 has been disconnected by the remote service
[2022/10/14 06:53:48] [debug] [upstream] KA connection #84 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 has been disconnected by the remote service
[2022/10/14 06:53:50] [debug] [upstream] drop keepalive connection #88 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 (keepalive idle timeout)
[2022/10/14 06:53:50] [debug] [upstream] drop keepalive connection #86 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 (keepalive idle timeout)
[2022/10/14 06:53:50] [debug] [upstream] KA connection #88 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 has been disconnected by the remote service
[2022/10/14 06:53:50] [debug] [upstream] KA connection #86 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 has been disconnected by the remote service
[2022/10/14 06:53:54] [debug] [upstream] drop keepalive connection #83 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 (keepalive idle timeout)
[2022/10/14 06:53:54] [debug] [upstream] drop keepalive connection #81 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 (keepalive idle timeout)
[2022/10/14 06:53:54] [debug] [upstream] KA connection #83 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 has been disconnected by the remote service
[2022/10/14 06:53:54] [debug] [upstream] KA connection #81 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 has been disconnected by the remote service
[2022/10/14 06:53:56] [debug] [upstream] drop keepalive connection #87 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 (keepalive idle timeout)
[2022/10/14 06:53:56] [debug] [upstream] drop keepalive connection #82 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 (keepalive idle timeout)
[2022/10/14 06:53:56] [debug] [upstream] KA connection #87 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 has been disconnected by the remote service
[2022/10/14 06:53:56] [debug] [upstream] KA connection #82 to usdcfed-elassandra-kubelb.kus.logistics.corp:9200 has been disconnected by the remote service
[2022/10/14 06:54:44] [debug] [input:tail:tail.0] scanning path /home/fluent/wso2am-analytics-3.2.0/events/alert.log
[2022/10/14 06:54:44] [debug] [input:tail:tail.0] cannot read info from: /home/fluent/wso2am-analytics-3.2.0/events/alert.log
[2022/10/14 06:54:44] [debug] [input:tail:tail.0] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/alert.log'
[2022/10/14 06:54:44] [debug] [input:tail:tail.3] scanning path /home/fluent/wso2am-analytics-3.2.0/events/requests.log
[2022/10/14 06:54:44] [debug] [input:tail:tail.3] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/requests.log, inode 1099532988750
[2022/10/14 06:54:44] [debug] [input:tail:tail.3] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/requests.log'
[2022/10/14 06:54:44] [debug] [input:tail:tail.1] scanning path /home/fluent/wso2am-analytics-3.2.0/events/throttled.log
[2022/10/14 06:54:44] [debug] [input:tail:tail.1] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/throttled.log, inode 1099532988751
[2022/10/14 06:54:44] [debug] [input:tail:tail.1] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/throttled.log'
[2022/10/14 06:54:44] [debug] [input:tail:tail.2] scanning path /home/fluent/wso2am-analytics-3.2.0/events/faults.log
[2022/10/14 06:54:44] [debug] [input:tail:tail.2] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/faults.log, inode 1099532988752
[2022/10/14 06:54:44] [debug] [input:tail:tail.2] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/faults.log'
[2022/10/14 06:55:44] [debug] [input:tail:tail.1] scanning path /home/fluent/wso2am-analytics-3.2.0/events/throttled.log
[2022/10/14 06:55:44] [debug] [input:tail:tail.1] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/throttled.log, inode 1099532988751
[2022/10/14 06:55:44] [debug] [input:tail:tail.1] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/throttled.log'
[2022/10/14 06:55:44] [debug] [input:tail:tail.0] scanning path /home/fluent/wso2am-analytics-3.2.0/events/alert.log
[2022/10/14 06:55:44] [debug] [input:tail:tail.0] cannot read info from: /home/fluent/wso2am-analytics-3.2.0/events/alert.log
[2022/10/14 06:55:44] [debug] [input:tail:tail.0] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/alert.log'
[2022/10/14 06:55:44] [debug] [input:tail:tail.3] scanning path /home/fluent/wso2am-analytics-3.2.0/events/requests.log
[2022/10/14 06:55:44] [debug] [input:tail:tail.3] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/requests.log, inode 1099532988750
[2022/10/14 06:55:44] [debug] [input:tail:tail.3] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/requests.log'
[2022/10/14 06:55:44] [debug] [input:tail:tail.2] scanning path /home/fluent/wso2am-analytics-3.2.0/events/faults.log
[2022/10/14 06:55:44] [debug] [input:tail:tail.2] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/faults.log, inode 1099532988752
[2022/10/14 06:55:44] [debug] [input:tail:tail.2] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/faults.log'
[2022/10/14 06:56:44] [debug] [input:tail:tail.1] scanning path /home/fluent/wso2am-analytics-3.2.0/events/throttled.log
[2022/10/14 06:56:44] [debug] [input:tail:tail.1] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/throttled.log, inode 1099532988751
[2022/10/14 06:56:44] [debug] [input:tail:tail.1] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/throttled.log'
[2022/10/14 06:56:44] [debug] [input:tail:tail.2] scanning path /home/fluent/wso2am-analytics-3.2.0/events/faults.log
[2022/10/14 06:56:44] [debug] [input:tail:tail.2] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/faults.log, inode 1099532988752
[2022/10/14 06:56:44] [debug] [input:tail:tail.2] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/faults.log'
[2022/10/14 06:56:44] [debug] [input:tail:tail.0] scanning path /home/fluent/wso2am-analytics-3.2.0/events/alert.log
[2022/10/14 06:56:44] [debug] [input:tail:tail.0] cannot read info from: /home/fluent/wso2am-analytics-3.2.0/events/alert.log
[2022/10/14 06:56:44] [debug] [input:tail:tail.0] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/alert.log'
[2022/10/14 06:56:44] [debug] [input:tail:tail.3] scanning path /home/fluent/wso2am-analytics-3.2.0/events/requests.log
[2022/10/14 06:56:44] [debug] [input:tail:tail.3] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/requests.log, inode 1099532988750
[2022/10/14 06:56:44] [debug] [input:tail:tail.3] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/requests.log'
[2022/10/14 06:57:44] [debug] [input:tail:tail.2] scanning path /home/fluent/wso2am-analytics-3.2.0/events/faults.log
[2022/10/14 06:57:44] [debug] [input:tail:tail.2] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/faults.log, inode 1099532988752
[2022/10/14 06:57:44] [debug] [input:tail:tail.2] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/faults.log'
[2022/10/14 06:57:44] [debug] [input:tail:tail.1] scanning path /home/fluent/wso2am-analytics-3.2.0/events/throttled.log
[2022/10/14 06:57:44] [debug] [input:tail:tail.1] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/throttled.log, inode 1099532988751
[2022/10/14 06:57:44] [debug] [input:tail:tail.1] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/throttled.log'
[2022/10/14 06:57:44] [debug] [input:tail:tail.0] scanning path /home/fluent/wso2am-analytics-3.2.0/events/alert.log
[2022/10/14 06:57:44] [debug] [input:tail:tail.0] cannot read info from: /home/fluent/wso2am-analytics-3.2.0/events/alert.log
[2022/10/14 06:57:44] [debug] [input:tail:tail.0] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/alert.log'
[2022/10/14 06:57:44] [debug] [input:tail:tail.3] scanning path /home/fluent/wso2am-analytics-3.2.0/events/requests.log
[2022/10/14 06:57:44] [debug] [input:tail:tail.3] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/requests.log, inode 1099532988750
[2022/10/14 06:57:44] [debug] [input:tail:tail.3] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/requests.log'
[2022/10/14 06:58:44] [debug] [input:tail:tail.2] scanning path /home/fluent/wso2am-analytics-3.2.0/events/faults.log
[2022/10/14 06:58:44] [debug] [input:tail:tail.2] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/faults.log, inode 1099532988752
[2022/10/14 06:58:44] [debug] [input:tail:tail.2] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/faults.log'
[2022/10/14 06:58:44] [debug] [input:tail:tail.1] scanning path /home/fluent/wso2am-analytics-3.2.0/events/throttled.log
[2022/10/14 06:58:44] [debug] [input:tail:tail.1] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/throttled.log, inode 1099532988751
[2022/10/14 06:58:44] [debug] [input:tail:tail.1] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/throttled.log'
[2022/10/14 06:58:44] [debug] [input:tail:tail.0] scanning path /home/fluent/wso2am-analytics-3.2.0/events/alert.log
[2022/10/14 06:58:44] [debug] [input:tail:tail.0] cannot read info from: /home/fluent/wso2am-analytics-3.2.0/events/alert.log
[2022/10/14 06:58:44] [debug] [input:tail:tail.0] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/alert.log'
[2022/10/14 06:58:44] [debug] [input:tail:tail.3] scanning path /home/fluent/wso2am-analytics-3.2.0/events/requests.log
[2022/10/14 06:58:44] [debug] [input:tail:tail.3] scan_blog add(): dismissed: /home/fluent/wso2am-analytics-3.2.0/events/requests.log, inode 1099532988750
[2022/10/14 06:58:44] [debug] [input:tail:tail.3] 0 new files found on path '/home/fluent/wso2am-analytics-3.2.0/events/requests.log'
That's an old version. Do you see the same behavior with a newer version (1.9 or building from the master branch)?
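For example, bumping the image tag in the values quoted above and checking whether the stall recurs (the exact tag is an assumption; use whatever current 1.9.x release fits):
"fluent/fluent-bit" imageName
"1.9.10-debug" imageTag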

Sparklyr connection to YARN cluster fails

I am trying to connect to a Spark cluster using sparklyr in yarn-client mode.
In local mode (master = "local") my Spark setup works, but when I try to connect to the cluster, I get the following error:
Error in force(code) :
Failed during initialize_connection: java.lang.NoClassDefFoundError: com/sun/jersey/api/client/config/ClientConfig
(see full error log below)
The setup is as follows: the Spark cluster (hosted on AWS), set up with Ambari, runs YARN 3.1.1, Spark 2.3.2, HDFS 3.1.1, and some other services, and works with other platforms (i.e., non-R/Python applications set up with Ambari). Note that setting up the R machine with Ambari is not possible, as it runs on Ubuntu while the Spark cluster runs on CentOS 7.
On my R machine I use the following code. Note that I have installed Java 8 (OpenJDK) and the correct Spark version.
Inside my YARN_CONF_DIR I have placed the yarn-site.xml file, as exported from Ambari (Services -> Download All Client Configs). I have also tried copying the hdfs-site.xml and hive-site.xml files, with the same result.
library(sparklyr)
library(DBI)
# spark_install("2.3.2")
spark_installed_versions()
#> spark hadoop dir
#> 1 2.3.2 2.7 /home/david/spark/spark-2.3.2-bin-hadoop2.7
# use Java 8 instead of Java 11 (Java 11 is not supported with Spark 2.3.2, only with 3.0.0+)
Sys.setenv(JAVA_HOME = "/usr/lib/jvm/java-8-openjdk-amd64/")
Sys.setenv(SPARK_HOME = "/home/david/spark/spark-2.3.2-bin-hadoop2.7/")
Sys.setenv(YARN_CONF_DIR = "/home/david/Spark-test/yarn-conf")
conf <- spark_config()
conf$spark.executor.memory <- "500M"
conf$spark.executor.cores <- 2
conf$spark.executor.instances <- 1
conf$spark.dynamicAllocation.enabled <- "false"
sc <- spark_connect(master = "yarn-client", config = conf)
#> Error in force(code) :
#> Failed during initialize_connection: java.lang.NoClassDefFoundError: com/sun/jersey/api/client/config/ClientConfig
#> ...
I am not really sure how to debug this, on which machine the error originates, or how to fix it, so any help or hints are greatly appreciated!
Edit / Progress
So far I have found out that the Spark version installed by sparklyr (from here) bundles the Glassfish Jersey implementation, whereas my cluster depends on the Oracle/Sun Jersey 1.x implementation (hence the com/sun/... path).
This applies to the following Java packages:
library(tidyverse)
library(glue)
ll <- list.files("~/spark/spark-2.3.2-bin-hadoop2.7/jars/", pattern = "^jersey", full.names = TRUE)
df <- map_dfr(ll, function(f) {
  x <- system(glue("jar tvf {f}"), intern = TRUE)
  tibble(file = f, class = str_extract(x, "[^ ]+$"))
})
df %>%
  filter(str_detect(class, "com/sun")) %>%
  count(file)
#> # A tibble: 4 x 2
#> file n
#> <chr> <int>
#> 1 /home/david/spark/spark-2.3.2-bin-hadoop2.7/jars//activation-1.1.1.jar 15
#> 2 /home/david/spark/spark-2.3.2-bin-hadoop2.7/jars//derby.log 1194
#> 3 /home/david/spark/spark-2.3.2-bin-hadoop2.7/jars//jersey-client-1.19.jar 108
#> 4 /home/david/spark/spark-2.3.2-bin-hadoop2.7/jars//jersey-server-2.22.2.jar 22
I have tried loading the latest jar files from Maven (e.g., from this) for jersey-client.jar and jersey-core.jar, and now the connection takes ages and never finishes (at least it's no longer the same error, so progress, I guess...). Any idea what the cause of this issue is?
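Two workarounds are commonly reported for this exact TimelineClient error (both are sketches, not verified against this cluster): disable the YARN timeline service on the client, e.g. conf$`spark.hadoop.yarn.timeline-service.enabled` <- "false" before spark_connect(), or supply the Jersey 1.19 jars the Hadoop YARN client was compiled against instead of mixing 1.x and 2.x:
cd "$SPARK_HOME/jars"
# fetch the com.sun.jersey 1.x artifacts from Maven Central
curl -O https://repo1.maven.org/maven2/com/sun/jersey/jersey-client/1.19/jersey-client-1.19.jar
curl -O https://repo1.maven.org/maven2/com/sun/jersey/jersey-core/1.19/jersey-core-1.19.jar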
Full Error log
Error in force(code) :
Failed during initialize_connection: java.lang.NoClassDefFoundError: com/sun/jersey/api/client/config/ClientConfig
at org.apache.hadoop.yarn.client.api.TimelineClient.createTimelineClient(TimelineClient.java:55)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createTimelineClient(YarnClientImpl.java:181)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceInit(YarnClientImpl.java:168)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:151)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:934)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:925)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:925)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sparklyr.Invoke.invoke(invoke.scala:147)
at sparklyr.StreamHandler.handleMethodCall(stream.scala:136)
at sparklyr.StreamHandler.read(stream.scala:61)
at sparklyr.BackendHandler$$anonfun$channelRead0$1.apply$mcV$sp(handler.scala:58)
at scala.util.control.Breaks.breakable(Breaks.scala:38)
at sparklyr.BackendHandler.channelRead0(handler.scala:38)
at sparklyr.BackendHandler.channelRead0(handler.scala:14)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.sun.jersey.api.client.config.ClientConfig
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 49 more
Log: /tmp/RtmpIKnflg/filee462cec58ee_spark.log
---- Output Log ----
20/07/16 10:20:42 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/07/16 10:20:42 INFO sparklyr: Session (3779) is starting under 127.0.0.1 port 8880
20/07/16 10:20:42 INFO sparklyr: Session (3779) found port 8880 is not available
20/07/16 10:20:42 INFO sparklyr: Backend (3779) found port 8884 is available
20/07/16 10:20:42 INFO sparklyr: Backend (3779) is registering session in gateway
20/07/16 10:20:42 INFO sparklyr: Backend (3779) is waiting for registration in gateway
20/07/16 10:20:42 INFO sparklyr: Backend (3779) finished registration in gateway with status 0
20/07/16 10:20:42 INFO sparklyr: Backend (3779) is waiting for sparklyr client to connect to port 8884
20/07/16 10:20:43 INFO sparklyr: Backend (3779) accepted connection
20/07/16 10:20:43 INFO sparklyr: Backend (3779) is waiting for sparklyr client to connect to port 8884
20/07/16 10:20:43 INFO sparklyr: Backend (3779) received command 0
20/07/16 10:20:43 INFO sparklyr: Backend (3779) found requested session matches current session
20/07/16 10:20:43 INFO sparklyr: Backend (3779) is creating backend and allocating system resources
20/07/16 10:20:43 INFO sparklyr: Backend (3779) is using port 8885 for backend channel
20/07/16 10:20:43 INFO sparklyr: Backend (3779) created the backend
20/07/16 10:20:43 INFO sparklyr: Backend (3779) is waiting for r process to end
20/07/16 10:20:43 INFO SparkContext: Running Spark version 2.3.2
20/07/16 10:20:43 WARN SparkConf: spark.master yarn-client is deprecated in Spark 2.0+, please instead use "yarn" with specified deploy mode.
20/07/16 10:20:43 INFO SparkContext: Submitted application: sparklyr
20/07/16 10:20:43 INFO SecurityManager: Changing view acls to: ubuntu
20/07/16 10:20:43 INFO SecurityManager: Changing modify acls to: ubuntu
20/07/16 10:20:43 INFO SecurityManager: Changing view acls groups to:
20/07/16 10:20:43 INFO SecurityManager: Changing modify acls groups to:
20/07/16 10:20:43 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ubuntu); groups with view permissions: Set(); users with modify permissions: Set(ubuntu); groups with modify permissions: Set()
20/07/16 10:20:43 INFO Utils: Successfully started service 'sparkDriver' on port 42419.
20/07/16 10:20:43 INFO SparkEnv: Registering MapOutputTracker
20/07/16 10:20:43 INFO SparkEnv: Registering BlockManagerMaster
20/07/16 10:20:43 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/07/16 10:20:43 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/07/16 10:20:43 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-583db378-821a-4990-bfd2-5fcaf95d071b
20/07/16 10:20:44 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/07/16 10:20:44 INFO SparkEnv: Registering OutputCommitCoordinator
20/07/16 10:20:44 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
20/07/16 10:20:44 INFO Utils: Successfully started service 'SparkUI' on port 4041.
20/07/16 10:20:44 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://{SPARK IP}
Then in the /tmp/RtmpIKnflg/filee462cec58ee_spark.log file
20/07/16 10:09:07 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/07/16 10:09:07 INFO sparklyr: Session (11296) is starting under 127.0.0.1 port 8880
20/07/16 10:09:07 INFO sparklyr: Session (11296) found port 8880 is not available
20/07/16 10:09:07 INFO sparklyr: Backend (11296) found port 8882 is available
20/07/16 10:09:07 INFO sparklyr: Backend (11296) is registering session in gateway
20/07/16 10:09:07 INFO sparklyr: Backend (11296) is waiting for registration in gateway
20/07/16 10:09:07 INFO sparklyr: Backend (11296) finished registration in gateway with status 0
20/07/16 10:09:07 INFO sparklyr: Backend (11296) is waiting for sparklyr client to connect to port 8882
20/07/16 10:09:07 INFO sparklyr: Backend (11296) accepted connection
20/07/16 10:09:07 INFO sparklyr: Backend (11296) is waiting for sparklyr client to connect to port 8882
20/07/16 10:09:07 INFO sparklyr: Backend (11296) received command 0
20/07/16 10:09:07 INFO sparklyr: Backend (11296) found requested session matches current session
20/07/16 10:09:07 INFO sparklyr: Backend (11296) is creating backend and allocating system resources
20/07/16 10:09:07 INFO sparklyr: Backend (11296) is using port 8883 for backend channel
20/07/16 10:09:07 INFO sparklyr: Backend (11296) created the backend
20/07/16 10:09:07 INFO sparklyr: Backend (11296) is waiting for r process to end
20/07/16 10:09:08 INFO SparkContext: Running Spark version 2.3.2
20/07/16 10:09:08 WARN SparkConf: spark.master yarn-client is deprecated in Spark 2.0+, please instead use "yarn" with specified deploy mode.
20/07/16 10:09:08 INFO SparkContext: Submitted application: sparklyr
20/07/16 10:09:08 INFO SecurityManager: Changing view acls to: david
20/07/16 10:09:08 INFO SecurityManager: Changing modify acls to: david
20/07/16 10:09:08 INFO SecurityManager: Changing view acls groups to:
20/07/16 10:09:08 INFO SecurityManager: Changing modify acls groups to:
20/07/16 10:09:08 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(david); groups with view permissions: Set(); users with modify permissions: Set(david); groups with modify permissions: Set()
20/07/16 10:09:08 INFO Utils: Successfully started service 'sparkDriver' on port 44541.
20/07/16 10:09:08 INFO SparkEnv: Registering MapOutputTracker
20/07/16 10:09:08 INFO SparkEnv: Registering BlockManagerMaster
20/07/16 10:09:08 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/07/16 10:09:08 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/07/16 10:09:08 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-d7b67ab2-508c-4488-ac1b-7ee0e787aa79
20/07/16 10:09:08 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/07/16 10:09:08 INFO SparkEnv: Registering OutputCommitCoordinator
20/07/16 10:09:08 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/07/16 10:09:08 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://{THE INTERNAL SPARK IP}:4040
20/07/16 10:09:08 INFO SparkContext: Added JAR file:/home/david/R/x86_64-pc-linux-gnu-library/4.0/sparklyr/java/sparklyr-2.3-2.11.jar at spark://{THE INTERNAL SPARK IP}:44541/jars/sparklyr-2.3-2.11.jar with timestamp 1594894148685
20/07/16 10:09:09 ERROR sparklyr: Backend (11296) failed calling getOrCreate on 11: java.lang.NoClassDefFoundError: com/sun/jersey/api/client/config/ClientConfig
at org.apache.hadoop.yarn.client.api.TimelineClient.createTimelineClient(TimelineClient.java:55)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createTimelineClient(YarnClientImpl.java:181)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceInit(YarnClientImpl.java:168)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:151)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:934)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:925)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:925)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sparklyr.Invoke.invoke(invoke.scala:147)
at sparklyr.StreamHandler.handleMethodCall(stream.scala:136)
at sparklyr.StreamHandler.read(stream.scala:61)
at sparklyr.BackendHandler$$anonfun$channelRead0$1.apply$mcV$sp(handler.scala:58)
at scala.util.control.Breaks.breakable(Breaks.scala:38)
at sparklyr.BackendHandler.channelRead0(handler.scala:38)
at sparklyr.BackendHandler.channelRead0(handler.scala:14)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.sun.jersey.api.client.config.ClientConfig
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 49 more

nginx proxy_pass does not take effect

I want to deploy a Flask app and followed a tutorial to get this done using nginx.
As the tutorial instructs, I do the following:
sudo nano /etc/nginx/sites-available/app
this file contains:
server {
    listen 80;
    server_name server_domain_or_IP;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/pi/Desktop/python_scripts/internetdisplay/app.sock;
    }
}
A systemd unit was created and is successfully running; it created the app.sock file in the 'internetdisplay' directory. systemctl status app.service reports:
● app.service - Gunicorn instance to serve myproject
   Loaded: loaded (/etc/systemd/system/app.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2019-11-10 21:16:49 CET; 16h ago
 Main PID: 438 (gunicorn)
    Tasks: 4 (limit: 2200)
   Memory: 46.4M
   CGroup: /system.slice/app.service
           ├─438 /usr/bin/python2 /usr/bin/gunicorn --workers 3 --bind unix:app.sock -m 007 wsgi:app
           ├─679 /usr/bin/python2 /usr/bin/gunicorn --workers 3 --bind unix:app.sock -m 007 wsgi:app
           ├─681 /usr/bin/python2 /usr/bin/gunicorn --workers 3 --bind unix:app.sock -m 007 wsgi:app
           └─682 /usr/bin/python2 /usr/bin/gunicorn --workers 3 --bind unix:app.sock -m 007 wsgi:app
Nov 10 21:16:49 raspberrypi systemd[1]: Started Gunicorn instance to serve myproject.
Nov 10 21:16:57 raspberrypi gunicorn[438]: [2019-11-10 21:16:57 +0000] [438] [INFO] Starting gunicorn 19.9.0
Nov 10 21:16:57 raspberrypi gunicorn[438]: [2019-11-10 21:16:57 +0000] [438] [INFO] Listening at: unix:app.sock (438)
Nov 10 21:16:57 raspberrypi gunicorn[438]: [2019-11-10 21:16:57 +0000] [438] [INFO] Using worker: sync
Nov 10 21:16:57 raspberrypi gunicorn[438]: [2019-11-10 21:16:57 +0000] [679] [INFO] Booting worker with pid: 679
Nov 10 21:16:57 raspberrypi gunicorn[438]: [2019-11-10 21:16:57 +0000] [681] [INFO] Booting worker with pid: 681
Nov 10 21:16:57 raspberrypi gunicorn[438]: [2019-11-10 21:16:57 +0000] [682] [INFO] Booting worker with pid: 682
Then I link to sites-enabled and restart nginx:
sudo ln -s /etc/nginx/sites-available/app /etc/nginx/sites-enabled
sudo systemctl restart nginx
But browsing to http://localhost leads to a "this site can't be reached" error.
It sounds like your location block is not set up correctly to find your resources.
I assume that this is not the actual location of your unix socket:
/home/tasnuva/work/deployment/src/app.sock
Check the following:
- the systemd unit file is creating the socket in the expected location
- the daemon is indeed running and the socket file exists
- your nginx config is pointing to the correct socket file
If none of this turns anything up, please update your question with the relevant error log entries. A quick way to run these checks is sketched below.
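For example, assuming the socket path from the question (adjust if yours differs):
# Validate the nginx configuration first
sudo nginx -t
# Confirm the socket file exists where nginx expects it
ls -l /home/pi/Desktop/python_scripts/internetdisplay/app.sock
# Talk to gunicorn directly through the socket, bypassing nginx
curl --unix-socket /home/pi/Desktop/python_scripts/internetdisplay/app.sock http://localhost/
# If gunicorn answers but nginx does not, inspect the nginx error log
sudo tail -n 50 /var/log/nginx/error.log
If the direct socket request succeeds, the problem is on the nginx side (config or permissions); if it fails, the problem is with gunicorn or the socket itself.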

Serving API via Flask / Gunicorn / Nginx: Connection refused

I'm having trouble getting gunicorn and nginx to work together so I can serve a simple API via Flask:
Locally, running gunicorn and getting responses from the server works fine:
gunicorn wsgi:app (start the server)
[2019-06-11 23:12:48 +0000] [14615] [INFO] Starting gunicorn 19.9.0
[2019-06-11 23:12:48 +0000] [14615] [INFO] Listening at: http://127.0.0.1:8000 (14615)
[2019-06-11 23:12:48 +0000] [14615] [INFO] Using worker: sync
[2019-06-11 23:12:48 +0000] [14619] [INFO] Booting worker with pid: 14619
curl http://127.0.0.1:8000/predict (client calls the server for a prediction)
output: "SERVER WORKS"
The problem arises when I try to use Nginx as well.
/etc/systemd/system/app.service
[Unit]
Description=Gunicorn instance to serve app
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=/root/server
ExecStart=/usr/local/bin/gunicorn --bind unix:app.sock -m 007 wsgi:app
[Install]
WantedBy=multi-user.target
/etc/nginx/sites-available/app
server {
    listen 80;
    server_name [SERVER_IP_ADDRESS];

    location / {
        include proxy_params;
        proxy_pass http://unix:/root/server/app.sock;
    }
}
The status of my systemd service looks fine:
systemctl status app
● app.service - Gunicorn instance to serve app
   Loaded: loaded (/etc/systemd/system/app.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-06-11 23:24:07 UTC; 1s ago
 Main PID: 14664 (gunicorn)
    Tasks: 2 (limit: 4915)
   CGroup: /system.slice/app.service
           ├─14664 /usr/bin/python /usr/local/bin/gunicorn --bind unix:app.sock -m 007 wsgi:app
           └─14681 /usr/bin/python /usr/local/bin/gunicorn --bind unix:app.sock -m 007 wsgi:app
systemd[1]: Started Gunicorn instance to serve app.
gunicorn[14664]: [2019-06-11 23:24:07 +0000] [14664] [INFO] Starting gunicorn 19.9.0
gunicorn[14664]: [2019-06-11 23:24:07 +0000] [14664] [INFO] Listening at: unix:app.sock (14664)
gunicorn[14664]: [2019-06-11 23:24:07 +0000] [14664] [INFO] Using worker: sync
gunicorn[14664]: [2019-06-11 23:24:07 +0000] [14681] [INFO] Booting worker with pid: 14681
When I make a request to the server, I have trouble connecting:
curl http://[SERVER_IP_ADDRESS]:80/predict
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.14.0 (Ubuntu)</center>
</body>
</html>
EDIT:
I tried removing server_name [SERVER_IP_ADDRESS]; from /etc/nginx/sites-available/app. I now receive 'Welcome to nginx!' at http://SERVER_IP_ADDRESS and '404 Not Found' at http://SERVER_IP_ADDRESS/predict.
FYI, my Flask app only has one route, '/predict'.
It looks like you don't have port 80 open, so here's a quick iptables command to open it:
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
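To verify the rule, and to help isolate the proxy hop, here is a sketch using the paths from the question (note that the 502 page in your output was served by nginx, so nginx itself is already reachable; testing the socket directly tells you whether gunicorn is the failing side):
# List current INPUT rules and confirm the port-80 ACCEPT entry
sudo iptables -L INPUT -n --line-numbers
# Test the gunicorn socket directly, bypassing nginx
curl --unix-socket /root/server/app.sock http://localhost/predict
# See what nginx logged when it returned the 502
sudo tail -n 50 /var/log/nginx/error.log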

Airflow webserver not starting

I have installed Airflow from the GitHub source and configured it with a MySQL metadata DB and the LocalExecutor. When I try to start the webserver, it doesn't seem to start properly.
install.sh
mkdir -p ~/airflow
export AIRFLOW_HOME=~/airflow
cd $AIRFLOW_HOME
virtualenv env
source env/bin/activate
mkdir -p /usr/local/src/
cd /usr/local/src/
git clone https://github.com/apache/incubator-airflow.git
cd incubator-airflow
git checkout tags/1.8.2
pip install -e .
pip install -e .[hive]
pip install -e .[gcp_api]
pip install -e .[mysql]
pip install -e .[password]
pip install -e .[celery]
airflow.cfg:
[core]
# The home folder for airflow, default is ~/airflow
airflow_home = /root/airflow
dags_folder = /root/airflow/dags
base_log_folder = /root/airflow/logs
encrypt_s3_logs = False
executor = LocalExecutor
sql_alchemy_conn = mysql://root:*****@localhost/airflow
When I start the webserver with the command below, the log shows ttin/ttou signal handling and workers exiting:
airflow webserver -p 8080
[2017-11-20 04:05:30,642] {__init__.py:57} INFO - Using executor LocalExecutor
[2017-11-20 04:05:30,723] {driver.py:120} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/Grammar.txt
[2017-11-20 04:05:30,756] {driver.py:120} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/PatternGrammar.txt
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
/root/env/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2017-11-20 04:05:31,437] [3079] {models.py:167} INFO - Filling up the DagBag from /root/airflow/dags
Running the Gunicorn Server with:
Workers: 8 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
=================================================================
[2017-11-20 04:05:32,074] {__init__.py:57} INFO - Using executor LocalExecutor
[2017-11-20 04:05:32,153] {driver.py:120} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/Grammar.txt
[2017-11-20 04:05:32,184] {driver.py:120} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/PatternGrammar.txt
[2017-11-20 04:05:32 +0000] [3087] [INFO] Starting gunicorn 19.3.0
[2017-11-20 04:05:32 +0000] [3087] [INFO] Listening at: http://0.0.0.0:8080 (3087)
[2017-11-20 04:05:32 +0000] [3087] [INFO] Using worker: sync
[2017-11-20 04:05:32 +0000] [3098] [INFO] Booting worker with pid: 3098
[2017-11-20 04:05:32 +0000] [3099] [INFO] Booting worker with pid: 3099
/root/env/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
/root/env/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2017-11-20 04:05:32 +0000] [3100] [INFO] Booting worker with pid: 3100
[2017-11-20 04:05:32 +0000] [3101] [INFO] Booting worker with pid: 3101
/root/env/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2017-11-20 04:05:32 +0000] [3102] [INFO] Booting worker with pid: 3102
[2017-11-20 04:05:32 +0000] [3103] [INFO] Booting worker with pid: 3103
[2017-11-20 04:05:32 +0000] [3104] [INFO] Booting worker with pid: 3104
/root/env/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2017-11-20 04:05:32 +0000] [3105] [INFO] Booting worker with pid: 3105
/root/env/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
/root/env/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
/root/env/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
/root/env/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2017-11-20 04:05:33,198] [3099] {models.py:167} INFO - Filling up the DagBag from /root/airflow/dags
[2017-11-20 04:05:33,312] [3098] {models.py:167} INFO - Filling up the DagBag from /root/airflow/dags
[2017-11-20 04:05:33,538] [3100] {models.py:167} INFO - Filling up the DagBag from /root/airflow/dags
[2017-11-20 04:05:33,863] [3101] {models.py:167} INFO - Filling up the DagBag from /root/airflow/dags
[2017-11-20 04:05:33,963] [3102] {models.py:167} INFO - Filling up the DagBag from /root/airflow/dags
[2017-11-20 04:05:33,987] [3104] {models.py:167} INFO - Filling up the DagBag from /root/airflow/dags
[2017-11-20 04:05:34,062] [3105] {models.py:167} INFO - Filling up the DagBag from /root/airflow/dags
[2017-11-20 04:05:34,162] [3103] {models.py:167} INFO - Filling up the DagBag from /root/airflow/dags
[2017-11-20 04:06:05 +0000] [3087] [INFO] Handling signal: ttin
[2017-11-20 04:06:05 +0000] [3121] [INFO] Booting worker with pid: 3121
/root/env/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2017-11-20 04:06:05,426] [3121] {models.py:167} INFO - Filling up the DagBag from /root/airflow/dags
[2017-11-20 04:06:06 +0000] [3087] [INFO] Handling signal: ttou
[2017-11-20 04:06:06 +0000] [3098] [INFO] Worker exiting (pid: 3098)
[2017-11-20 04:06:36 +0000] [3087] [INFO] Handling signal: ttin
[2017-11-20 04:06:36 +0000] [3136] [INFO] Booting worker with pid: 3136
/root/env/local/lib/python2.7/site-packages/flask/exthook.py:71: ExtDeprecationWarning: Importing flask.ext.cache is deprecated, use flask_cache instead.
.format(x=modname), ExtDeprecationWarning
[2017-11-20 04:06:36,818] [3136] {models.py:167} INFO - Filling up the DagBag from /root/airflow/dags
[2017-11-20 04:06:37 +0000] [3087] [INFO] Handling signal: ttou
[2017-11-20 04:06:37 +0000] [3099] [INFO] Worker exiting (pid: 3099)
[2017-11-20 04:07:07 +0000] [3087] [INFO] Handling signal: ttin
[2017-11-20 04:07:07 +0000] [3144] [INFO] Booting worker with pid: 3144
Your webserver is running fine. Workers are regularly "refreshed" at an interval set by worker_refresh_interval so that they pick up new or updated DAGs. When this happens, you'll see the signal ttin (increase processes by one) always followed by ttou (decrease processes by one): a new worker is added before the oldest worker is removed. If you'd rather tune that cadence, see the config sketch below.
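A minimal sketch of the relevant airflow.cfg settings, assuming the [webserver] options documented for Airflow 1.8 (the defaults shown are what 1.8 ships with; adjust to taste):
[webserver]
# Seconds to wait before refreshing a batch of gunicorn workers (default 30)
worker_refresh_interval = 30
# Number of workers to refresh at a time; setting this to 0 disables worker refresh entirely (default 1)
worker_refresh_batch_size = 1
Raising worker_refresh_interval (or setting the batch size to 0) quiets the ttin/ttou churn, at the cost of workers taking longer to notice new or updated DAGs.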

Resources