I have run into a puzzling problem with Varnish Cache. I have two servers running the same Varnish configuration (a main VCL file plus a few included VCL files), yet only one of them fails with this error:
May 28 17:20:06 VCL compilation failed
May 28 17:02:55 server1 systemd[1]: varnish.service: control process exited, code=exited status=255
May 28 17:02:55 server1 systemd[1]: Failed to start Varnish Cache, a high-performance HTTP accelerator.
May 28 17:02:55 server1 systemd[1]: Unit varnish.service entered failed state.
May 28 17:02:55 server1 systemd[1]: varnish.service failed.
May 28 17:20:06 server1 systemd[1]: Starting Varnish Cache, a high-performance HTTP accelerator...
May 28 17:20:06 server1 varnishd[27860]: Could not delete 'vcl_boot.1653742206.480069/vgc.sym': No such file or directory
May 28 17:20:06 server1 varnishd[27860]: Error:
May 28 17:20:06 server1 varnishd[27860]: Message from VCC-compiler:
May 28 17:20:06 server1 varnishd[27860]: Expected CSTR got 'sub'
May 28 17:20:06 server1 varnishd[27860]: (program line 378), at
May 28 17:20:06 server1 varnishd[27860]: ('/etc/varnish/default.vcl' Line 27 Pos 1)
May 28 17:20:06 server1 varnishd[27860]: sub vcl_recv {
May 28 17:20:06 server1 varnishd[27860]: ###-----------
May 28 17:20:06 server1 varnishd[27860]: Running VCC-compiler failed, exited with 2
May 28 17:20:06 server1 varnishd[27860]: VCL compilation failed
May 28 17:20:06 server1 systemd[1]: varnish.service: control process exited, code=exited status=255
May 28 17:20:06 server1 systemd[1]: Failed to start Varnish Cache, a high-performance HTTP accelerator.
May 28 17:20:06 server1 systemd[1]: Unit varnish.service entered failed state.
May 28 17:20:06 server1 systemd[1]: varnish.service failed.
The other server works perfectly. The errors above are just one example; the first failure was in xforward.vcl, which is short enough to paste in full:
sub vcl_recv {
    if (req.http.X-Forwarded-For) {
        set req.http.X-Forwarded-For =
            req.http.X-Forwarded-For + ", " + client.ip;
    } else {
        set req.http.X-Forwarded-For = client.ip;
    }
}
And here is the top of my default.vcl:
sub vcl_recv {
    # First we must fix the redirect loop that we might hit while using WordPress.
    # For this you have to find a header in the wp-config.php file of that domain;
    # I will put the wp-config.php configuration in the README.md file.
    # [loop_error_fix]
    set req.http.X-Forwarded-Proto = "https";

    # [LetsEncrypt Certbot Passthrough {default2.vcl}]
    if (req.url ~ "^/\.well-known/acme-challenge/") {
        return (pass);
    }
    .....
But just like the error above, it gives "Expected CSTR got 'sub'". Is there something wrong with Varnish itself?
Varnish version:
varnishd -V
varnishd (varnish-6.6.2 revision 17c51b08e037fc8533fb3687a042a867235fc72f)
Copyright (c) 2006 Verdens Gang AS
Copyright (c) 2006-2020 Varnish Software
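To take systemd out of the loop while debugging, the compile error can be reproduced directly with varnishd's -C flag, which compiles the VCL to C and exits:

```shell
# Compile the VCL standalone; on failure this prints the same VCC
# error without having to restart the service.
varnishd -C -f /etc/varnish/default.vcl > /dev/null
```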
In any case, here is the error for xforward.vcl:
May 28 17:20:06 server1 varnishd[27860]: VCL compilation failed
May 28 17:20:06 server1 systemd[1]: varnish.service: control process exited, code=exited status=255
May 28 17:20:06 server1 systemd[1]: Failed to start Varnish Cache, a high-performance HTTP accelerator.
May 28 17:20:06 server1 systemd[1]: Unit varnish.service entered failed state.
May 28 17:20:06 server1 systemd[1]: varnish.service failed.
May 28 17:42:26 server1 systemd[1]: Starting Varnish Cache, a high-performance HTTP accelerator...
May 28 17:42:26 server1 varnishd[5089]: Could not delete 'vcl_boot.1653743546.450277/vgc.sym': No such file or directory
May 28 17:42:26 server1 varnishd[5089]: Error:
May 28 17:42:26 server1 varnishd[5089]: Message from VCC-compiler:
May 28 17:42:26 server1 varnishd[5089]: Expected CSTR got 'sub'
May 28 17:42:26 server1 varnishd[5089]: (program line 378), at
May 28 17:42:26 server1 varnishd[5089]: ('/etc/varnish/lib/xforward.vcl' Line 9 Pos 1)
May 28 17:42:26 server1 varnishd[5089]: sub vcl_recv {
May 28 17:42:26 server1 varnishd[5089]: ###-----------
May 28 17:42:26 server1 varnishd[5089]: Running VCC-compiler failed, exited with 2
May 28 17:42:26 server1 varnishd[5089]: VCL compilation failed
May 28 17:42:26 server1 systemd[1]: varnish.service: control process exited, code=exited status=255
May 28 17:42:26 server1 systemd[1]: Failed to start Varnish Cache, a high-performance HTTP accelerator.
May 28 17:42:26 server1 systemd[1]: Unit varnish.service entered failed state.
May 28 17:42:26 server1 systemd[1]: varnish.service failed.
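One thing I still need to rule out: even though the files look identical on both servers, they may not be byte-identical. A stray UTF-8 BOM or a smart quote pasted into one copy can confuse the parser well before the line it actually reports. A quick sketch of how to spot that, using throwaway files for illustration (the same md5sum/cat -A check would be run against the real VCL files on both servers):

```shell
# Demonstrate how an invisible UTF-8 BOM makes two "identical" files differ.
printf 'sub vcl_recv {\n' > /tmp/a.vcl
printf '\357\273\277sub vcl_recv {\n' > /tmp/b.vcl   # same text, BOM-prefixed
md5sum /tmp/a.vcl /tmp/b.vcl    # checksums differ
cat -A /tmp/b.vcl               # cat -A renders the BOM as M-oM-;M-?
```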
If there is something I'm missing, I'd be glad if you could help me out.
Thanks
Related
I am trying to troubleshoot my nginx and it is not going too well.
I start off by running sudo systemctl start nginx and I get
Job for nginx.service failed because the control process exited with error code. See "systemctl status nginx.service" and "journalctl -xe" for details.
and so I continue and try systemctl status nginx.service to get:
Dec 13 00:29:18 systemd[1]: nginx.service: Control process exited, code=exited status=1
Dec 13 00:29:18 systemd[1]: Failed to start A high performance web server and a reverse proxy server.
Dec 13 00:29:18 systemd[1]: nginx.service: Unit entered failed state.
Dec 13 00:29:18 systemd[1]: nginx.service: Failed with result 'exit-code'
So I then test the configuration with sudo nginx -t -c /etc/nginx/nginx.conf,
resulting in
nginx: [emerg] unknown directive "passenger_enabled" in /etc/nginx/sites-enabled/nginx_aggrigator.conf:164
nginx: configuration file /etc/nginx/nginx.conf test failed
I am running this on Ubuntu 16.04, if that is any help.
Like the error says:
unknown directive "passenger_enabled"
nginx_aggrigator.conf:164
So start by checking this file:
/etc/nginx/sites-enabled/nginx_aggrigator.conf
and look at line 164, where you declare passenger_enabled, as that is where your error is.
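For context, passenger_enabled is not a core nginx directive: it only exists once the Phusion Passenger module is built into (or loaded by) nginx. A rough sketch of the relevant pieces when Passenger is available as a dynamic module; the module path and app root below are placeholders, not your actual values:

```nginx
# Assumed module path - only valid if Passenger was built as a dynamic module.
load_module modules/ngx_http_passenger_module.so;

http {
    server {
        listen 80;
        root /var/www/app/public;   # placeholder app path
        passenger_enabled on;       # a known directive once the module loads
    }
}
```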
I had set up my nginx server fine last week, until I noticed it was receiving DoS attacks. At that point my nginx server started failing to start. I have tried everything I can think of and am unsure how to resolve the issue, apart from reading the documentation, which has not helped.
Documentation on Nginx
The main nginx.conf appears to be empty, and I cannot save to it for some reason.
root@ubuntu-vpc-do-moon:~# /etc/init.d/nginx status
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2019-11-04 10:54:44 UTC; 1min 43s ago
Docs: man:nginx(8)
Process: 2550 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: Starting A high performance web server and a reverse proxy server...
Nov 04 10:54:44 ubuntu-vpc-do-moon nginx[2550]: nginx: [emerg] open() "/etc/nginx/sites-enabled/nginx.conf" failed (2: No such file or directory) in /etc/nginx/nginx.conf:62
Nov 04 10:54:44 ubuntu-vpc-do-moon nginx[2550]: nginx: configuration file /etc/nginx/nginx.conf test failed
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: nginx.service: Control process exited, code=exited status=1
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: nginx.service: Failed with result 'exit-code'.
Nov 04 10:54:44 ubuntu-vpc-do-moon systemd[1]: Failed to start A high performance web server and a reverse proxy server.
I removed nginx from Ubuntu and did a clean installation on the server. I managed to sort out the server blocks this time, so all good.
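For anyone hitting the same emerg line who would rather not reinstall: the include at /etc/nginx/nginx.conf:62 pulls in sites-enabled, and the missing /etc/nginx/sites-enabled/nginx.conf referenced there is most likely a dangling symlink. A hedged cleanup sketch:

```shell
ls -l /etc/nginx/sites-enabled/           # spot the dangling nginx.conf link
sudo rm /etc/nginx/sites-enabled/nginx.conf
sudo nginx -t                             # re-test before restarting
sudo systemctl restart nginx
```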
My RStudio Server was hanging too often while starting an R Shiny app. After googling around, I tried stopping and starting RStudio Server. I also tried to kill all processes running on port 8787, but had no luck solving the issue. Now RStudio Server keeps waiting forever when opened in a web browser.
I used the command below to kill the process listening on port 8787. After running the command, nothing changed.
sudo kill -TERM 20647
(20647 is the PID of the rserver process listening on port 8787; I got it from running 'sudo netstat -ntlp | grep :8787'.)
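As an aside, the argument to kill is a process ID, not a port number: netstat -ntlp prints "PID/program" in its last column, and that PID is what kill needs. A small sketch of pulling the PID out of a netstat-style line (the sample line is hard-coded here for illustration):

```shell
# Extract the PID from netstat's "PID/program" column.
line='tcp  0  0 0.0.0.0:8787  0.0.0.0:*  LISTEN  20647/rserver'
pid=$(printf '%s\n' "$line" | awk '{print $NF}' | cut -d/ -f1)
echo "$pid"   # prints 20647
```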
To stop and restart RStudio Server, I used the commands below:
sudo rstudio-server stop
sudo rstudio-server start
The expected result is a working RStudio Server that doesn't hang while loading the Shiny app.
After running the status command, I found the error below logged for RStudio Server:
rstudio-server.service - RStudio Server
Loaded: loaded (/etc/systemd/system/rstudio-server.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2019-08-28 04:50:07 CDT; 11s ago
Process: 31611 ExecStop=/usr/bin/killall -TERM rserver (code=exited, status=0/SUCCESS)
Process: 31609 ExecStart=/usr/lib/rstudio-server/bin/rserver (code=exited, status=0/SUCCESS)
Main PID: 31610 (code=exited, status=1/FAILURE)
CGroup: /system.slice/rstudio-server.service
└─20647 /usr/lib/rstudio-server/bin/rserver
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Unit rstudio-server.service entered failed state.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: rstudio-server.service failed.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: rstudio-server.service holdoff time over, scheduling r...rt.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Stopped RStudio Server.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: start request repeated too quickly for rstudio-server....ice
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Failed to start RStudio Server.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: Unit rstudio-server.service entered failed state.
Aug 28 04:50:07 nds-vm1.novalocal systemd[1]: rstudio-server.service failed.
As a last resort, I restarted the VM where I am running RStudio Server. That seems to have resolved the issue.
I am installing Apache CloudStack on Ubuntu 16.04, but after running the CloudStack setup, starting the CloudStack management service displays the following errors. (I have installed Tomcat 7; Tomcat 6 is not installed.)
Warning: cloudstack-management.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Job for cloudstack-management.service failed because the control process exited with error code. See "systemctl status cloudstack-management.service" and "journalctl -xe" for details.
I have run the systemctl status cloudstack-management.service command and it displays the following:
cloudstack-management.service - LSB: Start Tomcat (CloudStack).
Loaded: loaded (/etc/init.d/cloudstack-management; bad; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2018-08-25 21:53:07 IST; 1min 11s ago
Docs: man:systemd-sysv-generator(8)
Process: 26684 ExecStart=/etc/init.d/cloudstack-management start (code=exited, status=1/FAILURE)
Aug 25 21:53:07 dhaval-pc systemd[1]: Starting LSB: Start Tomcat (CloudStack)....
Aug 25 21:53:07 dhaval-pc cloudstack-management[26684]: * cloudstack-management is not installed
Aug 25 21:53:07 dhaval-pc systemd[1]: cloudstack-management.service: Control process exited, code=exited status=1
Aug 25 21:53:07 dhaval-pc systemd[1]: Failed to start LSB: Start Tomcat (CloudStack)..
Aug 25 21:53:07 dhaval-pc systemd[1]: cloudstack-management.service: Unit entered failed state.
Aug 25 21:53:07 dhaval-pc systemd[1]: cloudstack-management.service: Failed with result 'exit-code'.
Warning: cloudstack-management.service changed on disk. Run 'systemctl daemon-reload' to reload units.
What change can I make in the /etc/init.d/cloudstack-management file?
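Before editing the init script, the warning line itself suggests a first step; this is what it amounts to:

```shell
sudo systemctl daemon-reload              # re-read the changed unit file
sudo systemctl restart cloudstack-management
journalctl -u cloudstack-management -e    # then inspect the fresh logs
```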
Running on Ubuntu 16.04
Elastic version: 6.2.4
Kibana version: 6.2.4
Elastic is up and running on port 9200.
Kibana suddenly stopped working. When I run the start command sudo systemctl start kibana.service, I get the following in the service output (journalctl -fu kibana.service):
Started Kibana.
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Unit entered failed state.
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Failed with result 'exit-code'.
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Aug 27 12:54:33 ubuntuserver systemd[1]: Stopped Kibana.
There are no details in this log.
My YAML configuration has only these properties:
server.port: 5601
server.host: "0.0.0.0"
I have also tried writing to a log file (hoping for more info there) by adding this configuration:
logging.dest: /var/log/kibana/kibana.log
logging.verbose: true
I gave the folder/file full access permissions, but nothing is being written there (it still writes to stdout).
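Since journald shows nothing and the log file stays empty, the next thing I plan to try is launching Kibana in the foreground as the service user, so any startup error lands directly on my terminal (the paths below assume the standard .deb layout; yours may differ):

```shell
# Run Kibana directly; config/startup errors print to the terminal.
sudo -u kibana /usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml
```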