We are running Caddy 2.4.6 in Linux VMs and are trying to get the logs into Azure Log Analytics via rsyslog. I have added the facilities for all log levels (syslog, user, and local0 through local7) and confirmed this via /etc/rsyslog.d/95-omsagent.conf.
So far I have been able to confirm that the logs are going to /var/log/syslog, but I have not been able to get them showing up in Log Analytics. It should be said that other syslog messages are coming through fine.
Is there something we are missing to get these routed into, say, one of the local0-7 facilities?
The Caddy syslog records themselves start like this:
Feb 15 03:32:31 caddyvm-vm-dev-1 caddy[3158]: {"level":"info","ts":1644895951.5908177,"logger":"http.log.access","msg":"handled request","request":{"remote_addr"
And here is the Caddyfile:
{
    on_demand_tls {
        ask https://myaskapi.com/CaddyServer/VerifyDomainName
        interval 5s
        burst 5
    }
}

https:// {
    log
    tls {
        on_demand
    }
    reverse_proxy myreverseproxy.com {
        header_up Host {upstream_hostport}
        header_up X-Forwarded-Host {host}
    }
}

:8080 {
    respond "I am alive!" 200
}
Glad that you got the solution from the Caddy community forum. Supporting the answer provided by the forum, I am posting it here to help other community members.
As per the documentation, we need to run Caddy as a systemd service. Caddy emits its logs to stdout/stderr, and systemd passes those to the journal.
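From there the journal entries still have to re-enter the syslog pipeline that the OMS agent watches. A minimal sketch, assuming the agent listens on its default local port 25224 (as wired up by /etc/rsyslog.d/95-omsagent.conf) and that Caddy runs under the stock caddy unit; the 30-caddy.conf filename is an assumption:

# 1. Sanity check: confirm Caddy's output is reaching the journal.
journalctl -u caddy --since "5 min ago"

# 2. Make sure journald forwards entries on to rsyslog.
sudo sed -i 's/^#\?ForwardToSyslog=.*/ForwardToSyslog=yes/' /etc/systemd/journald.conf
sudo systemctl restart systemd-journald

# 3. Optionally route Caddy's messages straight to the OMS agent's
#    syslog listener with a drop-in rsyslog rule.
sudo tee /etc/rsyslog.d/30-caddy.conf >/dev/null <<'EOF'
if $programname == 'caddy' then {
    action(type="omfwd" Target="127.0.0.1" Port="25224" Protocol="udp")
}
EOF
sudo systemctl restart rsyslog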
Related
I'm using ngrok 3.1.1 and trying to open up port 8000 so I can do some local testing. However, I keep running into some issues.
First off, I've downloaded and installed ngrok from the official site, then added my authorisation token using:
ngrok config add-authtoken blahblahblahcrazywordsmoustache
So far so good. Then, trying to open ngrok using:
ngrok http 8000
Yields the following errors:
reconnecting (x509: certificate signed by unknown authority)
Followed by:
reconnecting (jsonHTTP.Lookup: No such host: tunnel.ngrok.com)
And...
reconnecting (resolved tunnel.ngrok.com has no records)
The ngrok.yml looks like this:
root_cas: trusted
version: "2"
authtoken: ohlooksomelettersarenttheynice
Any idea what I can do? This is on a corporate network with various firewalls, etc. I'm told that ngrok will create a URL that I can use in my code tests, but we can't whitelist that URL until we know what it is, and we don't know what it is until ngrok starts and generates it.
Okay, not quite a solution but more of a work-around.
Disconnected my computer from the corporate network, used a wifi dongle and hotspotted to my phone.
Got an error saying that my account wasn't authorised to use custom CAs. Racked my brain for a bit until I remembered that I had seen CAs before, in the yml file. Removed the line
root_cas: trusted
from the yml, and all working fine and dandy.
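The same fix from the shell, as a minimal sketch; ~/.config/ngrok/ngrok.yml is ngrok v3's default config location on Linux, an assumption for your setup:

# Strip the root_cas line from the config:
sed -i '/^root_cas:/d' ~/.config/ngrok/ngrok.yml

# Then retry the tunnel:
ngrok http 8000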
I have done JMeter load testing before and it was working fine.
Now I am doing JMeter load testing against nginx, configured with two upstream servers.
I am testing my application with 1000 concurrent users against the nginx URL.
Sometimes it works and sometimes it doesn't.
Why is it behaving like that?
Check the logs (nginx logs, your app and server logs, and JMeter logs); you can probably find the answer there:
Check nginx.conf and find the error_log setting to get the current nginx error log location (see the sketch below).
What do you mean by "sometimes not"? What kinds of errors, statuses and responses do you get?
You may post them here.
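A quick way to find the active nginx log locations, as a minimal sketch (nginx -T, available since 1.9.2, dumps the full effective configuration; the /var/log path is the common default and an assumption for your install):

# Find the effective log settings:
nginx -T 2>/dev/null | grep -E 'error_log|access_log'

# Watch the error log while the JMeter run is in progress:
tail -f /var/log/nginx/error.log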
I use AWS CodeDeploy to deploy builds from GitHub to EC2 instances in AutoScaling Group.
It's working fine for Windows 2012 R2 with all Deployment configurations.
But for Windows 2016 it totally fails on "OneAtATime" deploys.
During an "AllAtOnce" deploy only one or two instances deploy successfully; all the others fail.
In the agent's log file this suspicious message is present:
ERROR [codedeploy-agent(1104)]: CodeDeploy Instance Agent Service: CodeDeploy Instance Agent Service: error during start or run: Errno::ETIMEDOUT
- A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. - connect(2)
All policies, roles, software, builds and other stuff are the same; I even tested this on a brand-new AWS account.
Has anybody faced such behaviour?
I ran into the same problem. During my investigation I found out that the server's route table had a wrong route for 169.254.169.254 (it pointed at the gateway from the network where my template was captured), so the instance couldn't read its metadata.
From the above error it looks like the agent isn't able to talk to the CodeDeploy endpoint after the instance starts up. Please check that the routing tables and other proxy-related settings are set up correctly. Also, if you do not have it already, you can turn on the debug log by setting :verbose to true in the agent config and restarting the agent. This will help debug the issue better.
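A minimal sketch of both checks, run in PowerShell on an affected instance (the 10.0.0.1 gateway below is a placeholder; use your subnet's actual gateway):

# Can the instance reach the EC2 metadata service at all?
Invoke-RestMethod -Uri 'http://169.254.169.254/latest/meta-data/instance-id'

# Inspect the route for the metadata address; a stale gateway here is
# the symptom described in the other answer:
route print 169.254.169.254

# If the route is wrong, repoint it:
route delete 169.254.169.254
route add 169.254.169.254 mask 255.255.255.255 10.0.0.1

# After setting :verbose to true in the agent's conf.yml, restart it:
Restart-Service codedeployagent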
I am running a plex media server for private use on a Ubuntu 16.04 Desktop VM at home. I use it while I'm away on work during the week.
Recently I've been plagued with connection issues. Sometimes it's plex itself that crashes and needs to be restarted and sometimes it's the internet connection (eth0) that needs to be restarted.
I need a little help with a script that I can call via cron to check if the server is remotely accessible, i.e. whether https://external.address:32400 can be reached (please note it only responds to HTTPS). If it's not accessible, restart the internet connection (eth0); then check again, and if it's still not remotely accessible, restart the Plex media server.
Plex is installed as a service, so service plexmediaserver restart is how I restart it. I guess, as it's a desktop installation, the script needs to use service network-manager restart to restart the network.
I found this post and script but it's very old and outdated.
Hopefully someone can help me out with this.
Thanks in advance.
OK, after doing a little more research into my problems, it turns out I have two different issues: sometimes the VM loses its bridged connection, and sometimes the Plex media server crashes.
So, I have split the solution into two simple bash scripts that I call from cron.
The first checks if the internet is working; if not, it restarts the VM. Simply restarting network-manager didn't work.
#!/bin/bash
PATH=/opt/someApp/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# Check if the VM can reach google.com; if yes, exit
if nc -zw1 google.com 80; then
    exit
# If it can't reach google.com, restart the VM
else
    shutdown -r now
fi
The second script checks whether it can access the Plex media server locally; if not, it restarts the plex service.
#!/bin/bash
PATH=/opt/someApp/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# Check that Plex is accessible locally; if yes, exit
if curl -s --head --request GET http://localhost:32400 | grep "200 OK" > /dev/null; then
    exit
# If not, restart the plex service
else
    service plexmediaserver restart
fi
Thanks for your suggestions. This is far from an elegant solution, but as I'm pressed for time, it's a solution.
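For completeness, the crontab entries tying the two scripts together might look like this; the script paths and schedule are assumptions:

# /etc/crontab: check connectivity every 5 minutes, Plex every 10.
# (Paths are placeholders for wherever the two scripts were saved.)
*/5 * * * *  root /usr/local/bin/check-internet.sh
*/10 * * * * root /usr/local/bin/check-plex.sh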
You can use this snippet I found here. You'd put in the IP address you're looking for, and then check the status code against the values found here. If get_status_code returns a code 200, you have remote access.
import httplib  # Python 2; on Python 3 this module is http.client

def get_status_code(host, path="/"):
    """ This function retrieves the status code of a website by requesting
    HEAD data from the host. This means that it only requests the headers.
    If the host cannot be reached or something else goes wrong, it returns
    None instead.
    """
    try:
        conn = httplib.HTTPConnection(host)
        conn.request("HEAD", path)
        return conn.getresponse().status
    except StandardError:
        return None
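A hypothetical usage, noting that the question's server only answers HTTPS, for which httplib.HTTPSConnection is the drop-in analogue:

# Hypothetical host; substitute your own external address and port.
if get_status_code("external.address:32400") == 200:
    print "Plex is remotely accessible"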
I've been trying to stream flv content from my openshift cartridge using nginx + rtmp module.
On my local machine, with the attached configuration, everything works just fine (I use ffplay for testing, e.g. ffplay rtmp://localhost:8080/test/streamkey)
When I try with the same configuration on openshift, I get the following error:
HandShake: Type mismatch: client sent 3, server answered 60 f=0/0
RTMP_Connect1, handshake failed.
However, if I enable port-forwarding and test the stream server using ffplay rtmp://127.0.0.1:8080/test/streamkey, everything works fine. Here are my port forwardings:
rhc port-forward myappname
Checking available ports ... done
Forwarding ports ...
To connect to a service running on OpenShift, use the Local address
Service Local               OpenShift
------- -------------- ---- -----------------
nginx   127.0.0.1:8080  =>  127.10.103.1:8080
My cartridge is a "diy-0.1" cartridge. nginx 1.7.6 (also tested 1.4.4) + rtmp-module.
I suspect there is an issue with a proxy (Apache?) that OpenShift uses in front of gears; maybe it does not allow RTMP traffic?
NB: Configuring nginx as HTTP-only works fine.
Can anybody help? I'm stuck. I think this is the first time I've asked something on Stack Overflow :-)
The nginx configuration (NB: the "play" path and the IP:PORT are taken from the OpenShift environment variables):
rtmp {
    server {
        listen 127.10.103.1:8080;
        chunk_size 8192;

        application test {
            play /var/lib/openshift/54da37644382ece45c000139/app-root/runtime/repo/public;
        }
    }
}
There is an Apache proxy in front of your application on OpenShift Online, and it is possible that the content is being treated as HTTP traffic instead of RTMP traffic; that is why you are getting the type mismatch. When you go through the port-forward you gain direct access to your application and bypass the proxy, which is why it works fine that way. There is currently no way to bypass the Apache reverse proxy through the public IP. Please see this developer portal article for more information about how requests are routed to your application: https://developers.openshift.com/en/managing-port-binding-routing.html
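So the practical workaround is to keep consuming the stream through the forwarded port, along these lines (app name and stream key are the placeholders from the question):

# In one shell, keep the forward from the question running:
rhc port-forward myappname

# In another, point players at the local end of the tunnel, which
# bypasses the Apache proxy entirely:
ffplay rtmp://127.0.0.1:8080/test/streamkey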