Keystone log files always empty - openstack

I am working through the Ubuntu walkthrough for installing OpenStack. I am past the Keystone stage, and I find that it creates log files in /var/log/keystone which are always zero length. I also get the following message in response to many commands that otherwise work:
No handlers could be found for logger "keystoneclient.v2_0.client"
-- this may or may not be related. Any advice for a noob appreciated.
This is the Folsom release.

Keystone uses /etc/keystone/logging.conf to control logging levels. By default, the keystone logger will only show WARNING level messages and above:
[logger_root]
level=WARNING
handlers=file
If you change level to something lower (e.g., INFO or DEBUG) and then restart the Keystone service, you should see log messages appear in /var/log/keystone/keystone.log.
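As a minimal sketch, assuming the stock Ubuntu packaging where Keystone runs as the keystone service, the change and restart would look like this:
# /etc/keystone/logging.conf -- lower the root logger threshold
[logger_root]
level=DEBUG
handlers=file

# restart Keystone so the new level takes effect (Ubuntu service name assumed)
sudo service keystone restart

# the log file should now start filling up
tail -f /var/log/keystone/keystone.log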

Related

Symfony logging with Monolog, confused about STDERR

I am trying to align my logging with the best practice of using STDERR.
So, I would like to understand what happens with the logs sent to STDERR.
Symfony official docs (https://symfony.com/doc/current/logging.html):
In the prod environment, logs are written to STDERR PHP stream, which
works best in modern containerized applications deployed to servers
without disk write permissions.
If you prefer to store production logs in a file, set the path of your
log handler(s) to the path of the file to use (e.g. var/log/prod.log).
This time I want to follow the STDERR stream option.
When I was writing to a specific file, I knew exactly where to look for that file, open it and check the logged messages.
But with STDERR, I don't know where to look for my logs.
So, using monolog, I have the configuration:
monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: error
            handler: nested
            excluded_http_codes: [404, 405]
        nested:
            type: stream
            path: "php://stderr"
            level: debug
Suppose next morning I want to check the logs. Where would I look?
Several hours of reading docs later, my understanding is as follows:
First, STDERR is preferred over STDOUT for errors because it is not buffered (STDOUT may gather all output and wait for the script to end), so errors are written to the STDERR stream immediately. It also keeps normal output from getting mixed with errors.
Secondly, the immediately intuitive usage is when running a shell script, because in the terminal you see the STDOUT and STDERR messages directly (by default, both streams print to the screen).
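A minimal shell illustration of that separation (the script name is a placeholder): redirecting each stream to its own file shows that normal output and errors never mix.
# send STDOUT to one file and STDERR to another; myscript.php is hypothetical
php myscript.php > output.log 2> errors.log

# or discard normal output and watch errors appear live in the terminal
php myscript.php > /dev/null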
But the non-intuitive usage of STDERR is when logging for a website/API. We want to log errors and be able to monitor them after the fact, that is, come back later and check errors that have already occurred. Traditional practice stores errors in custom-defined log files; more modern practice recommends sending errors to STDERR. Regarding Symfony, Fabien Potencier (the creator of Symfony) says:
in production, stderr is a better option, especially when using
Docker, SymfonyCloud, lambdas, ... So, I now recommend to use
php://stderr
(https://symfony.com/blog/logging-in-symfony-and-the-cloud).
And he further recommends using STDERR even for development.
Now, what I believe is missing from the picture (at least for me, as a non-expert) is guidance on HOW to access and check the error logs. Okay, we send the errors to STDERR, and then? Where am I going to check the errors next morning? I get that containerized platforms (clouds, Docker, etc.) have specific tools to easily monitor logs (tools that intercept STDERR and parse the messages in order to organize them in specific files/DBs), but that's not the case on a simple server, be it a local server or shared hosting.
Therefore, my understanding is that sending errors to STDERR is a good standardization when:
Resorting to using a third-party tool for log monitoring (like ELK, Grail, Sentry, Rollbar etc.)
When you know exactly where your web server stores the STDERR logs. For instance, if you try (I defined a new STD_ERR constant to avoid any pre-configs):
define('STD_ERR', fopen('php://stderr', 'wb'));
fputs(STD_ERR, "ABC error message.");
you can find the "ABC error message" at:
XAMPP Apache default (Windows):
..\xampp\apache\logs\error.log
Symfony5 server (Windows):
C:\Users\your_user\.symfony5\log\ [in the most recent folder, as the logs rotate]
Symfony server (Linux):
/home/your_user/.symfony/log/ [in the most recent folder, as the logs rotate]
For Symfony server, you can actually see the logs paths when starting the server, or by command "symfony server:log".
One immediate advantage is that these STDERR logs are stored outside of the app folders, so you do not need to maintain extra writable folders or deal with permissions. Of course, when developing/hosting multiple sites/apps, you need to configure the error log (the STDERR destination) per app (in Apache that would be inside each <VirtualHost> conf; with the Symfony server, I am not sure). Personally, without a third-party tool for monitoring logs, I would stick with custom-defined log files (no STDERR), but feel free to contradict me.
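For the containerized case the Symfony blog post alludes to, a hedged sketch of where those STDERR lines actually end up (the container, pod, and app names below are hypothetical):
# Docker: stderr of the container's main process is captured by the logging driver
docker logs -f my-php-app

# Kubernetes: same idea, per pod
kubectl logs -f my-php-app-pod

# Symfony local server: tail the rotated log files mentioned above
symfony server:log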

WCF with HTTP on dynamic port number

I have a need to do automated testing of a configuration of WCF bindings. I wrote a test that, in its setup, picks a random port number and binds to it with a WSHttpBinding. The test runs a ServiceHost for the duration of its execution and then shuts it down. This works, but when the build agents try to run the test, I get this error:
System.Exception: Unable to set up service host ---> System.ServiceModel.AddressAccessDeniedException: HTTP could not register URL http://+:52361/Test/. Your process does not have access rights to this namespace (see http://go.microsoft.com/fwlink/?LinkId=70353 for details). ---> System.Net.HttpListenerException: Access is denied
Is there any way to work around this? Can this "urlacl" mechanism be disabled??
UPDATE: This was a wild goose chase, as it turns out. This wasn't the error that was happening on the build agents. I flubbed it when gathering that information. Turns out the build agents are running elevated and don't run into the urlacl problem. The actual problem I was encountering was that a NuGet reference somehow hadn't had its corresponding assembly reference added to the .csproj file. How the tests worked locally, I don't know!
The error is "The process does not have permission to access this namespace".
You can try the following methods:
Make the Test public.
Run the service as administrator.
Run the command prompt as administrator and add the URL to the ACL with netsh http add urlacl url= (see the example below).
https://www.codeproject.com/Questions/441371/When-Hosting-the-WCF-service-i-got-exception
WCF ServiceHost access rights
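As a hedged sketch of that last option, using the URL from the error above (DOMAIN\username is a placeholder for the account the tests run under):
rem run from an elevated command prompt; DOMAIN\username is hypothetical
netsh http add urlacl url=http://+:52361/Test/ user=DOMAIN\username
rem to undo the reservation later
netsh http delete urlacl url=http://+:52361/Test/
Note that because the test picks a random port each run, a single reservation like this only helps if you pin the port; otherwise running the host elevated (as the build agents turned out to do) side-steps the ACL check entirely.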

Google Cloud Composer (Apache Airflow) cannot access log files

I'm running a DAG in Google Cloud Composer (hosted Airflow) which runs fine in Airflow locally. All it does is print "Hello World". However, when I run it through Cloud Composer I receive the error:
*** Log file does not exist: /home/airflow/gcs/logs/matts_custom_dag/main_test/2020-04-20T23:46:53.652833+00:00/2.log
*** Fetching from: http://airflow-worker-d775d7cdd-tmzj9:8793/log/matts_custom_dag/main_test/2020-04-20T23:46:53.652833+00:00/2.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-d775d7cdd-tmzj9', port=8793): Max retries exceeded with url: /log/matts_custom_dag/main_test/2020-04-20T23:46:53.652833+00:00/2.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f8825920160>: Failed to establish a new connection: [Errno -2] Name or service not known',))
I've also tried making the DAG add data into a database and it actually succeeds 50% of the time. However, it always returns this error message (and no other print statements or logs). Any help much appreciated on why this might be happening.
We also faced the same issue, raised a support ticket with GCP, and got the following reply:
The message is related to the latency of syncing logs from Airflow workers to the webserver; it takes at least a few minutes (depending on the number of objects and their size).
The total log size is not large, but it is enough to noticeably slow down synchronization, hence we recommend cleaning up/archiving the logs.
Basically, we recommend relying on Stackdriver logs instead, because of the latency caused by the design of this sync.
I hope this will help you solve the problem.
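If you follow that advice and read the worker logs from Stackdriver (Cloud Logging) instead of the Airflow UI, a sketch of the query might look like this (the resource type and log name are what Composer environments normally use, and the DAG name filter is taken from the question; verify both in your project):
# read recent Airflow worker log entries for the DAG from the question
gcloud logging read 'resource.type="cloud_composer_environment" AND log_id("airflow-worker") AND textPayload:"matts_custom_dag"' --limit=50 --format="value(textPayload)"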
I had the same problem after upgrading Google Composer from 1.10.3 to 1.10.6.
I can see in my logs that Airflow is trying to fetch the logs from a bucket whose name ends with -tenant, while the bucket in my account ends with -bucket.
In the configuration, I can see something weird too.
## airflow.cfg
[core]
remote_base_log_folder = gs://us-east1-dada-airflow-xxxxx-bucket/logs
## the running configuration, however, shows (section / key / value / source):
core  remote_base_log_folder  gs://us-east1-dada-airflow-xxxxx-tenant/logs  env var
I wrote to google support and they said the team is working on a fix.
EDIT:
I've been accessing my logs with gsutil, replacing the bucket name suffix with -bucket:
gsutil cat gs://us-east1-dada-airflow-xxxxx-bucket/logs/...../5.logs
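If you don't know the exact attempt number or path, a hedged sketch of the same workaround is to list the task's log folder first (bucket, DAG, and task names are taken from this thread; adjust to yours):
# list the log objects for the task, then cat the attempt you want
gsutil ls gs://us-east1-dada-airflow-xxxxx-bucket/logs/matts_custom_dag/main_test/
gsutil cat gs://us-east1-dada-airflow-xxxxx-bucket/logs/matts_custom_dag/main_test/2020-04-20T23:46:53.652833+00:00/2.log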
I faced the same situation on multiple occasions.
As soon as a job finished, looking at its log in the Airflow web UI would give me the same error; when I checked the same logs again after a minute or two, they showed up properly.
As per the above answers, it's a sync issue between the webserver and the worker node.
In general, the issue described here is sporadic.
In certain situations, what can help is setting default-task-retries to a value that allows a task to be retried at least once.
This issue is resolved at least since Airflow version: 1.10.10+composer.

ERROR: (gcloud.compute.ssh) Could not fetch resource: - The resource was not found

I'm trying to run R on Google Cloud following Google's suggested tutorial. However, I have experienced some trouble when finally creating the cluster. When creating the cluster with
elasticluster start myslurmcluster
I get the following error message
ERROR: (gcloud.compute.ssh) Could not fetch resource:
- The resource 'projects/MY_PROJECT/zones/us-central1-b/instances/myslurmcluster-frontend001' was not found
I had run through the previous stages of the tutorial several times with no problems, but I suspect the issue might be related to the SSH keys that let me sign in to my cluster.
Any help or advice greatly received!
ERROR: (gcloud.compute.ssh) Could not fetch resource:
- The resource 'projects/MY_PROJECT/zones/us-central1-b/instances/myslurmcluster-frontend001' was not found
The error you are getting means that the resource could not be found when gcloud tried to SSH into it. The usual cause is that the instance's zone and the gcloud default zone are different: the command line did not specify the instance's zone, so the Google Cloud Compute default zone was used, and the instance is not in that zone. Adding the zone option to the command should solve the problem. The command format is:
gcloud compute --project "MY_PROJECT" ssh --zone "us-central1-b" "myslurmcluster-frontend001"
To see what your default region and zone settings are, run the following gcloud command:
gcloud compute project-info describe --project [PROJECT_ID]
where [PROJECT_ID] is your own project ID.
Hope this helps. An easy way to find out instance zones without having to type/know the instance name:
gcloud compute instances list
This should list the instances and their information, including the ZONE column.
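If you always work in one zone, a small hedged follow-up is to set it as the gcloud default so future ssh calls do not need --zone (the zone value is taken from the error above):
# make us-central1-b the default compute zone for this gcloud configuration
gcloud config set compute/zone us-central1-b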

Could not create internal topics - Stream-thread exception

I am trying to execute a simple Wordcount stream application but I face the error "Could not create internal topics - Stream-thread exception"
I have seen a similar thread but that seems to be more of a network issue.
There is no security enabled on the Kafka broker.
Only one broker is configured, and I still hit this issue.
Can someone let me know how to fix this?
Clean up your application's temporary (internal) Kafka topics.
Run the --list command against Kafka to see all the topics starting with your application name and ending with -changelog or -repartition, and manually delete them (see the sketch below).
This one worked for me.
Also, check your delete.topic.enable setting so the deletion actually happens; it did not default to true until 1.0.0 - see https://issues.apache.org/jira/browse/KAFKA-5384
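A hedged sketch of that cleanup with the stock kafka-topics tool (the application id "wordcount-app", the topic name, and the broker/ZooKeeper addresses are placeholders; older brokers take --zookeeper, newer ones --bootstrap-server):
# list topics and look for <application.id>-...-changelog / -repartition entries
bin/kafka-topics.sh --zookeeper localhost:2181 --list

# delete an internal topic (requires delete.topic.enable=true on the broker)
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic wordcount-app-counts-store-changelog

# on newer Kafka versions use --bootstrap-server localhost:9092 instead of --zookeeper
Kafka also ships a kafka-streams-application-reset.sh tool that can clean up these internal topics for a given --application-id, which may be safer than deleting them by hand.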
I connected to Kafka using Kafka Tool and deleted them manually.
