How to sync / update local (volume) data with OpenStack server?

I want to create a server within OpenStack nova.
The first step is to create a volume from an image:
volume = cinder.volumes.create(5, name="test", imageRef=some_id, ...)
The volume will then be in the 'creating' state for some time. Calling nova.servers.create with a volume still in the 'creating' state fails:
novaclient.exceptions.BadRequest: Block Device f2fe64ee-f049-4a6f-8edd-52579d82fc23 is not bootable. (HTTP 400) (Request-ID: req-f036d084-e9c8-4bdf-b266-73fbbe993796)
My idea is to wait until the volume gets available:
while volume.status != 'available':
    print("Volume status [%s]" % volume.status)
    time.sleep(1.0)
But it looks like the volume data itself is cached locally and never gets updated, even though the GUI and CLI show that the volume is already available.
Is there a (simple) way to synchronize the local data with the remote state? Like:
volume.sync()

Found the answer in the document called 'Python APIs: The best-kept secret of OpenStack':
You need to fetch the whole volume object again inside the loop:
while volume.status != 'available':
    print("Volume status [%s]" % volume.status)
    time.sleep(1.0)
    volume = cinder.volumes.get(volume.id)
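For completeness, here is a minimal sketch of the whole flow under the same assumptions as above (already authenticated cinder and nova clients; image_id, flavor and the names are hypothetical placeholders):

import time

volume = cinder.volumes.create(5, name="test", imageRef=image_id)

# Poll the volume, re-fetching it on every iteration so the status is never stale.
while volume.status != 'available':
    if volume.status == 'error':
        raise RuntimeError("Volume creation failed")
    time.sleep(1.0)
    volume = cinder.volumes.get(volume.id)

# Boot from the now-available volume via a block device mapping.
server = nova.servers.create(
    name="test-server",
    image=None,
    flavor=flavor,
    block_device_mapping_v2=[{
        "uuid": volume.id,
        "source_type": "volume",
        "destination_type": "volume",
        "boot_index": 0,
        "delete_on_termination": True,
    }],
)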

Related

mariadb slow query not logged

As the title suggests, no log is recorded in the log file even though the related settings have been completed.
slow_query_log_file = /var/log/mysql/mariadb-slow.log
slow_query_log = 1
long_query_time = 1
log_slow_rate_limit = 1000
log_slow_verbosity = query_plan
log-queries-not-using-indexes
This is the content of MariaDB's configuration.
When I open the log file, only the header lines exist:
Tcp port: 3306 Unix socket: /run/mysqld/mysqld.sock
Time Id Command Argument
logrotate seems to work fine.
After connecting to MySQL I ran select sleep(); but it did not work as expected.
The query returned 0, which seems normal, but nothing was written to the log.
Why wouldn't it work?
The new settings only take effect after the MariaDB server instance is restarted. So, as mentioned in the comment, the solution is to restart the MariaDB server to apply the new settings.
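If you want to confirm that the restart actually picked up the settings, a minimal sketch (assuming the mysql-connector-python package and local root credentials, both of which are assumptions) could look like this:

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="...")
cur = conn.cursor()

# Expect slow_query_log = ON and the configured file path after the restart.
cur.execute("SHOW VARIABLES LIKE 'slow_query_log%'")
print(cur.fetchall())

# Expect 1.000000, matching long_query_time from the config above.
cur.execute("SHOW VARIABLES LIKE 'long_query_time'")
print(cur.fetchall())

# Deliberately slower than long_query_time, so it should appear in the slow log.
cur.execute("SELECT SLEEP(2)")
cur.fetchall()
conn.close()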

Imaging Backend not working in eucalyptus

I have installed Eucalyptus 4.4.4 on CentOS 7 and have already completed all installation steps, but it still shows "imagingbackend" as not ready.
The imaging service is not always required when using eucalyptus. Particularly for smaller deployments it can be a good choice to skip configuration of the imaging service and save the resources (virtual machine instances) that would have been used by the imaging service for user workloads.
To enable the imaging service you need to install and register the service image (v5 output shown):
# esi-describe-images
SERVICE VERSION ACTIVE IMAGE INSTANCES
imaging 5.0.100 * emi-b54e3b35170d2c56e 1
loadbalancing 5.0.100 * emi-b54e3b35170d2c56e 0
and create the stack:
# esi-manage-stack -a check
Stack 'euca-internal-imaging-service' currently is in CREATE_COMPLETE state.
The steps to get to this state are covered in the documentation:
http://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#install-guide/configure_imaging_service.html
You can also use esi-manage-stack to delete / create the imaging stack:
# esi-manage-stack -a delete
services.imaging.worker.configured = false
# esi-manage-stack -a create
Stack 'euca-internal-imaging-service' currently is in DELETE_IN_PROGRESS state. Please wait till the end of previous stack change operation.
# esi-manage-stack -a create
services.imaging.worker.configured = true
#
If you have gone through these steps but the service is not enabled you should do basic checks to verify that you can run any instances in your cloud:
# euca-describe-instance-types --show-capacity
# euserv-describe-events
and also check /var/log/eucalyptus/cloud-debug.log for errors.
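If you prefer to script those basic checks, a rough sketch against the EC2-compatible API with boto3 could look like the following; the endpoint URL and credentials are placeholders that depend on your deployment, and this only mirrors the spirit of the CLI checks above:

import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="http://compute.example.internal:8773/",  # hypothetical Eucalyptus compute endpoint
    aws_access_key_id="AKI...",
    aws_secret_access_key="...",
    region_name="eucalyptus",
)

# Roughly equivalent to checking that instances can be listed / run at all.
print(ec2.describe_instances())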

Task fails due to not being able to read log file

Composer is failing a task because it is unable to read a log file; it complains about incorrect encoding.
Here's the log that appears in the UI:
*** Unable to read remote log from gs://bucket/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** 'ascii' codec can't decode byte 0xc2 in position 6986: ordinal not in range(128)
*** Log file does not exist: /home/airflow/gcs/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Fetching from: http://airflow-worker-68dc66c9db-x945n:8793/log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-68dc66c9db-x945n', port=8793): Max retries exceeded with url: /log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1c9ff19d10>: Failed to establish a new connection: [Errno -2] Name or service not known',))
I tried viewing the file in the Google Cloud console and it also throws an error:
Failed to load
Tracking Number: 8075820889980640204
But I am able to download the file via gsutil.
When I view the file, it seems to have text overriding other text.
I can't show the entire file but it looks like this:
--------------------------------------------------------------------------------
Starting attempt 1 of 1
--------------------------------------------------------------------------------
#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,313] {models.py:1569} INFO - Executing <Task(BigQueryOperator): merge_campaign_exceptions> on 2019-08-03T10:00:00+00:00#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,314] {base_task_runner.py:124} INFO - Running: ['bash', '-c', u'airflow run __campaign_exceptions_0_0_1 merge_campaign_exceptions 2019-08-03T10:00:00+00:00 --job_id 22767 --pool _bq_pool --raw -sd DAGS_FOLDER//-campaign-exceptions.py --cfg_path /tmp/tmpyBIVgT']#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:24,658] {base_task_runner.py:107} INFO - Job 22767: Subtask merge_campaign_exceptions [2019-08-04 10:01:24,658] {settings.py:176} INFO - setting.configure_orm(): Using pool settings. pool_size=5, pool_recycle=1800#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
The #-#{} pieces seem to sit "on top of" the typical log lines.
I faced the same problem. In my case the problem was that I had removed the google_cloud_default connection that was being used to retrieve the logs.
Check the configuration and look for the connection name.
[core]
remote_log_conn_id = google_cloud_default
Then check that the credentials used for that connection have the right permissions to access the GCS bucket.
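As a quick way to test those credentials outside of Airflow, a minimal sketch with the google-cloud-storage client (assuming the same service account key is available via GOOGLE_APPLICATION_CREDENTIALS; "bucket" is the placeholder name from the question):

from google.cloud import storage

client = storage.Client()  # picks up GOOGLE_APPLICATION_CREDENTIALS / default credentials
blob = client.bucket("bucket").blob(
    "logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/"
    "2019-08-03T10:00:00+00:00/1.log"
)

# A 403 here points at missing permissions rather than an Airflow problem.
print(blob.exists())
print(blob.download_as_string()[:200])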
I'm having a similar problem with viewing logs in GCP Cloud Composer, although it doesn't appear to prevent the failing DAG task from running. It looks like a permissions error between GKE and the Storage Bucket where the log files are kept.
You can still view the logs by going into your cluster's storage bucket; in the same directory as your /dags folder you should also see a logs/ folder.
Your helm chart should set up a global env:
- name: AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT
value: "google-cloud-platform://"
Then you should deploy a Dockerfile with the root account only (not the airflow account); additionally, set up your helm uid and gid as:
uid: 50000 #airflow user
gid: 50000 #airflow group
Then upgrade the helm chart with the new config.
*** Unable to read remote log from gs://bucket
1) Found the solution after assigning the required roles to the service account.
2) The SA key (JSON or txt) has to be added and configured for the connection set in remote_log_conn_id = google_cloud_default.
3) Restart the Airflow scheduler and webserver.
4) Restart the DAGs in Airflow.
You can then find the logs in the GCS bucket where they are configured.

Airflow: How to setup log directory?

I uploaded a DAG file through the web page, and when I click 'Graph View' -> ${my_dag} -> 'View Log', it shows:
*** Log file isn't local.
*** Fetching here: http://:8793/log/demo_dag/hello_task/2018-11-14T15:06:00
*** Failed to fetch log file from worker.
*** Reading remote logs...
*** Unsupported remote log location.
I have checked the airflow.cfg and find these config info:
worker_log_server_port = 8793
base_log_folder = /root/airflow/logs
My questions are:
How do I set up the IP address for the log service (only the port is set up)?
I have set up a directory for the log service; why does it still go to /log/.. ?
Any help is appreciated.
This can happen when the task status was manually changed (likely through the "Mark Success" option) and the task never receives a hostname value on the record.
The webserver is attempting to reach out to a server, with no name, to get logs for a task that never ran.
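A minimal sketch (assuming Airflow 1.10.x and access to the metadata database) to check whether the task instance ever received a hostname; the dag_id and task_id are the ones from the question:

from airflow import settings
from airflow.models import TaskInstance

session = settings.Session()
tis = (
    session.query(TaskInstance)
    .filter(TaskInstance.dag_id == "demo_dag",
            TaskInstance.task_id == "hello_task")
    .all()
)
for ti in tis:
    # An empty hostname is why the webserver builds a URL like http://:8793/log/...
    print(ti.execution_date, ti.state, repr(ti.hostname))
session.close()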
PS: Be careful running processes as the root user.
I've been getting this error and fixed it by correcting the socket volume path:
WARNING - OSError while attempting to symlink the latest log directory
On Windows the volume needs a double slash, like this:
volumes:
- //var/run/docker.sock:/var/run/docker.sock
Bind to docker socket on Windows
Setting up Airflow to run with Docker Swarm’s orchestration

How to input many big static json files into logstash?

My inputs are hundreds of big 1-line json files (~10MB-20MB).
After getting out-of-memory errors with my real setup (with two custom filters), I simplified the setup to isolate the problem.
logstash --verbose -e 'input { tcp { port => 5000 } } output { file { path => "/dev/null" } }'
My test input is a multi-level nested object in json:
$ ls -sh example_fixed.json
9.7M example_fixed.json
If I send the file once, it works fine. But if I do:
$ repeat 50 cat example_fixed.json|nc -v localhost 5000
I get the error message:
Logstash startup completed
Using version 0.1.x codec plugin 'line'. This plugin isn't well supported by the community and likely has no maintainer. {:level=>:info}
Opening file {:path=>"/dev/null", :level=>:info}
Starting stale files cleanup cycle {:files=>{"/dev/null"=>#<IOWriter:0x6f51765 #active=true, #io=#<File:/dev/null>>}, :level=>:info}
Error: Your application used more memory than the safety cap of 500M.
Specify -J-Xmx####m to increase it (#### = cap size in MB).
Specify -w for full OutOfMemoryError stack trace
I have determined that the error triggers if I send the input more than 30 times with a heap size of 500MB. If I increase heap size, this limit goes up accordingly.
However, from the documentation I understand that Logstash should be able to throttle the input when it cannot process events quickly enough.
In fact, if I do a sleep 0.1 after sending new events, it can handle up to 100 repetitions, but not 1000. So I assume the input is not being throttled properly, and whenever the input rate is higher than the processing rate, it is only a matter of time before the heap fills up and Logstash crashes.
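For reference, here is a minimal Python sketch of that paced sender, as a stand-in for repeat 50 cat example_fixed.json | nc -v localhost 5000, using the 0.1 s delay and repetition count from the experiment above:

import socket
import time

HOST, PORT = "localhost", 5000
DELAY_SECONDS = 0.1   # pacing between documents, as in the experiment above
REPETITIONS = 50

with open("example_fixed.json", "rb") as f:
    payload = f.read()
if not payload.endswith(b"\n"):
    payload += b"\n"  # the tcp input's line-oriented codec needs a trailing newline

for _ in range(REPETITIONS):
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(payload)
    time.sleep(DELAY_SECONDS)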
