How can I print complete query output without line breaks in the output file - unix

It prints the output, but once 1024 rows are crossed it inserts a break and reprints the header before continuing. Please suggest how we can handle this so there is no such break in the output. The command used:
(impala-shell -k --ssl -i <impala_url> -c -f sample_202112.sql 2>&1)> sample_202112.txt
It prints as below:
| VOL | BC | VOL00109 | VA-V5B | APR22DM |
| VOL | BC | VOL00109 | VC | APR22DM |
| VOL | BC | VOL00109 | VE-V5G | APR22DM |
| VOL | BC | VOL00109 | VH | APR22DM |
| VOL | BC | VOL00109 | VJ | APR22DM |
| VOL | BC | VOL00104 | VK-V7G | APR22DM |
| VOL | BC | VOL00103 | VL | APR22DM |
+-----------+---------------+-----------+--------------------+---------+
+-----------+-----------+-----------+------------+---------+
| zone_code | zone_desc | zone_code | data | dm |
+-----------+-----------+-----------+------------+---------+
| VOL | BC | VOL00109 | VM | APR22DM |
| VOL | BC | VOL00103 | VN | APR22DM |
| VOL | BC | VOL00103 | VP | APR22DM |
| VOL | BC | VOL00103 | VR | APR22DM |
| VOL | BC | VOL00103 | S | APR22DM |
+-----------+-----------+-----------+------------+---------+
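The break in the middle of the results is the pretty-printed header block being re-rendered as impala-shell fetches the next batch of rows. A possible workaround (a sketch; verify the flags against your impala-shell version) is to skip pretty-printing entirely and write delimited output:

impala-shell -k --ssl -i <impala_url> -c -B --output_delimiter='|' -f sample_202112.sql -o sample_202112.txt

-B (--delimited) turns off the ASCII table, so each row is printed on one line and the header block is never repeated; -o (--output_file) writes the results straight to the file instead of redirecting stdout and stderr together.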

Related

Index behavior in SELECT not using the key

This is my database table, llamados:
+-------------+----------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------+----------------------+------+-----+---------+-------+
| prospecto | int(10) unsigned | NO | PRI | NULL | |
| bis | tinyint(2) unsigned | NO | PRI | NULL | |
| id_usr | smallint(4) unsigned | NO | MUL | NULL | |
| fecha | date | NO | | NULL | |
| respuesta | tinyint(3) unsigned | NO | | NULL | |
| descripcion | char(255) | NO | | NULL | |
| hora_inicio | time | NO | | NULL | |
| hora_fin | time | NO | | NULL | |
+-------------+----------------------+------+-----+---------+-------+
And these are the indexes:
+---------------------+--------------+-------------+
| Key_name | Seq_in_index | Column_name |
+---------------------+--------------+-------------+
| PRIMARY | 1 | prospecto |
| PRIMARY | 2 | bis |
| prospecto_fecha_idx | 1 | prospecto |
| prospecto_fecha_idx | 2 | fecha |
| prospecto_fecha_idx | 3 | hora_inicio |
| usr_idx | 1 | id_usr |
+---------------------+--------------+-------------+
I am testing this SELECT statement:
explain EXTENDED select * from llamados where id_usr = 2;
+------+-------------+----------+------+---------------+------+---------+------+------+----------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+------+-------------+----------+------+---------------+------+---------+------+------+----------+-------------+
| 1 | SIMPLE | llamados | ALL | usr_idx | NULL | NULL | NULL | 37 | 62.16 | Using where |
+------+-------------+----------+------+---------------+------+---------+------+------+----------+-------------+
I don't understand why it is scanning ALL of the table instead of using usr_idx to filter the records.
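One likely explanation (an inference from the EXPLAIN output above, not something stated in the question): with only 37 rows in the table and a filtered estimate of 62.16%, the optimizer judges a full scan cheaper than an index lookup followed by row fetches. You can test whether usr_idx would help at all by forcing it; FORCE INDEX is standard MySQL/MariaDB syntax:

explain EXTENDED select * from llamados FORCE INDEX (usr_idx) where id_usr = 2;

If the forced plan is no better, the full scan was a reasonable choice for a table this small.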

Why does the frequency of my Gnocchi measurements not match the set granularity?

I'm running OpenStack and am trying to get my Gnocchi meters to come through more frequently so that I can run a scaling demo without lots of 5-minute lags. In Gnocchi I have changed the archive policy to a custom policy with granularity set to 30 seconds (I've also tried the following using the existing 'medium' policy, with the same result):
+---------------------+--------------------------------------------------------+
| Field | Value |
+---------------------+--------------------------------------------------------+
| aggregation_methods | std, count, min, max, sum, mean |
| back_window | 0 |
| definition | - points: 120, granularity: 0:00:30, timespan: 1:00:00 |
| name | test |
+---------------------+--------------------------------------------------------+
The cpu_util meter is picking it up correctly:
+------------------------------------+-------------------------------------------------------------------+
| Field | Value |
+------------------------------------+-------------------------------------------------------------------+
| archive_policy/aggregation_methods | std, count, min, max, sum, mean |
| archive_policy/back_window | 0 |
| archive_policy/definition | - points: 120, granularity: 0:00:30, timespan: 1:00:00 |
| archive_policy/name | test |
| created_by_project_id | e499d0c2e0fb4a05ac39c3f8c260052b |
| created_by_user_id | 21759a51f3834b9bbae49c3ed17a13e4 |
| creator | 21759a51f3834b9bbae49c3ed17a13e4:e499d0c2e0fb4a05ac39c3f8c260052b |
| id | e5a02f3a-9fbe-4e44-bb91-e1cfe6b86143 |
| name | cpu_util |
| resource/created_by_project_id | e499d0c2e0fb4a05ac39c3f8c260052b |
| resource/created_by_user_id | 21759a51f3834b9bbae49c3ed17a13e4 |
| resource/creator | 21759a51f3834b9bbae49c3ed17a13e4:e499d0c2e0fb4a05ac39c3f8c260052b |
| resource/ended_at | None |
| resource/id | 243b9715-95ba-4532-9728-3e61776e1c29 |
| resource/original_resource_id | 243b9715-95ba-4532-9728-3e61776e1c29 |
| resource/project_id | 43a7db62d5d54c4590e363868fff49e2 |
| resource/revision_end | None |
| resource/revision_start | 2018-08-08T14:05:09.770765+00:00 |
| resource/started_at | 2018-08-08T13:20:45.948842+00:00 |
| resource/type | instance |
| resource/user_id | 4e5015006b304e7ca57edc5419b42be3 |
| unit | % |
+------------------------------------+-------------------------------------------------------------------+
But the measurements are still only coming out every 5 minutes:
gnocchi measures show e5a02f3a-9fbe-4e44-bb91-e1cfe6b86143
+---------------------------+-------------+--------------+
| timestamp | granularity | value |
+---------------------------+-------------+--------------+
| 2018-08-08T13:30:00+00:00 | 30.0 | 0.0400002375 |
| 2018-08-08T13:35:00+00:00 | 30.0 | 0.0366666763 |
| 2018-08-08T13:40:00+00:00 | 30.0 | 0.0366667101 |
| 2018-08-08T13:45:00+00:00 | 30.0 | 0.0399999545 |
| 2018-08-08T13:50:00+00:00 | 30.0 | 0.0366664861 |
| 2018-08-08T13:55:00+00:00 | 30.0 | 0.0400000543 |
| 2018-08-08T14:00:00+00:00 | 30.0 | 0.0366665877 |
+---------------------------+-------------+--------------+
Any ideas what I am missing?
I had the same issue. In Gnocchi-backed Ceilometer there is a new configuration file, polling.yaml, and the resource polling interval is set there.
https://review.opendev.org/#/c/405682/
https://docs.openstack.org/ceilometer/pike/admin/telemetry-best-practices.html
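For reference, a minimal sketch of what the relevant polling.yaml entry can look like; the source name and meter list here are illustrative rather than taken from the original post, and the interval is in seconds:

---
sources:
    - name: cpu_pollsters
      interval: 30
      meters:
        - cpu
        - cpu_util

After changing it, restart the Ceilometer polling agent so the new interval takes effect.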

No 'users' table in nova database

After a successful installation of OpenStack RDO on multiple hosts, the MySQL database doesn't show any users in nova.
The command SHOW TABLES; in the database does not show a user or users table.
MariaDB [nova]> SHOW TABLES;
+--------------------------------------------+
| Tables_in_nova |
+--------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| block_device_mapping |
| bw_usage_cache |
| cells |
| certificates |
| compute_nodes |
| console_pools |
| consoles |
| dns_domains |
| fixed_ips |
| floating_ips |
| instance_actions |
| instance_actions_events |
| instance_extra |
| instance_faults |
| instance_group_member |
| instance_group_policy |
| instance_groups |
| instance_id_mappings |
| instance_info_caches |
| instance_metadata |
| instance_system_metadata |
| instance_type_extra_specs |
| instance_type_projects |
| instance_types |
| instances |
| inventories |
| key_pairs |
| migrate_version |
| migrations |
| networks |
| pci_devices |
| project_user_quotas |
| provider_fw_rules |
| quota_classes |
| quota_usages |
| quotas |
| reservations |
| resource_provider_aggregates |
| resource_providers |
| s3_images |
| security_group_default_rules |
| security_group_instance_association |
| security_group_rules |
| security_groups |
| services |
| shadow_agent_builds |
| shadow_aggregate_hosts |
| shadow_aggregate_metadata |
| shadow_aggregates |
| shadow_block_device_mapping |
| shadow_bw_usage_cache |
| shadow_cells |
| shadow_certificates |
| shadow_compute_nodes |
| shadow_console_pools |
| shadow_consoles |
| shadow_dns_domains |
| shadow_fixed_ips |
| shadow_floating_ips |
| shadow_instance_actions |
| shadow_instance_actions_events |
| shadow_instance_extra |
| shadow_instance_faults |
| shadow_instance_group_member |
| shadow_instance_group_policy |
| shadow_instance_groups |
| shadow_instance_id_mappings |
| shadow_instance_info_caches |
| shadow_instance_metadata |
| shadow_instance_system_metadata |
| shadow_instance_type_extra_specs |
| shadow_instance_type_projects |
| shadow_instance_types |
| shadow_instances |
| shadow_key_pairs |
| shadow_migrate_version |
| shadow_migrations |
| shadow_networks |
| shadow_pci_devices |
| shadow_project_user_quotas |
| shadow_provider_fw_rules |
| shadow_quota_classes |
| shadow_quota_usages |
| shadow_quotas |
| shadow_reservations |
| shadow_s3_images |
| shadow_security_group_default_rules |
| shadow_security_group_instance_association |
| shadow_security_group_rules |
| shadow_security_groups |
| shadow_services |
| shadow_snapshot_id_mappings |
| shadow_snapshots |
| shadow_task_log |
| shadow_virtual_interfaces |
| shadow_volume_id_mappings |
| shadow_volume_usage_cache |
| snapshot_id_mappings |
| snapshots |
| tags |
| task_log |
| virtual_interfaces |
| volume_id_mappings |
| volume_usage_cache |
+--------------------------------------------+
109 rows in set (0.00 sec)
The user table is in the keystone database, not in the nova database.
MariaDB [keystone]> show tables;
+------------------------+
| Tables_in_keystone |
+------------------------+
..........................
| trust |
| trust_role |
| user |
..........................
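If you want to confirm what identity data is stored there (the column layout varies by Keystone release), you can describe the table directly:
MariaDB [keystone]> DESCRIBE user;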

How to make multiple corpora in R

This is car review data with more than 40,000 rows, where each review has more than 500 characters. Here is sample data: https://drive.google.com/open?id=1ZRwzYH5McZIP2NLKxncmFaQ0mX1Pe0GShTMu57Tac_E
| brand | review | favorite | c4 | c5 | c6 | c7 | c8 |
| brand1 | 500 characters1 | 100 characters1 | | | | | |
| brand2 | 500 characters2 | 100 Characters2 | | | | | |
| brand2 | 500 characters3 | 100 Characters3 | | | | | |
| brand2 | 500 characters4 | 100 Characters4 | | | | | |
| brand3 | 500 characters5 | 100 Characters5 | | | | | |
| brand3 | 500 characters6 | 100 characters6 | | | | | |
I'd like to merge the review column by brand, like this:
| Brand | review | favorite | c4 | c5 | c6 | c7 | c8 |
| brand1 | 500 characters1 | 100 characters1 | | | | | |
| brand2 | 500 characters2 | 100 Characters2 | | | | | |
| | 500 characters3 | 100 Characters3 | | | | | |
| | 500 characters4 | 100 Characters4 | | | | | |
| brand3 | 500 characters5 | 100 Characters5 | | | | | |
| | 500 characters6 | 100 characters6 | | | | | |
So I tried to use aggregate():
temp <- aggregate(data$review ~ data$brand, data, as.list)
But it takes very long.
Is there any simpler way to merge this?
Thank you in advance!
Try splitting them on each factor and then pasting them together. aggregate() is a horribly slow function and should be avoided for all but the smallest datasets.
This should do the trick (note I downloaded your Google file as sampleDF.csv here):
sampleDF <- read.csv("~/Downloads/sampleDF.csv", stringsAsFactors = FALSE)
# aggregate text by brand
brand.split <- split(sampleDF$text, as.factor(sampleDF$Brand))
brand.grouped <- sapply(brand.split, paste, collapse = " ")
# aggregate favorite by brand
favorite.split <- split(sampleDF$favorite, as.factor(sampleDF$Brand))
favorite.grouped <- sapply(favorite.split, paste, collapse = " ")
newDf <- data.frame(brand = names(brand.split),
                    text = brand.grouped,
                    favorite = favorite.grouped,
                    stringsAsFactors = FALSE)
If you want to bring in other variables, they will need to vary at the brand level only.
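As a side note, tapply() produces the same per-brand pasting in a single call per column (using the same column names assumed in the snippet above):

brand.grouped <- tapply(sampleDF$text, sampleDF$Brand, paste, collapse = " ")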

Ceilometer identity.authenticate.failure

The command ceilometer sample-list -m identity.authenticate.failure gives a response with a resource_id like openstack:b64d526d-9624-4f3c-b986-3d6e275d0758.
What is this ID, and what does it refer to?
How can I get the user ID that belongs to this resource ID?
The output is:
+------------------------------------------------+-------------------------------+-------+--------+------+----------------------------+
| Resource ID | Name | Type | Volume | Unit | Timestamp |
+------------------------------------------------+-------------------------------+-------+--------+------+----------------------------+
| openstack:85b53cba-3b11-4019-ac70-087d7412bf15 | identity.authenticate.failure | delta | 1.0 | user | 2015-05-25T13:10:18.848000 |
| openstack:c23f883d-0f6d-45a5-be7d-9ea2fc53c100 | identity.authenticate.failure | delta | 1.0 | user | 2015-05-25T12:15:28.930000 |
| openstack:b64d526d-9624-4f3c-b986-3d6e275d0758 | identity.authenticate.failure | delta | 1.0 | user | 2015-05-25T12:13:03.352000 |
| openstack:d8eae00c-76b0-4b9d-8e53-26f6e849cb83 | identity.authenticate.failure | delta | 1.0 | user | 2015-05-25T11:56:53.429000 |
| openstack:faacab54-f3eb-43eb-861c-93f45f0b2ada | identity.authenticate.failure | delta | 1.0 | user | 2015-05-25T11:56:42.232000 |
| openstack:6303221f-f17a-4d82-9c43-83f0d9dc58c2 | identity.authenticate.failure | delta | 1.0 | user | 2015-05-25T11:01:52.166000 |
| openstack:22a1f0a5-6109-4554-9a4a-178452667e18 | identity.authenticate.failure | delta | 1.0 | user | 2015-05-25T11:01:47.553000 |
| openstack:c1fc85b3-5257-41d0-8bfa-67c2eb367f79 | identity.authenticate.failure | delta | 1.0 | user | 2015-05-25T09:56:58.225000 |
| openstack:4e79a88d-bda0-4438-ac76-81fab2a5e49d | identity.authenticate.failure | delta | 1.0 | user | 2015-05-25T09:56:29.472000 |
+------------------------------------------------+-------------------------------+-------+--------+------+----------------------------+
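To pull the full sample records, which include a user_id field, the ceilometer client's complex-query command can be used; this is a sketch, and the exact field name accepted in the filter may vary by client version:

ceilometer query-samples --filter '{"=": {"resource_id": "openstack:b64d526d-9624-4f3c-b986-3d6e275d0758"}}'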
