When I tried to run pgbench, I ran into an error during the initialization phase: "This ALTER TABLE command is not yet supported." See details below:
$ pgbench -i -U postgres -d postgres -p 5433 -h 127.0.0.1
NOTICE: table "pgbench_branches" does not exist, skipping
WARNING: Storage parameter fillfactor is unsupported, ignoring
NOTICE: table "pgbench_tellers" does not exist, skipping
WARNING: Storage parameter fillfactor is unsupported, ignoring
NOTICE: table "pgbench_accounts" does not exist, skipping
WARNING: Storage parameter fillfactor is unsupported, ignoring
NOTICE: table "pgbench_history" does not exist, skipping
creating tables...
10000 tuples done.
20000 tuples done.
30000 tuples done.
40000 tuples done.
50000 tuples done.
60000 tuples done.
70000 tuples done.
80000 tuples done.
90000 tuples done.
100000 tuples done.
set primary key...
ERROR: This ALTER TABLE command is not yet supported.
In YugaByte DB, the PRIMARY KEY clause currently has to be specified as part of the CREATE TABLE statement; it cannot be added after the fact via an ALTER TABLE command.
We have made a recent change to the "pgbench" utility (bundled as part of the YugaByte DB distribution) so that it specifies the PRIMARY KEY as part of the CREATE TABLE statement itself.
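For example, instead of loading the data and then adding the key, initialization now has to issue a statement of roughly this shape (a sketch; the exact column list pgbench uses may differ):
CREATE TABLE pgbench_accounts (
    aid int NOT NULL PRIMARY KEY,  -- key declared inline instead of via ALTER TABLE
    bid int,
    abalance int,
    filler char(84)
);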
The relevant issue is:
https://github.com/YugaByte/yugabyte-db/issues/1774
The relevant commit:
https://github.com/YugaByte/yugabyte-db/commit/35b79bc35eede9907d917d72e516350a4f6bd281
According to the picture on this page, Ceph splits objects into chunks and writes them to OSDs.
To view object distribution under the replication algorithm, I can use commands like ceph pg dump or ceph osd map.
But I can't find commands to view the data chunk distribution.
You can use the ceph-objectstore-tool to query offline OSDs and see which OSD contains which data chunk:
[ceph: root@pacific /]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-1/ --op list
["11.2s2",{"oid":"file","key":"","snapid":-2,"hash":779072666,"max":0,"pool":11,"namespace":"","shard_id":2,"max":0}]
The first entry contains the shard ID: "11.2s2". So the PG is 11.2 and the shard ID is 2. To query a cephadm-deployed OSD you need to stop it via cephadm and then enter the container:
pacific:~ # cephadm unit stop --name osd.1
Inferring fsid 0a8034bc-15f4-11ec-8330-fa163eed040c
pacific:~ # cephadm shell --name osd.1
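If the OSDs are up, you should also be able to cross-check the placement online: ceph pg map prints the up and acting sets for a PG, and for erasure-coded pools the position in the acting set should correspond to the shard ID. Don't forget to start the OSD again afterwards, mirroring the stop command above:
pacific:~ # ceph pg map 11.2
pacific:~ # cephadm unit start --name osd.1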
I used the Azure CLI to create a container in a database, but it always throws an error like this:
["The partition key component definition path 'C:\/Program Files\/Git\/zip1' could not be accepted, failed near position '0'. Partition key paths must contain only valid characters and not contain a trailing slash or wildcard character."]}
My Azure CLI command is like this:
az cosmosdb sql container create --account-name testaccjul1 --resource-group demo-app-test --database-name cosdbjul2 --name container1 --partition-key-path '/zip' --throughput 400
Could anyone tell me where I am wrong? Thank you for your help.
Please put the partition key path in double quotes. Something like:
az cosmosdb sql container create --account-name testaccjul1 --resource-group demo-app-test --database-name cosdbjul2 --name container1 --partition-key-path "/zip" --throughput 400
This is based on the example here.
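As a side note: the 'C:/Program Files/Git/zip1' in the error message suggests the command was run from Git Bash, which rewrites arguments that look like absolute POSIX paths into Windows paths. If quoting alone doesn't help, that rewriting can be suppressed for a single command via the Git for Windows MSYS_NO_PATHCONV variable:
MSYS_NO_PATHCONV=1 az cosmosdb sql container create --account-name testaccjul1 --resource-group demo-app-test --database-name cosdbjul2 --name container1 --partition-key-path "/zip" --throughput 400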
I observe a strange situation in Windows 10 with XAMPP Control Panel v3.2.4.
A terminal window is launched and connected to the MySQL DB.
Note: earlier in the terminal window I issued the command 'chcp 65001' to support UTF-8 encoding.
Now when I attempt to update a table with a value in Cyrillic, MySQL complains about an unclosed quote symbol. If I replace the Cyrillic input with English, the command is accepted.
MariaDB [youtube]> update episodes set name='Катя' where id=11;
'>
If I attempt to insert a new record into the DB, the same situation happens:
MariaDB [youtube]> insert into episodes (youtube_id,series_id,season,episode,name) values (12345678904,1,0,1,'Катя');
'>
If double quotes are used, the situation is the same:
MariaDB [youtube]> insert into episodes (youtube_id,series_id,season,episode,title) values (12345678904,1,0,1,"Катя");
">
What magic touch is required to make it work through the terminal window?
Update:
John suggested looking into the MariaDB configuration file for the UTF-8 settings.
The settings were changed to the following, and the problem still persists:
# The MySQL server
default-character-set=utf8mb4
[mysqld]
init-connect='SET NAMES utf8'
character_set_server=utf8
collation_server=utf8_unicode_ci
skip-character-set-client-handshake
character_sets-dir="C:/bin/XAMPP/App/xampp/mysql/share/charsets"
Initially the settings were:
# The MySQL server
default-character-set=utf8mb4
[mysqld]
character-set-server=utf8mb4
collation-server=utf8mb4_general_ci
Server status report
MariaDB [youtube]> \s
--------------
mysql Ver 15.1 Distrib 10.4.10-MariaDB, for Win64 (AMD64), source revision c24ec3cece6d8bf70dac7519b6fd397c464f7a82
Connection id: 17
Current database: youtube
Current user: root@localhost
SSL: Not in use
Using delimiter: ;
Server: MariaDB
Server version: 10.4.10-MariaDB mariadb.org binary distribution
Protocol version: 10
Connection: localhost via TCP/IP
Server characterset: utf8
Db characterset: utf8mb4
Client characterset: utf8mb4
Conn. characterset: utf8mb4
TCP port: 3306
Uptime: 11 min 12 sec
Threads: 7 Questions: 59 Slow queries: 0 Opens: 22 Flush tables: 1 Open tables: 16 Queries per second avg: 0.087
--------------
The MariaDB documentation references an option --default-character-set=name.
An attempt to use --default-character-set=utf8mb4 on the command line had no effect on the insert/update behavior in the terminal client.
mysql -u root -p --default-character-set=utf8mb4 youtube
....
MariaDB [youtube]> update episodes set title='Катя' where id=11;
'>
I highly recommend getting a copy of the freeware program HeidiSQL. It's not perfect and even crashes occasionally, but compared to everything else I've worked with? Oh boy, totally worth my time.
Secondly you want to make sure that you're using the following:
Database Character Set: utf8mb4
Database Collation: utf8mb4_unicode_520_ci
These have the greatest UTF-8 support, from what I've read from Stack Overflow member Rick James via his website. He's a database superstar here on SO, so if you ever hire him, dump buckets of money on his face. His site has a comparison chart. In fact, it's been a while, and 520 might have been superseded since I last checked.
To set/change the database character set you will need to change the my.cnf configuration file for MariaDB (I recommend Notepad++ for code editing). This should make any newly created databases use the correct encoding; however, you may have to go through and manually update the character sets and collations for existing databases, tables, and table columns, so do not forget to be thorough! (A conversion sketch follows the config below.)
[client]
default-character-set = utf8mb4
[mysqld]
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_520_ci
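For databases and tables that already exist, the conversion has to be done manually, as noted above. A sketch using the asker's database and table names (check that the collation exists on your build first):
MariaDB [youtube]> ALTER DATABASE youtube CHARACTER SET = utf8mb4 COLLATE = utf8mb4_unicode_520_ci;
MariaDB [youtube]> ALTER TABLE episodes CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_520_ci;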
Once you do that your queries with Russian/Greek/etc should work normally as they do with English. Keyword: should.
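One way to check that the settings actually took effect after a server restart is to inspect the variables; aside from character_set_filesystem and character_set_system, the character set entries should report utf8mb4:
MariaDB [youtube]> SHOW VARIABLES LIKE 'character_set%';
MariaDB [youtube]> SHOW VARIABLES LIKE 'collation%';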
Since I only speak two languages (English and bad English), I encode all characters past a certain Unicode numeric point just to be certain. It'll take up a bit more space; however, characters for a language are sometimes added to Unicode after the majority of that language has been defined, potentially fragmenting language support. If you're interested, comment and I'll go find that code for you.
I'm at a moderate level of comprehension and I'm no Rick James, though I have about two or three dozen translation pages on the site in my profile (use the search feature and search for 'translation') if you want to see the output. After I did these things I stopped having the data get corrupted. I hope this helps!
I've downloaded the PHP formula by following the instructions here: https://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html
I've changed apache to php. In my salt config file (which I assume is /etc/salt/master), I've set file_roots like so:
file_roots:
base:
- /srv/salt
- /srv/formulas/php-formula
I don't know how I'm supposed to run it now. I previously got a Salt state file to run only after discovering that the documentation is incomplete and that I had missed a step I wasn't aware of.
If I try to run the formula the same way I've been running the state, I just get errors.
salt '*' state.apply php-formula
salt-minion:
Data failed to compile:
----------
No matching sls found for 'php-formula' in env 'base'
ERROR: Minions returned with non-zero exit code
I've also tried: sudo salt '*' state.highstate, and it also has errors:
salt-minion:
----------
ID: states
Function: no.None
Result: False
Comment: No Top file or master_tops data matches found.
Changes:
Summary for salt-minion
------------
Succeeded: 0
Failed: 1
------------
Total states run: 1
Total run time: 0.000 ms
ERROR: Minions returned with non-zero exit code
You have to add a top.sls file to /srv/salt/, not just in /srv/pillar/. If you have a file called /srv/salt/php.sls, you have to remove it; otherwise it will interfere with /srv/pillar/php.sls.
Contents of /srv/salt/top.sls:
base:
'*':
- php
This is kind of bizarre, because my previous test (which wasn't a formula) used /srv/salt/php.sls and /srv/pillar/top.sls. Now I'm using /srv/pillar/php.sls and /srv/salt/top.sls.
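A note on the original error, based on my reading of how formulas are laid out (verify against your tree): with file_roots pointing inside php-formula, the state takes its name from the php/ directory the formula ships, so it is applied as php, not php-formula. With the top.sls above in place, either of these should work:
sudo salt '*' state.highstate
sudo salt '*' state.apply php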
Say my program attempts a read of a byte in a file on a ZFS filesystem. ZFS can locate a copy of the necessary block, but cannot locate any copy with a valid checksum (they're all corrupted, or the only disks present have corrupted copies). What does my program see, in terms of the return value from the read, and the byte it tried to read? And is there a way to influence the behavior (under Solaris, or any other ZFS-implementing OS), that is, force failure, or force success, with potentially corrupt data?
EIO is indeed the only answer with current ZFS implementations.
An open ZFS "bug" asks for some way to read corrupted data:
http://bugs.opensolaris.org/bugdatabase/printableBug.do?bug_id=6186106
I believe this is already doable using the undocumented but open source zdb utility.
Have a look at http://www.cuddletech.com/blog/pivot/entry.php?id=980 for an explanation of how to dump a file's contents using the zdb -R option and the "r" flag.
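For reference, the invocation has roughly this shape; the pool name and the vdev:offset:size triple below are placeholders you would take from zdb's own listing of the file's blocks, and the trailing :r flag requests a raw dump:
# zdb -R tank 0:400000:20000:r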
Solaris 10:
# Create a test pool
[root@tesalia z]# cd /tmp
[root@tesalia tmp]# mkfile 100M zz
[root@tesalia tmp]# zpool create prueba /tmp/zz
# Fill the pool
[root@tesalia /]# dd if=/dev/zero of=/prueba/dummy_file
dd: writing to `/prueba/dummy_file': No space left on device
129537+0 records in
129536+0 records out
66322432 bytes (66 MB) copied, 1.6093 s, 41.2 MB/s
# Umount the pool
[root@tesalia /]# zpool export prueba
# Corrupt the pool on purpose
[root@tesalia /]# dd if=/dev/urandom of=/tmp/zz seek=100000 count=1 conv=notrunc
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.0715209 s, 7.2 kB/s
# Mount the pool again
[root@tesalia /]# zpool import -d /tmp prueba
# Try to read the corrupted data
[root@tesalia tmp]# md5sum /prueba/dummy_file
md5sum: /prueba/dummy_file: I/O error
# Read the manual
[root@tesalia tmp]# man -s2 read
[...]
RETURN VALUES
Upon successful completion, read() and readv() return a
non-negative integer indicating the number of bytes actually
read. Otherwise, the functions return -1 and set errno to
indicate the error.
ERRORS
The read(), readv(), and pread() functions will fail if:
[...]
EIO A physical I/O error has occurred, [...]
You must export/import the test pool because otherwise the direct overwrite (pool corruption) would go unnoticed, since the file would still be cached in OS memory.
And no, currently ZFS will refuse to give you corrupted data. As it should.
How would returning anything but an EIO error from read() make sense outside a filesystem-specific low-level data rescue utility?
The low-level data rescue utility would need to use an OS- and FS-specific API other than open/read/write/close to access the file. The semantics it would need are fundamentally different from reading normal files, so it would need a specialized API.