When I was about to deploy my Symfony 4 app on Ubuntu 18 (php7.1-fpm + Apache), I executed some commands to load default data and some fixtures. The problem is that I always receive: SQLSTATE[22021]: Character not in repertoire: 7 ERROR: invalid byte sequence for encoding "UTF8": 0xcd 0x73. I noticed that the failing fields in the entities are the ones mapped as array, json, or simple_array.
Here is an example of one of those fields value:
\x65\x6d\x70\x72\x65\x73\x61\x20\x64\x65\x20\x61\x73\x65\x67\x75\x72\x61\x6d\x69\x65\x6e\x74\x6f\x20\x6c\x6f\x67\xcd\x73\x74\x69\x63\x6f\x20\x61\x6c\x20\x74\x61\x62\x61\x63\x6f
That is the value for an array of strings.
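The two bytes in the error (0xcd 0x73) are the giveaway: 0xCD is "Í" in ISO-8859-1 (Latin-1), so the dump above looks like Latin-1 text being handed to a UTF-8 database. A minimal Python sketch of that diagnosis, using the bytes from the dump:

```python
# The byte string from the error dump above.
data = (b"\x65\x6d\x70\x72\x65\x73\x61\x20\x64\x65\x20\x61\x73\x65\x67\x75"
        b"\x72\x61\x6d\x69\x65\x6e\x74\x6f\x20\x6c\x6f\x67\xcd\x73\x74\x69"
        b"\x63\x6f\x20\x61\x6c\x20\x74\x61\x62\x61\x63\x6f")

# UTF-8 rejects the stray 0xCD byte, just like PostgreSQL does.
try:
    data.decode("utf-8")
except UnicodeDecodeError as e:
    print("utf-8 fails:", e)

# Latin-1 decodes every byte and reveals the intended text.
print(data.decode("latin-1"))  # empresa de aseguramiento logÍstico al tabaco
```

If that is the case, the fix is to re-encode the source data (or the fixture files) to UTF-8 before loading, rather than changing the database settings, which already look correct.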
The database configuration is set to UTF-8, as is the php.ini configuration; the database server was also created using UTF-8.
How can I fix this? I've recreated the database several times, but the same result remains.
Thanks in advance!!
UPDATE
When I repeat the process on Windows none of this happens...
UPDATE
Here the complete crash log
[2019-10-08 15:21:26] doctrine.DEBUG: INSERT INTO ext_log_entries (id, action, logged_at, object_id, object_class, version, data, username) VALUES (?, ?, ?, ?, ?, ?, ?, ?) {"1":2042,"2":"create","3":"2019-10-08 15:21:24","4":2042,"5":"App\\Entity\\SeaShipment","6":1,"7":{"manifest":"0323/2019","dmNumber":null,"arrivedAt":"2019-09-16 23:00:00","companyName":"MAQUIMPORT","agencyName":"MINAGRI","contractNumber":null,"merchandiseDescription":null,"countryName":null,"dmNumberAt":null,"etaAt":null,"funderName":null,"customerName":null,"empoweredName":null,"buyerName":null,"docsReceivedAt":null,"originalDocsReceivedAt":null,"billingDeliveredAt":null,"funderBilling":null,"deliveredCustomerAt":null,"isUpdatable":null,"createdFromIp":null,"lastUpdatedFromIp":null,"createdBy":null,"lastUpdatedBy":null,"createdAt":"2019-10-08 15:21:20","lastUpdatedAt":"2019-10-08 15:21:20","deletedAt":null,"seaShipmentType":null,"bl":"2019-M-001147","destinationDock":"TCM","isReleasedHouse":true,"isReleasedMaster":true,"isLocked":false,"isEnabled":true,"daysWithoutDm":0,"daysInTcm":3,"location":"B06","weight":8562,"yard":null,"cabotage":null,"transferedAt":"2019-09-16 14:25:00","transferedTo":"(binary value)","containerNumber":"MAGU5169507","containerType":"HC","containerDimention":40,"lastMarielReportAt":"2019-09-19 23:00:00","shippingCompanyName":"NIRINT","isActive":true,"shipName":null,"journey":null,"originDock":null,"blAt":null,"correspondentName":null,"forwarderName":null,"downloadUngroupAt":null,"beDeliveredAt":null,"packageQuantity":null,"shippingCompany":{"id":26}},"8":null} []
For other similar data, and for transactions before this one, the problem does not happen.
Could it be that your database doesn't accept Cyrillic/Arabic etc. alphabets?
If yes, this may help (if you use MySQL):
Add to the file /etc/mysql/my.cnf:
[mysqld]
collation-server = utf8mb4_bin
init-connect='SET NAMES utf8mb4'
character-set-server = utf8mb4
skip-character-set-client-handshake
[client]
default-character-set = utf8mb4
[mysql]
default-character-set = utf8mb4
After that :
sudo service mysql restart
then drop the database and create it from scratch.
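After the restart, it's worth confirming that the settings took effect before recreating the database; a sketch, where the database name mydb is a placeholder:

```sql
-- Check the effective character set and collation variables.
SHOW VARIABLES LIKE 'character\_set\_%';
SHOW VARIABLES LIKE 'collation%';

-- Recreate the database with an explicit character set, so it does not
-- depend on the server default ('mydb' is a placeholder name).
DROP DATABASE IF EXISTS mydb;
CREATE DATABASE mydb
  CHARACTER SET utf8mb4
  COLLATE utf8mb4_bin;
```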
Airflow web page shows:
"The scheduler does not appear to be running. Last heartbeat was received 6 hours ago.
The DAGs list may not update, and new tasks will not be scheduled"
Airflow is inoperable. It appears I ran out of disk space. I've manually cleared the log folder and now have disk space. When I run "airflow scheduler" I get the error messages below. I do not know how to resolve this.
airflow scheduler
[2023-02-10 21:10:54,079] {cli_action_loggers.py:105} WARNING - Failed to log action with (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 18 for SQL Server][SQL Server]Could not allocate space for object 'dbo.log'.'PK__log__3213E83F7F1F073F' in database 'airflow' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup. (1105) (SQLExecDirectW)")
[SQL: INSERT INTO log (dttm, dag_id, task_id, event, execution_date, owner, extra) OUTPUT inserted.id VALUES (?, ?, ?, ?, ?, ?, ?)]
[parameters: (datetime.datetime(2023, 2, 10, 21, 10, 54, 51696, tzinfo=Timezone('UTC')), None, None, 'cli_scheduler', None, 'root', '{"host_name": "plappnx-1", "full_command": "[\'/usr/local/bin/airflow\', \'scheduler\']"}')]
(Background on this error at: http://sqlalche.me/e/14/f405)
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 18 for SQL Server][SQL Server]Could not allocate space for object 'dbo.job'.'PK__job__3213E83F7D216A15' in database 'airflow' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup. (1105) (SQLExecDirectW)")
[SQL: INSERT INTO job (dag_id, state, job_type, start_date, end_date, latest_heartbeat, executor_class, hostname, unixname) OUTPUT inserted.id VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)]
[parameters: (None, <TaskInstanceState.RUNNING: 'running'>, 'SchedulerJob', datetime.datetime(2023, 2, 10, 21, 10, 54, 981528, tzinfo=Timezone('UTC')), None, datetime.datetime(2023, 2, 10, 21, 10, 54, 981540, tzinfo=Timezone('UTC')), 'SequentialExecutor', 'plappnx-1', 'root')]
The problem is related neither to Airflow nor to disk space; it's a DB problem: you added MAXSIZE when you created your DB, and the DB log table (not the Airflow log) has already reached this limit.
You can delete some of the DB log entries to unblock your Airflow workload, but you need a persistent solution, like increasing MAXSIZE or setting it to unlimited.
Here is a blog which explains the problem and proposes some solutions.
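A sketch of the persistent fix in T-SQL; the logical file name airflow below is an assumption, so check sys.database_files for the real one first:

```sql
-- Find the logical file names and their current size limits.
SELECT name, type_desc, size, max_size
FROM airflow.sys.database_files;

-- Remove the cap on the PRIMARY filegroup's data file
-- ('airflow' below is an assumed logical file name).
ALTER DATABASE airflow
MODIFY FILE (NAME = airflow, MAXSIZE = UNLIMITED);
```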
The following MariaDB statement uses JSON_EXTRACT to turn the JSON-escaped string a into a JSON-typed value, and then this value is compared to itself. The comparison comes up as not equal. I thought that equality was reflexive (barring tricky things involving NULL and NaN), that is, a value is always equal to itself. What am I misunderstanding?
SELECT
JSON_EXTRACT('"a"', '$'),
JSON_EXTRACT('"a"', '$') =
JSON_EXTRACT('"a"', '$');
Server info:
Server: Localhost via UNIX socket
Server type: MariaDB
Server connection: SSL is not being used Documentation
Server version: 10.6.7-MariaDB-2ubuntu1-log - Ubuntu 22.04
Protocol version: 10
User: phpmyadmin@localhost
Server charset: UTF-8 Unicode (utf8mb4)
Apparently the solution is to use JSON_EQUALS(), which was added in MariaDB 10.7. I don't have an instance of MariaDB 10.7 so I can't test it, and dbfiddle only goes up to MariaDB 10.6.
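Assuming the two-argument JSON_EQUALS(doc1, doc2) signature from the MariaDB 10.7 documentation, the call would look like this (untested, for the reason above):

```sql
-- MariaDB 10.7+ only: compare two JSON documents structurally.
SELECT JSON_EQUALS(
  JSON_EXTRACT('"a"', '$'),
  JSON_EXTRACT('"a"', '$')
) AS equal_json;  -- should return 1 on 10.7+
```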
You can, however, unquote the JSON to extract the string value, and test for equality.
SELECT
JSON_UNQUOTE(JSON_EXTRACT('"a"', '$')) AS a,
JSON_UNQUOTE(JSON_EXTRACT('"a"', '$')) =
JSON_UNQUOTE(JSON_EXTRACT('"a"', '$')) AS `a=a`;
a    a=a
a    1
https://dbfiddle.uk/?rdbms=mariadb_10.6&fiddle=6fa8c7156e6fe9213bfbf44dd57e2c63
Setup:
Corda: 4.6
Tokens SDK: 1.2.2
Problem:
When issuing/moving Confidential Fungible and Non-Fungible tokens using Flows:
ConfidentialIssueTokens()
ConfidentialMoveFungibleTokens()
ConfidentialMoveNonFungibleTokens()
If an Observer is included, an error will occur.
When testing with a MockNetwork the following error is reported:
[ERROR] 13:45:18 [Mock network] SqlExceptionHelper. - NULL not allowed
for column "HOLDER"; SQL statement: insert into non_fungible_token
(holder, issuer, token_class, token_identifier, output_index,
transaction_id) values (?, ?, ?, ?, ?, ?) [23502-199]
When running nodes locally using Cordform, the following error appears in the Observer's log:
Caused by: org.h2.jdbc.JdbcSQLIntegrityConstraintViolationException:
NULL not allowed for column "HOLDER"; SQL statement: insert into
fungible_token (amount, holder, issuer, holding_key, token_class,
token_identifier, output_index, transaction_id) values (?, ?, ?, ?, ?,
?, ?, ?) [23502-199]
The Observer will not receive the state, and a flow will be entered into their Flow Hospital. Otherwise the transaction seems to be successful: the tokens are successfully issued/moved to the appropriate Party's vaults.
I observe a strange situation in Windows 10 with XAMPP Control Panel v3.2.4.
A terminal window is launched and connected to the MySQL DB.
Note: the command 'chcp 65001' was issued in the terminal window beforehand to support UTF-8 encoding.
Now, when I attempt to update a table with a value in Cyrillic, MySQL complains about an unclosed quote symbol. If I replace the Cyrillic input with English, the command is accepted.
MariaDB [youtube]> update episodes set name='Катя' where id=11;
'>
If I attempt to insert a new record into the DB, the same thing happens:
MariaDB [youtube]> insert into episodes (youtube_id,series_id,season,episode,name) values (12345678904,1,0,1,'Катя');
'>
If double quotes are used, the situation is the same:
MariaDB [youtube]> insert into episodes (youtube_id,series_id,season,episode,title) values (12345678904,1,0,1,"Катя");
">
What magic touch is required to make it work through the terminal window?
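To see why a console/client charset mismatch can break quoting, it helps to compare the raw bytes that different encodings produce for the same value; a small Python sketch using the 'Катя' value from the failing statements:

```python
value = "Катя"

# UTF-8: what the server expects after SET NAMES utf8mb4.
print(value.encode("utf-8"))   # b'\xd0\x9a\xd0\xb0\xd1\x82\xd1\x8f'

# CP1251 / CP866: what a Windows console code page may actually emit.
print(value.encode("cp1251"))  # b'\xca\xe0\xf2\xff'
print(value.encode("cp866"))

# If the client and server disagree about which of these byte streams is
# in play, the bytes before the closing quote can be misread, which is
# consistent with the client waiting for a closing quote ('>).
```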
Update:
John suggested looking into the MariaDB configuration file for the UTF-8 settings.
The settings were changed to the following, and the problem still persists:
# The MySQL server
default-character-set=utf8mb4
[mysqld]
init-connect='SET NAMES utf8'
character_set_server=utf8
collation_server=utf8_unicode_ci
skip-character-set-client-handshake
character_sets-dir="C:/bin/XAMPP/App/xampp/mysql/share/charsets"
Initially the settings were:
# The MySQL server
default-character-set=utf8mb4
[mysqld]
character-set-server=utf8mb4
collation-server=utf8mb4_general_ci
Server status report
MariaDB [youtube]> \s
--------------
mysql Ver 15.1 Distrib 10.4.10-MariaDB, for Win64 (AMD64), source revision c24ec3cece6d8bf70dac7519b6fd397c464f7a82
Connection id: 17
Current database: youtube
Current user: root@localhost
SSL: Not in use
Using delimiter: ;
Server: MariaDB
Server version: 10.4.10-MariaDB mariadb.org binary distribution
Protocol version: 10
Connection: localhost via TCP/IP
Server characterset: utf8
Db characterset: utf8mb4
Client characterset: utf8mb4
Conn. characterset: utf8mb4
TCP port: 3306
Uptime: 11 min 12 sec
Threads: 7 Questions: 59 Slow queries: 0 Opens: 22 Flush tables: 1 Open tables: 16 Queries per second avg: 0.087
--------------
The MariaDB documentation has a reference to an option --default-character-set=name.
An attempt to use --default-character-set=utf8mb4 on the command line had no effect on the insert/update behavior in the terminal client:
mysql -u root -p --default-character-set=utf8mb4 youtube
....
MariaDB [youtube]> update episodes set title='Катя' where id=11;
'>
I highly recommend getting a copy of the freeware program HeidiSQL. It's not perfect and even crashes occasionally, but compared to everything else I've worked with? Oh boy, totally worth my time.
Secondly you want to make sure that you're using the following:
Database Character Set: utf8mb4
Database Collation: utf8mb4_unicode_520_ci
These have the greatest UTF-8 support, from what I've read from Stack Overflow member Rick James via his website. He's a database superstar here on SO, so if you ever hire him, dump buckets of money on his face. His site has a comparison chart. In fact, it's been a while, and 520 might have been superseded since I last checked.
To set/change the database character set you will need to change the my.cnf configuration file for MariaDB (I recommend Notepad++ for code editing). This should make any newly created databases use the correct encoding; however, you may have to go through and manually update the character sets and collations for existing databases, tables, and table columns, so do not forget to be thorough!
[client]
default-character-set = utf8mb4
[mysqld]
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_520_ci
Once you do that your queries with Russian/Greek/etc should work normally as they do with English. Keyword: should.
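The manual updates mentioned above can be sketched like this; mydb is a placeholder, and episodes is the table from the question:

```sql
-- Set the default for the database itself ('mydb' is a placeholder).
ALTER DATABASE mydb
  CHARACTER SET utf8mb4
  COLLATE utf8mb4_unicode_520_ci;

-- Convert an existing table, including its text columns.
ALTER TABLE episodes
  CONVERT TO CHARACTER SET utf8mb4
  COLLATE utf8mb4_unicode_520_ci;
```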
Since I only speak two languages (English and bad English), I encode all characters past a certain Unicode numeric point just to be certain. It takes up a bit more space; however, characters are sometimes added to Unicode for a language after the majority of that language has been defined, potentially fragmenting language support. If you're interested, comment and I'll go find that code for you.
I'm at a moderate level of comprehension and I'm no Rick James, though I have about two or three dozen translation pages on the site in my profile (use the search feature and search for 'translation') if you want to see the output. After I did these things, I stopped having data get corrupted. I hope this helps!
I have installed MariaDB on Ubuntu 14.04 and am trying to run some scripts provided by the main solution (ViciDial). When I try to execute the SQL file, it gives an error on the following CREATE TABLE statement:
CREATE TABLE www_phrases (
phrase_id INT(10) UNSIGNED AUTO_INCREMENT PRIMARY KEY NOT NULL,
phrase_text VARCHAR(10000) default '',
php_filename VARCHAR(255) NOT NULL,
php_directory VARCHAR(255) default '',
source VARCHAR(20) default '',
insert_date DATETIME,
index (phrase_text)
) ENGINE=MyISAM CHARACTER SET utf8 COLLATE utf8_unicode_ci;
The error is:
ERROR 1071 (42000) at line 3348: Specified key was too long; max key length is 1000 bytes
MariaDB status:
MariaDB [DialerDB]> status;
--------------
mysql Ver 15.1 Distrib 10.0.38-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
Connection id: 53
Current database: DialerDB
Current user: root@localhost
SSL: Not in use
Current pager: stdout
Using outfile: ''
Using delimiter: ;
Server: MariaDB
Server version: 10.0.38-MariaDB-0ubuntu0.16.04.1 Ubuntu 16.04
Protocol version: 10
Connection: Localhost via UNIX socket
Server characterset: utf8mb4
Db characterset: utf8mb4
Client characterset: utf8mb4
Conn. characterset: utf8mb4
UNIX socket: /var/run/mysqld/mysqld.sock
Uptime: 1 hour 15 min 49 sec
As far as I understand, the key length limit in MyISAM is 1000 bytes, and in newer InnoDB versions it is around 3072 bytes. Since utf8 uses up to 3 bytes per character, indexing a VARCHAR(10000) could require up to 30,000 bytes, so this is an error, correct?
But this software installs correctly when done via the installer (an ISO image), and the DB tables are the same... so there must be some config limiting my MariaDB here.
Any idea?
If that column has "text" in it, I suggest looking into a FULLTEXT index; it will search for words quite efficiently.
A compilation of the limits is here; a 10K column won't work for a simple index.
(Meanwhile, you should move from MyISAM to InnoDB.)
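Using the column names from the question, two possible sketches: a prefix index that stays under the 1000-byte MyISAM key limit (utf8 is up to 3 bytes per character, so a 250-character prefix is at most 750 bytes), or the FULLTEXT index suggested above:

```sql
-- Option 1: index only a prefix of the column (250 chars * 3 bytes = 750 bytes).
ALTER TABLE www_phrases ADD INDEX (phrase_text(250));

-- Option 2: a FULLTEXT index for efficient word search.
ALTER TABLE www_phrases ADD FULLTEXT (phrase_text);
```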