Postgres: org.postgresql.util.PSQLException: ERROR: insufficient data left in message - postgresql-9.1

I read enough to know that this occurs when a string contains some characters that Postgres doesn't like. However, I cannot figure out if there is a way to validate strings before writing them. In particular, I'm doing batch inserts.
insert into foo(col1,col2,col3) values ('a',2,3),('b',4,0),....
My DB is setup like this:
    Name    | Owner | Encoding | Collate | Ctype | Access privileges
------------+-------+----------+---------+-------+-------------------
 stats      | me    | UTF8     | C       | C     |
Periodically, some bad string will get in and the whole insert will fail (e.g. change���=). I batch up quite a few values in a single insert, so I'd ideally like to validate each string rather than bomb the whole insert. Is there a list of which characters are not allowed in a Postgres insert?
Using postgresql-jdbc 9.1-901.jdbc4

This message means that your string data has a null character "\0" in it.
I can't find an authoritative cite for this (let me know if you have one).
It is discussed at https://www.postgresql.org/message-id/alpine.BSO.2.00.0906031639270.2432%40leary.csoft.net
It is mentioned in passing in the official docs at https://www.postgresql.org/docs/9.3/static/functions-string.html
All other characters are allowed.
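A quick way to see the server-side behavior for yourself (a minimal psql sketch; nothing here depends on your table):
-- PostgreSQL refuses to construct a text value containing NUL:
SELECT chr(0);
-- ERROR:  null character not permitted
-- Any other code point is fine, e.g. a newline:
SELECT 'a' || chr(10) || 'b';
Since the server rejects the value outright, the practical fix for batch inserts is to strip "\0" from each string on the client before binding it (e.g. value.replace("\u0000", "") in Java) rather than trying to validate against a list of bad characters.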

This can also happen when the bound data type does not match the target column type, for example int4 -> int8.

In my case, I was loading a query from an SQL file, and the problem was the file's encoding.
I changed it to UTF-8 and it worked. Hope that helps!

Related

How to update an existing table's datetime column with the current datetime in sqlite

I have an existing sqlite database with a table in it something like this:
+-------+--------+----------+----------------+
| LogID | UsedOn | UserID   | Other fields() |
+-------+--------+----------+----------------+
| 1     |        | soemid03 | SomeDataHere   |
+-------+--------+----------+----------------+
Etc....
The UsedOn field is currently blank: when I made the table I accidentally forgot to set the field type to a timestamp type, so the application was just inserting the other columns and leaving this one blank.
Because I would like to compare against the timestamp at some point later, I would like to update this column for all rows in the table with the current timestamp; I assume I can use datetime() in sqlite to do this. It does not matter too much that some of the dates and times will be out by a few days, but the field cannot be empty or my comparison code would not work.
I tried using:
UPDATE tablename SET UsedOn=datetime()
This was accepted as a valid query, but it seems to do nothing; the column is still empty.
Perhaps I'm doing this wrong in some way?
I can only edit the database/table via either manual queries or the 'SQLite Administrator' app (from http://sqliteadmin.orbmu2k.de/). I can't use anything else, because that is what is available and I'm not allowed to install any other database management tools. When I try to edit any row in the table to add a datetime manually, it does not get accepted, but I assume this is because the app is trying to insert what I type as a string (even though the format is correct) and it's not a string field type.
I tried your code in SQLite Administrator and it does not work, although it should.
What does work is:
UPDATE tablename
SET UsedOn = CURRENT_TIMESTAMP
This does not mean that your code is wrong; in any other tool, such as DB Browser for SQLite, both statements work.
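For completeness, a minimal sketch (using the question's table name) of an equivalent form, plus a quick check that the column was actually populated:
UPDATE tablename SET UsedOn = datetime('now');
SELECT LogID, UsedOn FROM tablename LIMIT 5;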

How to extract a Teradata .TPT file with UTF-8 encoding

We are currently extracting several Teradata .TPT files that we will upload to AWS S3, but the files are coming out ANSI-encoded.
I need them to be encoded in UTF-8.
You must specify the character set in your TPT script. At the top add:
USING CHARACTER SET UTF8
The tricky part is that UTF8 here uses up to 3 bytes per character, so in your DEFINE SCHEMA you must triple the size of each field.
For example if your schema looks like:
DEFINE SCHEMA s_some_export
(
    status    VARCHAR(20),
    userid    VARCHAR(20),
    firstname VARCHAR(64)
);
You'll have to triple the values to accommodate your UTF8 characters:
DEFINE SCHEMA s_some_export
(
    status    VARCHAR(60),
    userid    VARCHAR(60),
    firstname VARCHAR(192)
);
Sometimes, because I'm lazy, I define my TPT with USING CHARACTER SET UTF16 so that I only need to double each field size (the math is easier). BUT it means I have to convert the output to UTF-8 after extraction. On Linux that is just iconv -f UTF-16LE -t UTF-8 myoutputfile.csv > myoutputfile.utf8.csv
Some caveats:
If your table's field is defined as CHAR with CHARACTER SET LATIN, then you may run into column size issues with your schema.
Dates and timestamps can get weird, since they don't need to be doubled or tripled, so defining them as VARCHAR in your schema can get you into trouble; you may have to fuss around a bit here. My suggestion would be to change the view from which you are selecting the data for your TPT to CAST(yourdate AS VARCHAR(10)) AS yourdate, and then use VARCHAR(30) in your schema, so you don't have to think about the field types while defining it (see the sketch below). This means extra CPU overhead in your extraction, but unless you are running tight on resources I think it's worth it. I'm also very lazy that way and always happy to just get the damned TPT to extract data without much debugging.
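A rough sketch of that last caveat (some_db.some_view and yourdate are made-up names):
/* In the view/SELECT feeding the export, pre-cast the date to text: */
SELECT status,
       userid,
       CAST(yourdate AS VARCHAR(10)) AS yourdate
FROM some_db.some_view;

/* Then the schema only deals in VARCHARs (10 chars * 3 bytes for UTF8): */
DEFINE SCHEMA s_some_export
(
    status   VARCHAR(60),
    userid   VARCHAR(60),
    yourdate VARCHAR(30)
);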

What is the meaning of INDEX_COMMENT from the STATISTICS table?

MariaDB provides the INFORMATION_SCHEMA.STATISTICS table. After the COMMENT column there is a column INDEX_COMMENT, but its meaning is currently undocumented on their site.
Does anybody know the purpose of INDEX_COMMENT?
From https://mariadb.com/kb/en/library/create-index/
index_option:
KEY_BLOCK_SIZE [=] value
| index_type
| WITH PARSER parser_name
| COMMENT 'string' -- It probably comes from here
That existed at least as far back as MySQL 5.5.
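A minimal sketch to confirm the round trip (the table users and index idx_users_name are made up):
-- Attach a comment to an index, then read it back from STATISTICS:
CREATE INDEX idx_users_name ON users (name) COMMENT 'covering index for name lookups';

SELECT INDEX_NAME, INDEX_COMMENT
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_NAME = 'users'
  AND INDEX_NAME = 'idx_users_name';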

Dump SQLite table/DB without auto-generated columns (e.g id)

I have an SQLite 3 database (db) like in this simplified example:
CREATE TABLE user(id integer primary key, name text);
INSERT INTO "user" VALUES(1,'user1');
INSERT INTO "user" VALUES(2,'user2');
If I enter .dump, sqlite will wrap these statements in a transaction and write them to the file db.sql previously defined with .output. This is fine if I need to import the data to an empty DB.
I want to be able to import the user data to a different DB with other users defined. If I try, I will most likely get something like this (as the ids may be used already in the target DB):
Error: near line 4: PRIMARY KEY must be unique
Error: near line 5: PRIMARY KEY must be unique
My approaches:
I can tweak the dumped SQL manually to remove everything but the inserts and also remove the id column (mentioning only name) in the statement, but this approach does not scale as I want to automate the process.
I can select from the db and write the SQL myself.
Is there any easy or more elegant approach that I am missing? sqlite3 will be run from a Bash script in the real world problem.
You can automate the tweaking (but this still requires that you know the table structure):
$ (echo ".mode insert"; echo "SELECT name FROM user;") | \
    sqlite3 my.db | \
    sed -e 's/^INSERT INTO table /INSERT INTO user(name) /'
INSERT INTO user(name) VALUES('user1');
INSERT INTO user(name) VALUES('user2');
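If both database files are reachable from one sqlite3 session, an alternative sketch that skips text dumps entirely (the file name target.db is made up) is to copy the rows directly and let the target assign fresh ids:
-- Copy rows into the other database; id is omitted so it is auto-assigned:
ATTACH DATABASE 'target.db' AS target;
INSERT INTO target.user(name) SELECT name FROM user;
DETACH DATABASE target;
This still requires knowing the column list, but it is easy to drive from a Bash script by passing the statements to sqlite3 my.db.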

.schema for postgres

I'm migrating a database from sqlite3 to postgres and am wondering if there are any short tutorials that can teach me the new syntax.
Also, as a short term question, how do I see the schema of a postgres table which is equivalent to .schema in sqlite?
You could use the pg_dump command line utility, e.g.:
pg_dump --table <table_name> --schema-only <database_name>
Depending on your environment you probably need to specify connection options (-h, -p, -U switches).
You could use \d from within psql:
=> \?
...
Informational
(options: S = show system objects, + = additional detail)
\d[S+]                 list tables, views, and sequences
\d[S+]  NAME           describe table, view, sequence, or index
...
=> \d people
Table "public.people"
Column | Type | Modifiers
------------------------+-----------------------------+-----------------------------------------------------
id | integer | not null default nextval('people_id_seq'::regclass)
created_at | timestamp without time zone | not null
updated_at | timestamp without time zone | not null
...
Indexes:
"people_pkey" PRIMARY KEY, btree (id)
...
Check constraints:
"chk_people_latlng" CHECK ((lat IS NULL) = (lng IS NULL))
....
You can also root around in the information_schema if you're not inside psql.
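For example, roughly the same column information that \d shows can be pulled with plain SQL from any client (using the people table from above):
-- Column definitions straight from information_schema:
SELECT column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_name = 'people'
ORDER BY ordinal_position;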
If you are using psql (and \d...) then you can
\set ECHO_HIDDEN
to see the SQL for the queries that psql executes to put together the \d... output. This is useful not only as SQL syntax examples, but it also shows you where to find, and how to connect, the database metadata.
To get the schema name for a table you can:
SELECT n.nspname AS schema_name,
c.relname AS table_name
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = '<table_name>'
;
(I don't know how that compares to .schema.)
Maybe you can use a PostgreSQL Cheat Sheet:
http://www.postgresonline.com/special_feature.php?sf_name=postgresql83_cheatsheet&outputformat=html
