RS-DBI driver warning: (unrecognized MySQL field type 7 in column 1 imported as character)

I'm trying to run a simple query that works in the MySQL client and through other MySQL connector APIs:
SELECT * FROM `table` WHERE type = 'farmer'
I've tried various methods using the RMySQL package and they all produce the same warning:
RS-DBI driver warning: (unrecognized MySQL field type 7 in column 1 imported as character)
Type <- 'farmer'
(Query <- paste0("SELECT * FROM `table` WHERE type = '%", Type, "%'"))
res <- dbGetQuery(con, Query)
Query <- paste("SELECT * FROM `table` WHERE type = \'farmer\'")
Query <- paste("SELECT * FROM `table` WHERE type = 'farmer'")
What am I doing wrong?

"type" is a keyword in MYSQL. Surround the it with backticks to escape field names.
SELECT * FROM `table` WHERE `type` = 'farmer'
Also you probably have a time stamp column in your table. R is known to not recognize that column type. Convert it to a unix time stamp in the portion of the SQL statement.
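A minimal sketch of that conversion using UNIX_TIMESTAMP() (the column names id and created_at are assumptions; substitute your actual columns):
-- Cast the timestamp column to a Unix timestamp so RMySQL imports it
-- as a plain number instead of an unrecognized field type
SELECT id, UNIX_TIMESTAMP(created_at) AS created_at_unix
FROM `table`
WHERE `type` = 'farmer'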

Looks like the db schema has something in column 1 which is of type 7 -- and that type appears to be unknown to the RMySQL driver.
I would try excluding column one from the query, or casting it at the select ... level, e.g. via something like
select cast(foo as char) as foo, bar, bim, bom from `table` where ...

To be clear, when I encountered this error message, it was because my data field was a timestamp.
I verified this by changing my query to SELECT created_at FROM ..., which caused the error. I also verified it by changing the query not to include the timestamp columns, after which I had no errors.
Note too that the error message counts columns starting from 0 (instead of 1, as R does).
IMHO, the answer is that you aren't doing anything wrong; it's something that needs to be fixed in RMySQL.
The workaround is that after you read in your data, you need to call one of the several possible character-to-datetime conversion functions. (Which one depends on what you want to do with the timestamp exactly.)
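For instance, a minimal sketch in base R (the column name created_at and the format string are assumptions; adjust both to your data):
# Convert the character column RMySQL hands back into a POSIXct datetime
res$created_at <- as.POSIXct(res$created_at, format = "%Y-%m-%d %H:%M:%S", tz = "UTC")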

Related

Can not create varchar column in mariadb

This code does not run:
alter table States
add Key varchar(400) character set utf8 not null
The error:
Error in query (1064): Syntax error near 'varchar(400) character set utf8 not null' at line 2
What is wrong with this code?
Key is a reserved keyword in MySQL (hence in MariaDB too), so to use it as an identifier you must enclose it in backticks. This will do:
alter table `States` add `Key` varchar(400) character set utf8 not null
I've also enclosed States in backticks, just for the sake of consistency, even though it's not really required.
Another option would be to use a different name for the new column, to avoid the same problem in queries involving it afterwards.
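For example, a hypothetical follow-up query, just to show that the escaping is needed wherever the column is referenced:
-- Every later reference to the column needs the same backticks
SELECT `Key` FROM `States` ORDER BY `Key`;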

loading in a MySQL table called "order" with RMySQL

I'm currently trying to connect my R session to a MySQL server using the RMySQL package.
One of the tables on the server is called "order". I already searched how to import a table called order with MySQL (by putting it into ''), yet that syntax does not work for the RMySQL query.
When I run the following statement:
order_query = dbSendQuery(mydb,"SELECT * FROM 'order'")
It returns the following error:
Error in .local(conn, statement, ...) : could not run statement:
You have an error in your SQL syntax; check the manual that
corresponds to your MySQL server version for the right syntax to use
near ''order'' at line 1
Does anyone know how to get around this in R?
Single quotes in MySQL indicate string literals, and you should not be putting them around your table names. Try the query with backticks instead:
order_query = dbSendQuery(mydb, "SELECT * FROM `order`")
If you did, for some reason, need to escape your table name, then use backticks, e.g.
SELECT * FROM `some table` -- table name with a space (generally a bad thing)
Edit:
As @Ralf pointed out, in this case you do need the backticks because ORDER is a MySQL keyword, and you should not be using it to name your tables and columns.
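Alternatively, from the R side you can let DBI do the quoting for you; a small sketch (assuming mydb is the open RMySQL connection from above):
# dbQuoteIdentifier() produces an identifier safely quoted for the target DBMS
library(DBI)
tbl <- dbQuoteIdentifier(mydb, "order")
order_query <- dbSendQuery(mydb, paste0("SELECT * FROM ", tbl))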

teradata: how to cast using the length of a column

I need to use the cast function with the length of a column in Teradata.
Say I have a table with the following data:
id | name
1  | dhawal
2  | bhaskar
I need to use a cast operation, something like
select cast(name as CHAR(<length of column>)) from table
How can I do that?
Thanks,
Dhawal
You have to find the length by looking at the table definition - either manually (show table) or by writing dynamic SQL that queries dbc.ColumnsV.
Update:
You can find the maximum length of the actual data using
select max(length(cast(... as varchar(<large enough value>)))) from TABLE
But if this is for FastExport, I think casting as varchar(large-enough-value) and postprocessing to remove the 2-byte length info FastExport includes is a better solution (since exporting a CHAR() will result in a fixed-length output file with lots of spaces in it).
You may know this already, but just in case: Teradata usually recommends switching to TPT instead of the legacy fexp.
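A sketch of the dictionary lookup (the database, table, and column names are placeholders; note that ColumnLength is in bytes, not characters):
-- Look up the defined length of the column in the data dictionary
SELECT ColumnName, ColumnType, ColumnLength
FROM dbc.ColumnsV
WHERE DatabaseName = 'mydb'
  AND TableName = 'mytable'
  AND ColumnName = 'name';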

PostgreSQL, R and timestamps with no time zone

I am reading a big csv (>1 GB - big for me!). It contains a timestamp field.
I read it (100 rows to start with) with fread from the excellent data.table package.
ddfr <- fread(input="~/file1.csv", nrows=100, header=TRUE)
Problem 1 (RESOLVED): the timestamp fields (called "ts" and "update"), e.g. "02/12/2014 04:40:00 AM", are imported as strings. I convert the fields back to timestamps with the lubridate package's mdy_hms. Splendid.
ddfr$ts <- mdy_hms(ddfr$ts)
Problem 2 (NOT RESOLVED): the timestamp is created with a time zone, as per POSIXct.
How do I create in R a timestamp with NO TIME ZONE? Is it possible??
Now I use another (new) great package, PivotalR, to write the dataframe to PostgreSQL 9.3 using as.db.data.frame. It works like a charm.
x <- as.db.data.frame(ddfr, table.name= "tbl1", conn.id = 1)
Problem 3 (NOT RESOLVED): As the original dataframe timestamp fields had time zones, a table is created with the fields "timestamp with time zone". Ultimately the data needs to be stored in a table with fields configured as "timestamp without time zone".
But in my table in Postgres the data is stored as "2014-02-12 04:40:00.0", where the .0 at the end is the UTC offset. I think I need to have "2014-02-12 04:40:00".
I tried
ALTER TABLE tbl ALTER COLUMN ts type timestamp without time zone;
Then I copied across. While Postgres accepts the ALTER COLUMN command, when I try to copy (using INSERT INTO tbls SELECT ...) I get an error:
"column "ts" is of type timestamp without time zone but expression is of type text
Hint: You will need to rewrite or cast the expression."
Clearly the .0 at the end is not liked (but then why does Postgres accept the ALTER COLUMN? boh!).
I tried to do what the error suggested using CAST in the INSERT INTO query:
INSERT INTO tbl2 SELECT CAST(ts as timestamp without time zone) FROM tbl1
But I get the same error (including the suggestion to use CAST, aargh!).
The table directly created by PivotalR (based on the dataframe) has this CREATE script:
CREATE TABLE tbl2
(
businessid integer,
caseno text,
ts timestamp with time zone
)
WITH (
OIDS=FALSE
);
ALTER TABLE tbl1
OWNER TO mydb;
The table I'm inserting into has this CREATE script:
CREATE TABLE tbl1
(
id integer NOT NULL DEFAULT nextval('bus_seq'::regclass),
businessid character varying,
caseno character varying,
ts timestamp without time zone,
updated timestamp without time zone,
CONSTRAINT busid_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE tbl1
OWNER TO postgres;
My apologies for the convoluted explanation, but potentially a solution could be found at any step in the chain, so I preferred to put all my steps in one question. I am sure there has to be a simpler method...
I think you're confused about copying data between tables.
INSERT INTO ... SELECT without a column list expects the columns from source and destination to be the same. It doesn't magically match up columns by name; it just assigns columns from the SELECT to the INSERT from left to right until it runs out of columns, at which point any remaining columns are assumed to be null. So your query:
INSERT INTO tbl2 SELECT ts FROM tbl1;
isn't doing this:
INSERT INTO tbl2(ts) SELECT ts FROM tbl1;
it's actually picking the first column of tbl2, which is businessid, so it's really attempting to do:
INSERT INTO tbl2(businessid) SELECT ts FROM tbl1;
which is clearly nonsense, and no casting will fix that.
(Your error in the original question doesn't match your tables and queries, so the details might be different as you've clearly made a mistake in mangling/obfuscating your tables or posted a newer version of the tables than the error. The principle remains.)
It's generally a really bad idea to assume your table definitions won't change and column order won't change anyway. So always be explicit about columns. In this case I think your intention might have actually been:
INSERT INTO tbl2(businessid, caseno, ts)
SELECT CAST(businessid AS integer), caseno, ts
FROM tbl1;
Note the cast, because the type of businessid is different between the two tables.
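Incidentally, the trailing .0 the asker suspected is a fractional second, not a UTC offset, and PostgreSQL parses it without complaint. A quick illustrative check:
-- The literal itself casts cleanly; the original failure came from the
-- column mismatch, not from the ".0"
SELECT CAST('2014-02-12 04:40:00.0' AS timestamp without time zone);
-- returns 2014-02-12 04:40:00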

Notice: unserialize() [function.unserialize] in Drupal

I have this error appearing above the pages of my Drupal site:
Notice: unserialize() [function.unserialize]: Error at offset 0 of 32
bytes in C:\xampp\htdocs\irbid\includes\bootstrap.inc on line 559
What does this error mean and how can I fix it?
This is caused by a corrupt entry in the variable table. The value column of this table holds serialized PHP values.
See these for more information on what a serialized value is:
http://php.net/serialize
http://php.net/unserialize
Basically, if one of the values was changed by hand, it can cause something like this.
For example, the default value of the Anonymous variable is:
+-----------+------------------+
| name      | value            |
+-----------+------------------+
| anonymous | s:9:"Anonymous"; |
+-----------+------------------+
If you change the value to s:9:"Some other value"; then this will cause a problem.
The first character is the type of the value. The value s means STRING. Then the colon followed by a number indicates a length. In this case, the word Anonymous is exactly 9 characters. But there are more than 9 characters in Some other value - 16, in fact - so the correct way would be s:16:"Some other value";.
If someone stored the value unserialized (without the s:9:""; wrapper), it would also cause this problem.
I had this very problem in the past. I added some debug code to find out what variable was causing this. I added something like this:
$value = unserialize($variable->value);
if ($value === FALSE) {
  watchdog('unserialize', $variable->name);
}
I put this code right before the line causing the error, generated the error one more time, went to the "Recent Log Entries" in the Drupal admin (http://yoursite.com/admin/reports/dblog), and filtered by the type unserialize.
Once I had the name of the variable, I would connect to the database and perform this query:
SELECT * FROM variable WHERE name='name-goes-here';
using the name that I found in the logs.
Then I would look at the value, figure out why it was causing the error, and fix it.
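If the problem is only a wrong declared length, a hedged way to fix it is to let MySQL recompute the byte count (the variable name and value here are just the earlier example, not from a real error):
-- Rebuild the serialized string with the correct length;
-- LENGTH() counts bytes, which is what serialize() uses
UPDATE variable
SET value = CONCAT('s:', LENGTH('Some other value'), ':"', 'Some other value', '";')
WHERE name = 'anonymous';
You may also need to clear Drupal's caches afterwards so the variable gets re-read.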
I hope this helps.
My issue was related to UTF-8: the string was shorter character-wise (because it contained multi-byte UTF-8 characters), but unserialize() expected the byte length, which was longer.
Solution:
/**
 * Unserialize values that may contain multi-byte UTF-8 strings.
 *
 * @param $serial_str
 *   Input string to unserialize.
 *
 * @return
 *   The unserialized value, or nothing if unserialization fails.
 *
 * @author Mudassar Ali <sahil_bwp@yahoo.com>
 */
function mb_unserialize($serial_str) {
  $return = '';
  // Recompute each declared string length with strlen() before unserializing.
  // (The /e modifier is deprecated as of PHP 5.5; this is the original approach.)
  $out = preg_replace('!s:(\d+):"(.*?)";!se', "'s:'.strlen('$2').':\"$2\";'", $serial_str);
  $return = unserialize($out);
  if ($return === FALSE) {
    watchdog('unserialize', $out);
  }
  else {
    return $return;
  }
}
and
$module->info = mb_unserialize($module->info);
instead of
$module->info = unserialize($module->info);
Make sure the default character set on your server is UTF-8 and the default collation is utf8_general_ci. This is a setting in my.ini / my.cnf. Here's the nuclear option:
[mysqld]
default-character-set = utf8
character-set-server = utf8
collation-server = utf8_unicode_ci
init-connect = 'SET NAMES utf8'
[client]
default-character-set = utf8
[mysql]
default-character-set = utf8
[mysqldump]
default-character-set = utf8
and also make sure your DB is set up the same way:
ALTER DATABASE databasename CHARACTER SET utf8 COLLATE utf8_general_ci;
ALTER TABLE tablename CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;
To find the variables that might be causing the issue, look for the one that is broken and delete it from the database. (The 32 here matches the "32 bytes" in the error message above.)
SELECT name, LENGTH(value), value FROM variable WHERE LENGTH(value) = 32;
DELETE FROM variable WHERE name = 'broken_variable_name';
Why this error is thrown
This error is caused when an entry in Drupal's variable table isn't in the right format. It often occurs if your host automatically installs Drupal and doesn't do it right (like my host likes to do), or if variables haven't been created properly.
Identify the malformed variable
I made a module but you could just write this into a PHP filter input field, like a node or block (obviously, you'd need the core module "PHP Filter" turned on).
This code will output the contents of the variable table, so don't do this on a production site:
drupal_set_message(db_query('SELECT name, value FROM {variable}')->fetchAllKeyed());
Then you can just go through the list and find the one that is malformed.
How do I know which variable is malformed
Each row is in one of these formats, with two or three fields separated by colons. The second and third fields are values and vary depending on the variable name; as long as they're in approximately one of these formats, they should be OK:
s:16:"this is a string"
s is for String. 16 is the length of the string in characters. The third field is the value, in double quotes.
i:10
i is for Integer. 10 is the value of the integer.
b:0
b is for Boolean. 0 is the value.
a:0:{}
a is for Array. 0 is the number of elements, and the third field is the array. The array may contain any of the above data types (even another array).
The variable that isn't in one of the above formats is malformed.
Fixing the malformed variable
You should be able to isolate the problem, and if it's a variable like "site_name" or "site_mail" you can fix it by updating the configuration page where that variable is set (e.g. Site Information). If the malformed variable isn't one you recognise:
Put a line of code like this into a module or PHP filter input:
variable_set('the_name_of_the_malformed_variable', 'the_value_you_think_it_should_be');
Run it once and then remove it; your error should be fixed.
Follow the above at your own risk. If you have problems, leave a comment below.
I received this error after (during?) a core update of Drupal.
Notice: unserialize(): Error at offset 11 of 35 bytes in variable_initialize() (line 936 of /var/www/vhosts/3/101684/webspace/siteapps/Drupal-12836/htdocs/includes/bootstrap.inc).
I installed the variable check module (http://drupal.org/project/variablecheck) which identified the bad value:
update_notify_emails a:1:{i:0;s:26:"user@email.com";}
But this indicated that the function was expecting an array, not just a string, so I couldn't simply set a new value with
variable_set('the_name_of_the_malformed_variable', 'the_value_you_think_it_should_be');
When I checked the variable table in the MySQL db, the data value was a blob and I couldn't edit it. Not knowing what module set that value and what might break if I simply deleted it, I decided to try "re-setting" the array to clear it out.
Google told me that "update_notify_emails" is set by the Update module, so I went to Modules, clicked Configure for the Update Manager,
and edited the value for "E-mail addresses to notify when updates are available" (mine was blank). Since the error indicated it was expecting both an int and a string, I also flipped the setting on "Only security updates" so that value was passed in as well. Clicked save, and the error went away.
