Notice: unserialize() [function.unserialize] in Drupal

I have this error appearing above the pages of my Drupal site:
Notice: unserialize() [function.unserialize]: Error at offset 0 of 32
bytes in C:\xampp\htdocs\irbid\includes\bootstrap.inc on line 559
What does this error mean and how can I fix it?

This is caused by a corrupt entry in the variable table. The value column of this table holds serialized PHP values.
See these pages for more information on what a serialized value is:
http://php.net/serialize
http://php.net/unserialize
Basically, if one of the values was changed by hand, it can cause something like this.
For example, the default value of the Anonymous variable is:
+-----------+------------------+
| name      | value            |
+-----------+------------------+
| anonymous | s:9:"Anonymous"; |
+-----------+------------------+
If you change the value to s:9:"Some other value"; then this will cause a problem.
The first character is the type of the value. The letter s means STRING. The colon followed by a number indicates the length. The word Anonymous is exactly 9 characters, but Some other value is 16 characters, so the correct serialization would be s:16:"Some other value";.
If someone stored the value without serializing it at all (without the s:9:"";), it would also cause this problem.
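To see the failure mode in isolation, here is a minimal PHP sketch (the broken value is the hand-edited one from above):
// serialize() records the byte length of the string it encodes.
var_dump(serialize('Anonymous'));                  // s:9:"Anonymous";
// A hand-edited value with a stale length fails with "Error at offset ...":
var_dump(unserialize('s:9:"Some other value";'));  // bool(false), plus a notice
// Recomputing the length fixes it:
var_dump(unserialize('s:16:"Some other value";')); // string(16) "Some other value"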
I had this very problem in the past. I added some debug code to find out what variable was causing this. I added something like this:
$value = unserialize($variable->value);
if ($value === FALSE) {
  watchdog('unserialize', $variable->name);
}
I put this code right before the line causing the error, generated the error one more time, and then went to the "Recent Log Entries" page in the Drupal admin (http://yoursite.com/admin/reports/dblog) and filtered by the type unserialize.
Once I had the name of the variable, I connected to the database and ran this query, substituting the name I found in the logs:
SELECT * FROM variable WHERE name = 'name-goes-here';
Then I looked at the value, figured out why it was causing the error, and fixed it.
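The fix is usually a one-line UPDATE; here is a hedged sketch using the anonymous example from above (remember to recompute the declared length so it matches the new string). Drupal caches variables, so you may also need to clear the caches before the notice disappears.
-- 'Anonymous' is 9 bytes long, so the declared length must be 9.
UPDATE variable SET value = 's:9:"Anonymous";' WHERE name = 'anonymous';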
I hope this helps.

My issue was related to UTF-8: the string was shorter character-wise (because it contained multi-byte UTF-8 characters) than the byte length unserialize() expected.
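For background, serialize() counts bytes rather than characters, so any re-encoding of the stored text invalidates the recorded lengths. A minimal illustration:
// 'café' is 4 characters but 5 bytes in UTF-8; serialize() records the 5.
var_dump(serialize('café'));  // s:5:"café";
// If the stored text is later re-encoded (e.g. a latin1/utf8 round trip
// through the database), the recorded byte count no longer matches the data
// and unserialize() fails with "Error at offset".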
Solution:
/**
 * Unserialize values whose recorded string lengths are wrong
 * (for example because of multi-byte UTF-8 characters).
 *
 * @param $serial_str
 *   Input string to unserialize.
 *
 * @return
 *   The unserialized value, or nothing if unserialization still fails.
 *
 * @author Mudassar Ali <sahil_bwp@yahoo.com>
 */
function mb_unserialize($serial_str) {
  // Recompute each declared string length from the actual bytes so that
  // multi-byte UTF-8 values unserialize correctly. preg_replace_callback()
  // is used because the /e modifier of preg_replace() was removed in PHP 7.
  $out = preg_replace_callback(
    '!s:(\d+):"(.*?)";!s',
    function ($matches) {
      return 's:' . strlen($matches[2]) . ':"' . $matches[2] . '";';
    },
    $serial_str
  );
  $return = unserialize($out);
  if ($return === FALSE) {
    watchdog('unserialize', $out);
  }
  else {
    return $return;
  }
}
and
$module->info = mb_unserialize($module->info);
instead of
$module->info = unserialize($module->info);
Make sure the default character set on the server is utf8 and the default collation is utf8_general_ci. This is a setting in my.ini (or my.cnf). Here's the nuclear option:
[mysqld]
default-character-set = utf8
character-set-server = utf8
collation-server = utf8_unicode_ci
init-connect = 'SET NAMES utf8'
[client]
default-character-set = utf8
[mysql]
default-character-set = utf8
[mysqldump]
default-character-set = utf8
and also make sure your database and tables use UTF-8:
ALTER DATABASE databasename CHARACTER SET utf8 COLLATE utf8_general_ci;
ALTER TABLE tablename CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

To find the variable causing the issue, note that the error message reports the byte length of the broken value (32 bytes in the question above). List the candidates by length, find the one that is broken, and delete it from the database:
SELECT name, LENGTH(value), value FROM variable WHERE LENGTH(value) = 32;
DELETE FROM variable WHERE name = "broken_variable_name";

Why this error is thrown
This error is caused when an entry in Drupal's variable table isn't in the right format. It often occurs if your host automatically installs Drupal and doesn't do it right (like my host likes to do), or if variables haven't been created properly.
Identify the malformed variable
I made a module, but you could just write this into a PHP filter input field, like a node or block (obviously, you'd need the core "PHP filter" module turned on).
This code will output the content of the variable table, so don't do this on a production site:
drupal_set_message(print_r(db_query('SELECT name, value FROM {variable}')->fetchAllKeyed(), TRUE));
Then you can just go through the list and find the one that is malformed.
How do I know which variable is malformed
Each row is in one of these formats, with fields separated by colons. The second and third fields are values and vary depending on the variable, but as long as a row is in approximately one of these formats, it should be OK:
s:16:"this is a string"
s is for String. 16 is the length of the string in bytes (the same as the character count for plain ASCII). The third field is the value in double quotes.
i:10
i is for Integer. 10 is the value of the integer.
b:0
b is for Boolean. 0 is the value.
a:0:{}
a is for Array. 0 is the number of elements, and the third field is the array contents. The array may contain any of the above data types (even another array).
The variable that isn't in one of the above formats is malformed.
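For reference, these are exactly the encodings that PHP's serialize() produces, so you can generate a known-good value to compare against:
var_dump(serialize('this is a string')); // s:16:"this is a string";
var_dump(serialize(10));                 // i:10;
var_dump(serialize(FALSE));              // b:0;
var_dump(serialize(array()));            // a:0:{}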
Fixing the malformed variable
You should be able to isolate the problem, and if it's a variable like "site_name" or "site_mail" you can fix it by re-saving the configuration page where that variable is set (e.g. Site Information). If the malformed variable isn't one you recognise:
Put a line of code like this into a module or PHP filter input.
variable_set('the_name_of_the_malformed_variable', 'the_value_you_think_it_should_be');
Run it once and then remove it; your error should be fixed.
Follow the above at your own risk. If you have problems, leave a comment below.

I received this error after (or during?) a core update of Drupal.
Notice: unserialize(): Error at offset 11 of 35 bytes in variable_initialize() (line 936 of /var/www/vhosts/3/101684/webspace/siteapps/Drupal-12836/htdocs/includes/bootstrap.inc).
I installed the Variable Check module (http://drupal.org/project/variablecheck), which identified the bad value:
update_notify_emails a:1:{i:0;s:26:"user@email.com";}
But this indicates that the function expects an array, not just a string, so I couldn't simply set a new value with
variable_set('the_name_of_the_malformed_variable', 'the_value_you_think_it_should_be');
When I checked the variable table in the MySQL DB, the data value was a blob and I couldn't edit it. Not knowing what module set that value and what might break if I simply deleted it, I decided to try "re-setting" the array to clear it out.
Google told me that "update_notify_emails" is set by the core Update module, so I went to Modules, clicked Configure for Update Manager,
and edited the value for "E-mail addresses to notify when updates are available" (mine was blank). Since the error indicated it was expecting both an int and a string, I also flipped the setting on "Only security updates" so that value was passed in as well. I clicked Save and the error went away.
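If you'd rather fix it in code, here is a hedged sketch (the address is a placeholder): since the variable holds an array, set it with an array value, or delete it and let the module fall back to its default:
// Reset the variable to a well-formed array value (placeholder address).
variable_set('update_notify_emails', array('user@example.com'));
// Or remove it entirely; the module will fall back to its default.
variable_del('update_notify_emails');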

Related

PLSQL: Find invalid characters in a database column (UTF-8)

I have a text column in a table which I need to validate to recognize which records have non-UTF-8 characters.
Below is an example record where there are invalid characters.
text = 'PP632485 - Hala A - prace kuchnia Zepelin, wymiana muszli, monta􀄪 tablic i uchwytów na r􀄊czniki, wymiana zamka systemowego'
There are over 3 million records in this table, so I need to validate them all at once and get the rows where this text column has non UTF-8 characters.
I tried below:
instr(text, chr(26)) > 0 - no records get fetched
text LIKE '%ó%' (tried this for a few invalid characters I noticed) - no records get fetched
update <table> set text = replace(text, 'ó', 'ó') - no change seen in text
Is there anything else I can do?
Appreciate your input.
This is Oracle 11.2
The characters you're seeing might be invalid for your data, but they are valid AL32UTF8 characters; otherwise they would not be displayed correctly. It's up to you to determine which character set contains the correct set of characters.
For example, to check if a string only contains characters in the US7ASCII character set, use the CONVERT function. Any character that cannot be converted into a valid US7ASCII character will be displayed as ?.
The example below first replaces the question marks with string '~~~~~', then converts and then checks for the existence of a question mark in the converted text.
WITH t (c) AS (
  SELECT 'PP632485 - Hala A - prace kuchnia Zepelin, wymiana muszli, monta􀄪 tablic i uchwytów na r􀄊czniki, wymiana zamka systemowego' FROM DUAL UNION ALL
  SELECT 'Just a bit of normal text' FROM DUAL UNION ALL
  SELECT 'Question mark ?' FROM DUAL
),
converted_t (c) AS (
  SELECT CONVERT(REPLACE(c, '?', '~~~~~'), 'US7ASCII', 'AL32UTF8')
  FROM t
)
SELECT CASE WHEN INSTR(c, '?') > 0 THEN 'Invalid' ELSE 'Valid' END AS status, c
FROM converted_t;
STATUS    C
-------   -----------------------------------------------------------------
Invalid   PP632485 - Hala A - prace kuchnia Zepelin, wymiana muszli, montao??? tablic i uchwyt??w na ro??Sczniki, wymiana zamka systemowego
Valid     Just a bit of normal text
Valid     Question mark ~~~~~
Again, this is just an example - you might need a less restrictive character set.
--UPDATE--
With your data, it's up to you to determine how you want to continue. Decide on a good target character set. Contrary to what I said earlier, it's not mandatory to pass a "from" character set argument to the CONVERT function.
Things you could try:
Check which characters show up as '�' when converting from UTF8 to AL32UTF8:
select * from G2178009_2020030114_dinllk
WHERE INSTR(CONVERT(text ,'AL32UTF8','UTF8'),'�') > 0;
Check if the converted text matches the original text. In this example I'm converting to UTF8 and comparing against the original; if the text contains characters that don't survive the conversion, the converted text will not be the same as the original text.
select * from G2178009_2020030114_dinllk
WHERE
CONVERT(text ,'UTF8') = text;
This should be enough tools for you to diagnose your data issue.
As shown by previous comments, you can detect the issue in place, but it's difficult to automatically correct in place.
I have used https://pypi.org/project/ftfy/ to correct invalidly encoded characters in large files.
It guesses what the actual UTF8 character should be, and there are some controls on how it does this. For you, the problem is that you have to pull the data out, fix it, and put it back in.
So assuming you can get the data out to the file system to fix it, you can locate files with bad encodings with something like this:
find . -type f | xargs -I {} bash -c "iconv -f utf-8 -t utf-16 {} &>/dev/null || echo {}"
This produces a list of files that potentially need to be processed by ftfy.
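Once you have that list, a minimal sketch of the ftfy step (file names are placeholders, and this assumes the data has been exported to plain text files):
# Run ftfy over one exported file; ftfy guesses the intended UTF-8 text.
import ftfy

with open('export.txt', encoding='utf-8', errors='replace') as src:
    fixed = ftfy.fix_text(src.read())

with open('export_fixed.txt', 'w', encoding='utf-8') as dst:
    dst.write(fixed)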

Add leading zeros to a character variable in progress 4gl

I am trying to import a .csv file to match the records in the database. However, the database records have leading zeros. This is a character field, and the amount of data is on the higher side.
The length of the field in the database is x(15).
The problem I am facing is that the .csv file contains data like AB123456789, whereas the database field has "00000AB123456789".
I am importing the .csv to a character variable.
Could someone please let me know what I should do to get the prefix zeros using a Progress query?
Thank you.
You need to FILL() the input string with "0" in order to pad it to a specific length. You can do that with code similar to this:
define variable inputText as character no-undo format "x(15)".
define variable n as integer no-undo.

input from "input.csv".

repeat:
  import inputText.
  n = 15 - length( inputText ).
  if n > 0 then
    inputText = fill( "0", n ) + inputText.
  display inputText.
end.

input close.
Substitute your actual field name for inputText and use whatever mechanism you are actually using for importing the CSV data.
FYI - the "length of the field in the database" is NOT "x(15)". That is a display format string. The data dictionary has a default format string that was created when the schema was defined, but it has absolutely no impact on what is actually stored in the database. ALL Progress data is stored as variable length. It is not padded to fit the display format and, in fact, it can be "overstuffed"; it is very, very common for applications to do so. This is a source of great frustration to SQL reporting tools that think the display format is some sort of length limit. It is not.
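A small sketch of that point, assuming a scratch variable (the FORMAT only affects display, not storage):
define variable c as character no-undo format "x(5)".

c = "longer than five".     /* stores fine despite the x(5) format         */
display length( c ).        /* 16 - the full string is stored              */
display c format "x(20)".   /* widening the format reveals the whole value */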

tdstats.UDFCONCAT parameter varchar limit

I was using tdstats.UDFCONCAT to aggregate the result of a query.
This function trims the resulting output; upon looking at the definition of this function, it accepts VARCHAR(128).
REPLACE FUNCTION tdstats.UDFCONCAT
(aVarchar VARCHAR(128) CHARACTER SET UNICODE)
RETURNS VARCHAR(10000) CHARACTER SET UNICODE
CLASS AGGREGATE (20000)
SPECIFIC udfConcat
LANGUAGE C
NO SQL
NO EXTERNAL DATA
PARAMETER STYLE SQL
NOT DETERMINISTIC
CALLED ON NULL INPUT
EXTERNAL NAME 'SL!staudf!F!udf_concatvarchar'
Can anybody tell me if there is any specific reason to keep it limited to 128?
Note: I have duplicated the function and increased the size from 128 to 256, and it worked in my case. If anybody wants to know my use case I can update here, but the question is about the default character limit in the built-in function as shipped by Teradata, so I have not added my use case.

How can I prevent SQLite from treating a string as a number?

I would like to query an SQLite table that contains directory paths to find all the paths under some hierarchy. Here's an example of the contents of the column:
/alpha/papa/
/alpha/papa/tango/
/alpha/quebec/
/bravo/papa/
/bravo/papa/uniform/
/charlie/quebec/tango/
If I search for everything under /bravo/papa/, I would like to get:
/bravo/papa/
/bravo/papa/uniform/
I am currently trying to do this like so (see below for the long story of why I can't use more simple methods):
SELECT * FROM Files WHERE Path >= '/bravo/papa/' AND Path < '/bravo/papa0';
This works. It looks a bit weird, but it works for this example. '0' is the unicode code point 1 greater than '/'. When ordered lexicographically, all the paths starting with '/bravo/papa/' compare greater than it and less than 'bravo/papa0'. However, in my tests, I find that this breaks down when we try this:
SELECT * FROM Files WHERE Path >= '/' AND Path < '0';
This returns no results, but it should return every row. As far as I can tell, the problem is that SQLite is treating '0' as a number, not a string. If I use '0Z' instead of '0', for example, I do get results, but I introduce a risk of getting false positives. (For example, if there actually was an entry '0'.)
The simple version of my question is: is there some way to get SQLite to treat '0' in such a query as the length-1 string containing the unicode character '0' (which should sort after strings such as '!', '*' and '/', but before '1', '=' and 'A') instead of the integer 0 (which SQLite sorts before all strings)?
I think in this case I can actually get away with special-casing a search for everything under '/', since all my entries will always start with '/', but I'd really like to know how to avoid this sort of thing in general, as it's unpleasantly surprising in all the same ways as Javascript's "==" operator.
First approach
A more natural approach would be to use the LIKE or GLOB operator. For example:
SELECT * FROM Files WHERE Path LIKE @prefix || '%';
But I want to support all valid path characters, so I would need to use ESCAPE for the '_' and '%' symbols. Apparently this prevents SQLite from using an index on Path (see http://www.sqlite.org/optoverview.html#like_opt). I really want to benefit from an index here, and it sounds like that's impossible with either LIKE or GLOB unless I can guarantee that none of their special characters occur in the directory names; POSIX allows anything other than NUL and '/', including GLOB's '*' and '?' characters.
I'm providing this for context. I'm interested in other approaches to solve the underlying problem, but I'd prefer to accept an answer that directly addresses the ambiguity of strings-that-look-like-numbers in SQLite.
Similar questions
How do I prevent sqlite from evaluating a string as a math expression?
In that question, the values weren't quoted. I get these results even when the values are quoted or passed in as parameters.
EDIT - See my answer below. The column was created with the invalid type "STRING", which SQLite treated as NUMERIC.
*Groan*. The column had NUMERIC affinity because it had accidentally been declared as "STRING" instead of "TEXT". Since SQLite didn't recognize the type name, it gave the column NUMERIC affinity, and because SQLite doesn't enforce column types, everything else worked as expected, except that any time a number-like string was inserted into that column it was converted to a numeric type.
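A minimal sketch of the difference (hypothetical table names):
-- "STRING" is not a recognized type name, so the column gets NUMERIC
-- affinity and '0' is silently stored as the integer 0:
CREATE TABLE Files_bad (Path STRING);
INSERT INTO Files_bad VALUES ('0');
SELECT typeof(Path) FROM Files_bad;   -- integer

-- TEXT gives the column TEXT affinity; comparisons stay lexicographic:
CREATE TABLE Files (Path TEXT);
INSERT INTO Files VALUES ('/bravo/papa/');
SELECT * FROM Files WHERE Path >= '/' AND Path < '0';   -- returns the row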

RS-DBI driver warning: (unrecognized MySQL field type 7 in column 1 imported as character)

I'm trying to run a simple query that works in MySQL and with other MySQL connector APIs:
SELECT * FROM `table` WHERE type = 'farmer'
I've tried various methods using the RMySQL package, and they all give the same error:
RS-DBI driver warning: (unrecognized MySQL field type 7 in column 1 imported as character)
Type <- 'farmer'
(Query <- paste0("SELECT * FROM `table` WHERE type = '%", Type, "%'"))
res <- dbGetQuery(con, Query)
Query <- paste("SELECT * FROM `table` WHERE type = \'farmer\'")
Query <- paste("SELECT * FROM `table` WHERE type = 'farmer'")
What am I doing wrong?
"type" is a keyword in MYSQL. Surround the it with backticks to escape field names.
SELECT * FROM `table` WHERE `type` = 'farmer'
Also, you probably have a timestamp column in your table. R is known to not recognize that column type; convert it to a Unix timestamp in the SQL statement.
Looks like the db schema has something in column 1 which is of type 7, and that type appears to be unknown to the RMySQL driver.
I would try to exclude column one from the query, or cast it at the select level, e.g. via something like
select cast(foo as char) as foo, bar, bim, bom from `table` where ...
To be clear, when I encountered this error message, it was because my data field was a timestamp.
I verified this by changing my query to SELECT created_at FROM ..., which caused the error. I also verified it by changing the query not to include the timestamp columns, after which I had no errors.
Note too that the error message counts columns starting from 0 (instead of 1, as R does).
IMHO, the answer is that you aren't doing anything wrong; it's something that needs to be fixed in RMySQL.
The workaround is, after you read in your data, to call one of the several possible character-to-datetime conversion functions (which one depends on what you want to do with the timestamp exactly).
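A minimal sketch of that workaround, assuming the timestamp column is named created_at (a placeholder) and was imported as character:
# The timestamp column arrives as character, so convert it in R.
res <- dbGetQuery(con, "SELECT * FROM `table` WHERE `type` = 'farmer'")
# Adjust the format string and time zone to match your data.
res$created_at <- as.POSIXct(res$created_at, format = "%Y-%m-%d %H:%M:%S", tz = "UTC")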
