I'm developing an app that needs to store Portuguese characters. Do I need to do any configuration to prepare my SQLite DB to store these special characters? When I query a table that contains them, I get a '?' (without quotes) in their place.
Probably an encoding problem. Is your DB/client using UTF-8?
You should check your DB encoding with PRAGMA encoding;, make sure your client does its job using the same encoding, and verify that the encoding in use handles those Portuguese characters well.
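As a quick sanity check, here is a minimal sketch using Python's built-in sqlite3 module (the file and table names are just examples). SQLite itself stores TEXT as UTF-8 or UTF-16, so if this round trip survives intact, the '?' substitution is happening in your client layer, not in SQLite:

import sqlite3

conn = sqlite3.connect("test.db")  # hypothetical database file
print(conn.execute("PRAGMA encoding;").fetchone()[0])  # typically 'UTF-8'

conn.execute("CREATE TABLE IF NOT EXISTS t (s TEXT)")
conn.execute("INSERT INTO t VALUES (?)", ("coração, ação",))  # Portuguese chars
print(conn.execute("SELECT s FROM t").fetchone()[0])  # should print them intact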
What are the allowed characters for the passphrase? Are all Unicode characters valid?
(A search of the SQLite documentation gives back no results for "password".)
As for length, I read here that only the first 16 characters of the password are considered.
I found that Meteor by default uses SHA-256 to hash passwords, but I am confused that the same password hashes to a different string in the database for each account. Would anyone explain the implementation details? Thanks.
Per the Meteor docs, accounts-password uses bcrypt.
If you look at the source code of loginWithPassword, you should be able to find out where the salt is stored. As a second source, read MasterAM's answer to Laravel & Meteor password hashing, which indicates that Meteor from 2011 on uses $2y$ hash strings, i.e. PHP CRYPT_BLOWFISH, which is documented as:
CRYPT_BLOWFISH - Blowfish hashing with a salt as follows: "$2a$", "$2x$" or "$2y$", a two digit cost parameter, "$", and 22 characters from the alphabet "./0-9A-Za-z". Using characters outside of this range in the salt will cause crypt() to return a zero-length string. The two digit cost parameter is the base-2 logarithm of the iteration count for the underlying Blowfish-based hashing algorithm and must be in range 04-31; values outside this range will cause crypt() to fail. Versions of PHP before 5.3.7 only support "$2a$" as the salt prefix: PHP 5.3.7 introduced the new prefixes to fix a security weakness in the Blowfish implementation. Please refer to » this document for full details of the security fix, but to summarise, developers targeting only PHP 5.3.7 and later should use "$2y$" in preference to "$2a$".
Thus, look for the $2y$ string in the database, and extract the salt from it.
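To make the layout concrete, here is a small sketch of pulling the salt out of a bcrypt string in Python. The hash below is an arbitrary made-up example in the $2y$ format described above, not a real Meteor record:

stored = "$2y$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy"

# Layout: $<scheme>$<two-digit cost>$<22-char salt><31-char digest>
_, scheme, cost, tail = stored.split("$")
salt = tail[:22]    # 22 chars from ./0-9A-Za-z encoding the 16-byte salt
digest = tail[22:]  # the actual Blowfish-based hash output
print(scheme, cost, salt)  # -> 2y 10 N9qo8uLOickgx2ZMRZoMye

This is why the same password yields a different stored string per account: each account gets a fresh random salt, and the salt is embedded in the hash string itself.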
I'm using an ADO connection and the ODBC driver to read a DBF file:
Driver={Microsoft dBASE Driver (*.dbf)};DriverID=277;Extended Properties=dBase IV;
How can string fields be fetched without character conversion (according to some codepage)? I mean, is there any way to read the strings as plain arrays of bytes?
Perhaps some property of ADOConnection, or an extra connection-string parameter, affects how strings are read.
P.S.: Modifying the DBF files is not acceptable.
I've already tried extending the connection string with the following parameters: "AutoTranslate=no;" and "CCSID=65535;", but that did not work. I still get character translation according to some codepage.
One more interesting observation: if I connect via the OLE DB provider
Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties=dBASE IV;
then character translation is skipped.
But this method is slow and has some disadvantages, so it doesn't fit well.
So far I have not found a solution other than using the Jet OLE DB provider where string autotranslation corrupts the data, and the MSDASQL provider as the common one for general use...
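If bypassing the driver entirely is an option, the DBF layout is simple enough to read by hand and return character fields as raw, undecoded bytes. A minimal sketch in Python (the file name is hypothetical; dBASE IV layout assumed, memo fields ignored):

import struct

def read_dbf_raw(path):
    with open(path, "rb") as f:
        header = f.read(32)
        # Bytes 4-11: record count (uint32), header size and record size (uint16)
        n_records, header_len, record_len = struct.unpack("<IHH", header[4:12])
        fields = []
        while True:
            desc = f.read(32)
            if desc[:1] == b"\r":      # 0x0D terminates the field-descriptor block
                break
            name = desc[:11].split(b"\x00")[0]
            length = desc[16]          # field length in bytes
            fields.append((name, length))
        f.seek(header_len)
        for _ in range(n_records):
            record = f.read(record_len)
            if record[:1] == b"*":     # '*' marks a deleted record
                continue
            pos, row = 1, {}
            for name, length in fields:
                row[name] = record[pos:pos + length]  # raw bytes, no decoding
                pos += length
            yield row

for row in read_dbf_raw("table.dbf"):  # hypothetical file name
    print(row)

Since nothing is ever decoded, no codepage can get in the way; you can decode later with whatever encoding the file actually uses, or keep the bytes as-is.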
Specifically, what character encoding does SQLDataSources use?
On my Windows 7 machine (set to New Zealand English) it seems to use CP1252. I can't find any mention of character encodings in the documentation.
It depends on the database you use. For PostgreSQL I run SET client_encoding TO <encoding>; after connecting to the database. For Informix there is a Client Encoding option available on the Environment tab. For Oracle I use the NLS_LANG environment setting.
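For the PostgreSQL case, a minimal sketch using psycopg2 (an assumption; any client that lets you run SQL works the same way, and the connection string is hypothetical):

import psycopg2

conn = psycopg2.connect("dbname=test")  # hypothetical connection string
with conn.cursor() as cur:
    cur.execute("SET client_encoding TO 'UTF8';")
    cur.execute("SHOW client_encoding;")
    print(cur.fetchone()[0])  # -> UTF8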
I've done some experimentation and determined that data source names are stored in Unicode. SQLDataSources gives you the name converted to the system code page, replacing characters that can't be converted with '?'. This is about as useful as you might expect. The undocumented function SQLDataSourcesW gives the name encoded in UTF-16.
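For reference, a hedged sketch of calling the wide variant directly with Python's ctypes on Windows (constants copied from sql.h/sqlext.h; error handling omitted):

import ctypes
from ctypes import byref, create_unicode_buffer, c_short, c_void_p

odbc = ctypes.windll.odbc32

SQL_HANDLE_ENV = 1
SQL_ATTR_ODBC_VERSION = 200
SQL_OV_ODBC3 = 3
SQL_FETCH_NEXT = 1
SQL_SUCCESS, SQL_SUCCESS_WITH_INFO = 0, 1

henv = c_void_p()
odbc.SQLAllocHandle(SQL_HANDLE_ENV, None, byref(henv))
odbc.SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION, c_void_p(SQL_OV_ODBC3), 0)

name = create_unicode_buffer(256)
desc = create_unicode_buffer(256)
name_len, desc_len = c_short(), c_short()

# Enumerate DSNs; the W function returns names as UTF-16, so nothing is
# squeezed through the ANSI code page and no '?' substitution occurs.
while odbc.SQLDataSourcesW(henv, SQL_FETCH_NEXT,
                           name, 256, byref(name_len),
                           desc, 256, byref(desc_len)) in (SQL_SUCCESS,
                                                           SQL_SUCCESS_WITH_INFO):
    print(name.value, "-", desc.value)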
On a modern Unix or Linux system, how can you tell which code set the /etc/passwd file stores user names in? Are user names allowed to contain accented characters (from the range 0x80..0xFF in, say, ISO 8859-1 or 8859-15)? Can the /etc/passwd file contain UTF-8? Can you tell that it contains UTF-8? What about the plain text of passwords before they are encrypted or hashed?
Clearly, if the usernames and other data are limited to the 0x00..0x7F range (and exclude 0x00 anyway), then there is no difference between UTF-8, 8859-1, and 8859-15; the characters present are all encoded the same.
Also, I'm using /etc/passwd as an abbreviation for something along the lines of "the user identification and authentication database (sometimes termed a directory service) on a Unix-based machine, usually accessed via PAM and sometimes hosted on machines other than the local one, but sometimes still actually a file on the local hard disk, conventionally called /etc/passwd, often supported by /etc/shadow". I'm also assuming that the equivalent questions about the group database (often the /etc/group file) have the same answer.
It's all ASCII. But the password itself is never stored, only the result of a one-way hash. If you're wondering what characters can be in the password itself, that depends on the locale, which restricts the characters your terminal is able to deal with. See "man locale".
From the BSD man page:
"/etc/passwd ASCII password file..."
As for usernames, I can tell you that Solaris only supports ASCII. I can't speak for other Unix-en.
"Not every object in Solaris 2 and Solaris 7can have names composed of arbitrary characters. The names of the following objects must be composed of ASCII characters:
* User names, group name, and passwords
* System name ...
"