I want to convert Persian numbers to English numbers using QLocale. I wrote this code, but it fails:
int main(void)
{
QLocale english_number(QLocale::Language::English, QLocale::Country::UnitedStates);
QTime time;
time = english_number.toTime("۱۲:۳۲", "HH:mm");
qDebug() << time;
}
Console output:
QTime(Invalid)
But the reverse, parsing English digits with the Persian locale, works:
QLocale persian_number(QLocale::Language::Persian, QLocale::Country::Iran);
time = persian_number.toTime("13:32", "HH:mm");
qDebug() << time;
Console Output:
QTime("13:32:00.000")
Where did I go wrong?
Qt: 5.14.1
OS: Archlinux-5.6.7-arch1-1
Compiler: GCC 9.3
I think it's a Qt bug; to make this work we need to convert the Persian digits to an int (or to Latin digits) ourselves before converting to QTime.
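For example, something like this should work (untested sketch, assuming Qt 5 and a UTF-8 source file): map the Persian digits U+06F0..U+06F9 to Latin digits first, then parse with any locale:
QString toLatinDigits(QString s)
{
    for (QChar &c : s) {
        const ushort u = c.unicode();
        if (u >= 0x06F0 && u <= 0x06F9)   // Persian digits '۰' .. '۹'
            c = QChar('0' + (u - 0x06F0));
    }
    return s;
}
QTime t = QLocale::c().toTime(toLatinDigits(QStringLiteral("۱۲:۳۲")), "HH:mm");
qDebug() << t; // QTime("12:32:00.000")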
In my Qt GUI app I use libwebrtc. From one of its callbacks I want to emit a signal with data, which is a std::string. As the core app logic uses QString, I want to convert std::string to QString.
I tried the following:
QString::fromStdString(stdstr)
QString::fromLatin1(stdstr.data())
Both of these return broken text, something like this
The only working way for me was:
QString qstr;
for(uint i =0; i< stdstr.length(); i++)
qstr.append(stdstr.at(i));
Here are my thoughts about the reason for the problem:
Encoding problems.
Binary problems
About encoding: libwebrtc should return std::string in UTF-8 by default, which is what QString::fromStdString expects.
About binary: as I understand it, Qt is built with GCC and the corresponding standard library, but libwebrtc is built with Clang and libc++. For building I also specify QMAKE_CXXFLAGS += -stdlib=libc++.
What is the correct way to convert between the types here?
UPD
I compared the lengths of the converted string and the source string; they are very different.
For the std::string I get 64, and for the converted QString I get 5.
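A quick way to see what is actually in the buffer (diagnostic sketch; stdstr is the incoming std::string):
qDebug() << "std::string size:" << stdstr.size()
         << "QString size:" << QString::fromStdString(stdstr).size()
         << "raw bytes:" << QByteArray(stdstr.data(), int(stdstr.size())).toHex();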
UPD2
Here is the full function and the corresponding slot for the SPDGen signal:
void foo(const webrtc::IceCandidateInterface *candidate) override {
std::string str;
candidate->ToString(&str);
QString qstr = QString::fromStdString(str);
qDebug() << qstr;
Q_EMIT SPDGen(qstr);
}
connect(conductor, &Conductor::SPDGen, this, [=](QString value){
ui->textEdit->setText(value);
});
I have a problem with a calculator in flex and bison. In this code:
0[xX][0-9a-fA-F]+ {yylval=strtol(yytext,0 ,16);return HEX;}
it actually recognizes hexadecimal values and does the math operations, but when I want to print the result, it prints it in decimal.
So I think it's a problem with this part:
/*main(int argc, char **argv)
{
int tok;
while(tok = yylex()) {
printf("%d", tok);
if(tok == NUMBER) printf(" = %d\n", yylval);
else
if(tok == HEX)
printf(" = %x\n", yylval);
else
printf("\n");
}
}*/
strtol, used the way you use it, converts ASCII¹ hex into binary integer form. As an int or long you can then do operations on it (adding, subtracting, multiplying, etc.), which you can't do on numbers in ASCII (string) form. To actually display a number, you need to convert it back into ASCII, which is what printf does. With the %d conversion it converts to ASCII decimal; if you want ASCII hex, use %x.
¹ Strictly speaking it might be any character set, not necessarily ASCII, but pretty much all C compilers use ASCII or some extension of it for their basic character set.
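A minimal standalone illustration of that last point (plain C/C++, independent of the original grammar): the same value prints differently depending only on the conversion specifier:
#include <cstdio>
#include <cstdlib>

int main() {
    long v = strtol("0x1A2B", nullptr, 16); // text -> binary integer, v == 6699
    printf("%ld\n", v);                     // "6699" (ASCII decimal)
    printf("%lx\n", v);                     // "1a2b" (ASCII hex)
}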
I'm trying to get into the fascinating world of Common Lisp embedded in C++. My problem is that I can't manage to read and print from C++ a string returned by a Lisp function defined in ECL.
In C++ I have this function to run arbitrary Lisp expressions:
cl_object lisp(const std::string & call) {
return cl_safe_eval(c_string_to_object(call.c_str()), Cnil, Cnil);
}
I can do it with a number in this way:
ECL:
(defun return-a-number () 5.2)
read and print in C++:
auto x = ecl_to_float(lisp("(return-a-number)"));
std::cout << "The number is " << x << std::endl;
Everything is set up and works fine, but I don't know how to do it with a string instead of a number. This is what I have tried:
ECL:
(defun return-a-string () "Hello")
C++:
cl_object y = lisp("(return-a-string)");
std::cout << "A string: " << y << std::endl;
And the result of printing the string is this:
A string: 0x3188b00
which I guess is the address of the string.
Here is a capture from the debugger showing the contents of the y cl_object. The type of y->string.self is ecl_character.
(Starting from #coredump's answer that the string.self field provides the result.)
The string.self field is defined as type ecl_character* (ecl/object.h), which appears to be given in ecl/config.h as type int (although I suspect this is slightly platform dependent). Therefore, you will not be able to just print it as if it was a character array.
The way I found worked for me was to reinterpret it as a wchar_t (i.e. a unicode character). Unfortunately, I'm reasonably sure this isn't portable and depends both on how ecl is configured and the C++ compiler.
// basic check that this should work
static_assert(sizeof(ecl_character)==sizeof(wchar_t),"sizes must be the same");
std::wcout << "A string: " << reinterpret_cast<wchar_t*>(y->string.self) << std::endl;
// prints hello, as required
// note the use of wcout
The alternative is to use the lisp type base-string which does use char (base-char in lisp) as its character type. The lisp code then reads
(defun return-a-base-string ()
(coerce "Hello" 'base-string))
(there may be more elegant ways to do the conversion to base-string but I don't know them).
To print in C++
cl_object y2 = lisp("(return-a-base-string)");
std::cout << "Another: " << y2->base_string.self << std::endl;
(note that you can't mix wcout and cout in the same program)
According to section 2.6 Strings of The ECL Manual, I think that the actual character array is found by accessing the string.self field of the returned object. Can you try the following?
std::cout << y->string.self << std::endl;
std::string str {""};
cl_object y2 = lisp("(return-a-base-string)");
//get dimension
int j = y2->string.dim;
//get pointer
ecl_character* selv = y2->string.self;
//do simple pointer addition
for(int i=0;i<j;i++){
str += (*(selv+i));
}
//do whatever you want to str
This code works when the string is built from ecl_characters.
from the documentation:
"ECL defines two C types to hold its characters: ecl_base_char and ecl_character.
When ECL is built without Unicode, they both coincide and typically match unsigned char, to cover the 256 codes that are needed.
When ECL is built with Unicode, the two types are no longer equivalent, with ecl_character being larger.
For your code to be portable and future proof, use both types to really express what you intend to do."
On my system the return-a-base-string conversion is not needed, but I think it could be good to add it for compatibility. I use the embedded ECL (Embeddable Common-Lisp) 16.1.2 version.
The following piece of code reads a string from Lisp, converts it to the C++ string types (std::string and C-string), and stores them in C++ variables:
// strings initializations: string and c-string
std::string str2 {""};
char str_c[99] = " ";
// text read from Lisp: any Lisp function that returns a string type
cl_object cl_text = lisp("(coerce (text-from-lisp X) 'base-string)");
//cl_object cl_text = lisp("(text-from-lisp X)"); // no base string conversions
// catch dimension
int cl_text_dim = cl_text->string.dim;
// complete c-string char by char
for(int i=0;i<cl_text_dim;i++){
str_c[i] = ecl_char(cl_text,i); // ecl function to get char from cl_object
}
str_c[cl_text_dim] ='\0'; // end of the c-string
str2 = str_c; // get the string on the other string type
std::cout << "Dim: " << cl_ text_dim << " C-String var: " << str_c() << " String var << str2 << std::endl;
It is a slow process since it goes char by char, but it is the only way I know of at the moment. Hope it helps. Greetings!
I know there is plenty of information about converting QString to char*, but I still need some clarification in this question.
Qt provides QTextCodecs to convert QString (which internally stores characters in unicode) to QByteArray, allowing me to retrieve char* which represents the string in some non-unicode encoding. But what should I do when I want to get a unicode QByteArray?
QTextCodec* codec = QTextCodec::codecForName("UTF-8");
QString qstr = codec->toUnicode("Юникод");
std::string stdstr(reinterpret_cast<const char*>(qstr.constData()), qstr.size() * 2 ); // * 2 since each QChar (UTF-16 code unit) is twice as long as a char
qDebug() << QString(reinterpret_cast<const QChar*>(stdstr.c_str()), stdstr.size() / 2); // same
The above code prints "Юникод" as I expected. But I'd like to know if that is the right way to get to the Unicode char* of the QString. In particular, the reinterpret_casts and size arithmetic in this technique look pretty ugly.
The below applies to Qt 5. Qt 4's behavior was different and, in practice, broken.
You need to choose:
Whether you want an 8-bit wide std::string, a 16- or 32-bit wide std::wstring, or some other type.
What encoding is desired in your target string?
Internally, QString stores UTF-16 encoded data, so any Unicode code point may be represented in one or two QChars.
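A quick illustration (assuming a Qt 5 build): a code point above U+FFFF is stored as a surrogate pair, so QString::size() counts UTF-16 code units, not code points:
QString s = QString::fromUtf8("\xF0\x9F\x98\x80"); // U+1F600, one code point, UTF-8 encoded
qDebug() << s.size();                              // prints 2: two QChars (a surrogate pair)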
Common cases:
Locally encoded 8-bit std::string (as in: system locale):
std::string(str.toLocal8Bit().constData())
UTF-8 encoded 8-bit std::string:
str.toStdString()
This is equivalent to:
std::string(str.toUtf8().constData())
UTF-16 or UCS-4 encoded std::wstring, 16- or 32 bits wide, respectively. The selection of 16- vs. 32-bit encoding is done by Qt to match the platform's width of wchar_t.
str.toStdWString()
U16 or U32 strings of C++11 - from Qt 5.5 onwards:
str.toStdU16String()
str.toStdU32String()
UTF-16 encoded 16-bit std::u16string - this hack is only needed up to Qt 5.4:
std::u16string(reinterpret_cast<const char16_t*>(str.constData()))
This encoding does not include byte order marks (BOMs).
It's easy to prepend BOMs to the QString itself before converting it:
QString src = ...;
src.prepend(QChar::ByteOrderMark);
#if QT_VERSION < QT_VERSION_CHECK(5,5,0)
auto dst = std::u16string{reinterpret_cast<const char16_t*>(src.constData()),
src.size()};
#else
auto dst = src.toStdU16String();
#endif
If you expect the strings to be large, you can skip one copy:
const QString src = ...;
std::u16string dst;
dst.reserve(src.size() + 2); // BOM + termination
dst.append(char16_t(QChar::ByteOrderMark));
dst.append(reinterpret_cast<const char16_t*>(src.constData()),
src.size()+1);
In both cases, dst is now portable to systems with either endianness.
Use this:
QString Widen(const std::string &stdStr)
{
return QString::fromUtf8(stdStr.data(), stdStr.size());
}
std::string Narrow(const QString &qtStr)
{
QByteArray utf8 = qtStr.toUtf8();
return std::string(utf8.data(), utf8.size());
}
In all cases you will have UTF-8 in the std::string.
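A small round-trip sketch using the helpers above (the literal is just an example; a pre-C++20 u8 literal is assumed):
std::string s = u8"Юникод";   // UTF-8 encoded source
QString q = Widen(s);
Q_ASSERT(Narrow(q) == s);     // lossless both ways, since both sides use UTF-8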
You can get the QByteArray from a UTF-16 encoded QString using this:
QTextCodec *codec = QTextCodec::codecForName("UTF-16");
QTextEncoder *encoderWithoutBom = codec->makeEncoder( QTextCodec::IgnoreHeader );
QByteArray array = encoderWithoutBom->fromUnicode( str );
This way you ignore the unicode byte order mark (BOM) at the beginning.
You can convert it to char* like this:
int dataSize=array.size();
char * data= new char[dataSize];
for(int i=0;i<dataSize;i++)
{
data[i]=array[i];
}
Or simply:
char *data = array.data();
I have a string like "2.1648797E -05" and I need to format it to get "0.00021648797".
Is there any solution to do this conversion?
Try to use double or long long:
cout << setiosflags(ios::fixed) << thefloat << endl;
An important characteristic of floating point is that, for large values, it does not have precision associated with all of the significant figures back to the decimal point. The "scientific" display reasonably reflects the inherent internal storage realities.
In C++ you can use std::stringstream. First print the number, then read it back as a double, and then print it using format specifiers to set the precision of the number to 12 digits. Take a look at this question for how to print a decimal number with fixed precision.
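A rough sketch of that stringstream round trip (assuming the stray space has already been removed from the input):
#include <iostream>
#include <sstream>
#include <iomanip>

int main() {
    std::string in = "2.1648797E-05";
    double d;
    std::istringstream(in) >> d;                     // string -> double
    std::ostringstream out;
    out << std::fixed << std::setprecision(12) << d; // double -> fixed-point string
    std::cout << out.str() << std::endl;             // 0.000021648797
}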
If you are really just going from string representation to string representation and precision is very important or values may leave the valid range for doubles then I would avoid converting to a double.
Your value may get altered by that due to precision errors or range problems.
Try writing a simple text parser, roughly like this (a rough sketch follows the steps below):
Read the digits, omitting the decimal point up to the 'E' but store the decimal point position.
After the 'E' read the exponent as a number and add that to your stored decimal position.
Then output the digits again properly appending zeros at beginning or end and inserting the decimal point.
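A minimal sketch of those steps (pure string handling, no conversion to double; it assumes a well-formed input such as "2.1648797E -05" and does no rounding):
#include <iostream>
#include <string>
#include <cctype>

std::string sciToPlain(const std::string &in)
{
    std::string digits;        // mantissa digits, decimal point removed
    int pointPos = -1;         // number of digits before the decimal point
    std::size_t i = 0;
    bool negative = false;
    if (i < in.size() && (in[i] == '+' || in[i] == '-'))
        negative = (in[i++] == '-');
    // 1. read the digits, dropping the point but remembering its position
    for (; i < in.size() && in[i] != 'e' && in[i] != 'E'; ++i) {
        if (std::isdigit(static_cast<unsigned char>(in[i])))
            digits += in[i];
        else if (in[i] == '.')
            pointPos = static_cast<int>(digits.size());
    }
    if (pointPos < 0)
        pointPos = static_cast<int>(digits.size());
    // 2. read the exponent (skipping the stray space) and shift the point
    int exponent = 0, expSign = 1;
    if (i < in.size()) {
        ++i;                                      // skip 'e' / 'E'
        while (i < in.size() && in[i] == ' ') ++i;
        if (i < in.size() && (in[i] == '+' || in[i] == '-'))
            expSign = (in[i++] == '-') ? -1 : 1;
        for (; i < in.size() && std::isdigit(static_cast<unsigned char>(in[i])); ++i)
            exponent = exponent * 10 + (in[i] - '0');
    }
    pointPos += expSign * exponent;
    // 3. output the digits again, padding with zeros and inserting the point
    std::string out;
    if (pointPos <= 0)
        out = "0." + std::string(-pointPos, '0') + digits;
    else if (pointPos >= static_cast<int>(digits.size()))
        out = digits + std::string(pointPos - digits.size(), '0');
    else
        out = digits.substr(0, pointPos) + "." + digits.substr(pointPos);
    return (negative ? "-" : "") + out;
}

int main()
{
    std::cout << sciToPlain("2.1648797E -05") << std::endl; // 0.000021648797
}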
There are some unclear issues here:
1. Was the space in "2.1648797E -05" intended? Let's assume it is OK.
2. 2.1648797E-05 is 10 times smaller than 0.00021648797. Assume OP meant "0.000021648797" (another zero).
3. Windows is not tagged, but OP posted a Windows answer.
The major challenge here, and I think the OP's core question, is that std::setprecision() has different meanings in fixed versus default notation, and the OP wants the default meaning while using fixed.
The precision field differs between fixed and default floating-point notation. In the default notation, the precision field specifies the maximum number of meaningful digits to display both before and after the decimal point, possibly using scientific notation, while in fixed notation it specifies exactly how many digits to display after the decimal point.
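A quick illustration of that difference (using the headers included in the program below):
double v = 2.1648797e-05;
std::cout << std::setprecision(6) << v << std::endl;               // default: 2.16488e-05
std::cout << std::fixed << std::setprecision(6) << v << std::endl; // fixed:   0.000022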
There are two approaches to solve this. The first is to convert the input string to a number and then output the number in the new fixed-point format; that is presented below. The second is to parse the input string and form the new format directly; that is not done here.
#include <iostream>
#include <iomanip>
#include <string>
#include <sstream>
#include <cmath>
#include <cfloat>
double ConvertStringWithSpaceToDouble(std::string s) {
// Get rid of pesky space in "2.1648797E -05"
s.erase (std::remove (s.begin(), s.end(), ' '), s.end());
std::istringstream i(s);
double x;
if (!(i >> x)) {
x = 0; // handle error;
}
std::cout << x << std::endl;
return x;
}
std::string ConvertDoubleToString(double x) {
std::ostringstream s;
double fraction = fabs(modf(x, &x));
s.precision(0);
s.setf(std::ios::fixed);
// stream whole number part
s << x << '.';
// Threshold becomes non-zero once a non-zero digit found.
// Its level increases with each additional digit streamed to prevent excess trailing zeros.
double threshold = 0.0;
while (fraction > threshold) {
double digit;
fraction = modf(fraction*10, &digit);
s << digit;
if (threshold) {
threshold *= 10.0;
}
else if (digit > 0) {
// Use DBL_DIG to define number of interesting digits
threshold = pow(10, -DBL_DIG);
}
}
return s.str();
}
int main(int argc, char* argv[]){
std::string s("2.1648797E -05");
double x = ConvertStringWithSpaceToDouble(s);
s = ConvertDoubleToString(x);
std::cout << s << std::endl;
return 0;
}
Thanks guys, I fixed it using:
Decimal dec = Decimal.Parse(str, System.Globalization.NumberStyles.Any);