Getting a number from a QString and converting it to an integer - Qt

I am developing a Qt application that uses telnet commands to get some information from the ADSL modem link.
I get all that information into a QString:
QString datarate = ui->output->toPlainText();
I want to select only the number after the second "Downstream rate", i.e.
Downstream rate = 10239 Kbps
and convert it to an integer to compare it with other numbers. I don't want to take the first one, which is 20892 Kbps:
Status: Showtime
Max: Upstream rate = 1193 Kbps, Downstream rate = 20892 Kbps
Bearer: 0, Upstream rate = 1021 Kbps, Downstream rate = 10239 Kbps
Any advice?
Note: the numbers will be random, so the ADSL status output is different every time.

Maybe something like this will work:
QString datarate = ui->output->toPlainText();
int number = datarate.split("Downstream rate = ")[2].split(" ")[0].toInt();
This is taking the following steps:
Breaking up the string into pieces separated by "Downstream rate = "
Taking the third string in that list (it should start with "10239 Kbps")
Splitting that on space characters and taking the first token (which should be "10239")
Finally, converting that string to an int.
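If the surrounding text can vary, a regular-expression version may be a bit more robust. This is only a sketch, assuming Qt 5's QRegularExpression and that the line really reads "Downstream rate = <number> Kbps"; it keeps the last match, which is the Bearer line in your sample:
#include <QRegularExpression>

QString datarate = ui->output->toPlainText();
QRegularExpression re("Downstream rate = (\\d+) Kbps");
int number = 0;                                            // stays 0 if nothing matches
QRegularExpressionMatchIterator it = re.globalMatch(datarate);
while (it.hasNext())
    number = it.next().captured(1).toInt();                // ends up holding the last match (10239)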

Why do DynamoDB imported size and actual table size differ so much?

I have imported a DynamoDB table from S3. Here are the dataset sizes at each step:
Compressed dataset in S3 (DynamoDB JSON format with GZIP compression) = 13.3GB
Imported table size according to the DynamoDB imports page (uncompressed) = 64.4GB
Imported item count = 376 736 126
Current table size according to the DynamoDB tables page (compressed?) = 41.5GB (less than at the import time!)
Item count = 380 528 674 (I have performed some insertions already)
Since the import, the table has only been growing.
What's the reason for the much smaller estimate of the actual table size? Is it just because DynamoDB table sizes are approximate in general? Or does DynamoDB apply some compression to the stored data?
The source S3 dataset should not have any duplicates: it is built using Athena by running a GROUP BY query on the DynamoDB table's key. So, I do not expect it to be a cause.
Each item has 4 attributes: PK is 2 long strings (blockchain addresses = 40 hex chars + extra ≈2–6 characters) + 1 long string (uint256 balance as a hex string ≤ 64 characters) + 1 numeric value. Table import format is DynamoDB JSON.
DynamoDB performs no compression.
The likely cause is how you are calculating the size of the table and the number of items. If you are relying on the table's metadata, it is only updated every 6 hours and is an approximate value, which should not be relied upon for comparisons or validity checks.
The reason is in the DynamoDB JSON format's overhead. I am the author of the question, so an exact example should provide more clarity. Here is a random item I have:
{"Item":{"w":{"S":"65b88d0d0a1223eb96bccae06317a3155bc7e391"},"sk":{"S":"43ac8b882b7e06dde7c269f4f8aaadd5801bd974_"},"b":{"S":"6124fee993bc0000"},"n":{"N":"12661588"}}}
When importing from S3, the DynamoDB import functionality bills for the total uncompressed size read, which for this item comes to 169 bytes (168 characters + a newline).
However, when stored in DynamoDB, the item only occupies the capacity of its fields (see the DynamoDB docs):
The size of a string is (length of attribute name) + (number of UTF-8-encoded bytes).
The size of a number is approximately (length of attribute name) + (1 byte per two significant digits) + (1 byte).
For this specific item the DynamoDB's native size estimation is:
w (string) = 1 + 40 chars
sk (string) = 2 + 41 chars
b (string) = 1 + 16 chars
n (number) = 1 + (8 significant digits / 2 = 4) + 1
Total is 107 bytes. The current DynamoDB estimate for this table is 108.95 bytes per item on average, which is pretty close (some field values vary in length; this particular example is nearly the shortest possible).
This works out to roughly a 35% size reduction (1 - 108.95/169 ≈ 0.36) when the data is actually stored in DynamoDB compared to the imported size, which matches the numbers reported in the question: 64.4GB * 108.95 / 169 ≈ 41.5GB.
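For reference, here is the same per-item arithmetic as a small sketch (the helper functions and the 107-byte result are just the example above turned into code, not anything DynamoDB provides):
#include <cmath>
#include <cstdio>
#include <string>

// Rough DynamoDB item-size rule from the docs quoted above:
// string = attribute-name length + UTF-8 bytes of the value
// number = attribute-name length + 1 byte per two significant digits + 1 byte
static size_t stringAttr(const std::string& name, size_t valueLen) { return name.size() + valueLen; }
static size_t numberAttr(const std::string& name, int digits)      { return name.size() + (size_t)std::ceil(digits / 2.0) + 1; }

int main()
{
    size_t total = stringAttr("w", 40)     // 40-char address
                 + stringAttr("sk", 41)    // 41-char sort key
                 + stringAttr("b", 16)     // 16-char balance
                 + numberAttr("n", 8);     // 12661588 has 8 significant digits
    std::printf("estimated item size: %zu bytes\n", total);   // prints 107
    return 0;
}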

Why do you divide the raw data by 16?

http://datasheets.maximintegrated.com/en/ds/DS18B20.pdf
Read page 3, Operation – Measuring Temperature. The following code works to get the temp. I understand all of it except why they divide the number by 16.
local raw = (data[1] << 8) | data[0];
local SignBit = raw & 0x8000; // test most significant bit
local SignBit = raw & 0x8000; // test most significant bit
if (SignBit) {raw = (raw ^ 0xffff) + 1;} // negative, 2's complement
local celsius = raw / 16.0;
if (SignBit) {celsius *= -1;}
I’ve got another situation http://dlnmh9ip6v2uc.cloudfront.net/datasheets/Sensors/Pressure/MPL3115A2.pdf Page 23, section 7.1.3, temperature data. It’s only twelve bits, so the above code works for it also (just change the left shift to 4 instead of 8), but again, the /16 is required for the final result. I don’t get where that is coming from.
The raw temperature data is in units of sixteenths of a degree (the low four bits of the register are fractional bits), so the value must be divided by 16 to convert it to degrees.
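For example, a small sketch with an illustrative positive reading (0x0191 = 401 counts):
// The temperature register keeps 4 fractional bits, so one LSB is 1/16 of a degree.
int raw = 0x0191;               // 401 counts
float celsius = raw / 16.0f;    // 401 / 16 = 25.0625 degrees C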

Understanding Arduino Binary to Decimal Conversions

I was looking at some code today for integrating a real-time clock with an Arduino, and it had some binary-to-decimal (and vice versa) conversion code that I don't fully understand.
The code in question is below:
byte decToBcd(byte val)
{
  return ( (val/10*16) + (val%10) );
}

byte bcdToDec(byte val)
{
  return ( (val/16*10) + (val%16) );
}
ex: decToBcd(12);
I really fail to grasp how this works. I am not sure I understand the math, or if some sort of assumption is being taken advantage of.
Would someone mind explaining how exactly the math and data types above are supposed to work? If possible, touch on why the value 16 is used in the conversions instead of 8 when we are supposed to be working with a byte value.
For context, the full code can be found here: http://www.codingcolor.com/microcontrollers/an-arduino-lcd-clock-using-a-chronodot-rtc/
The key hint here is BCD - Binary-coded decimal - in the function name. In BCD each decimal digit is represented by four bits (half of a byte). As a result, the maximum (decimal) number you can store in a single byte using BCD notation is 99: 9 in the upper nibble (half of the byte) and 9 in the lower nibble.
Let's take a look at number 12 as an example. Number 12 looks as follows in the binary notation:
12 = %00001100
However in BCD it looks as follows:
12 = %00010010
because
0001 0010
1 2
Now if you look at the decToBcd function val%10 is responsible for calculating the value of the ones place (i.e. the last digit). Since this goes to the lower part of the byte we don't need to do anything special here. val/10*16 first calculates the value of the tens place - val/10. However since the value has to go to the upper half of the byte it needs to be shifted up by four bits - hence *16. Another (in my opinion more readable) way of writing this function would be:
((val / 10) << 4) | (val % 10)
The bcdToDec does the reverse conversion.
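As a quick worked example (not from the original post), tracing 12 through both functions:
byte bcd = decToBcd(12);     // 12/10*16 + 12%10 = 16 + 2 = 0x12 (binary 0001 0010)
byte dec = bcdToDec(bcd);    // 0x12/16*10 + 0x12%16 = 10 + 2 = 12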
An RTC usually stores the year in one byte as two digits only, i.e. 2014 is stored as 14.
Some of them store it as an offset from the year 1970, so 2014 = 44.
Either way, the maximum it can hold is 99.

How to efficiently convert a few bytes into an integer between a range?

I'm writing something that reads bytes (just a List<int>) from a remote random number generation source that is extremely slow. For that and my personal requirements, I want to retrieve as few bytes from the source as possible.
Now I am trying to implement a method which signature looks like:
int getRandomInteger(int min, int max)
I have two theories how I can fetch bytes from my random source, and convert them to an integer.
Approach #1 is naïve. Fetch (max - min) / 256 bytes and add them up. It works, but it's going to fetch a lot of bytes from the slow random number generator source I have. For example, if I want to get a random integer between zero and a million, it's going to fetch almost 4000 bytes... that's unacceptable.
Approach #2 sounds ideal to me, but I'm unable to come up with the algorithm. It goes like this:
Let's take min: 0, max: 1000 as an example.
Calculate ceil(rangeSize / 256) which in this case is ceil(1000 / 256) = 4. Now fetch one (1) byte from the source.
Scale this one byte from the 0-255 range to 0-3 range (or 1-4) and let it determine which group we use. E.g. if the byte was 250, we would choose the 4th group (which represents the last 250 numbers, 750-1000 in our range).
Now fetch another byte and scale from 0-255 to 0-250 and let that determine the position within the group we have. So if this second byte is e.g. 120, then our final integer is 750 + 120 = 870.
In that scenario we only needed to fetch 2 bytes in total. However, it's much more complex as if our range is 0-1000000 we need several "groups".
How do I implement something like this? I'm okay with Java/C#/JavaScript code or pseudo code.
I'd also like the result to not lose entropy/randomness, so I'm slightly worried about scaling integers.
Unfortunately your Approach #1 is broken. For example, if min is 0 and max is 510, you'd add 2 bytes. There is only one way to get a 0 result: both bytes zero. The chance of this is (1/256)^2. However there are many ways to get other values, say 100 = 100+0, 99+1, 98+2... So the chance of a 100 is much larger: 101 * (1/256)^2.
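You can see the bias directly by brute force; this small sketch (not part of the original answer) counts how many of the 256 * 256 byte pairs produce a given sum:
#include <cstdio>

int main()
{
    long ways0 = 0, ways100 = 0;
    for (int a = 0; a < 256; ++a)
        for (int b = 0; b < 256; ++b)
        {
            if (a + b == 0)   ++ways0;     // only 0 + 0
            if (a + b == 100) ++ways100;   // 0+100, 1+99, ..., 100+0
        }
    std::printf("sum 0: %ld way, sum 100: %ld ways\n", ways0, ways100);   // 1 vs 101
    return 0;
}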
The more-or-less standard way to do what you want is to:
Let R = max - min + 1        -- the number of possible random output values
Let N = 2^k >= mR, m >= 1    -- a power of 2 at least as big as some multiple of R that you choose
loop
    b = a random integer in 0..N-1 formed from k random bits
while b >= mR                -- reject b values that would bias the output
return min + floor(b/m)
This is called the method of rejection. It throws away randomly selected binary numbers that would bias the output. If max - min + 1 happens to be a power of 2, then you'll have zero rejections.
If you have m = 1 and max - min + 1 is just one more than a biggish power of 2, then rejections will be near half. In this case you'd definitely want a bigger m.
In general, bigger m values lead to fewer rejections, but of course they require slightly more bits per number. There is a probabilistically optimal algorithm to pick m.
Some of the other solutions presented here have problems, but I'm sorry right now I don't have time to comment. Maybe in a couple of days if there is interest.
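For what it's worth, here is a minimal C++ sketch of the rejection loop above. getRandomBits(k) is a hypothetical wrapper around your byte source that returns k random bits; m = 4 is an arbitrary choice:
#include <cstdint>

uint64_t getRandomBits(unsigned k);   // hypothetical: k random bits from the slow source

int getRandomInteger(int min, int max)
{
    const uint64_t R = (uint64_t)((int64_t)max - (int64_t)min + 1);   // possible outputs
    const uint64_t m = 4;                                             // bigger m -> fewer rejections
    unsigned k = 0;
    while ((1ull << k) < m * R) ++k;                                  // smallest k with 2^k >= m*R
    uint64_t b;
    do
    {
        b = getRandomBits(k);                                         // uniform in 0 .. 2^k - 1
    }
    while (b >= m * R);                                               // reject values that would bias the output
    return (int)((int64_t)min + (int64_t)(b / m));                    // uniform in [min, max]
}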
3 bytes (together) give you a random integer in the range 0..16777215. You can use 20 bits from this value to get the range 0..1048575 and throw away values > 1000000.
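A sketch of that idea, with a hypothetical nextByte() standing in for the slow source:
#include <cstdint>

uint8_t nextByte();   // hypothetical: one byte from the slow random source

int randomUpToMillion()
{
    uint32_t v;
    do
    {
        uint32_t threeBytes = ((uint32_t)nextByte() << 16)
                            | ((uint32_t)nextByte() << 8)
                            |  (uint32_t)nextByte();    // 0 .. 16777215
        v = threeBytes & 0xFFFFF;                       // keep 20 bits: 0 .. 1048575
    }
    while (v > 1000000);                                // throw away values > 1000000
    return (int)v;
}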
range is 1 to r
find the smallest 'a' such that 256^a >= r
get 'a' bytes from the source into array A[]
num = 0
for i = 0 to len(A)-1
    num += A[i] << (8*i)   -- combine the bytes, least significant first
next
random number = (num mod r) + 1   -- shift 0..r-1 into 1..r
Your random source gives you 8 random bits per call. For an integer in the range [min,max] you would need ceil(log2(max-min+1)) bits.
Assume that you can get random bytes from the source using some function:
bool RandomBuf(BYTE* pBuf , size_t nLen); // fill buffer with nLen random bytes
Now you can use the following function to generate a random value in a given range:
// --------------------------------------------------------------------------
// produce a uniformly-distributed integral value in range [nMin, nMax]
// T is char/BYTE/short/WORD/int/UINT/LONGLONG/ULONGLONG
template <class T> T RandU(T nMin, T nMax)
{
    static_assert(std::numeric_limits<T>::is_integer, "RandU: integral type expected");
    if (nMin > nMax)
        std::swap(nMin, nMax);
    if (0 == (T)(nMax - nMin + 1))   // all range of type T
    {
        T nR;
        return RandomBuf((BYTE*)&nR, sizeof(T)) ? *(T*)&nR : nMin;
    }
    ULONGLONG nRange = (ULONGLONG)nMax - (ULONGLONG)nMin + 1;      // number of discrete values
    UINT nRangeBits = (UINT)ceil(log((double)nRange) / log(2.));   // bits for storing nRange discrete values
    ULONGLONG nR;
    do
    {
        if (!RandomBuf((BYTE*)&nR, sizeof(nR)))
            return nMin;
        nR = nR >> ((sizeof(nR) << 3) - nRangeBits);               // keep nRangeBits random bits
    }
    while (nR >= nRange);                                          // ensure value in range [0..nRange-1]
    return nMin + (T)nR;                                           // [nMin..nMax]
}
Since you are always getting a multiple of 8 bits, you can save the extra bits between calls (for example you may need only 9 bits out of 16 bits). It requires some bit manipulation, and it is up to you to decide if it is worth the effort.
You can save even more if you use 'half bits': let's assume that you want to generate numbers in the range [1..5]. You'll need log2(5) ≈ 2.32 bits for each random value. Using 32 random bits you can actually generate floor(32 / 2.32) = 13 random values in this range, though it requires some additional effort.
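A minimal sketch of such a bit buffer (again with a hypothetical nextByte() source): it hands out k bits at a time and keeps the leftovers, so two requests for 9 bits consume 3 bytes in total instead of 4:
#include <cstdint>

uint8_t nextByte();   // hypothetical: one byte from the slow random source

struct BitReservoir
{
    uint64_t bits = 0;     // buffered random bits (low 'count' bits are valid)
    unsigned count = 0;

    uint32_t take(unsigned k)                       // k <= 32
    {
        while (count < k)                           // refill one byte at a time
        {
            bits = (bits << 8) | nextByte();
            count += 8;
        }
        count -= k;
        return (uint32_t)((bits >> count) & ((1ull << k) - 1));   // top k of the valid bits
    }
};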

Where in windows registry the console width is stored?

My default console width is 80, but when I look into HKCU\Console there isn't a value name that holds this number directly. The only one that seems related is WindowSize, but it has the value 0x190050, which is 1638480 in decimal. Do the last two digits of it represent the value I'm searching for?
In HKCU\Console, the WindowSize value packs the window height into the high word and the width into the low word:
0x19 = 25
0x50 = 80
So this is 25x80 (25 rows by 80 columns).
In decimal, it's rows times 65,536 plus columns. (25 * 65536) + 80 = 1638480
Documentation is here.
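To see the packing concretely, a small standalone sketch that splits the stored DWORD the same way (0x190050 is the value from the question):
#include <cstdio>

int main()
{
    unsigned long windowSize = 0x190050;              // HKCU\Console\WindowSize
    unsigned rows    = (windowSize >> 16) & 0xFFFF;   // high word: 0x0019 = 25
    unsigned columns =  windowSize        & 0xFFFF;   // low word:  0x0050 = 80
    std::printf("%u rows x %u columns\n", rows, columns);   // 25 rows x 80 columns
    return 0;
}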
