I am using openresty/1.7.7.2 with Lua 5.1.4. I receive an int64 in the request, and I have its string representation saved in the DB (I can't change the DB schema or the request format). I am not able to match the two.
local i = 913034578410143848 --request
local p = "913034578410143848" -- stored in DB
print(p==tostring(i)) -- return false
print(i%10) -- return 0 ..this also doesn't work
Is there a way to convert int64 to string and vice versa if possible?
Update:
I am getting i from a protobuf object. The .proto file describes i as int64. I am using the pb4lua library for protobuf.
ngx.req.read_body()
local body = ngx.req.get_body_data()
local request, err = Request:load(body)
local i = request.id
Lua 5.1 cannot exactly represent integer values larger than 2^53.
Number literals are no exception, so you cannot just write
local i = 913034578410143848
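A quick way to see the precision loss in plain Lua 5.1 (the literal is the one from the question):
-- the literal is silently rounded to the nearest representable double
print(string.format("%.0f", 913034578410143848))  -- prints a rounded value, not ...848
print(913034578410143848 % 10)                    -- 0 in the question, not the expected 8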
LuaJIT, however, can represent int64 values as boxed cdata values.
There are also Lua libraries for dealing with large numbers, e.g. the bn library.
I do not know how pb4lua handles this problem. The lua-pb library, for example, uses LuaJIT boxed values, and it also provides a way to specify a user-defined callback for constructing int64 values.
First, I suggest figuring out the real type of your i value (use the type function). Everything else depends on it.
If it is a number, then pb4lua has probably already lost some precision.
Maybe it just returns a string, in which case you can simply compare it as a string.
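A quick way to check (using the request object from the question):
print(type(request.id))   --> "number", "string", or "cdata"
if type(request.id) == "cdata" then
  -- with LuaJIT's FFI you can also check whether it is a boxed 64-bit integer
  print(require("ffi").istype("int64_t", request.id))
end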
If it provides LuaJIT cdata, then this is a basic function to convert a string to a 64-bit value:
local ffi = require "ffi"

-- Convert a decimal digit string into a LuaJIT uint64_t cdata value.
local function to_jit_uint64(str)
  -- the first 9 digits always fit exactly in a plain Lua number
  local v = tonumber(string.sub(str, 1, 9))
  v = ffi.new('uint64_t', v)
  if #str > 9 then
    -- shift the accumulated value and add the remaining digits
    str = string.sub(str, 10)
    v = v * (10 ^ #str) + tonumber(str)
  end
  return v
end
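A rough usage sketch for the original comparison (assuming request.id really is a boxed 64-bit cdata value; LuaJIT compares two 64-bit cdata numbers by value):
local i = request.id               -- boxed int64 from pb4lua (as in the question)
local p = "913034578410143848"     -- decimal string stored in the DB

if to_jit_uint64(p) == i then
  ngx.say("id matches the stored value")
end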
I have the following fragment of code
with GNAT.Command_Line; use GNAT.Command_Line;
with GNAT.Strings; use GNAT.Strings;
....
Define_Switch
(Config => Config, Output => File_Name'Access,
Long_Switch => "--file=", Switch => "-f=",
Help => "File with Composition");
....
Getopt
After parsing the command line via Getopt I have an access object that points to the actual file name.
I would like to copy this name into a fixed-length string (as handled by Ada.Strings.Fixed) that is defined as
File_Name : String(1 .. 256);
I can print the data referenced by File_Name to the console as
Put_Line(File_Name.all);
I think I should perform something like a copy operation and then free the access object.
How can I do that?
I guess File_Name in the code snippet is defined as aliased GNAT.Strings.String_Access. This is a "fat pointer" to the string object: "fat" means it is not just an address, it also carries the index range of the string. A C-style NUL terminator is not used in Ada, and NUL is a valid character.
You can copy the data inside this string object into another standard String object by playing with index computations, but usually you should not: there is no NUL terminator, so you would have to pass the length of the actual data around; the destination string object may be smaller than necessary, so the data would be truncated or an exception raised; and so on.
There are two proper ways to do this. The first is to declare a String object without explicit bounds and let it take its value (and bounds) from the dereferenced access object:
declare
   Fixed_File_Name : String := File_Name.all;
begin
   Free (File_Name);
   --  ... use Fixed_File_Name here ...
end;
or to use a variable-length string (bounded or unbounded):
declare
   Unbounded_File_Name : Ada.Strings.Unbounded.Unbounded_String;
begin
   Unbounded_File_Name :=
     Ada.Strings.Unbounded.To_Unbounded_String (File_Name.all);
   Free (File_Name);   --  Free takes the access object itself, not File_Name.all
   --  ... use Unbounded_File_Name here ...
end;
Use of the fixed string has an important restriction: the string object must be initialized exactly at the point of its declaration, and it is only available inside the corresponding block/subprogram. Using a variable-length string allows the string object to be declared outside the scope of a particular block/subprogram, as sketched below.
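For illustration only (the procedure name and file name are made up), an Unbounded_String can live at an outer scope and be converted back with To_String whenever a plain String is needed:
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;
with Ada.Text_IO;           use Ada.Text_IO;

procedure Scope_Demo is
   File_Name : Unbounded_String;        --  declared before the value is known
begin
   File_Name := To_Unbounded_String ("composition.txt");
   Put_Line (To_String (File_Name));    --  convert back when a String is required
end Scope_Demo;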
Swift dictionaries have the feature that the value returned by a key lookup is always optional. For example, a dictionary with String keys and String values is tricky to access because each returned value is an optional.
An obvious need is to assign x = myDictionary[key], where you are trying to get the dictionary value (a String) into the String variable x.
This is tricky because the value is always returned as an optional String, written as type String?.
So how is it possible to convert the String? value returned by the dictionary lookup into a plain String that can be assigned to a plain String variable?
I guess the problem is that there is no way to know for sure that a dictionary value exists for the key. The key used for the lookup could be anything, so somehow you have to deal with that.
As described in @jnpdx's answer to this SO question (How do you assign a String?-type object to a String-type variable?), there are at least three ways to convert a String? to a String:
import SwiftUI

var x: Double? = 6.0
var a = 2.0

// 1. Explicit nil check, then force-unwrap.
if x != nil {
    a = x!
}

// 2. Optional binding: b is the unwrapped value.
if let b = x {
    a = b
}

// 3. Nil-coalescing: use 0.0 if x is nil.
a = x ?? 0.0
Two key concepts:
Check the optional to see whether it is nil.
If the optional is not nil, go ahead and use the unwrapped value.
In the first method above, if x != nil explicitly checks that x is not nil before the body of the if statement is executed.
In the second method above, if let b = x executes the body as long as x is not nil, binding the unwrapped value to b.
In the third method above, the "nil-coalescing" operator ?? is used: if x is nil, the default value after ?? is assigned to a.
The above code will run in a playground.
Besides the three methods above, there is at least one other method using guard let, but I am uncertain of the syntax.
I believe the three methods above also apply to types other than String? and String.
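For reference, here is a minimal sketch of the guard let form mentioned above (the helper function and its name are made up for illustration):
// Hypothetical helper showing `guard let`: exit early when the optional is nil.
func unwrapOrDefault(_ x: Double?) -> Double {
    guard let value = x else {
        return 0.0        // fall back when x is nil
    }
    return value          // here value is a plain Double
}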
I use a MariaDB/MySQL database to "reverse engineer" the structure of binary data.
A column called "data" contains, let's say, 152-byte-long binary blobs.
In other columns I have listed what kind of information I expect:
e.g. col2: "timestamp" (Unix format), col3: "size1" (uint32), col4: "length" (4-byte float), etc.
All these columns contain NULL when I start.
Now I put forward a thesis: bytes 8:12 of the blob in "data" form a 4-byte int and decode to "size1", and so on.
To evaluate my thesis I want to fill col3/"size1" with the value extracted from the blob in "data".
So what I need is an SQL (MariaDB) UPDATE which extracts the bytes from the blob and stores them in "col3".
One column is called "pulse_max". With this one it works nicely, because I only need 1 byte:
UPDATE dictionary2
SET pulse_max=ascii(SUBSTRING(data, 147, 1));
But how can I "convert"/unpack" larger (2, 4 byte etc.) parts of the blob to int, uint, double, unixtimestamp, .... etc.?
Things like these fail: (column "tracklenght" is a 4 byte uint.)
UPDATE dictionary2 SET tracklength= binary(SUBSTRING(data, 113, 4));
==> Warning: #1366 Incorrect integer value: '\x94L\xFEE' for column 'tracklength' at row 1
UPDATE dictionary2 SET duration_time=CONV(REVERSE(HEX(substring(data,109,4))),16,10);
==> This gives nonsense (my source data is little endian). Maybe the 4 bytes end up in the wrong order? (REVERSE(HEX(...)) reverses individual hex digits rather than whole bytes.)
I KNOW I could do all this with C/Perl/Python etc. (htonl, unpack, etc.). However, those programs do not let me store many data samples and run a mass evaluation of my thesis.
Of course I could use the Python/Perl MariaDB connectors, but I would like to do it directly in SQL and STORE the routines in the database.
Well, I had a few posts about this and deleted them all, because now I have a halfway nice solution. I had expected a built-in function in SQL/MariaDB; that was not the case, so I wrote my own. Here it is, in case you are looking for a similar solution:
DROP FUNCTION IF EXISTS unpackf4;
DELIMITER //
CREATE FUNCTION unpackf4(b BLOB(4))
RETURNS FLOAT DETERMINISTIC
BEGIN
  DECLARE f_value FLOAT;
  DECLARE binstr CHAR(32);
  -- Reverse the little-endian bytes, then expand them into a 32-character bit string.
  SET binstr = LPAD(CONV(HEX(CAST(REVERSE(b) AS CHAR(10000) CHARACTER SET utf8)),16,2),32,'0');
  -- Decode IEEE 754 single precision: sign bit, 8 exponent bits (bias 127), 23 mantissa bits.
  SET f_value =
    POW(-1,CONV(SUBSTRING(binstr,  1, 1),2,10)) *
    POW( 2,CONV(SUBSTRING(binstr,  2, 8),2,10)-127) *
    (1 + CONV(SUBSTRING(binstr, 10,23),2,10)/POW(2,23));
  RETURN f_value;
END; //
DELIMITER ;
With this function one can simply "convert/unpack" a blob column to a 32-bit float (a 64-bit double works analogously):
UPDATE dictionary2 SET tracklength = unpackf4(substring(data,113,4));
Here "data" is the blob column and "tracklength" the float column.
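In the same spirit, a little-endian unsigned 32-bit integer can be unpacked by reversing the bytes first and then reading the hex digits as one base-16 number. A minimal sketch (the function name unpacku4 is mine, and the usage line mirrors the duration_time attempt from the question, untested against your schema):
DROP FUNCTION IF EXISTS unpacku4;
DELIMITER //
CREATE FUNCTION unpacku4(b BLOB(4))
RETURNS INT UNSIGNED DETERMINISTIC
BEGIN
  -- REVERSE turns little endian into big endian; HEX/CONV then read it as one number.
  RETURN CONV(HEX(REVERSE(b)), 16, 10);
END; //
DELIMITER ;

UPDATE dictionary2 SET duration_time = unpacku4(substring(data,109,4));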
The HTTP request header has a 4 KB length limit.
I want to split a string that I need to include in the header according to this limit.
Should I use []byte(str) to split it first and then convert each part back with string([]byte)?
Is there a simpler way to do it?
In Go, a string is really just a sequence of bytes, and indexing a string produces bytes. So you could simply split your string by slicing it into 4 KB chunks.
However, since UTF-8 characters can span multiple bytes, there is the chance that you will split in the middle of a character sequence. This isn't a problem if the split strings will always be joined together again in the same order at the other end before decoding, but if you try to decode each individually, you might end up with invalid leading or trailing byte sequences. If you want to guard against this, you could use the unicode/utf8 package to check that you are splitting on a valid leading byte, like this:
package httputil

import "unicode/utf8"

const maxLen = 4096

func SplitHeader(longString string) []string {
    splits := []string{}

    var l, r int
    for l, r = 0, maxLen; r < len(longString); l, r = r, r+maxLen {
        // Back up until r points at the first byte of a rune, so that no
        // multi-byte character is split across two chunks.
        for !utf8.RuneStart(longString[r]) {
            r--
        }
        splits = append(splits, longString[l:r])
    }
    splits = append(splits, longString[l:])
    return splits
}
Slicing the string directly is more efficient than converting to []byte and back because, since a string is immutable and a []byte isn't, the data must be copied to new memory upon conversion, taking O(n) time (both ways!), whereas slicing a string simply returns a new string header backed by the same array as the original (taking constant time).
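A rough usage sketch (the example string is made up; it just shows that each chunk stays within maxLen bytes and remains valid UTF-8 on its own):
package httputil

import (
    "fmt"
    "strings"
    "unicode/utf8"
)

func ExampleSplitHeader() {
    long := strings.Repeat("héllo ", 2000) // well over 4 KB once encoded as UTF-8
    for i, part := range SplitHeader(long) {
        fmt.Printf("part %d: %d bytes, valid UTF-8: %v\n", i, len(part), utf8.ValidString(part))
    }
}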
I wrote this function to generate unique IDs for my test cases:
func uuid(t *testing.T) string {
    uidCounterLock.Lock()
    defer uidCounterLock.Unlock()
    uidCounter++
    //return "[" + t.Name() + "|" + strconv.FormatInt(uidCounter, 10) + "]"
    return "[" + t.Name() + "|" + string(uidCounter) + "]"
}

var uidCounter int64 = 1
var uidCounterLock sync.Mutex
In order to test it, I generate a bunch of values from it in different goroutines and send them to the main thread, which puts the results in a map[string]int by doing seen[v] = seen[v] + 1. There is no concurrent access to this map; it's private to the main thread.
var seen = make(map[string]int)
for v := range ch {
    seen[v] = seen[v] + 1
    if count := seen[v]; count > 1 {
        fmt.Printf("Generated the same uuid %d times: %#v\n", count, v)
    }
}
When I just cast the uidCounter to a string, I get a ton of collisions on a single key. When I use strconv.FormatInt, I get no collisions at all.
When I say a ton, I mean I just got 1115919 collisions for the value [TestUuidIsUnique|�] out of 2227980 generated values, i.e. 50% of the values collide on the same key. The values are not equal. I do always get the same number of collisions for the same source code, so at least it's somewhat deterministic, i.e. probably not related to race conditions.
I'm not surprised integer overflow in a rune would be an issue, but I'm nowhere near 2^31, and that wouldn't explain why the map thinks 50% of the values have the same key. Also, I wouldn't expect a hash collision to impact correctness, just performance, since I can iterate over the keys in a map, so the values are stored there somewhere.
In the output, all runes printed are 0xEFBFBD. It's the same number of bits as the highest valid unicode code point, but that doesn't really match either.
Generated the same uuid 2 times: "[TestUuidIsUnique|�]"
Generated the same uuid 3 times: "[TestUuidIsUnique|�]"
Generated the same uuid 4 times: "[TestUuidIsUnique|�]"
Generated the same uuid 5 times: "[TestUuidIsUnique|�]"
...
Generated the same uuid 2047 times: "[TestUuidIsUnique|�]"
Generated the same uuid 2048 times: "[TestUuidIsUnique|�]"
Generated the same uuid 2049 times: "[TestUuidIsUnique|�]"
...
What's going on here? Did the go authors assume that hash(a) == hash(b) implies a == b for strings? Or am I just missing something silly? go test -race isn't complaining either.
I'm on macOS 10.13.2, and go version go1.9.2 darwin/amd64.
String conversion of an integer interprets the value as a Unicode code point (a rune). For an invalid rune (for example, a value above the maximum code point 0x10FFFF, or one in the surrogate range) the result is a string containing the Unicode replacement character "�" (U+FFFD, whose UTF-8 encoding is the 0xEFBFBD you are seeing), so all of those counter values end up as the same key.
Use the strconv package to convert an integer to its decimal text form.
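Concretely, the commented-out line in the question is already the fix. A minimal self-contained sketch (the package name is arbitrary), reusing the counter and mutex from the question:
package uuidtest

import (
    "strconv"
    "sync"
    "testing"
)

var uidCounter int64 = 1
var uidCounterLock sync.Mutex

// uuid renders the counter in decimal instead of treating it as a Unicode code point.
func uuid(t *testing.T) string {
    uidCounterLock.Lock()
    defer uidCounterLock.Unlock()
    uidCounter++
    return "[" + t.Name() + "|" + strconv.FormatInt(uidCounter, 10) + "]"
}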