The Tcl expr function supports arguments written in hex notation: operands which begin with 0x are treated as integers written in hex form.
However, the return value of expr is always in decimal form: expr 0xA + 0xA returns 20, not 0x14.
Is there a way to tell expr to return the hex representation?
Is there a Tcl function which converts decimal representation to hex?
The format command is what you're after:
format 0x%x [expr {0xa + 0xa}] ;# ==> 0x14
I'd like to elaborate on glenn's post to make things clearer for Vahagn.
expr does not return its result in one representation or another; instead, it returns a value in some suitable internal format (an integer, a big integer, a floating-point value, etc.). What you see in your testing is just the Tcl interpreter converting what expr returned into an appropriate textual form, using a default conversion to a string which, for integers, naturally uses base 10.
This conversion takes place in your case solely because you wanted to display the value returned by expr, and displaying (any) values naturally converts them to strings when they are "printed" to a terminal, to a tkcon window, and so on.
By using format you enforce whatever string representation you want instead of the default one. Since format already returns a value which is internally a string, no conversion takes place when it's printed.
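For comparison, here is a minimal Python sketch of the same idea (Python is used only as an analogy; the variable name is made up): the arithmetic yields a plain integer, and hex is just one way of rendering it as text.
total = 0xA + 0xA           # internally just an integer with value 20
print(total)                # default string form: 20
print(format(total, '#x'))  # explicit hex rendering: 0x14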
Consider the following code in built-in-library-tests.robot:
*** Test Cases ***
Use "Convert To Hex"
    ${hex_value} =    Convert To Hex    255    base=10    prefix=0x    # Result is 0xFF
    # Question: How does the following statement work step by step?
    Should Be True    ${hex_value}==${0xFF}    # Is ${0xFF} considered by Robot a string or an integer value in base 16?
    # To answer my own question, here is a hypothesis:
    # For Python to execute the expression:
    #     Should Be True    a_python_expression_in_a_string_without_quotes    # i.e. 0xFF==255
    # To reach that target, I think of a 2-step solution:
    # STEP 1: When a variable is used in the expression with the normal ${hex_value} syntax, its value is replaced before the expression is evaluated.
    # This means that the value used in the expression will be the string representation of the variable value, not the variable value itself.
    Should Be True    0xFF==${0xFF}
    # STEP 2: When the hexadecimal value 0xFF is given in ${} decoration, Robot converts the value to its
    # integer representation 255 and puts the string representation of 255 into the expression.
    Should Be True    0xFF==255
The test above passes with all its steps. I want to check with the community: is my 2-step hypothesis correct? Does Robot go through exactly these steps before evaluating the final expression 0xFF==255 in Python?
Robot receives the expression as the string ${hex_value}==${0xFF}. It then performs variable substitution, yielding the string 0xFF==255. This string is then passed to Python's eval function.
The reason for the right hand side being 255 is described in the user guide:
It is possible to create integers also from binary, octal, and hexadecimal values using 0b, 0o and 0x prefixes, respectively. The syntax is case insensitive.
${0xFF} gets replaced with 255, and ${hex_value} gets substituted with whatever is in that variable. In this case, that variable contains the four-character string 0xFF.
Thus, ${hex_value}==${0xFF} gets converted to 0xFF==255, and that gets passed to eval as a string.
In other words, it's exactly the same as if you had typed eval("0xFF==255") at a Python interactive prompt.
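A minimal Python sketch of the two stages described above; this is a simplification for illustration, not Robot's actual implementation:
# Stage 1: variable substitution (simplified): ${hex_value} and ${0xFF} become text.
hex_value = "0xFF"                                     # what Convert To Hex returned
expression = "${hex_value}==${0xFF}"
expression = expression.replace("${hex_value}", hex_value)
expression = expression.replace("${0xFF}", str(0xFF))  # ${0xFF} becomes "255"
print(expression)                                      # 0xFF==255

# Stage 2: the substituted string is evaluated as a Python expression.
print(eval(expression))                                # True, since 0xFF is the integer 255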
As a newcomer to Julia this month, Sept. 2018, I am just getting used to the initially unfamiliar "@" symbol for macros and the "!" suffix for functions that mutate their arguments. Am I right to assume that these are merely stylistic symbols for humans to read, and that they do not really provide any information to the compiler?
I bring this up in the context of the following code that does not seem to match the style of a macro, a function, or anything else in Julia I am aware of. I am specifically asking about big"1234" below:
julia> big"1234" # big seems to be neither a macro or a function.
1234
julia> typeof(big"1234")
BigInt
julia> typeof(BigInt(1234))
BigInt
My question is: What is big in big"1234"?
Edit: I think I got my answer based on a comment at https://discourse.julialang.org/t/bigfloat-promotion-rules-and-constants-in-functions/14573/4
"Note that because decimal literals are converted to floating point numbers when parsed, BigFloat(2.1) may not yield what you expect. You may instead prefer to initialize constants from strings via parse, or using the big string literal.
julia> BigFloat(2.1)
2.100000000000000088817841970012523233890533447265625
julia> big"2.1"
2.099999999999999999999999999999999999999999999999999999999999999999999999999986"
Thus, based on the above comment, big in big"1234" is a "big string literal."
Edit 2: The above is a start at the answer, but the accepted answer below is much more complete.
These are Non-Standard String Literals. They tell the compiler that xyz"somestring" should be parsed via a macro named @xyz_str.
The difference between BigFloat(2.1) and big"2.1" is that the former converts the standard Float64 representation of the "numeric" literal 2.1 to BigFloat, while the latter parses the string "2.1" directly (without interpreting it as a numeric literal) with the macro @big_str to compute the BigFloat representation.
You can also define your own Non-Standard String Literals. LaTeXStrings.jl, for example, uses them to make it easier to type LaTeX equations.
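The same distinction exists outside Julia; for instance, Python's decimal module (used here purely as an analogy) shows the difference between converting an already rounded float and parsing the digits from a string:
from decimal import Decimal

print(Decimal(2.1))    # 2.100000000000000088817841970012523233890533447265625 (the float was rounded first)
print(Decimal("2.1"))  # 2.1 (the string's digits are used directly)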
Please take a look at: https://docs.julialang.org/en/v1/manual/metaprogramming/#Non-Standard-String-Literals-1
Using Julia, I'd like to reliably convert any type into type String. There seem to be two ways to do the conversion in v0.5: either the string function or the String constructor. The problem is that you need to choose the right one depending upon the input type.
For example, typeof(string(1)) evaluates to String, but String(1) throws an error. On the other hand, typeof(string(SubString{String}("a"))) evaluates to SubString{String}, which is not a subtype of String. We instead need to do String(SubString{String}("a")).
So it seems the only reliable way to convert any input x to type String is via the construct:
String(string(x))
which feels a bit cumbersome.
Am I missing something here?
You should rarely need to explicitly convert to String. Note that even if your type definitions have String fields, or if your arrays have concrete element type String, you can still rely on implicit conversion.
For instance, here are examples of implicit conversion:
type TestType
    field::String
end

obj = TestType(split("x y")[1])       # construct TestType with a SubString
obj.field                             # the String "x"

obj.field = SubString("Hello", 1, 3)  # assign a SubString
obj.field                             # the String "Hel"
I was wondering how to do this kind of encoding and decoding in R. In Python, we can use ord('a') and chr(97) to transform a letter into a number or a number into a letter. Do you know of any similar functions in R? Thank you!
For example, in Python:
>>>ord("a")
97
>>>ord("A")
65
>>>chr(97)
'a'
>>>chr(90)
'Z'
FYI:
ord(c) in Python
Given a string of length one, return an integer representing the Unicode code point of the character when the argument is a unicode object, or the value of the byte when the argument is an 8-bit string. For example, ord('a') returns the integer 97, ord(u'\u2020') returns 8224. This is the inverse of chr() for 8-bit strings and of unichr() for unicode objects. If a unicode argument is given and Python was built with UCS2 Unicode, then the character’s code point must be in the range [0..65535] inclusive; otherwise the string length is two, and a TypeError will be raised.
chr(i) in Python
Return a string of one character whose ASCII code is the integer i. For example, chr(97) returns the string 'a'. This is the inverse of ord(). The argument must be in the range [0..255], inclusive; ValueError will be raised if i is outside that range. See also unichr().
You're looking for utf8ToInt and intToUtf8
utf8ToInt("a")
[1] 97
intToUtf8(97)
[1] "a"
What causes certain characters to be blank when using XOR encryption? Furthermore, how can this be compensated for when decrypting?
For instance:
....
void basic_encrypt(char *to_encrypt) {
    while (*to_encrypt) {
        *to_encrypt = *to_encrypt ^ 20;  /* XOR each byte with the single-byte key 20 */
        to_encrypt++;
    }
}
will return "nothing" for the character k. Clearly, character decay is problematic for decryption.
I assume this is caused by the bit operator, but I am not very good with binary, so I was wondering if anyone could explain.
Is it converting an element, k in this case, to some spaceless ASCII character? Can this be compensated for by choosing some value x with y < x < z, where x is the XOR key?
Lastly, if it hasn't been compensated for, is there a realistic decryption strategy for filling in blanks besides guess and check?
'k' has the ASCII value 107 = 0x6B. 20 is 0x14, so
'k' ^ 20 == 0x7F == 127
if your character set is ASCII-compatible. 127 is DEL in ASCII, which is a non-printable character, so it won't be displayed if you print it out.
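A quick Python check of the arithmetic (Python is used here only to verify the byte values; the same holds in C):
print(ord('k'))            # 107
print(ord('k') ^ 20)       # 127
print(hex(ord('k') ^ 20))  # 0x7f, the DEL control character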
You will have to know the difference between bytes and characters to understand what is happening. On the one hand, you have the C char type, which is simply a representation of a byte, not a character.
In the old days each character was mapped to one byte or octet value in a character encoding table, or code page. Nowadays we have encodings that take more bytes for certain characters, e.g. UTF-8, or even encodings that always take more than one byte, such as UTF-16. The last two are Unicode encodings, which means that each character has a certain numeric value and the encoding is used to encode this number into bytes.
Many computers will interpret bytes as ISO/IEC 8859-1 (Latin-1), sometimes extended by Windows-1252. These code pages have holes for control characters, or byte values that are simply not used. Now it depends on the runtime system how these values are handled. Java, by default, substitutes a ? character in place of the missing character. Other runtimes will simply drop the value or, of course, execute the control code. Some terminals may use the ESC control code to set the color or to switch to another code page (making a mess of the screen).
This is why ciphertext should be converted to another encoding, such as hexadecimal or Base64. These encodings make sure that the result is readable text. That takes care of the ciphertext. You will have to choose a character set for your plaintext too, e.g. simply perform ASCII or UTF-8 encoding before encryption.
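As a sketch of that advice in Python (the plaintext and the single-byte key 20 are just examples):
import base64

key = 20
plaintext = "kettle".encode("utf-8")             # pick an encoding for the plaintext first
ciphertext = bytes(b ^ key for b in plaintext)   # XOR every byte with the key

# Render the ciphertext in a printable form instead of writing raw bytes.
print(ciphertext.hex())                          # 7f7160607871 (hex digits are always printable)
print(base64.b64encode(ciphertext).decode("ascii"))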
Getting a zero value from encryption does not matter, because once you re-XOR with the same XOR key you get the original value back:
value == key (the problematic case)
value XOR key == 0 [encryption produces a zero byte]
(value XOR key) XOR key == value [decryption still recovers the original]
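In Python terms (hypothetical byte values; Python is used only to illustrate the arithmetic):
key = 20
value = 20                   # a plaintext byte that happens to equal the key
encrypted = value ^ key      # 0: the "blank" zero byte in the ciphertext
decrypted = encrypted ^ key  # 20: re-XORing with the key restores the original
print(encrypted, decrypted)  # 0 20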
If you're using a zero-terminated string mechanism, then you have two main strategies for preventing 'character degradation':
store the length of the string before encryption and decrypt exactly that number of bytes afterwards (sketched below), or
check for a zero character after decoding each character.
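A minimal Python sketch of the first strategy, carrying the length alongside the data instead of relying on a terminator (the helper names are made up for illustration):
key = 20

def encrypt_with_length(data: bytes):
    # Record the length up front, then XOR every byte, even those that become 0.
    return len(data), bytes(b ^ key for b in data)

def decrypt(length, data: bytes) -> bytes:
    # Decrypt exactly `length` bytes; never stop early at a zero byte.
    return bytes(b ^ key for b in data[:length])

length, ciphertext = encrypt_with_length(b"k\x14z")  # the 0x14 byte encrypts to 0
print(decrypt(length, ciphertext))                   # b'k\x14z'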