Say I've got the value 'xxx', represented in hex as 007800780078.
How can I convert the hex value back to characters using bitwise operations?
Can I?
I suppose you could do it using "bitwise" operations, but it'd probably be a horrendous mess of code as well as being totally unnecessary, since ILE RPG can do it easily using appropriate built-in functions.
First, you don't exactly have what's usually thought of as a "hex" value. That is, you're showing a hexadecimal representation of a value, but basic "hex" conversion will not give a useful result. What you're showing seems to be a UCS-2 value for "xxx" (0x0078 is the UCS-2 encoding of 'x').
Here's a trivial example that shows a conversion of that hexadecimal string into a standard character value:
d ds
d charField 6 inz( x'007800780078' )
d UCSField1 3c overlay( charField )
d TargetField s 6
d Length s 10i 0
/free
// UCSField1 overlays the same bytes, so %char() can convert
// the UCS-2 value to the job's character encoding.
Length = %len( %trim( UCSField1 ));
TargetField = %trim( %char( UCSField1 ));
*inlr = *on;
return;
/end-free
The code has a DS that includes two sub-fields. The first is a simple character field that declares six bytes of memory initialized to x'007800780078'. The second sub-field is declared as data type 'C' to indicate UCS-2, and it overlays the first sub-field. Because it's UCS-2, its size is given as "3" to allow for three characters. (Each character is 16 bits wide.)
The executable statements don't do much, just enough to let you test the converted values. Using debug, you should see that Length comes out to be 3 and TargetField becomes 'xxx'.
The %CHAR() built-in function can be used to convert from UCS-2 to the character encoding used by the program. To go in the opposite direction, use the %UCS2() built-in function.
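As a minimal sketch of the opposite direction (untested; the field names are made up), the same fixed-form style would look like:
d UCSVal s 3c
d CharVal s 3
/free
UCSVal = %ucs2( 'xxx' ); // character to UCS-2
CharVal = %char( UCSVal ); // UCS-2 back to character; CharVal = 'xxx'
/end-free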
I deal with lots of mathematical expressions in a certain Julia script and would like to know whether storing such a formula as a String is OK, or whether the Symbol data type is better. I'm thinking about scalability and keeping memory requirements to a minimum. Thanks!
Update: the application involves a machine learning model. Ideally, it should be applicable to big data too, hence the need for scalability.
In a string, each character is stored based on its number of code units, e.g. 1 for ASCII. The same is true for the characters of a Symbol. So that is a wash; do what fits your use best, probably Symbols, since you are manipulating expressions.
An expression like :(x + y) is stored as a list of Any, with space allocated according to the sizeof each item in the expression.
In an expression like :(7 + 4 * 9) versus a string like "7 + 4 * 9" there are two conflicting issues. First, 7 is stored as 1 byte in the string but 8 bytes in the expression, since 64-bit Ints are in play. On the other hand, each whitespace character takes up 1 byte in the string but uses no memory in the expression. And a number like 123.123456789 takes up 13 bytes in the string and 8 in the expression (a 64-bit Float64).
I think that, again, this is close to being even, and depends on the specific strings you are parsing. You could, as you work with the program, store both, compare memory usage of the resulting arrays, and drop one type of storage if you feel you should.
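If you want to measure rather than guess, a quick sketch (the variable names are arbitrary):
s = "7 + 4 * 9"
ex = :(7 + 4 * 9)
sizeof(s)                # bytes of character data in the string
Base.summarysize(ex)     # total bytes reachable from the Expr, including overhead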
I am trying to read a .tif file in Julia as a floating-point array. With the FileIO & ImageMagick packages I am able to do this, but the array that I get is of type Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}.
I can convert this FixedPoint array to a Float32 array by multiplying it by 255 (because UInt8), but I am looking for a function that does this for any type of FixedPointNumber (i.e. reinterpret() or convert()).
using FileIO
# Load the tif
obj = load("test.tif");
typeof(obj)
# Convert to Float32-Array
objNew = real.(obj) .* 255
typeof(objNew)
The output is
julia> using FileIO
julia> obj = load("test.tif");
julia> typeof(obj)
Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}
julia> objNew = real.(obj) .* 255;
julia> typeof(objNew)
Array{Float32,2}
I have been looking through the docs for quite a while and have not found a function to convert a given FixedPoint array to a floating-point array without multiplying it by the maximum value of the integer type.
Thanks for any help.
edit:
I made a small gist to see if the solution by Michael works, and it does. Thanks!
Note: I don't know why, but the real.(obj) .* 255 code does not work (see the gist).
Why not just Float32.()?
using ColorTypes
a = Gray.(convert.(Normed{UInt8,8}, rand(5,6)));
typeof(a)
#Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}
Float32.(a)
The short answer is indeed the one given by Michael: just use Float32.(a) (for grayscale). Another alternative is channelview(a), which generally performs channel separation, thus also stripping the color information from the array. In the latter case you won't get a Float32 array, because your image is stored with 8 bits per pixel; instead you'll get an N0f8 (= FixedPointNumbers.Normed{UInt8,8}) array. You can read about those numbers here.
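If you want the raw bytes rather than a floating-point copy, a sketch along these lines should work (it assumes the ImageCore package is available, and uses a plain N0f8 array for brevity):
using ImageCore, FixedPointNumbers
a = rand(N0f8, 5, 6)          # N0f8 values act like fractions in [0, 1]
f = Float32.(a)               # Float32 copy on the same 0.0-1.0 scale
raw = reinterpret(UInt8, a)   # the underlying bytes, 0-255, without copying
rv = rawview(a)               # the same idea via ImageCore's helper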
Your instinct to multiply by 255 is natural, given how other image-processing frameworks work, but Julia has made some effort to be consistent about "meaning" in ways that are worth taking a moment to think about. For example, in another programming language just changing the numerical precision of an array:
img = uint8(255*rand(10, 10, 3)); % an 8-bit per color channel image
figure; image(img)
imgd = double(img); % convert to double-precision, but don't change the values
figure; image(imgd)
produces the following surprising result:
That second "all white" image represents saturation. In this other language, "5" means two completely different things depending on whether it's stored in memory as a UInt8 vs a Float64. I think it's fair to say that under any normal circumstances, a user of a numerical library would call this a bug, and a very serious one at that, yet somehow many of us have grown to accept this in the context of image processing.
These new types arise because in Julia we've gone to the effort to implement new numerical types (FixedPointNumbers) that act like fractional values (e.g., between 0 and 1) but are stored internally with the same bit pattern as the "corresponding" UInt8 (the one you get by multiplying by 255). This allows us to work with 8-bit data and yet allow values to always be interpreted on a consistent scale (0.0=black, 1.0=white).
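A quick way to see the two "views" of the same bits (a small sketch):
using FixedPointNumbers
x = N0f8(1.0)          # behaves like the fractional value 1.0
reinterpret(x)         # 0xff, the same bit pattern as UInt8 255
Float64(x)             # 1.0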
I am trying to find a straightforward way to store negative values in EEPROM, integer values ranging from -20 to 20. I have been using EEPROM.write and EEPROM.read functions to store strings one character at a time, but I am having trouble with negative numbers. I figure I only need one byte for this value.
It's just a matter of number representation. You just have to use the correct data type to print or use it:
Version 1: int8_t data = EEPROM.read(addr);
Version 2:
byte data = EEPROM.read(addr);
Serial.print((int8_t)data);
EEPROM.write can be used directly with an int8_t value: EEPROM.write(addr, int8_value);
Or, if you want an int, the put/get methods can be used (they even work for structs containing only POD types).
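A minimal sketch putting this together (assumes a standard Arduino EEPROM setup; addr and the value are made up):
#include <EEPROM.h>

const int addr = 0;

void setup() {
  Serial.begin(9600);
  int8_t value = -17;                           // -20..20 fits in one byte
  EEPROM.write(addr, value);                    // stored as the two's-complement pattern 0xEF
  int8_t readBack = (int8_t)EEPROM.read(addr);  // the cast recovers the signed value
  Serial.println((int)readBack);                // prints -17
}

void loop() {}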
It looks like the default for WinDbg is to display ints in decimal and unsigned ints in hexadecimal.
Is there a way to show all in decimal?
I tried using the n command mentioned here
It gives me a syntax error though:
0:086> n[10]
^ Syntax error in 'n[10]'
Any idea what I am doing wrong?
It seems that you are using square brackets when you shouldn't. On the MSDN page, those square brackets are there to show that the radix argument is optional.
When the argument is left off, the current radix is displayed to you.
0:000> n
base is 10
When you provide the argument (with no square brackets) the current radix is changed and echoed back to you.
0:000> n 16
base is 16
A commonly used trick once the base is set is to use the ? (Evaluate Expression) command to convert numbers to the new base (in this example, base 16).
0:000> ? 0n10
Evaluate expression: 10 = 0000000a
0:000> ? 0y11
Evaluate expression: 11 = 00000003
To convert from hex (base 16) back to decimal:
0:000> ? a
Evaluate expression: 10 = 0000000a
Remember that once the base is set, both input and output are affected, meaning that when you want to enter a number that isn't in the current base, you will need to specify the base as was done above in the final example. Further reading on how numbers are handled in the MASM-like syntax is available here.
But back to your original question...
Yes, n 10 should be enough to force numbers to be displayed in decimal. If for some reason there is a problem, you can always use the ? command as shown above to perform the conversion.
Extended article describing how WinDbg evaluates expressions (including details on the impact of the n command) available here:
https://www.osronline.com/article.cfm?id=540
Try using the command:
.enable_long_status 0
I want to convert a decimal number to hexadecimal in embedded C.
In my project, the input to the controller is decimal, and it will be subtracted from a hexadecimal value, so I need a conversion.
I tried to write it, but after converting a number (75, for example) my hexadecimal equivalent comes out as 411. The trouble is that I did not know how to convert a digit like 11 to b; as you know, there is no 11 in hexadecimal, it is b. So please help.
I want to save the converted value in a flag (for subtracting), not print it. I can print a hex digit by simply putting in a condition like:
if (a > 10) printf("b");
but this is not a solution for embedded code.
So please give me a complete solution.
I am not sure what you mean, but this is just a question of interpretation. In memory your integer is just a bunch of 0s and 1s, and you can therefore present that data in a decimal or a hexadecimal way.
If you need it as input for a register, you can just pass it in directly.
Or do I misunderstand your problem?
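To make that concrete, a minimal sketch (the values are hypothetical; the point is that the subtraction itself never needs a "conversion"):
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t input = 75;            /* arrives "in decimal": stored as bits 01001011 */
    uint8_t limit = 0xFF;          /* written "in hex": the same kind of integer */
    uint8_t diff  = limit - input; /* subtract directly; no conversion needed */

    printf("%u\n", (unsigned)diff);   /* 180  : decimal representation */
    printf("0x%X\n", (unsigned)diff); /* 0xB4 : hexadecimal representation */
    return 0;
}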