Hexadecimal to decimal in WinDbg

It looks like the default for WinDbg is to display ints in decimal and unsigned ints in hexadecimal.
Is there a way to show all in decimal?
I tried using the n command mentioned here
It gives me a syntax error though:
0:086> n[10]
^ Syntax error in 'n[10]'
Any idea what I am doing wrong?

It seems that you are using square brackets when you shouldn't. On the MSDN page, those square brackets are there to show that the radix argument is optional.
When the argument is left off, the current radix is displayed to you.
0:000> n
base is 10
When you provide the argument (with no square brackets), the current radix is changed and echoed back to you.
0:000> n 16
base is 16
A commonly used trick once the base is set is to use the ? (Evaluate Expression) command to convert numbers to the new base (in this example, base 16).
0:000> ? 0n10
Evaluate expression: 10 = 0000000a
0:000> ? 0y11
Evaluate expression: 11 = 00000003
To convert from hex (base 16) back to decimal:
0:000> ? a
Evaluate expression: 10 = 0000000a
Remember that once the base is set, both input and output are affected, meaning that when you want to enter a number that isn't in the current base, you will need to specify the base, as was done above in the final example. Further reading on how numbers are handled in the MASM-like syntax is available here.
But back to your original question...
Yes, n 10 should be enough to force numbers to be displayed in decimal. If for some reason there is a problem, you can always use the ? command as shown above to perform the conversion.
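Purely as a cross-check of the conversions above, and not a WinDbg feature, here is how the same radix conversions look in Python (a minimal sketch for reference):
# Decimal 10 to hex, like "? 0n10" with the base set to 16
print(hex(10))        # 0xa
# Binary 11 to decimal, like "? 0y11"
print(int("11", 2))   # 3
# Hex a back to decimal, like "? a" with the base set to 16
print(int("a", 16))   # 10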

An extended article describing how WinDbg evaluates expressions (including details on the impact of the n command) is available here:
https://www.osronline.com/article.cfm?id=540

Try using the command:
.enable_long_status 0

Related

Factorial digit sum in APL (Project Euler 20)

First I found +/⍎¨⍕(!8) and it gave me the result 9. But if I do !100, since the number is big, I am not able to get the result.
With ⍎¨⍕(!100) I am getting a syntax error: ⍎SYNTAX ERROR
Is there any other way to solve the problem, or can you suggest some modifications?
!100 is a large number, and when you format its result you get a string representing the number in E notation.
⍕!100 → '9.332621544E157'; when you attempt to eval (⍎) each character, you run into a syntax error since E has no meaning on its own.
There are two ways to split a large integer into its digits:
The first is inverse decode; examples can be found on APLcart:
10⊥⍣¯1!100
This is vulnerable to floating point imprecision, however.
The second and preferred option is to use big from the dfns library, which can be imported with the system function ⎕CY.
'big'⎕CY'dfns'
Examples here
And thankfully the last example covers your exact case! Factorial 100 is ↑×big/⍳100
The final solution to the problem could look like this:
+/⍎¨↑×big/⍳100
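For comparison, the same digit sum can be computed with exact integer arithmetic in Python, which avoids the E-notation formatting problem entirely (a small sketch, not APL):
from math import factorial

# Digit sum of 100!: convert the exact integer to a string and sum its digits
print(sum(int(d) for d in str(factorial(100))))   # 648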

Converting a Gray-Scale Array to a Floating-Point Array

I am trying to read a .tif file in Julia as a floating-point array. With the FileIO & ImageMagick packages I am able to do this, but the array that I get is of type Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}.
I can convert this fixed-point array to a Float32 array by multiplying it by 255 (because of UInt8), but I am looking for a function that does this for any type of FixedPointNumber (e.g. reinterpret() or convert()).
using FileIO
# Load the tif
obj = load("test.tif");
typeof(obj)
# Convert to Float32-Array
objNew = real.(obj) .* 255
typeof(objNew)
The output is
julia> using FileIO
julia> obj = load("test.tif");
julia> typeof(obj)
Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}
julia> objNew = real.(obj) .* 255;
julia> typeof(objNew)
Array{Float32,2}
I have been looking in the docs for quite a while and have not found a function to convert a given fixed-point array to a floating-point array without multiplying it by the maximum value of the integer type.
Thanks for any help.
edit:
I made a small gist to see if the solution by Michael works, and it does. Thanks!
Note: I don't know why, but the real.(obj) .* 255 code does not work (see the gist).
Why not just Float32.()?
using ColorTypes
a = Gray.(convert.(Normed{UInt8,8}, rand(5,6)));
typeof(a)
#Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}
Float32.(a)
The short answer is indeed the one given by Michael: just use Float32.(a) (for grayscale). An alternative is channelview(a), which generally performs channel separation and thus also strips the color information from the array. In the latter case you won't get a Float32 array, because your image is stored with 8 bits per pixel; instead you'll get an N0f8 (= FixedPointNumbers.Normed{UInt8,8}) array. You can read about those numbers here.
Your instinct to multiply by 255 is natural, given how other image-processing frameworks work, but Julia has made some effort to be consistent about "meaning" in ways that are worth taking a moment to think about. For example, in another programming language just changing the numerical precision of an array:
img = uint8(255*rand(10, 10, 3)); % an 8-bit per color channel image
figure; image(img)
imgd = double(img); % convert to double-precision, but don't change the values
figure; image(imgd)
produces the following surprising result:
That second "all white" image represents saturation. In this other language, "5" means two completely different things depending on whether it's stored in memory as a UInt8 vs a Float64. I think it's fair to say that under any normal circumstances, a user of a numerical library would call this a bug, and a very serious one at that, yet somehow many of us have grown to accept this in the context of image processing.
These new types arise because in Julia we've gone to the effort to implement new numerical types (FixedPointNumbers) that act like fractional values (e.g., between 0 and 1) but are stored internally with the same bit pattern as the "corresponding" UInt8 (the one you get by multiplying by 255). This allows us to work with 8-bit data and yet allow values to always be interpreted on a consistent scale (0.0=black, 1.0=white).
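To make the scaling argument concrete outside of Julia, here is a small NumPy sketch (the array names are made up for illustration) contrasting a raw precision change with the normalized 0-to-1 interpretation that N0f8 encodes:
import numpy as np

# A hypothetical 8-bit grayscale image with values 0..255
img8 = np.random.randint(0, 256, size=(10, 10), dtype=np.uint8)

# Changing only the precision keeps the 0..255 scale (the saturation trap above)
raw = img8.astype(np.float64)            # the byte 255 still means 255.0

# Normalizing gives the consistent meaning N0f8 provides: 0.0 = black, 1.0 = white
norm = img8.astype(np.float32) / 255.0   # the byte 255 now means 1.0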

Convert HEX to characters using bitwise operations

Say I've got this value xxx in hex: 007800780078
How can I convert back the hex value to characters using bitwise operations?
Can I?
I suppose you could do it using "bitwise" operations, but it'd probably be a horrendous mess of code as well as being totally unnecessary since ILE RPG can do it easily using appropriate built-in functions.
First, you don't exactly have what's usually thought of as a "hex" value. That is, you're showing a hexadecimal representation of a value, but basic "hex" conversion will not give a useful result. What you're showing seems to be the UCS-2 value for "xxx".
Here's a trivial example that shows a conversion of that hexadecimal string into a standard character value:
d ds
d charField 6 inz( x'007800780078' )
d UCSField1 3c overlay( charField )
d TargetField s 6
d Length s 10i 0
/free
Length = %len( %trim( UCSField1 ));
TargetField = %trim( %char( UCSField1 ));
*inlr = *on;
return;
/end-free
The code has a DS that includes two sub-fields. The first is a simple character field that declares six bytes of memory initialized to x'007800780078'. The second sub-field is declared as data type 'C' to indicate UCS-2, and it overlays the first sub-field. Because it's UCS-2, its size is given as "3" to allow for three characters. (Each character is 16 bits wide.)
The executable statements don't do much, just enough to let you test the converted values. Using debug, you should see that Length comes out to be (3) and TargetField becomes 'xxx'.
The %CHAR() built-in function can be used to convert from UCS-2 to the character encoding used by the program. To go in the opposite direction, use the %UCS2() built-in function.
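For reference, the same idea expressed as a short Python sketch: the hex string is just the big-endian UCS-2/UTF-16 encoding of 'xxx', so a character-set conversion (rather than bit manipulation) recovers the text:
# 0x0078 is the UCS-2 / UTF-16 code unit for 'x'
data = bytes.fromhex("007800780078")
print(data.decode("utf-16-be"))   # xxx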

Calculate binary expression - convert string to binary data

I get a string like this: "000AND111"
I need to calculate this and return the result.
How do I do it in Flex?
Just see this post, thanks to the pingback by #powerlljf3.
I would suggest a three-phase approach.
1- Write a small parser that splits the string up into meaningful tokens (numbers and operators). Since the operators are all literals and the numbers are combinations of 0/1, the parser is pretty easy (the grammar is LL(1)), so regular expressions can really do the work here.
2- After building up the sequence of tokens, which is technically called the parsed expression tree, implement each operator as a specific function (the link to my blog covers a few of the common Boolean algebra operators).
3- Finally, read the tokens from left to right and apply the corresponding function wherever an operator is found (see the sketch after this list).
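As a rough illustration of those three phases in Python rather than Flex/AS3 (the operator set and helper names here are assumptions, not part of the original question), a left-to-right evaluator for strings like "000AND111" could look like this:
import re

# Phase 2: each recognized operator word maps to a bitwise function
OPS = {
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
}

def evaluate(expr, width=3):
    # Phase 1: tokenize into binary literals and operator words
    tokens = re.findall(r"[01]+|AND|OR|XOR", expr)
    # Phase 3: read tokens left to right and apply each operator as it appears
    result = int(tokens[0], 2)
    for op, num in zip(tokens[1::2], tokens[2::2]):
        result = OPS[op](result, int(num, 2))
    return format(result, "0{}b".format(width))

print(evaluate("000AND111"))   # 000
print(evaluate("101OR010"))    # 111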
I would look through this http://www.nicolabortignon.com/as3-bitwise-operations/. It has many examples of binary math that can be used in AS3.

AS3 adding 1 (+1) not working on string cast to Number?

Just learning AS3 for Flex. I am trying to do this:
var someNumber:String = "10150125903517628"; //this is the actual number i noticed the issue with
var result:String = String(Number(someNumber) + 1);
I've tried different ways of putting the expression together, and no matter what I seem to do, the result is always equal to 10150125903517628 rather than 10150125903517629.
Anyone have any ideas? Thanks!
All numbers in JavaScript/ActionScript are effectively double-precision IEEE-754 floats. These use a 64-bit binary number to represent your decimal, and have a precision of roughly 16 or 17 decimal digits.
You've run up against the limit of that format with your 17-digit number. The internal binary representation of 10150125903517628 is no different from that of 10150125903517629, which is why you're not seeing any difference when you add 1.
If, however, you add 2 then you will (should?) see the result as 10150125903517630 because that's enough of a "step" that the internal binary representation will change.
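The same behaviour can be reproduced in Python, whose float type is also an IEEE-754 double (a small illustrative sketch, not AS3):
x = float(10150125903517628)

print(x + 1 == x)   # True: adding 1 rounds back to the same double
print(int(x + 2))   # 10150125903517630: a step of 2 is large enough to register
print(2 ** 53)      # 9007199254740992: integers above this are no longer all exact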
