Octal to decimal conversion with VB6 - math

I have heard about the octal number system lately and I want to learn about it.
I asked my teacher about it, and he told me "it's no longer used, you don't need to learn it" - but no sir, I'm pretty sure it's still in use, so I need to know!
If someone could explain the octal number system to me, show me a way to convert it to decimal (the number system we use in everyday life), and tell me where it is used in real life, it would help me learn a lot - and I could show that teacher that he is wrong and that he must do his job and teach.
I want to do this in VB6, because my teacher usually works in VB6.

You can read more about octal on Wikipedia: http://en.wikipedia.org/wiki/Octal
The octal, or base 8, number system is a common system used with computers. Because of its relationship with the binary system, it is useful in programming some types of computers.
Reading decimal, hexadecimal, and octal strings is straightforward: pass the string to CLng.
Dim value As Long
value = CLng(Text1.Text) ' e.g. "&O400" yields 256, "&HFF" yields 255
Hexadecimal strings must begin with &H, and octal strings must begin with &O.
To convert a value into a decimal, hexadecimal, or octal string representation, use Format$, Hex$, and Oct$ respectively. For example, Oct$(256) returns the octal representation of the value 256 (which is "400").
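For comparison, here is a minimal sketch of the same two conversions in C, using only the standard library (this is an illustration, not VB6 code):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Parse the octal string "400" into a number: 4*64 + 0*8 + 0 = 256. */
    long value = strtol("400", NULL, 8);
    printf("%ld\n", value);   /* prints 256 */

    /* Format 256 back as octal - the counterpart of Oct$(256). */
    printf("%lo\n", 256L);    /* prints 400 */
    return 0;
}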

Related

Recognising hash/checksum

Hello everyone,
In one of my projects I've found that data is encrypted with an algorithm I have never met before. It converts an input string into a 10-symbol numeric string. It's definitely not one of the popular hashes (md5, sha1, etc.) or checksums (crc16, crc32, etc.); I've checked all the ones I know.
Example:
Input string: "davidc"; output string: "2172453193".
The length of the output string is fixed at 10 symbols, and it contains only digits ("0123456789"). That is all the detail I can add =/
Maybe someone has met something similar, or knows the algorithm - you would save me a lot of time.
With love <3

Is 192.056.2.01 a valid representation of an IPv4 address?

I'm writing some code to convert an IPv4 address stored in a string to a custom data type (a class with 4 integers, in this case).
I was wondering if I should accept addresses like the one in the title, or only addresses with no leading zeros. Let's look at an example.
These two addresses mean the same thing to us humans, and Windows network configuration, for example, accepts both:
192.56.2.1 and 192.056.2.01
But I was wondering whether the second one is actually correct or not. I mean, according to the RFC, is the second address valid?
Thanks in advance.
Be careful: inet_addr(3) is one of Unix's standard APIs for translating a textual representation of an IPv4 address into an internal representation, and it interprets 056 as an octal number:
http://pubs.opengroup.org/onlinepubs/9699919799/functions/inet_addr.html
All numbers supplied as parts in IPv4 dotted decimal notation may be decimal, octal, or hexadecimal, as specified in the ISO C standard (that is, a leading 0x or 0X implies hexadecimal; otherwise, a leading '0' implies octal; otherwise, the number is interpreted as decimal).
Its younger siblings, inet_ntop(3) and getaddrinfo(3), behave the same way:
http://pubs.opengroup.org/onlinepubs/9699919799/functions/inet_ntop.html
http://pubs.opengroup.org/onlinepubs/9699919799/functions/getaddrinfo.html
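A minimal sketch to see this behaviour for yourself (assuming a POSIX system with <arpa/inet.h>):

#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
    /* inet_addr() follows C integer-literal rules: a leading 0 means
       octal, so "056" is parsed as 46 and "01" as 1. */
    struct in_addr a;
    a.s_addr = inet_addr("192.056.2.01");
    printf("%s\n", inet_ntoa(a));   /* prints 192.46.2.1 - not 192.56.2.1 */
    return 0;
}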
Summary
Although textual representations of IP addresses like 192.056.2.01 might be accepted on all platforms, different OSes interpret them differently.
That alone would be reason enough for me to avoid this kind of representation.
Pros
In decimal notation, 056 equals 56, so why not?
Cons
The 0XX format is commonly used for octal notation.
Whatever you decide, just put it in your documentation and it will be OK :)
Whether it is correct or not depends on your implementation.
As you mentioned, Windows considers it correct because it strips any leading zeros when it resolves the IP.
So if your program applies the same logic - e.g. storing each part of the IP in your 4-integer class without the leading zeros - it will be correct for your case too.
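If you do decide to reject leading zeros, a strict parser is short to write. Here is a minimal C sketch (parse_ipv4_strict is a made-up helper illustrating one possible policy, not a standard function):

#include <stdbool.h>
#include <stdio.h>

/* Accept exactly four dot-separated decimal octets, each 0-255,
   with no leading zeros ("0" on its own is still allowed). */
static bool parse_ipv4_strict(const char *s, unsigned out[4])
{
    for (int i = 0; i < 4; i++) {
        if (s[0] < '0' || s[0] > '9')
            return false;                 /* each part starts with a digit */
        if (s[0] == '0' && s[1] >= '0' && s[1] <= '9')
            return false;                 /* reject 056, 01, 007, ... */
        unsigned v = 0;
        while (*s >= '0' && *s <= '9') {
            v = v * 10 + (unsigned)(*s - '0');
            if (v > 255)
                return false;             /* octet out of range */
            s++;
        }
        out[i] = v;
        if (i < 3) {
            if (*s != '.')
                return false;             /* parts are dot-separated */
            s++;
        }
    }
    return *s == '\0';                    /* no trailing characters */
}

int main(void)
{
    unsigned o[4];
    printf("%d\n", parse_ipv4_strict("192.56.2.1", o));    /* 1 (accepted) */
    printf("%d\n", parse_ipv4_strict("192.056.2.01", o));  /* 0 (rejected) */
    return 0;
}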
Textual Representation of IPv4 and IPv6 Addresses is an “Internet-Draft”, which, I guess, is like an RFC wanna-be. (Also, it expired a decade ago, on 2005-08-23, and apparently has not been reissued, so it’s not even close to being official.) Anyway, in Section 2: History, it says:

The original IPv4 “dotted octet” format was never fully defined in any RFC, so it is necessary to look at usage, rather than merely find an authoritative definition, to determine what the effective syntax was. The first mention of dotted octets in the RFC series is … four dot-separated parts, each of which consists of “three digits representing an integer value in the range 0 through 255”. A few months later, [[IPV4-NUMB][3]] … used dotted decimal format, zero-filling each encoded octet to three digits.

⋮

Meanwhile, a very popular implementation of IP networking went off in its own direction. 4.2BSD introduced a function inet_aton(), … [which] allowed octal and hexadecimal in addition to decimal, distinguishing these radices by using the C language syntax involving a prefix “0” or “0x”, and allowed the numbers to be arbitrarily long. The 4.2BSD inet_aton() has been widely copied and imitated, and so is a de facto standard for the textual representation of IPv4 addresses. Nevertheless, these alternative syntaxes have now fallen out of use … [and] all the forms except for decimal octets are seen as non-standard (despite being quite widely interoperable) and undesirable.

So, even though [POSIX defines the behavior of inet_addr][4] to interpret a leading zero as octal (and a leading “0x” as hex), it may be safest to avoid it.

P.S. [RFC 790][3] has been obsoleted by [RFC 1700][5], which uses decimal numbers of one, two, or three digits, without leading zeroes.
[3]: https://www.rfc-editor.org/rfc/rfc790 'the "Assigned Numbers" RFC'
[4]: http://pubs.opengroup.org/onlinepubs/9699919799/functions/inet_addr.html
[5]: https://www.rfc-editor.org/rfc/rfc1700

What is the use of hexadecimal values in programming?

This is something I have been thinking about while reading programming books, and in computer science class at school, where we learned how to convert decimal values into hexadecimal.
Can someone please tell me what the advantages of using hexadecimal values are, and why we use them in programming?
Thank you.
In many cases (e.g. bit masks) you need to work in binary, but binary is hard to read because of its length. Since hexadecimal values translate to and from binary much more easily than decimal values do, you can think of hex as a kind of shorthand notation for binary.
It certainly depends on what you're doing.
It comes as an extension of base 2, which you probably are familiar with as essential to computing.
Check this out for a good discussion of several applications:
https://softwareengineering.stackexchange.com/questions/170440/why-use-other-number-bases-when-programming/
Each hexadecimal digit corresponds 1:1 to a pattern of 4 bits. With experience, you can map them from memory: e.g. 0x8 = 1000, 0xF = 1111, and correspondingly 0x8F = 10001111.
This is a convenient shorthand wherever the bit patterns matter, e.g. in bit maps or when working with I/O ports. Visualizing the bit pattern of the decimal value 169 is, by comparison, much harder.
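A small C sketch of that bit-mask use case (the STATUS_* names and the register value are invented for illustration):

#include <stdio.h>

/* Each hex digit spells out one nibble: 0x01 = 0000 0001, 0x80 = 1000 0000. */
#define STATUS_READY 0x01
#define STATUS_ERROR 0x80

int main(void)
{
    unsigned char reg = 0x8F;               /* binary 1000 1111 */
    if (reg & STATUS_ERROR)
        printf("error bit is set\n");
    /* Mask off the high 4 bits to keep only the low nibble. */
    printf("low nibble: 0x%X\n", (unsigned)(reg & 0x0F));
    return 0;
}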
A byte consists of 8 binary digits and is the smallest piece of data that computers normally work with. All other variables a computer works with are constructed from bytes. For example, a single character can be stored in a single byte, and a 32-bit integer consists of 4 bytes.
As bytes are so fundamental we want a way to write down their value as neatly and efficiently as possible. One option would be to use binary, but then we would need a lot of digits. This takes up a lot of space and can be confusing when many numbers are written in sequence:
200 201 202 == 11001000 11001001 11001010
Using hexadecimal notation, we can write every byte using just two digits:
200 == C8
Also, as 16 is a power of 2, it is easy to convert between hexadecimal and binary representations in your head. This is useful as sometimes we are only interested in a single bit within the byte. As a simple example, if the first digit of a hexadecimal representation is 0 we know that the first four binary digits are 0.
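A quick C sketch of the two-hex-digits-per-byte idea (standard printf formatting only):

#include <stdio.h>

int main(void)
{
    /* Each byte prints as exactly two hex digits. */
    unsigned char bytes[] = { 200, 201, 202 };
    for (int i = 0; i < 3; i++)
        printf("%02X ", (unsigned)bytes[i]);   /* prints: C8 C9 CA */
    printf("\n");
    return 0;
}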

Should I use UTF-8, UTF-16 or UTF-32 for my multilingual CMS?

Besides the difference in how characters are stored, are there any special characters in any language that UTF-32 can display and UTF-8 cannot?
All UTF encodings can represent the same range of code points (0 to 0x10FFFF). So, the same characters can be encoded by any of them.
Whether they can be "displayed" is an entirely different question: that has nothing to do with the encoding and is a function of the font used. I am not sure that any font has glyphs for every single Unicode code point. But I assume you meant "represented".
They do vary in how many bytes they need to represent a given string. UTF-8 is almost always the shortest for non-Asian languages; for Asian languages, UTF-16 might win (I haven't really "benchmarked"). I can't imagine a realistic case where UTF-32 would be optimal.
Is there any character one of them can't represent?
In theory: No.
All of those formats can represent all Unicode code points.
In practice: Depends.
The Windows API uses UCS-2 (which is pretty much UTF-16 without surrogate pairs) and doesn't always handle surrogates correctly. So you might want to use UTF-16 to have your program behave as "normally" as possible compared to other programs, instead of truncating high-ranging UTF-32 code points manually.
Anything else?
Yes: Use UTF-8!
It has no byte-order variants, so it avoids endianness issues, which are a pain in the rear.
Of course, if you're on Windows, you will need to convert your strings to UTF-16 before passing them to the API.
UTF-8, UTF-16 and UTF-32 can all represent every Unicode code point. So no, there are no special characters that can be represented in UTF-32 but not in UTF-8.
1) UTF-8 is backward compatible with ASCII for plain English characters; this is an advantage when your content is mostly English.
2) UTF-8 saves network bandwidth when you have more ASCII characters than non-English characters.
3) UTF-16 can save storage space when most of your characters are non-English.
I suggest using UTF-8, based on point 1 above.
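To make the size trade-off concrete, here is a minimal sketch comparing how many bytes the same five-character string needs in each encoding, assuming a C11 compiler (for the u8/u/U string-literal prefixes; \u00E9 is "é"):

#include <stdio.h>

int main(void)
{
    /* Subtract the terminator's size so only payload bytes are counted. */
    printf("UTF-8 : %zu bytes\n", sizeof(u8"h\u00E9llo") - 1);  /* 6: 'é' takes 2 bytes */
    printf("UTF-16: %zu bytes\n", sizeof(u"h\u00E9llo") - 2);   /* 10: 2 bytes per char */
    printf("UTF-32: %zu bytes\n", sizeof(U"h\u00E9llo") - 4);   /* 20: 4 bytes per char */
    return 0;
}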

Assembler memory address representation

I'm trying to get into assembler and I often come across numbers in the following form:
org 7c00h ; the BIOS loads the boot sector at linear address 7C00h
; initialize the stack:
mov ax, 07c0h
mov ss, ax
mov sp, 03feh ; top of the stack.
7c00h, 07c0h, 03feh - What is the name of this number notation? What do they mean? Why are they used over "normal" decimal numbers?
It's hexadecimal, the numeral system with 16 digits: 0-9 and A-F. Memory addresses are given in hex because it's shorter and easier to read, and because the numbers that represent memory locations don't mean anything special to humans, so there is no sense in writing them out as long decimal numbers. I would also guess that, somewhere in the past, someone had to type addresses in by hand, so hex might as well have started there.
Worth noting also: 0:7C00 is the boot sector load address.
Further worth noting: 07C0:03FE is the same address as 0:7FFE, due to the way segmented addressing works.
This guy has left himself a 510-byte stack (he made the very typical off-by-two error in setting up the boot sector's stack).
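A tiny C sketch of the segment arithmetic behind that claim (real-mode linear address = segment * 16 + offset; nothing platform-specific):

#include <stdio.h>

int main(void)
{
    /* 07C0:03FE -> linear 07FFE, the same byte as 0000:7FFE. */
    unsigned seg = 0x07C0, off = 0x03FE;
    printf("%04X:%04X -> %05X\n", seg, off, (seg << 4) + off);
    return 0;
}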
These are numbers in hexadecimal notation, i.e. in base 16, where A to F have the digit values 10 to 15.
One advantage is that there is a more direct conversion to binary numbers. With a little bit of practice it is easy to see which bits in the number are 1 and which are 0.
Another is that many numbers used internally, such as memory addresses, are round numbers in hexadecimal, i.e. they contain a lot of zeros.
