How to count bits using MARIE assembly language?

I'm currently working on a project where I use the MARIE assembly language to implement a version of Hamming code. My plan so far is to first have the user input the 8 data bits, and then have the program output the correct 12-bit code word.
In order to find the correct value for each parity bit, I thought the easiest way might be to load each data bit that the parity bit covers and count the number of 1s. However, I have yet to find a way to make MARIE count bits. I know there is no direct instruction for it, but does anyone have advice on which instruction(s) might lead me towards an answer? I apologize in advance if clarity is lacking; if so, please let me know so I can edit my question accordingly.
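MARIE has no bit-counting instruction, but if each data bit is stored in its own memory word as 0 or 1 (which your input scheme suggests), counting 1s reduces to a chain of Load/Add instructions, and the parity value is the count modulo 2, which repeated Subt can compute. A sketch of that logic in C; the array names and covered-bit positions are illustrative, not prescribed:

#include <stdio.h>

/* Even parity over a subset of data bits, each stored as 0 or 1 in its
 * own word, mirroring what MARIE can do: Load the first bit, Add the
 * rest, then reduce the count modulo 2 (repeated Subt of 2 in MARIE). */
int parityOver(const int dataBits[8], const int covered[], int nCovered) {
    int count = 0;
    for (int i = 0; i < nCovered; i++)
        count += dataBits[covered[i]];   /* MARIE: Load bit / Add bit */
    return count % 2;                    /* odd count -> parity bit is 1 */
}

int main(void) {
    int dataBits[8] = {1, 0, 1, 1, 0, 0, 1, 0};
    /* Data bits covered by parity bit P1 in the usual (12,8) Hamming
     * layout; adjust to match your own bit numbering. */
    int coveredByP1[] = {0, 1, 3, 4, 6};
    printf("P1 = %d\n", parityOver(dataBits, coveredByP1, 5));
    return 0;
}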

Related

How to find out only one digit in an int number?

I'm new, so if this question was already asked (I didn't find it scrolling through the list of results, though), please send me the link.
I got a math quiz and I'm too lazy to go through all the possibilities, so I thought I could write a program instead. I know a bit about programming, but not much.
Is it possible (and in what programming language, and how) to read only one digit, e.g. at the 3rd position, in an integer?
And how is an integer actually stored, in a kind of array?
Thanks!
You can get rid of any lower-valued digits (the ones and tens if you only want the hundreds) by dividing with truncation: 1234 / 100 is 12 in most languages if you are doing integer division.
You can get rid of any higher-valued digits by taking the modulus: 12 % 10 is 2 in many languages; just find out how the modulus is written in yours. By "modulus" I mean "divide and keep only the remainder", i.e. it is the opposite of truncating division: what is lost by the truncation is exactly the result of the modulus.
The alternative is to NOT treat the input as a number at all, but as text. That way it is often easy to ignore the last two characters and all leading characters.
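Combining the two steps in C, picking the hundreds digit of 1234 as in the example above:

#include <stdio.h>

int main(void) {
    int n = 1234;
    /* Strip the lower digits with integer division, then strip the
     * higher digits with the modulus. */
    int hundreds = (n / 100) % 10;   /* 1234 / 100 = 12; 12 % 10 = 2 */
    printf("%d\n", hundreds);        /* prints 2 */
    return 0;
}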

Human readable alternative for UUIDs

I am working on a system that makes heavy use of pseudonyms to make privacy-critical data available to researchers. These pseudonyms should have the following properties:
1. They should not contain any information (e.g. time of creation, relation to other pseudonyms, encoded data, …).
2. It should be easy to create unique pseudonyms.
3. They should be human readable. That means they should be easy for humans to compare, copy, and understand when read out aloud.
My first idea was to use UUID4. They are quite good on (1) and (2), but not so much on (3).
A variant is to encode UUIDs with a wider alphabet, resulting in shorter strings (see for example shortuuid). But I am not sure whether this actually improves readability.
Another approach I am currently looking into is a paper from 2005 titled "An optimal code for patient identifiers" which aims to tackle exactly my problem. The algorithm described there creates 8-character pseudonyms with 30 bits of entropy. I would prefer to use a more widely reviewed standard though.
Then there is also the git approach: only display the first few characters of the actual pseudonym. But this would mean that a pseudonym could lose its uniqueness after some time.
So my question is: Is there any widely-used standard for human-readable unique ids?
Not aware of any widely-used standard for this. Here’s a non-widely-used one:
Proquints
https://arxiv.org/html/0901.4016
https://github.com/dsw/proquint
A UUID4 (128 bits) would be converted into 8 proquints. If that's too much, you can take the last 64 bits of the UUID4 (i.e. just take 64 random bits). This doesn't make it magically lose uniqueness; it only increases the likelihood of collisions, which was non-zero to begin with, and which you can estimate mathematically to decide whether it's still OK for your purposes.
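For reference, a sketch of the proquint encoding of one 16-bit word in C, following the consonant and vowel alphabets from the proposal:

#include <stdint.h>
#include <stdio.h>

/* Encode one 16-bit word as a five-letter proquint
 * (consonant-vowel-consonant-vowel-consonant). */
void proquint16(uint16_t w, char out[6]) {
    static const char con[] = "bdfghjklmnprstvz"; /* 16 choices -> 4 bits */
    static const char vow[] = "aiou";             /*  4 choices -> 2 bits */
    out[0] = con[(w >> 12) & 0xF];
    out[1] = vow[(w >> 10) & 0x3];
    out[2] = con[(w >> 6) & 0xF];
    out[3] = vow[(w >> 4) & 0x3];
    out[4] = con[w & 0xF];
    out[5] = '\0';
}

int main(void) {
    char s[6];
    proquint16(0x7F00, s);
    printf("%s\n", s);   /* "lusab": first half of the paper's 127.0.0.1 example */
    return 0;
}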
Here you go: UUID Readable
Generates easy-to-remember, readable UUIDs that are Shakespearean and grammatically correct sentences.
This article suggests using the first few characters of a SHA-256 hash, similar to what git does. Name-based (version 5) UUIDs are themselves derived from SHA-1 hashes, so this is not all that different. The tradeoff between properties (2) and (3) is in the number of characters.
With d being the number of hex digits kept, you get 2 ** (4 * d) identifiers in total, but the first collision is expected after about 2 ** (2 * d) of them (the birthday bound).
The big question is really not about the kind of identifier you use, it is how you handle collisions.
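To put numbers on that tradeoff, the birthday approximation p ≈ 1 - e^(-n^2 / 2N) estimates the collision probability after n draws from N possible ids; a small sketch:

#include <math.h>
#include <stdio.h>

/* Probability of at least one collision after n random draws
 * from N possible identifiers (birthday approximation). */
double collision_prob(double n, double N) {
    return 1.0 - exp(-n * n / (2.0 * N));
}

int main(void) {
    for (int d = 6; d <= 12; d += 2) {   /* d = hex digits kept */
        double N = pow(16.0, d);         /* 2 ** (4 * d) ids    */
        printf("d=%2d: p(10k ids)=%.4f  p(1M ids)=%.6f\n",
               d, collision_prob(1e4, N), collision_prob(1e6, N));
    }
    return 0;
}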

Driving Dual 7 Segment Display Using Arduino

Okay, so I am trying to drive a 7-segment display in order to show temperature in degrees Celsius. I have two displays, plus one extra LED to indicate positive and negative numbers.
My problem lies in the software. I have to find some way of driving these displays, which means converting a given integer into the relevant voltages on the pins; for each of the two displays I therefore need to know the tens digit and the ones digit of the number.
So far, what I have come up with will not be very nice for an Arduino, as it relies on division.
int tens = numberToDisplay / 10;  // tens digit
int ones = numberToDisplay % 10;  // ones digit
I have admittedly not tested this yet, but I think I can assume that for a microcontroller with limited division capabilities this is not an optimal solution.
I have racked my brain and looked around for a solution using addition/subtraction/bitwise operations, but I cannot think of one at all. This division is the only approach I can see.
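(For what it's worth, a division-free split is possible with subtraction alone; a sketch, assuming the value is in 0..99:)

/* Split 0..99 into digits without division: subtract 10 until only
 * the ones digit remains (at most 9 iterations). */
void splitDigits(int numberToDisplay, int *tens, int *ones) {
    *tens = 0;
    *ones = numberToDisplay;
    while (*ones >= 10) {
        *ones -= 10;
        (*tens)++;
    }
}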
For this application it's fine. You don't need to worry about performance in a simple thermometer.
If, however, you do need something quicker than division and modulo, bitwise operations can help. Basically you would use the bitwise & operator to compare the value to display against bit patterns describing the digits on the display; see the sketch after the link below.
See the project here for example: http://fritzing.org/projects/2-digit-7-segment-0-99-counting-with-arduino/
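To illustrate the pattern idea: a lookup table with one bit per segment, tested with the & operator. The bit-to-segment wiring below is an assumption; match it to your own circuit:

#include <stdio.h>

/* Segment patterns for digits 0-9; bit 0 = segment a ... bit 6 = g.
 * Values assume a common-cathode display (invert for common anode). */
static const unsigned char SEG[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66,   /* 0 1 2 3 4 */
    0x6D, 0x7D, 0x07, 0x7F, 0x6F    /* 5 6 7 8 9 */
};

int main(void) {
    int digit = 4;
    /* Use bitwise & to test whether segment c (bit 2) should be lit. */
    printf("segment c for %d: %s\n", digit,
           (SEG[digit] & (1 << 2)) ? "on" : "off");
    return 0;
}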
You might also try a 7-segment display driver chip to simplify your output and save pins. The MC14511BCP (a "4511") is a good one. It translates binary-coded decimal (BCD) into the appropriate 7-segment configuration. Spec sheets are available here, and the chips are commonly stocked by online electronics parts stores.

Checksum in network connection

I'm making a network application which doesn't receive good data every time (most of the time the data is corrupted), so I thought of adding a checksum: at the end of the data I will append a checksum to verify that the data is valid. I'm not sure whether it is a good idea to multiply the values (they range from 1 to 100) by 100, 100^2, 100^3, ..., and sum them.
Do you have any suggestions for what to do without producing a really big number? (There are many values in every packet.)
Example:
Data: 1,4,2,77,12,32,5,52,23
My solution: send 1,4,2,77,12,32,5,52,23 followed by the checksum 1*100 + 4*100^2 + 2*100^3 + 77*100^4 + ...
When the client receives the packet, it checks whether the last value equals the checksum computed from the other values.
Is there any better solution?
Multiplying the data results in a very large number to transmit, and not a lot of confidence that the numbers are correct. Plain addition runs into potential overflow issues. That is why it is customary to use an XOR instead.
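A minimal sketch of such an XOR checksum, assuming each value fits in one byte:

#include <stdint.h>
#include <stddef.h>

/* XOR all payload bytes; append the result to the packet. The
 * receiver recomputes it and compares. Constant size, no overflow. */
uint8_t xor_checksum(const uint8_t *data, size_t len) {
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum ^= data[i];
    return sum;
}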
Or you can read up on http://en.wikipedia.org/wiki/Error-correcting_code to get even fancier solutions that can detect, and sometimes correct, small numbers of errors.
Best explanation here:
http://www.textfiles.com/programming/crc.txt
CRC functions are usually available in your language's networking or standard library.
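If you do want to compute a CRC yourself, a compact bit-by-bit CRC-32 (the reflected 0xEDB88320 polynomial used by zlib and Ethernet) looks like this; table-driven versions are faster:

#include <stdint.h>
#include <stddef.h>

/* Bit-by-bit CRC-32 over a buffer. For each byte, process 8 bits:
 * shift right and conditionally XOR in the reflected polynomial. */
uint32_t crc32(const uint8_t *data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}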

Maximizing Stored Information (Entropy?)

So I'm not sure if this question belongs here or maybe on Math Overflow. In any case, my question is about information theory.
Let's say I have a 16-bit word. There are 65,536 unique configurations of 1s and 0s in that word. What each of those configurations represents is unimportant, since depending on your notation (2's complement vs. signed magnitude, etc.) the same configuration can mean different things.
What I'm wondering is are there any techniques to store more information than that in a 16 bit word?
My original ideas were things like odd/even parity, but then I realized that parity is already determined by the configuration... i.e. there is no extra information encoded in it. I'm beginning to wonder if no such thing exists.
EDIT: For example, let's say some magical computer (thinking quantum or something here) could understand 0, 1, and a. Then obviously we have 3^16 configurations and can now store more than the numbers [0, 65535]. Are there any other properties of a 16-bit word that you can mess with in order to encode extra information in your bit stream?
EDIT 2: I am really struggling to put this into words. Right now, when I look at a 16-bit word in the computer, the property which conveys information to me is the relative ordering of the individual 1s and 0s. Is there another property, or another way of looking at a 16-bit word, which would allow more than 2^16 unique "configurations"? (Note it would no longer be a configuration, but 2^16 xxxx's, where xxxx is a noun describing an instance of that property.) The only thing I can really think of is looking at something like the number of 1-to-0 transitions rather than at whether each bit is actually a 1 or 0. But transitions do not yield more than 2^16 combinations, because they are ultimately solely dependent on the configuration of 1s and 0s. I'm looking for properties that would derive from the configuration of 1s and 0s AND something else, thus resulting in MORE than 2^16. Does anyone even know what this would be called, if it did exist?
EDIT 3: OK, I've got it. My question boils down to this: how do we prove that the configuration of 1s and 0s in a word completely defines it? I.e., how do we prove that you need no information besides the bitmap to show equality between two 16-bit words?
FINAL EDIT
I have an example... If instead of looking at the presence of 1s and 0s we look at the transitions between bits, we can store 2^16 alphabet characters: if the bit to the left is the same, treat it as a 1; if it transitions, treat it as a 0. Treating the 16-bit word as a circular structure where each link represents a 0/1, we basically form a 16-bit word out of the transitions between bits. That is an exact example of what I was looking for, but it still results in 2^16, nothing better. I am convinced that you cannot do better and am marking the correct answer =(
The amount of information in a particular configuration of 16 0s/1s is determined by the probability of that configuration (this is called its self-information). This can be more than 16 bits if the configuration is less likely than 1/(2^16), but that means some other configurations are more likely than 1/(2^16) and so will contain less than 16 bits of information.
To take all possible configurations into account, you have to use the expected value of the self-information of the individual configurations, called entropy. This value reaches its maximum when the probabilities of all configurations are equal (that is, 1/(2^16)), and then it is exactly 16 bits.
So the answer is no, you cannot store more than 16 bits of information in 16 0/1s.
See
http://en.wikipedia.org/wiki/Information_theory
http://en.wikipedia.org/wiki/Self-information
EDIT: It is important to realize that a bit does not stand for 0 or 1; it is a unit of information, namely -log2 P(w), where P(w) is the probability of a particular configuration.
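To see the claim numerically: the entropy H = -sum(P(w) * log2 P(w)) reaches 16 bits only for the uniform distribution over the 2^16 configurations; any skew lowers it. A quick check (link with -lm):

#include <math.h>
#include <stdio.h>

/* Shannon entropy in bits of a discrete distribution p[0..n-1]. */
double entropy(const double *p, int n) {
    double h = 0.0;
    for (int i = 0; i < n; i++)
        if (p[i] > 0.0)
            h -= p[i] * log2(p[i]);
    return h;
}

int main(void) {
    enum { N = 1 << 16 };
    static double p[N];
    for (int i = 0; i < N; i++) p[i] = 1.0 / N;            /* uniform */
    printf("uniform: %.2f bits\n", entropy(p, N));          /* 16.00  */

    p[0] = 0.5;                                             /* skewed */
    for (int i = 1; i < N; i++) p[i] = 0.5 / (N - 1);
    printf("skewed:  %.2f bits\n", entropy(p, N));          /* ~9.00  */
    return 0;
}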
You cannot store more than 2 states in one digit of a semiconductor device, so you answered it yourself: the only way to fit more information into 16 digits is for each digit to have more possible values.
