Motorola 68k: Understanding the status register flag states - motorola

I'm having trouble understanding how exactly the Status Register (SR) content works.
Let's say that the content of (SR) = $0300. How would I figure out in which states the flags are?
Of course, that would also answer the reverse question: if the flags are in [insert states here], then (SR) = $????

Convert the SR contents to binary and line it up with the SR bit layout:
Most of the bits are just flags and signal a yes/no condition, except the Interrupt Priority Mask, which actually is a number between 0 and 7.
$0300 = 0b0000001100000000

Bit:    15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
Field:   T  0  S  0  0 I2 I1 I0  0  0  0  X  N  Z  V  C
Value:   0  0  0  0  0  0  1  1  0  0  0  0  0  0  0  0
meaning
No Trace mode
No Supervisor mode
Interrupt priority level: 3
No extend, negative, zero, overflow or carry condition
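If it helps, the same decoding can be done mechanically with shifts and masks. Here is a small C sketch (the bit positions are the standard 68000 SR layout shown above; the function name is made up for illustration):

#include <stdio.h>
#include <stdint.h>

/* Decode a 68000 status register value.
   Layout: T 0 S 0 0 I2 I1 I0 0 0 0 X N Z V C */
static void decode_sr(uint16_t sr)
{
    printf("SR = $%04X\n", sr);
    printf("  Trace (T):          %u\n", (sr >> 15) & 1);
    printf("  Supervisor (S):     %u\n", (sr >> 13) & 1);
    printf("  Interrupt mask (I): %u\n", (sr >> 8) & 7);
    printf("  Extend (X):         %u\n", (sr >> 4) & 1);
    printf("  Negative (N):       %u\n", (sr >> 3) & 1);
    printf("  Zero (Z):           %u\n", (sr >> 2) & 1);
    printf("  Overflow (V):       %u\n", (sr >> 1) & 1);
    printf("  Carry (C):          %u\n", sr & 1);
}

int main(void)
{
    decode_sr(0x0300);  /* T=0, S=0, interrupt mask=3, X/N/Z/V/C all 0 */
    return 0;
}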

Related

scanning multiple serial data bits reliably - 8051

My hardware currently has four sets of sensors that I treat as four separate serial ports (receive only) wired to the lower 4 bits of port 0. I have attempted numerous times to retrieve the correct serial port data (by aiming the laser directly at the sensor) without success. I then read that, for more reliability, a standard UART samples each bit at 16x the baud rate (I found this about 3/4 down the page at https://www.allaboutcircuits.com/technical-articles/back-to-basics-the-universal-asynchronous-receiver-transmitter-uart/).
So I ended up rolling my own version of that, but due to my timings my sampling is more like 32x, but that's OK.
I'm going to explain what I did first so everyone understands what is going on.
code explanation
I have four consecutive address locations set up to hold a counter value for each bit. Four bits are read simultaneously from hardware, and the counter for each bit goes up or down based on whether that bit is set (light detected on that group of sensors) or clear (light not detected). This loop executes frequently, at about a 9600bps rate.
The second loop only executes when a value is needed. This happens once every 16 times the first loop executes (so at about a 600bps rate). It takes each bit's counter value as if it were a signed number and uses its MSB as the final value of that bit. Those MSB values get crammed together to form the official bits read from the sensors.
Is this approach OK to reliably determine whether the bit value is set or cleared?
And could I somehow redo this code so it runs faster? Each loop consumes a large number of clock cycles (32 to 40), and if I could get it down to maybe 20 clock cycles, I'd be happy.
Also, this code runs on an AT89S52 microcontroller, so I'm using its extended memory addresses.
the code
;memory is preinitialized to nulls
LAZMAJ equ 0E0h ;majority counters start address (end address at 0E4h)
MAJT equ 20h ;Majority value at bit address
mov A,P0 ;get bit values from hardware
mov R1,#LAZMAJ ;go to start of pointer
;loop uses 40 clock cycles out of 192 available
countmaj:
rrc A ;get bit
jnc noincmaj
inc @R1 ;bit is set so add 1 to counter for that bit
noincmaj:
jc incmaj
dec @R1 ;bit is clear so subtract 1 from counter for that bit
incmaj:
inc R1 ;move pointer to next bit
cjne R1,#LAZMAJ+4,countmaj ;see if pointer is out of range
;it is so end loop
;loop uses about 32 clock cycles and executes when we want data
mov R1,#LAZMAJ+4 ;go to out of range position
chkmaj:
dec R1 ;decrement pointer first so we are within range
mov MAJT,@R1 ;load value to majority variable. treat it as signed
mov @R1,#0h ;clear value from memory space
mov C,MAJT.7 ;Take sign and use that as carry
rlc A ;and put it into our final variable
cjne R1,#LAZMAJ,chkmaj ;if pointer isn't in first address then keep going
;otherwise exit loop and A=value we want
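For reference, here is a rough C equivalent of the two loops (illustrative only, not drop-in code; the function names are made up, and counter i corresponds to port bit i and result bit i, as in the assembly):

#include <stdint.h>

static int8_t maj[4];   /* one signed up/down counter per sensor bit */

/* Fast loop: call at roughly 16-32x the bit rate with the 4 sensor bits
   (low nibble of port 0). Count up while a bit reads 1, down while it reads 0. */
void sample_bits(uint8_t port_bits)
{
    for (int i = 0; i < 4; i++) {
        if (port_bits & (1u << i))
            maj[i]++;
        else
            maj[i]--;
    }
}

/* Slow loop: call once per bit period. The MSB (sign) of each counter is taken
   as that bit's value, mirroring "mov C,MAJT.7", and the counters are cleared
   for the next bit period. */
uint8_t decide_bits(void)
{
    uint8_t result = 0;
    for (int i = 0; i < 4; i++) {
        if (maj[i] < 0)                     /* sign bit of the counter */
            result |= (uint8_t)(1u << i);
        maj[i] = 0;
    }
    return result;
}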

1's complement checksum even bit errors

I am trying to wrap my head around 1's complement checksum error detection as is used in UDP.
My understanding, with a simplified example, of a UDP-like 1's complement checksum error checking algorithm operating on 8-bit words (I know UDP uses 16-bit words):
Sum all 8 bit words of data, carry the MSB rollover to the LSB.
Take 1's complement of this sum, set checksum, send datagram
Receiver adds with carry rollover all received 8 bit words of data in the incoming datagram, adds checksum.
If sum = 0xFF, no errors. Else, error occurred, throw away packet.
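In C, those steps would look roughly like this for 8-bit words (the function names are just for illustration):

#include <stdint.h>
#include <stddef.h>

/* One's-complement sum of 8-bit words: fold the carry out of the MSB
   back into the LSB after each addition. */
static uint8_t ones_complement_sum(const uint8_t *words, size_t len)
{
    uint16_t sum = 0;
    for (size_t i = 0; i < len; i++) {
        sum += words[i];
        sum = (sum & 0xFF) + (sum >> 8);   /* end-around carry */
    }
    return (uint8_t)sum;
}

/* Sender: the checksum is the one's complement of the sum. */
static uint8_t make_checksum(const uint8_t *words, size_t len)
{
    return (uint8_t)~ones_complement_sum(words, len);
}

/* Receiver: add everything (data + checksum); 0xFF means no error detected. */
static int checksum_ok(const uint8_t *words, size_t len, uint8_t checksum)
{
    uint16_t sum = (uint16_t)ones_complement_sum(words, len) + checksum;
    sum = (sum & 0xFF) + (sum >> 8);
    return (uint8_t)sum == 0xFF;
}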
It is obvious that this algorithm can detect 1 bit errors. If just one bit in an 8-bit data word is corrupted, the sum + checksum will never equal 0xFF. A plain and simple example would be A = 00000000, B = 00000001, then ~(A + B) = 11111110. If A(receiver) = 00000001, B(receiver) = 00000001, the sum + checksum would be 0x01 (after the end-around carry), which is != 0xFF.
My question is:
It's not too clear to me if this can detect 2 bit errors. My intuition says no: a simple example is the receiver getting A = 00000001, B = 00000000 (bit 0 of each word flipped relative to what was sent). The sum is unchanged, so sum + checksum is still 0xFF even though there are two bit errors in total. If the 2 bit error occurred in the same word, there's a chance it could be detected, but it doesn't seem guaranteed.
How robust is UDP error checking? Does it work for even numbers of bit errors?
Some even-bit changes can be detected, some can't.
Any error that changes the sum will be detected. So a 2-bit error that changes the sum will be detected, but a 2-bit error that does not change the sum will not be detected.
A 2-bit error in a single word (single byte in your simplified example) will change the value of that word, which will change the sum, and therefore will always be detected. Most 2-bit errors across different words will be detected, but a 2-bit error that changes the same bit in different directions (one 0->1, the other 1->0) in different words will not change the sum -- the change in value created by one of the changed bits will be cancelled out by the equal-but-opposite change in value created by the other changed bit -- and therefore that error will not be detected.
Because this checksum is simply an addition, it will also fail to detect the insertion or removal of words whose arithmetic value is zero (and since this is a one's complement computation, that means words whose content is all 0s or all 1s).
It will also fail to detect transpositions of words (because a+b gives the same sum as b+a), or more generally it will fail to detect errors that add the same amount to one word as they subtract from another (because a+b gives the same sum as (a+n)+(b-n), e.g. 3+3=4+2=5+1). You could consider the transposition and cancelling-error cases to be made up of multiple pairs of same-bit changes.
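To make the cancelling case concrete, here is a tiny self-contained C check using the question's example values (A = 00000000, B = 00000001 sent, with bit 0 of each word flipped in transit):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Sender: A = 0x00, B = 0x01, so checksum = ~(0x00 + 0x01) = 0xFE. */
    uint8_t checksum = 0xFE;

    /* 2-bit error flipping the same bit in opposite directions:
       A: 0x00 -> 0x01 (bit 0 goes 0->1), B: 0x01 -> 0x00 (bit 0 goes 1->0). */
    uint8_t a = 0x01, b = 0x00;

    uint16_t sum = (uint16_t)a + b + checksum;
    sum = (sum & 0xFF) + (sum >> 8);          /* end-around carry */

    /* Prints 0xFF: the two changes cancel, so the error is not detected. */
    printf("sum + checksum = 0x%02X\n", (unsigned)sum);
    return 0;
}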

ARMv8 Foundation Model: switches and leds

I am trying to boot my small ARMv7 kernel (which runs just fine using the qemu vexpress model) on the ARMv8 Foundation Model v2.1. The model boots at EL3 / 64 bits, and I managed to get down to EL1 / 32 bits, but I encounter some issues (in a few words, the timer doesn't tick and some kprintf output is missing, but that's not the issue here).
To debug my UART issue, I wanted to use the LEDs / switches provided by the model. I can read their value from software quite easily, but I can't write a new value to either of them. The kernel seems to hang. Here is a minimal asm snippet that writes to the switches register:
.global Start
Start:
# we are in EL3 / 64 bits mode
# create the 0x1C010000 + 0x4 address of switches
mov x0, #4
movk x0, #0x1c01, lsl #16
# value to write
mov w1, #0xaa
# actual writing
strb w1, [x0]
It seems I am stuck at the strb instruction. For the record, if I replace strb with ldrb, I can correctly read and display the value of this register (I played with the --switches flag to be sure it worked).
Anyone know what I am doing wrong here?
EDIT: thanks to unixsmurf's suggestions, I now know that I got a synchronous Data Abort exception with no level change, and that the reason is "Synchronous External Abort". I don't know how to inspect further, I guess I'll try ARM's forum.
Best,
V.
The ARM community finally solved the problem. The complete discussion can be found here.

half-carry/half-borrow flag in DAA instruction

Apologies for making this my second Z80 DAA question - I have pretty much implemented this instruction now, but there is one thing I'm not sure about - is the H flag set by this instruction at all? The Z80 manual says 'see instruction', but it only mentions the flag before DAA, not after it is executed.
I set the flags as follows:
S is set if result is negative (0x80 & result equals 0x80)
Z is set if result is zero
H (not sure hence this question)
P/V is set to the parity of the result (1 if even, 0 if odd)
N is left alone
C is set if the higher nibble of the original accumulator value is modified
Other than this, the instruction seems to perform as I expect it to :-) I hope someone can clear this up for me, many thanks.
I could only find here that the half-carry/borrow flag is modified by DAA.
I recommend that this flag be set exactly as the AF (auxiliary carry) flag is set by the DAA and DAS instructions on x86 CPUs. I see no reason why there should be any difference in operation between i8080/i8085/Z80's and i8086's DAA/DAS.
The x86 DAA/DAS sets AF to 1 if it adjusts the lowest 4 bits of the accumulator by 6. If it does not adjust them, it resets AF to 0.
See the pseudo-code for DAA and DAS in Intel's (or AMD's) x86 CPU manuals.
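In C-like terms, that recommendation amounts to something like the following (only the low-nibble part of a hypothetical emulator's DAA; the struct and names are made up, and the high-nibble/carry handling is omitted):

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical emulator state, for illustration only. */
struct z80 {
    uint8_t a;
    bool flag_h;   /* half-carry */
    bool flag_n;   /* add/subtract */
};

/* Low-nibble half of DAA, with H set the way x86 DAA/DAS set AF:
   H = 1 exactly when the low nibble was adjusted by 6, otherwise H = 0. */
static void daa_adjust_low_nibble(struct z80 *cpu)
{
    if ((cpu->a & 0x0F) > 9 || cpu->flag_h) {
        if (cpu->flag_n)
            cpu->a -= 6;        /* after SUB/SBC (N set), the adjustment is subtracted */
        else
            cpu->a += 6;
        cpu->flag_h = true;     /* low nibble was adjusted */
    } else {
        cpu->flag_h = false;    /* no adjustment, so H is cleared */
    }
    /* high-nibble adjustment and the C flag are handled separately */
}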
It's a good question. Yes, the H flag's behaviour is not clearly documented, because its behaviour with DAA is non-standard.
If the lower nibble (least significant four bits) of A is not a valid BCD digit (greater than 9, i.e. A, B, C, D, E or F) or the H flag is set, 6 is added to the register. This means that even if the lower nibble is in the 0-9 range, you can force 6 to be added to the A register by setting the H flag.
As for your question: in my experience the H flag usually remains untouched, but you cannot depend on that, because the effect is described as "non-standard", meaning the H flag may or may not change depending on the situation. In cases like this, you should assume the H flag is affected by the DAA instruction even if your tests show it is not.

parity bit question

I have been reading about the "parity bit" method, and how it is used to check whether the "packet" is received correctly.
So, using odd parity (from wiki):
A wants to transmit: 1001
A computes parity bit value: ~(1^0^0^1) = 1
A adds parity bit and sends: 10011
B receives: 10011
B computes overall parity: 1^0^0^1^1 = 1
B reports correct transmission after observing expected odd result.
What if, during the transmission, "11001" is received instead of "10011"?
How will the parity check catch that, since it only checks the number of 1's?
Or is it impossible for bits to change during transmission like I stated before? Thanks
The parity bit is the simplest error detection technique. It only works if an odd number of bits (including the parity bit) is transmitted incorrectly. So if two bits are corrupted, as in your example (10011 -> 11001), the number of 1's stays the same and the error will not be detected.
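A small C check makes this concrete (the frame packing below is just one possible layout):

#include <stdio.h>

/* Odd parity: the parity bit is chosen so that the total number of 1s,
   data plus parity, is odd. */
static unsigned odd_parity_bit(unsigned data, int nbits)
{
    unsigned ones = 0;
    for (int i = 0; i < nbits; i++)
        ones += (data >> i) & 1u;
    return (ones % 2 == 0) ? 1u : 0u;
}

/* A frame passes the check if its total number of 1s is odd. */
static int frame_ok(unsigned frame, int nbits)
{
    unsigned ones = 0;
    for (int i = 0; i < nbits; i++)
        ones += (frame >> i) & 1u;
    return (ones % 2) == 1;
}

int main(void)
{
    unsigned data = 0x9;                                     /* 1001 */
    unsigned sent = (data << 1) | odd_parity_bit(data, 4);   /* 10011 */
    unsigned received = 0x19;                                /* 11001: two bits flipped */

    printf("sent     10011 -> %s\n", frame_ok(sent, 5) ? "passes" : "fails");
    printf("received 11001 -> %s\n", frame_ok(received, 5) ? "passes" : "fails");
    /* Both pass: the two flipped bits leave the count of 1s unchanged,
       so the corruption goes undetected. */
    return 0;
}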
