Serial port output from 8051 micro is incomplete and reversed

I'm trying to write a program in 8051 assembly language that sends data from its ROM to the PC in 16-byte chunks asynchronously, so that the micro can do other work at the same time. With this code, I was expecting the following output on my screen:
Central station Debug Mode
Testing 115200bps speed mode with 16 byte chunks
12345678901234567890123456789012345678901234567890123456789012345678901234567890
80x25 screen?
uC to Linux screen test
Working? Yes? Maybe? No?
Test 100% wireless. 115200bps speed
TEST IS NOW FINISHED!
TX TEST OVER WAITING FOR RECEPTION!
However, I received much less than that. The output I actually received (according to the screen command in Linux) is the following:
OITPECER ROF
!H
Looking at that text, it seems it processed the last 16 bytes, skipping one of them, and dropped all of the earlier bytes.
I have both the microcontroller and the PC I'm testing with set to 115200 bps in raw mode with 8 data bits, no parity, and 1 stop bit, and both devices stayed powered throughout the tests, so I can't blame a loss of power.
Is there anything I can do to this code so that all of the text is output correctly?
TXBUFS equ 30h
RXBUFS equ 40h
TXOP bit 7Dh
RXOP bit 7Ch
org 0h
ljmp main
org 23h ;serial INT
push PSW
setb RS0 ;all serial functions use bank 1.
jbc RI,rxdf
jbc TI,txdf
exitint:
clr RS0
pop PSW
reti
main:
mov SP,#52h
lcall serialinit
ljmp startdebug
;Serial port receive function
;Receiver controls R1 value unless we want to reset local buffer
rxdf:
cjne R1,#RXBUFS+17,noerx
clr RXOP
ajmp exitint
noerx:
mov @R1,SBUF
inc R1
ajmp exitint
;Serial port transmit function
txdf:
cjne R0,#TXBUFS+17,noetx
clr TXOP
ajmp exitint
noetx:
mov SBUF,@R0
inc R0
ajmp exitint
;enable receive
startrx:
setb RS0
mov R1,#RXBUFS
setb RXOP
clr RS0
ret
;Reset receiver stack
resetget:
setb RS0
mov R0,#RXBUFS+17
clr RS0
ret
;Get byte as A from receiver stack.
getrx:
setb RS0
dec R0
mov A,@R0
clr RS0
ret
;Reset our stack pointer (to zero bytes on transmitter stack)
resetadd:
setb RS0
mov R0,#TXBUFS+17
clr RS0
ret
;Add byte from A onto our transmitter stack. C=1 if end of stack reached
addtx:
setb RS0
dec R0
mov @R0,A
cjne R0,#TXBUFS,notxendp
setb C
inc R0
clr RS0
ret
notxendp:
clr C
clr RS0
ret
;send data
starttx:
setb TXOP ;set transmit busy flag
setb RS0
mov SBUF,@R0 ;send 1st byte right away to trigger interrupt later
inc R0
clr RS0
ret
;serial init
serialinit:
mov TH1,#0FFh ;speed=115kbps
mov TL1,TH1
mov PCON,#80h
mov SCON,#50h
mov TMOD,#22h ;auto reload for timers
setb RS0
mov R0,#TXBUFS+17 ;setup our pointers.
mov R1,#RXBUFS
clr RS0
setb TR1
setb ES
setb EA ;enable interrupts
ret
startdebug:
mov DPTR,#mmenu ;load text
lcall dprint ;and print it (not working correctly)
sjmp $
;print what's in ROM
dprint:
lcall resetadd ;reset transmitter pointer
dprint2:
clr A
movc A,@A+DPTR
inc DPTR
jz dexitp ;exit function if character from local ROM is 0h
lcall addtx ;send byte to transmitter
jnc nostall
;at this point, we made buffer full so do 16-byte transmission
lcall starttx
;and stall until all 16 bytes are transmitted
txst:
nop
jb TXOP,txst
;reset transmitter pointer again for next 16 bytes
lcall resetadd
nostall:
sjmp dprint2
dexitp:
;do one more round of transmission for remaining characters in transmit buffer
lcall starttx
;and stall again
txst2:
nop
jb TXOP,txst2
;and reset
lcall resetadd
ret
mmenu:
db 'Central station Debug Mode',0Dh,0Ah,0Dh,0Ah
db 'Testing 115200bps speed mode with 16 byte chunks',0Dh,0Ah
db '12345678901234567890123456789012345678901234567890123456789012345678901234567890',0Dh,0Ah
db '80x25 screen?',0Dh,0Ah
db 'uC to Linux screen test',0Dh,0Ah
db 'Working? Yes? Maybe? No?',0Dh,0Ah
db 'Test 100% wireless. 115200bps speed',0Dh,0Ah,0Dh,0Ah
db 'TEST IS NOW FINISHED!',0Dh,0Ah
db 'TX TEST OVER WAITING FOR RECEPTION!',0Dh,0Ah,00h
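For reference, the behaviour I'm after would look roughly like this in C (SDCC syntax; just a sketch of the idea with UART/interrupt setup elided, not my actual code):
#include <8051.h> /* SDCC register definitions: SBUF, TI, RI, ES */

#define TXLEN 16
static volatile unsigned char txbuf[TXLEN];
static volatile unsigned char txpos, txcnt;
static volatile unsigned char tx_busy;

void serial_isr(void) __interrupt (4)
{
    if (RI) RI = 0;          /* receive side elided here */
    if (TI) {
        TI = 0;
        if (txcnt) {         /* more bytes queued: send the next one, in order */
            SBUF = txbuf[txpos++];
            txcnt--;
        } else {
            tx_busy = 0;     /* last byte has fully left the shift register */
        }
    }
}

/* Queue one chunk (n >= 1) and start it by writing the first byte;
   the TI interrupt then drains the rest while the main code continues. */
void send_chunk(const unsigned char *p, unsigned char n)
{
    unsigned char i;
    while (tx_busy)
        ;                    /* wait for the previous chunk to finish */
    for (i = 0; i < n; i++)
        txbuf[i] = p[i];
    txpos = 1;
    txcnt = n - 1;
    tx_busy = 1;
    SBUF = txbuf[0];         /* first byte kicks off the interrupt chain */
}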

Related

Qt Creator Debugging NASM

I am learning NASM x64 and searched for a debugger. I tried not only gdb but also SASM; unfortunately, neither is a viable option for me (*). According to
the issue (see its comments section), it is possible to debug NASM in Qt Creator. But I can't hit any breakpoints or see register values, for example.
Here is my setup:
CMakeLists.txt
cmake_minimum_required(VERSION 3.24)
enable_language(ASM_NASM)
project(untitled LANGUAGES C ASM_NASM)
set(CMAKE_BUILD_TYPE Debug)
set(CMAKE_ASM_NASM_FLAGS_DEBUG "-g -F dwarf")
set(CMAKE_ASM_NASM_OBJECT_FORMAT elf64)
set(CMAKE_ASM_NASM_LINK_EXECUTABLE "ld <CMAKE_ASM_NASM_LINK_FLAGS> <LINK_FLAGS> <OBJECTS> -o <TARGET> <LINK_LIBRARIES>")
set(CMAKE_ASM_NASM_COMPILE_OBJECT "<CMAKE_ASM_NASM_COMPILER> <INCLUDES> \
<FLAGS> -f ${CMAKE_ASM_NASM_OBJECT_FORMAT} -o <OBJECT> <SOURCE>")
add_executable(untitled main.c lab.asm)
set_target_properties(untitled PROPERTIES NASM_OBJ_FORMAT elf64)
main.c
extern void asm_main();
int main()
{
asm_main();
return 0;
}
lab.asm
bits 64
SYS_WRITE equ 1
STDOUT equ 1
NEW_LINE_CHARACTER equ 10
section .data
msg: db "Hello, World", NEW_LINE_CHARACTER, 0 ; length is 13
section .text
global asm_main
asm_main:
xor rax, rax
mov rax, SYS_WRITE
mov rdi, STDOUT
mov rsi, msg
mov rdx, 13
syscall
ret
Debugging main.c, I hit the breakpoint.
I can't hit a breakpoint in lab.asm.
So that's it. As you can see, my build configuration is set to Debug (bottom-left corner). I would really appreciate any help with this issue.
*) P.S. Yeah, probably I am too lazy and not hardcore enough to use gdb, because I once did a 600-line lab in NASM and it was a good but at the same time bad experience (it is better to learn C++ metaprogramming, I dunno). Besides, it is a bit difficult to study and work when a Russian drone or rocket can fly into your house, but as a Ukrainian zoomer I am used to that kind of crap. I know that nobody asked me to tell that story, but I just want to finish my second year at university with systems programming done and be alive.

PWM output always 0 in Proteus (using ATmega328P and assembly in Microchip Studio)

I am currently trying to set up a servo motor in a Proteus circuit using a PWM signal (Timer/Counter0) on the ATmega328P. I went through several manuals to set up the bits in the TCCR0A and TCCR0B registers and the duty-cycle value in OCR0A, and I expect output on the OC0A pin.
.INCLUDE "M328PDEF.INC"
.ORG 0x00
SBI DDRD , 6 // port D6 output for PWM
LOOP:
CALL TESTN
CALL delay
JMP LOOP
TESTN:
LDI R20 , 127 //duty cycle value (50%)
STS OCR0A , R20 // duty cycle on OCR0A
LDI R17 , 0b10000011
STS TCCR0A , R17 //Non-Inverting Fast PWM mode 3 using OCRA
LDI R18 , 0b00000001
STS TCCR0B , R18 //No-Prescalar
//LDS R20 , OCR0A // trying to output on port D6 but maybe doesnt matter
//OUT PORTD , R20
delay:
LDI R16 , 1
L0: LDI R17 , 1
L1: LDI R18 , 20
L2: DEC R18
BRNE L2
DEC R17
BRNE L1
DEC R16
BRNE L0
RET
Proteus circuit results
Can anyone tell me what I am missing?
Any help is appreciated; thanks in advance.
I expected to receive a signal on PORTD bit 6 that would move the servo, but the output on that pin is always 0 (and I am not sure why other ports show output values without my having defined anything for them).
Edit: if anyone knows a good compiler from C to AVR assembly, it would be appreciated.
It would be much simpler if you wrote the code in C and then looked at the disassembly.
Note: the Timer0 registers (TCCR0A, TCCR0B, OCR0A, etc.) are in the lower range of the I/O registers and can be accessed both with memory-access instructions (such as STS and LDS) using their memory address, and with IN and OUT instructions using their I/O-register number.
How are your constants declared? For example, if TCCR0A equals 0x44, then it is a memory address and STS should be used. But if it equals 0x24, then it is an I/O-register number and OUT should be used instead.
Also, there is no RET at the end of TESTN in your code.
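For reference, a minimal C version for avr-gcc (a sketch; the register names come from avr/io.h, and the compiler picks the correct access form, IN/OUT versus LDS/STS, for each register):
#include <avr/io.h>

int main(void)
{
    DDRD  |= (1 << DDD6);                 /* PD6 = OC0A as output */
    OCR0A  = 127;                         /* ~50% duty cycle */
    TCCR0A = (1 << COM0A1)                /* non-inverting output on OC0A */
           | (1 << WGM01) | (1 << WGM00); /* fast PWM, mode 3 (TOP = 0xFF) */
    TCCR0B = (1 << CS01);                 /* clk/8 prescaler */
    for (;;)
        ;                                 /* PWM runs in hardware */
}
Build with avr-gcc -mmcu=atmega328p and disassemble with avr-objdump -d to see which instructions the compiler emits for each register.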
It turns out I needed prescaling, since the ATmega328P's 18 MHz processor speed won't provide the 20 ms period signal needed by a servo motor. I used a prescaler of 8 (TCCR0B = 0b00000100) and adjusted the angle through the OCR0A values.

How to simulate Real Time Interrupt in 68HC11 THRSim11 simulator

How do you simulate the RTI (real-time interrupt) in the 68HC11 THRSim11 simulator (see http://www.hc11.demon.nl/thrsim11/thrsim11.htm)? The following program works on the 68HC11 module but not in THRSim11. It is a test program that reads from the analog-to-digital converter and displays the results on the serial port using the RTI. I tried the RTI interrupt vectors $00EB and $FFF0. My chip is the 68HC711E9 with the following memory map.
I expected THRSim11 to simulate the interrupt vector. While the program sits in the "again BRA again" loop right after CLI (enable interrupts), it should be executing the subroutine that reads the ADC and writes to the serial port. It works perfectly on my 68HC711E9 evaluation board with BUFFALO.
REGBS EQU $1000 ;start of registers
BAUD EQU REGBS+$2B ;sci baud reg
SCCR1 EQU REGBS+$2C ;sci control1 reg
SCCR2 EQU REGBS+$2D ;sci control2 reg
SCSR EQU REGBS+$2E ;sci status reg
SCDR EQU REGBS+$2F ;sci data reg
TMSK2 EQU REGBS+$24 ;Timer Interrupt Mask Register 2
TFLG2 EQU REGBS+$25 ;Timer Interrupt Flag Register 2
ADR3 EQU $1033 ;ADC address 3
OPTION EQU $1039 ;ADC enable
SCS EQU $2E ;SCSR low bit
ADCTL EQU $1030 ;ADC setting
ADCT EQU $30 ;ADC setting low bit
PACTL EQU $1026 ;Pulse Accumulator control
***************************************************************
* Main program starts here *
***************************************************************
ORG $0110
* ORG $E000
start LDS #$01FF ;set stack pointer
JSR ONSCI ;initialize serial port
JSR t_init ;initialize timer
CLI ;enable interrupts
again BRA again
************************************************************
* t_init - Initialize the RTI timer
************************************************************
t_init LDAA #$01 ; set RTR1 and RTR0 to 0 and 1,
STAA PACTL ;which gives an RTI rate of 8.19 ms
LDAA #$40
STAA TFLG2 ;clears RTIF flag (write 1 in it!)
STAA TMSK2 ;sets RTII to allow interrupts
RTS
************************************************************
* ADC_SERIAL - real-time interrupt (RTI) service routine
************************************************************
ADC_SERIAL
LDX #REGBS
LDAA #%00010010
STAA ADCTL
LDAB #6
ADF00 DECB
BNE ADF00
ldaa ADR3 ; read ADC value
ldab SCSR ; read first Status
staa SCDR ; save in TX Register
BUFFS BRCLR SCS,X #$80 BUFFS
CLRFLG LDAA #$40
STAA TFLG2 ;clear RTIF
RTI ;return from ISR
************************************************************
* ONSCI() - Initialize the SCI for 9600
* baud at 8 MHz
************************************************************
ONSCI LDAA #$30
STAA BAUD baud register
LDAA #$00
STAA SCCR1
LDAA #$0C
STAA SCCR2 enable
LDAA #%10011010 ; enable the ADC
STAA OPTION
RTS
* Interrupt Vectors for BUFALO monitor
* ORG $FFF0 ;RTI vector for microcontroller
*
ORG $00EB ;Real Time Interrupt under Buffalo monitor
JMP ADC_SERIAL ;this instruction is executed every
* time the real-time interrupt occurs
Presumably you mixed up "vector table" and "jump table". The HC11 expects an address at $FFF0, not an instruction.
In contrast, the Buffalo monitor expects an instruction at $00EB.
ORG $FFF0 ;RTI vector for microcontroller
FDB ADC_SERIAL
ORG $FFFE ;Reset vector for microcontroller
FDB start
As you will note, the same holds true for the reset vector at $FFFE.
With these changes it works for me. Be aware that the simulation is really slow*, depending on the number and kind of views opened.
Another side note: you send the single byte of the conversion result without further processing. The serial receiver view of the simulator will try to interpret this byte as an ASCII character, and only if that fails will it show a decimal number in angle brackets. You might want to convert the conversion result into a human-readable form; the simplest solution may be a hex representation.
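For illustration, the conversion is just a nibble lookup; here is a small sketch in C (the 68HC11 version would do the same with shifts and a 16-entry table):
#include <stdio.h>

/* Convert one byte into two ASCII hex digits. */
static void byte_to_hex(unsigned char b, char out[2])
{
    static const char digits[] = "0123456789ABCDEF";
    out[0] = digits[b >> 4];   /* high nibble */
    out[1] = digits[b & 0x0F]; /* low nibble */
}

int main(void)
{
    char hex[2];
    byte_to_hex(0x9A, hex);             /* an arbitrary conversion result */
    printf("%c%c\r\n", hex[0], hex[1]); /* prints 9A */
    return 0;
}
Sending those two characters followed by CR/LF gives the terminal something it can always display.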
EDIT:
*) A simulator needs to be many times faster than the original machine, depending on the specific implementation of the simulation. In this case, they seem to have chosen a rather slow approach; the documentation has some words on this. To gain some speed, close any view you don't need, and use the fastest PC you can get. To gain some understanding, think about how slow a simulation would be if it simulated the analog electronics of every semiconductor on the chip. And even that is just a model; the "real" world currently starts at quantum mechanics.
Without further measures, you cannot use Buffalo's jump-table entries, because the Buffalo monitor is not included in the simulator.
If you want to use an unmodified version of your firmware, you will need to add at least the parts of the Buffalo monitor that you use. If you have the monitor as a file loadable by the simulator, you might want to load it before loading your application.
The least you could do is provide the jump table yourself, placing the appropriate address of the jump in the vector:
ORG $FFF0 ;RTI vector for microcontroller
FDB $00EB
The "problem" with the ASCII interpretation becomes visible, if values of printable characters are sent. Put the slider in the first third, and you will see some letter or digit or punctuation. Slide it minimally up and down for other characters. Yes, terminals can be dumb, and this one is no exception. Actually it is a little bit smart and shows the printable characters instead of their ASCII value. Additionally it knows at least CR (carriage return, $0D, decimal 13) and LF (line feed, $0A, decimal 10). You might want to write a little test program that sends "Hello, world", CR, LF. Or another experiment that sends all values from $00 to $FF.
The meaning of a value always depends on its interpretation. This terminal interprets values as ASCII characters, if possible.

MSP430 MEMORY ADDRESS IN CCS6

I've written my very first MSP-EXP430F5529LP LED on/off program, and I wanted to analyze it, but I ran into a problem at the very first step.
I extracted my LED program from the board and got unclear data (3).
That leads to my first question: what is that file format? That is, I want to know the file format of my memory dump (3).
My second question is: why doesn't CCS 6 indicate memory addresses properly? I know the MSP430 is a 16-bit MCU, so every memory address should be 16 bits wide, but the assembly code (2), copied from the CCS 6 Disassembly view, shows addresses in an 01XXXX format.
Relative data dereferences and execution-flow branches work fine, but why does CCS 6 confuse me like this? That is, I want to know why CCS 6 displays memory addresses 24 bits wide.
If anyone knows the TI document that explains this, please let me know; please don't just point me at the MSP430xxxx User's Guide.
Sorry for my English :(
1. C code
#include <msp430f5529.h>
volatile unsigned int i;
void main(void) {
WDTCTL = WDTPW | WDTHOLD;
P1DIR |= 0x01;
while(1){
P1OUT ^= 0x01;
for(i = 20000;i > 0; i--);
}
}
2. Assembly code
0100c2: 40B2 5A80 015C MOV.W #0x5a80,&Watchdog_Timer_WDTCTL
0100c8: D3D2 0204 BIS.B #1,&Port_A_PADIR
0100cc: E3D2 0202 XOR.B #1,&Port_A_PAOUT
0100d0: 40B2 4E20 2400 MOV.W #0x4e20,&i
0100d6: 3C02 JMP (0x00dc)
0100d8: 8392 2400 DEC.W &i
0100dc: 9382 2400 TST.W &i
0100e0: 27F5 JEQ (0x00cc)
0100e2: 3FFA JMP (0x00d8)
0100e4: 4303 NOP
0100e6: D032 0010 BIS.W #0x0010,SR
0100ea: 3FFD JMP (0x00e6)
0100ec: 431C MOV.W #1,R12
0100ee: 0110 RETA
0100f0: 4303 NOP
0100f2: 3FFF JMP (0x00f2)
3. Memory dump (MAIN)
:1044000031400044b113ec000c930224b1130000be
:104410000c43b113c200b113f00000000200000011
:10442000840001001a44000000240000ffffffff89
:10443000ffffffffffffffffffffffffffffffff8c
:10444000ffffffffffffffffffffffffffffffff7c
...
...
If one reads the User's Guide (which is why they exist), one learns that the program counter is 20 bits wide. That is why you see addresses like 0100c2: they are 20-bit values, merely printed with six hex digits.
Link to the MSP430 User Guide: http://www.ti.com/lit/ug/slau208n/slau208n.pdf
The 20-bit PC (PC/R0) points to the next instruction to be executed.
Each instruction uses an even number of bytes (2, 4, 6, or 8 bytes),
and the PC is incremented accordingly. Instruction accesses are
performed on word boundaries, and the PC is aligned to even addresses.
Figure 6-3 shows the PC.
The above is an excerpt from the User's Guide. I cannot emphasize this enough: you really need to read the User's Guide. Not doing so while attempting to program microcontrollers is perilous to your mental health.
The memory dump appears to be in the Intel HEX file format: https://en.wikipedia.org/wiki/Intel_HEX
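Each line is one record: a colon, a byte count, a 16-bit address, a record type, the data bytes, and a two's-complement checksum. A minimal C sketch that decodes a single record (error handling mostly omitted):
#include <stdio.h>

/* Decode two hex characters into one byte value. */
static unsigned hexbyte(const char *p)
{
    unsigned v = 0;
    sscanf(p, "%2x", &v);
    return v;
}

/* Parse one Intel HEX record, e.g. ":10442000840001001a44..." */
static int parse_record(const char *line)
{
    unsigned count, addr, type, sum, checksum, i;

    if (line[0] != ':')
        return -1;
    count = hexbyte(line + 1);                        /* number of data bytes */
    addr  = (hexbyte(line + 3) << 8) | hexbyte(line + 5);
    type  = hexbyte(line + 7);                        /* 00 = data, 01 = EOF */
    sum   = count + (addr >> 8) + (addr & 0xFF) + type;

    printf("type %02X, %u bytes at %04X:", type, count, addr);
    for (i = 0; i < count; i++) {
        unsigned b = hexbyte(line + 9 + 2 * i);
        printf(" %02X", b);
        sum += b;
    }
    printf("\n");

    checksum = hexbyte(line + 9 + 2 * count);
    /* All bytes of the record plus the checksum must sum to 0 mod 256. */
    return ((sum + checksum) & 0xFF) == 0 ? 0 : -1;
}

int main(void)
{
    /* One of the records from the dump above. */
    return parse_record(":10442000840001001a44000000240000ffffffff89");
}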

Asterisk Pointer in Assembly (I32 / x86) [duplicate]

This question already has an answer here:
How does the jmp instruction work in att assembly in this instance
(1 answer)
Closed 3 years ago.
The offending line:
8048f70: ff 24 85 00 a4 04 08 jmp *0x804a400(,%eax,4)
There is no instruction in the disassembled code at location 804a400 (my listing ends at 804a247).
When I check to see what's at that memory location I get:
(gdb) x/c 0x804a40c
0x804a40c: -103 '\231'
(gdb) x/t 0x804a40c
0x804a40c: 10011001
(gdb) x/s 0x804a40c
0x804a40c: "\231\217\004\b\222\217\004\b\211\217\004\b\202\217\004\bw\217\004\b\002"
(gdb) x/3x 0x804a40c
0x804a40c: 0x99 0x8f 0x04
What exactly is this jmp statement trying to do?
That instruction is an indirect jump. This means that the memory address specified is not the jump target, but a pointer to the jump target.
First, the instruction loads the value at the memory address:
*0x804a400(,%eax,4)
which is more sensibly written as:
0x804a400 + %eax * 4 // %eax can be negative
and then it sets %eip to that value.
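This is the classic shape of a compiler-generated jump table for a switch statement. A rough C analogy (the names here are hypothetical; the real table at 0x804a400 holds the addresses of the case bodies):
/* Entry %eax of a table of code pointers selects where execution goes;
   each entry is 4 bytes, hence the *4 scale. */
typedef void (*target_t)(void);

extern void case0(void), case1(void), case2(void); /* hypothetical targets */

static target_t table[] = { case0, case1, case2 }; /* think: 0x804a400 */

void dispatch(unsigned eax)
{
    table[eax](); /* roughly: jmp *table(,%eax,4) */
}
That also explains the bytes you found at 0x804a40c: they are not instructions but little-endian code addresses, e.g. \231\217\004\b is 0x08048f99.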
The best way to decipher these is to use the Intel programmer's reference manual. Table 2-2 in Volume 2A provides a breakdown of the ModR/M byte, and in this case the SIB byte as well.
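Applied to the bytes in question, the decode works out as follows:
ff           opcode FF /4 = JMP r/m32 (indirect near jump)
24           ModR/M: mod=00, reg=100 (the /4), r/m=100 -> a SIB byte follows
85           SIB: scale=10 (x4), index=000 (%eax), base=101 -> disp32, no base
00 a4 04 08  disp32, little-endian -> 0x0804a400
which reassembles to jmp *0x804a400(,%eax,4).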
