What are the values of r16 and r17 after executing this code?
ldi r16, 0x06 ;load immediate
ldi r17, 0x0c ;load immediate
lsl r16 ;logical shift left
eor r16, r17 ;exclusive or
So I know that r16 = 12 after the logical shift left, making it equal to r17. Does the exclusive OR set r16 to 0 while r17 stays at 12? Or do they both get set to zero? Is the zero flag set?
From the obvious source, http://www.atmel.com/webdoc/avrassembler/avrassembler.wb_instructions.Arithmetic_and_Logic_Instructions.html :
EOR Logical Exclusive OR
Rd = Rd EOR Rr
So yes, r16 gets overwritten, but r17 stays unchanged.
http://www.atmel.com/webdoc/avrassembler/avrassembler.wb_EOR.html
even specifies what happens to the Zero flag in the status register. It is set to (¯ denoting inversion, • denoting logical AND):
R7¯ • R6¯ • R5¯ • R4¯ • R3¯ • R2¯ • R1¯ • R0¯
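To make the outcome concrete, here is a minimal Python sketch of the same sequence (plain Python ints standing in for the 8-bit registers; this is an illustration, not AVR output):

# Mirror the four AVR instructions with 8-bit arithmetic.
r16 = 0x06                     # ldi r16, 0x06
r17 = 0x0C                     # ldi r17, 0x0c
r16 = (r16 << 1) & 0xFF        # lsl r16 -> 0x0C (12)
r16 ^= r17                     # eor r16, r17 -> 0x00; r17 is untouched

# Zero flag per the formula above: AND of the inverted result bits,
# i.e. Z is set exactly when every bit of the result is 0.
z_flag = all((r16 >> b) & 1 == 0 for b in range(8))
print(hex(r16), hex(r17), z_flag)   # 0x0 0xc True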
Related
This is a continuation of the question posed in How do you run a SCL file in MPLAB without a "Run SCL" button
I have assembly code for the PIC18F458 that gets data from channel 0 (RA0) of the ADC and displays the result on PORTC and PORTD.
Although I have managed to verify that the code operates as desired in Proteus, I am struggling to do the same in the MPLAB X simulator environment using an SCL file, and I suspect that this is due to the way the text files it refers to are laid out. (Please see below.)
testbench for "pic18f458" is
begin
process is
file datafile : text;
variable intVar : integer;
variable sampling_voltage : integer;
variable fileStatus : file_open_status;
variable fileLine : line;
begin
loop
report("Analog injection started...");
file_open(fileStatus, datafile, "text2.txt", read_mode);
if fileStatus == open_ok then
report("Reading the values file...");
while endfile(datafile) == false loop
wait until ADCON0.GO_nDONE == '1';
report("Conversion started");
readline(datafile, fileLine);
wait for 400 ns;
read(fileLine, intVar);
sampling_voltage := intVar; -- sample input voltage
wait until ADCON0.GO_nDONE == '0';
report("Conversion ended");
if ADCON1.ADFM == '0' then -- left justified
ADRESH <= sampling_voltage / 4;
ADRESL <= sampling_voltage * 64;
else -- right justified
ADRESH <= sampling_voltage / 256;
ADRESL <= sampling_voltage;
end if;
end loop;
file_close(datafile);
end if;
end loop;
wait;
end process;
end testbench;
The SCL file refers to 1 of 2 text files during my debugging session (text.txt and text2.txt), laid out differently. The first consists of decimal numbers from 0 to 255 and the second consists of decimal numbers representing voltages in mV.
text.txt
128
192
238
255
238
192
128
64
17
0
17
64
128
text2.txt
250 mV
500 mV
750 mV
1000 mV
1250 mV
1500 mV
1750 mv
2000 mV
2250 mv
2500 mv
2750 mv
3000 mv
3250 mV
3500 mV
3750 mV
4000 mV
4250 mV
4500 mV
4750 mv
5000 mV
In both cases, the ADC seems to just be churning out the numbers it receives instead of converting them. (Please see the images below.)
This is obviously no good, as I am getting values in the ADRES register that are wider than 10 bits, particularly with my text2.txt values.
ADC Results with text.txt
ADC Results with text2.txt
Therefore, my question is: how do I correctly debug my A/D converter code in the MPLAB X v5.05 simulator using an SCL file, or by any other method?
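For reference, the testbench shown assigns ADRESH/ADRESL directly, so any mV-to-counts scaling has to happen in the testbench itself; if millivolt entries are written in unscaled, the registers simply echo the file. A minimal Python sketch of the expected arithmetic, assuming Vref+ = 5000 mV (an assumption; match the actual ADCON1 configuration):

# Sketch of the expected 10-bit result and its register split,
# assuming Vref+ = 5000 mV (hypothetical; adjust to your setup).
def adc_counts(mv, vref_mv=5000):
    return min(1023, mv * 1023 // vref_mv)        # 10-bit conversion

def justify(counts, left_justified=True):
    if left_justified:                            # ADFM = 0
        return counts >> 2, (counts & 0x3) << 6   # ADRESH, ADRESL<7:6>
    return counts >> 8, counts & 0xFF             # ADFM = 1: top 2 bits, low byte

print(justify(adc_counts(2500)))   # mid-scale 2500 mV -> (127, 192)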
I'm trying to understand the algorithm used for compression value = 1 with the Epson ESC/P2 print command ESC-i. I have a hex dump of a raw print file, part of which looks like the hexdump below (note the little-endian word format).
000006a 1b ( U 05 00 08 08 08 40 0b
units; (page1=08), (vt1=08), (hz1=08), (base2=40 0b=0xb40=2880)
...
00000c0 691b 0112 6802 0101 de00
esc i 12 01 02 68 01 01 00
print color1, compress1, bits1, bytes2, lines2, data...
color1 = 0x12 = 18 = light cyan
compress1 = 1
bits1 (bits/pixel) = 0x2 = 2
bytes2 is ??? = 0x0168 = 360
lines2 is # lines to print = 0x0001 = 1
00000c9 de 1200 9a05 6959
00000d0 5999 a565 5999 6566 5996 9695 655a fd56
00000e0 1f66 9a59 6656 6566 5996 9665 9659 6666
00000f0 6559 9999 9565 6695 9965 a665 6666 6969
0000100 5566 95fe 9919 6596 5996 5696 9666 665a
0000110 5956 6669 0456 1044 0041 4110 0040 8140
0000120 9000 0d00
1b0c 1b40 5228 0008 5200 4d45
FF esc @ esc ( R 00 REMOTE1
The difficulty I'm having is how to decode the data, starting at 00000c9, given 2 bits/pixel and the count of 360. It's my understanding that this is some form of TIFF or RLE encoding, but I can't decode it in a way that makes sense. The output was produced by the Gutenprint plugin for GIMP.
Any help would be much appreciated.
The byte count is not a count of the bytes in the input stream; it is the count after the input has been expanded to uncompressed form, so the expanded data should total 360 bytes. Each control byte is interpreted as a signed value: if it is non-negative, it introduces a literal run, and the following (value + 1) bytes are copied as-is; if it is negative, the single byte that follows is repeated (|value| + 1) times. The 0D at the end is a terminating carriage return for the line as a whole.
The input stream is only ever treated as a string of whole bytes, even though the individual pixel/nozzle controls are only 2 bits each. So it is not really possible to use a repeat count for something like a 3-nozzle sequence; a repeat count must always specify a full 4-nozzle byte.
The above example then specifies:
0xde00 => repeat 0x00 35 times
0x12 => use the next 19 bytes as is
0xfd66 => repeat 0x66 4 times
0x1f => use the next 32 bytes as is
etc.
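Putting those rules into code, here is a minimal Python sketch of the expansion (a TIFF/PackBits-style decoder; the names are illustrative):

# Expand one compressed raster line. Control bytes are signed:
# n < 0x80  -> literal run: copy the next n+1 bytes as-is;
# n >= 0x80 -> repeat run: repeat the next byte 257-n (= |signed|+1) times.
def decode_escp2_rle(data, expected_len):
    out = bytearray()
    i = 0
    while len(out) < expected_len:
        n = data[i]; i += 1
        if n < 0x80:                       # literal run
            out += data[i:i + n + 1]
            i += n + 1
        else:                              # repeat run
            out += bytes([data[i]]) * (257 - n)
            i += 1
    return bytes(out)

# 0xDE 0x00 -> 35 zeros; 0x12 -> next 19 bytes literal; 0xFD 0x66 -> 4x 0x66
sample = bytes([0xDE, 0x00, 0x12]) + bytes(range(19)) + bytes([0xFD, 0x66])
print(len(decode_escp2_rle(sample, 58)))   # 35 + 19 + 4 = 58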
I have the following code for network protocol implementation. As the protocol is big endian, I wanted to use the Bit_Order attribute and High_Order_First value but it seems I made a mistake.
with Ada.Unchecked_Conversion;
with Ada.Text_IO; use Ada.Text_IO;
with System; use System;
procedure Bit_Extraction is
type Byte is range 0 .. (2**8)-1 with Size => 8;
type Command is (Read_Coils,
Read_Discrete_Inputs
) with Size => 7;
for Command use (Read_Coils => 1,
Read_Discrete_Inputs => 4);
type has_exception is new Boolean with Size => 1;
type Frame is record
Function_Code : Command;
Is_Exception : has_exception := False;
end record
with Pack => True,
Size => 8;
for Frame use
record
Function_Code at 0 range 0 .. 6;
Is_Exception at 0 range 7 .. 7;
end record;
for Frame'Bit_Order use High_Order_First;
for Frame'Scalar_Storage_Order use High_Order_First;
function To_Frame is new Ada.Unchecked_Conversion (Byte, Frame);
my_frame : Frame;
begin
my_frame := To_Frame (Byte'(16#32#)); -- Big endian version of 16#4#
Put_Line (Command'Image (my_frame.Function_Code)
& " "
& has_exception'Image (my_frame.Is_Exception));
end Bit_Extraction;
Compilation is ok but the result is
raised CONSTRAINT_ERROR : bit_extraction.adb:39 invalid data
What did I forget or misunderstand?
UPDATE
The real record is in fact:
type Frame is record
Transaction_Id : Transaction_Identifier;
Protocol_Id : Word := 0;
Frame_Length : Length;
Unit_Id : Unit_Identifier;
Function_Code : Command;
Is_Exception : Boolean := False;
end record with Size => 8 * 8, Pack => True;
for Frame use
record
Transaction_Id at 0 range 0 .. 15;
Protocol_Id at 2 range 0 .. 15;
Frame_Length at 4 range 0 .. 15;
Unit_id at 6 range 0 .. 7;
Function_Code at 7 range 0 .. 6;
Is_Exception at 7 range 7 .. 7;
end record;
Where Transaction_Identifier, Word and Length are 16-bit wide.
These are displayed correctly if I remove the Is_Exception field and extend Function_Code to 8 bits.
The dump of the frame to decode is as follows:
00000000 00 01 00 00 00 09 11 03 06 02 2b 00 64 00 7f
So my only problem is really to extract the 8th bit of the last byte.
So,
for Frame use
record
Transaction_Id at 0 range 0 .. 15;
Protocol_Id at 2 range 0 .. 15;
Frame_Length at 4 range 0 .. 15;
Unit_id at 6 range 0 .. 7;
Function_Code at 7 range 0 .. 6;
Is_Exception at 7 range 7 .. 7;
end record;
It seems you want Is_Exception to be the LSB of the last byte?
With for Frame'Bit_Order use System.High_Order_First; the LSB will be bit 7,
(Also, 16#32# will never be the "big endian version of 16#4#"; the bit patterns simply don't match.)
It may be more intuitive and clear to specify all of your fields relative to the word they're in, rather than the byte:
Unit_ID at 6 range 0..7;
Function_Code at 6 range 8 .. 14;
Is_Exception at 6 range 15 .. 15;
Given the definition of Command above, the legal values for the last byte will then be:
2 -> READ_COILS FALSE
3 -> READ_COILS TRUE
8 -> READ_DISCRETE_INPUTS FALSE
9 -> READ_DISCRETE_INPUTS TRUE
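As a quick sanity check of that table, a small Python sketch (assuming the layout just suggested, i.e. the 7-bit command code in the high bits and the exception flag in the LSB):

# Enumerate the legal last-byte values: byte = (code << 1) | exception.
commands = {"READ_COILS": 1, "READ_DISCRETE_INPUTS": 4}
for name, code in commands.items():
    for exc in (0, 1):
        print((code << 1) | exc, "->", name, bool(exc))
# prints 2/3 for READ_COILS and 8/9 for READ_DISCRETE_INPUTS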
BTW,
by applying your update to your original program, and adding/changing the following, your program works for me
add
with Interfaces;
add
type Byte_Array is array(1..8) of Byte with Pack;
change, since we don't know the definition
Transaction_ID : Interfaces.Unsigned_16;
Protocol_ID : Interfaces.Unsigned_16;
Frame_Length : Interfaces.Unsigned_16;
Unit_ID : Interfaces.Unsigned_8;
change
function To_Frame is new Ada.Unchecked_Conversion (Byte_Array, Frame);
change
my_frame := To_Frame (Byte_Array'(00, 01, 00, 00, 00, 09, 16#11#, 16#9#));
I finally found what was wrong.
In fact, the Modbus Ethernet frame definition specifies that, in case of an exception, the returned code should be the function code plus 128 (0x80) (see the explanation on Wikipedia). That's the reason I wanted to represent it with a Boolean value, but my representation clauses were wrong.
The correct clauses are these:
for Frame use
record
Transaction_Id at 0 range 0 .. 15;
Protocol_Id at 2 range 0 .. 15;
Frame_Length at 4 range 0 .. 15;
Unit_id at 6 range 0 .. 7;
Is_Exception at 6 range 8 .. 8;
Function_Code at 6 range 9 .. 15;
end record;
This way, the Modbus network protocol is correctly modelled (or not, but at least my code is working).
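In other words, decoding the last byte reduces to a mask and a shift. A minimal Python sketch of the same layout, for readers without an Ada toolchain at hand:

# Last byte of the frame: MSB (0x80) = exception flag, low 7 bits = code.
def decode_last_byte(b):
    return b & 0x7F, bool(b & 0x80)    # (function_code, is_exception)

print(decode_last_byte(0x04))   # (4, False): Read_Discrete_Inputs, no exception
print(decode_last_byte(0x84))   # (4, True): same code plus 0x80 -> exception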
I really thank egilhh and simonwright for making me find what was wrong and for explaining the semantics behind the aspects.
Obviously, I don't know who to reward :)
Your original record declaration works fine (GNAT complains about the Pack, warning: pragma Pack has no effect, no unplaced components). The problem is with working out the little-endian Byte.
---------------------------------
| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | BE bit numbers
---------------------------------
| c c c c c c c | e |
---------------------------------
| 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 | LE bit numbers
---------------------------------
so if you want the Command to be Read_Discrete_Inputs, the Byte needs to have BE bit 4 (LE bit 3) set, i.e. LE 16#8#.
Take a look at this AdaCore post on bit order and byte order to see how they handle it. After reading that, you will probably find that the byte value for your frame is really 16#08#, which is probably not what you were expecting.
Big endian / little endian typically refers to byte order rather than bit order, so when you see that network protocols are big endian, that means byte order. Avoid setting Bit_Order for your records; in modern systems you will almost never need it.
Your record is only one byte in size, so byte order won't matter for it by itself. Byte order comes into play when you have larger field values (more than 8 bits long).
The Bit_Order pragma doesn't reverse the order in which the bits appear in memory. It simply defines whether the most significant (leftmost) bit is logically numbered zero (High_Order_First) or the least significant bit is numbered zero (Low_Order_First) when interpreting the First_Bit and Last_Bit offsets from the byte position in the representation clause. Keep in mind that these offsets are taken from the MSB or LSB of the scalar the record component belongs to AS A VALUE.
So, for the byte positions to carry the same meaning on a little-endian CPU as they do on a big-endian CPU (and likewise the in-memory representation of multibyte machine scalars, which exist when one or more record components with the same byte position have a Last_Bit value exceeding a single byte), 'Scalar_Storage_Order must also be specified.
I'm trying to discover devices in my network from a coordinator.
So I sent an ND command to the coordinator and I'm correctly receiving responses from the other XBees.
The next step will be to store the information I've received in a web application, in order to send commands and data.
However, I'm still missing some parts of the frame response. So far I've mapped the frame like this:
1 7E start frame
===== =================== MESSAGE LENGTH
2-3 0x00 0x19 -> 25
===== =================== PACKET TYPE
4 88 -> response to a remote AT command
5 02 frame ID
===== =================== AT COMMAND
6-7 0x4E 0x44 "ND"
8 00 status byte (00 -> OK)
===== =================== MY - Remote Address
9-10 0x17 0x85
===== =================== SH - SERIAL NUMBER HIGH
11-14 0x00 0x13 0xA2 0x00
===== =================== SL - SERIAL NUMBER LOW
15-18 0x40 0xB4 0x50 0x23
===== =================== SIGNAL
19 20
= ======== NI - Node Identifier
20 00
21 FF
22 FE
23 01
24 00
25 C1
26 05
27 10
28 1E
===== ===== CHECKSUM (over the 25 bytes following MESSAGE LENGTH)
29 19
So, where can I find the address of the device in this response?
My guess is in the NI part of the message, but I haven't found any example/information on how the data is organised.
Could someone point me in the right direction?
As someone told me on the digi.com forum:
NI<CR> (Variable length)
PARENT_NETWORK ADDRESS (2 Bytes)<CR>
DEVICE_TYPE (1 Byte: 0=Coord, 1=Router, 2=End Device)
STATUS (1 Byte: Reserved)
PROFILE_ID (2 Bytes)
MANUFACTURER_ID (2 Bytes)
So, looking at my frame response:
00 --- Node Identifier variable (here 1 byte = 00, because no value is set up).
FFFE --- parent network address (2 bytes)
01 --- device type
00 --- status
C105 --- profile id
101E --- manufacturing id
This, afaik, means that no information about the address of the device is given in this last part of the frame. The only information is the SL and SH.
The 16-bit network address is what you've labeled "MY" (0x1785), and the 64-bit MAC address is the combination of SH/SL (00 13 A2 00 40 B4 50 23).
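To make the layout concrete, here is a Python sketch that parses the frame above. It follows the ND response format quoted from the forum, and assumes NI is a null-terminated string (so the 0x20 byte at position 19 is read here as the NI text, a single default space; that reading is an assumption):

# Parse the example AT command response frame shown in the question.
frame = bytes.fromhex(
    "7E 00 19 88 02 4E 44 00 17 85 00 13 A2 00 40 B4 50 23"
    " 20 00 FF FE 01 00 C1 05 10 1E 19")

payload = frame[3:-1]            # drop 0x7E, 16-bit length, checksum
assert payload[0] == 0x88        # AT command response
at_cmd = payload[2:4].decode()   # "ND"
status = payload[4]              # 0x00 = OK
my = payload[5:7]                # 16-bit network address -> 1785
sh = payload[7:11]               # serial number high -> 0013A200
sl = payload[11:15]              # serial number low  -> 40B45023
ni_end = payload.index(0, 15)    # NI: null-terminated string
ni = payload[15:ni_end]          # b" " (assumed default NI, a space)
rest = payload[ni_end + 1:]
parent = rest[0:2]               # FFFE (no parent)
device_type = rest[2]            # 01 = router
profile_id = rest[4:6]           # C105
manufacturer_id = rest[6:8]      # 101E

print(my.hex(), (sh + sl).hex()) # 1785 0013a20040b45023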
I examined some MPEG-4 video headers and saw byte arrays like the one below at the beginning:
00 00 01 B0 01 00 00 01 B5 89 13
I know the 00 00 01 parts, but what exactly do the B0 01 and B5 89 13 parts mean? Actually, if I put this byte array in front of an MPEG-4 stream, it works fine.
But I don't know whether those values work with different MPEG-4 stream sources.
0x000001B0 -> Visual Object Sequence Start (VOSS) Code
0x000001B5 -> Visual Object Start (VOS) Code
You can find the complete MPEG-4 elementary video header details in the "ISO/IEC 14496-2" documentation. Here are the details you asked for.
Visual Object Sequence Start (VOSS) Code
-> 4 bytes visual object sequence start code = long hex value of 0x000001B0
-> 8 bits profile/level indicator = 1 byte unsigned number
Visual Object Start (VOS) Code
-> 4 bytes visual object start code = long hex value of 0x000001B5
-> 1 bit has id marker flag = 1/4 nibble flag
_ID_Marker_Section_
-> 4 bits version id = 1 nibble unsigned value - only if marker is true
- version id types are ISO 14496-2 = 1
-> 3 bits visual object priority = 3/4 nibble unsigned value - only if marker is true
- priorities are 1 through to 7
-> 4 bits visual object type = 1 nibble unsigned value
- types are video = 1 ; still texture = 2 ; mesh = 3 ; face = 4
-> 1 bit video signal type = 1/4 nibble flag
- NOTE: if this is false Y has a sample range of 16 through to 235
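Putting that breakdown together, here is a small Python sketch that decodes the example bytes from the question (bit positions per the list above; purely illustrative):

# Walk the 11 header bytes: two start codes, then the VOS bit fields.
hdr = bytes.fromhex("000001B001000001B58913")

assert hdr[0:4] == bytes.fromhex("000001B0")   # VOSS start code
profile_level = hdr[4]                         # 0x01
assert hdr[5:9] == bytes.fromhex("000001B5")   # VOS start code

bits = int.from_bytes(hdr[9:11], "big")        # 0x8913, MSB first
has_id      = bits >> 15                       # 1 -> ID marker section present
version_id  = (bits >> 11) & 0xF               # 1 = ISO 14496-2
priority    = (bits >> 8) & 0x7                # 1
vo_type     = (bits >> 4) & 0xF                # 1 = video
signal_type = (bits >> 3) & 0x1                # 0 -> Y sample range 16..235

print(profile_level, has_id, version_id, priority, vo_type, signal_type)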