I'm trying to use Tera Term to send binary data over the serial port. When I try to send data that has the MSB set to '1', Tera Term sends multiple 8-bit characters. I've tried modifying the TERATERM.ini file as follows:
Meta8Bit=raw
Accept8BitCtrl=on
Send8BitCtrl=on
My .ttl script is very simple with the following loop just for testing purposes:
while 1
send $80
mpause 2
endwhile
With the above script I get multiple 8-bit characters sent every 2 ms, not a single 0x80 send.
Thanks
You can do
while 1
sendfile 'binaryfile' 1
mpause 2
endwhile
where binaryfile is a file that contains 0x80 in binary
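If you need a way to create that one-byte file, a short Python snippet will do it (the filename binaryfile just matches the placeholder above):

# Write a single 0x80 byte to a file for the TTL sendfile command.
with open("binaryfile", "wb") as f:
    f.write(bytes([0x80]))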
This is a terrible work around, I wish I had a better solution.
Change the language from UTF-8 to English:
Setup --> General --> Language
Set to English
Then you can send hex values >= 0x80.
Related
I read an executable file (exe) and I saw \x00#. I know that 0x00 is NULL, but what does the # represent in hexadecimal? I couldn't find any information about this.
Example
b'MZ\x90\x00\x03\x00\x00\x00\x04\x00\x00\x00\xff\xff\x00\x00\xb8\x00\x00\x00\x00\x00\x00\x00#\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xc0\x00\x00\x00\x0e\x1f\xba\x0e\x00\xb4\t\xcd!\xb8\x01L\xcd!This program cannot be run in DOS mode.\r\r\n'
It means nothing special; you are simply viewing raw binary rendered as text. Bytes whose values are printable ASCII characters are shown as those characters instead of as \xNN escapes, so # is just the byte 0x23 (decimal 35), the ASCII code for '#'. The same thing is happening with the 'MZ' at the start and the "This program cannot be run in DOS mode." string at the end: those are simply runs of printable bytes.
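You can check this at a Python prompt, since the dump in the question is a Python bytes literal: printable ASCII bytes are shown as characters, everything else as \xNN escapes.

data = bytes([0x00, 0x23, 0x40])
print(data)                      # b'\x00#@'
print(data[1], hex(data[1]))     # 35 0x23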
I'm guessing this binary goo is from the PE Format for Windows executables.
I am reading data from a TCP port in Tcl using a socket. The messages do not end with any newline, but they do contain a header giving the number of bytes of data.
I have the following code to read two bytes of data from the socket (16-bit little endian) and convert them into an integer that I can then use in a loop to read the rest of the data:
binary scan [read $Socket 2] s* length
In this case $Socket is my socket and it has been configured to use binary encoding.
This works well except where either the upper or lower byte is 0x0D. It appears Tcl reads both 0x0D and 0x0A as '\n', which then defaults to 0x0A, so the code does not work correctly. For example, 13 is read as 10. How do I stop this from happening?
The socket should be placed into binary mode if you're moving binary data across it.
chan configure $Socket -translation binary
# Use [fconfigure] instead of [chan configure] in older Tcl versions
This disables all the automatic processing that Tcl usually does — your description says you're having a problem with end-of-line conversion — and makes it so that read will just deliver a string of the bytes (formally a string of characters between U+000000 and U+0000FF, and internally using an efficient in-memory encoding scheme).
For files, you can include b in the control mode when opening to get this done for you. For sockets, you need to do this yourself.
In addition to configuring binary encoding, you also need to set the translation to 'lf'. As this is a frequently occurring situation, there is a shorthand for making these two settings:
fconfigure $Socket -translation binary
Whenever we program a microcontroller, we convert the C file into a hex file and then burn that into the controller.
My question is: why a hex file only? Is that hex file a hexadecimal version of the binary executable?
If yes, then why don't we use a binary file instead?
If you are talking about an "Intel hex" file, the reason is that it is ASCII, which makes it easy to examine and parse. True, it is inefficient in one way, but compared to a raw binary it might be smaller. With a raw binary you only have one address associated with the image, if any: the starting address, and it is not embedded in the file. In a hex file, or a Motorola S-record (a similar and often-used format), addresses are embedded: both the ihex and srec formats are basically lines of ASCII hex numbers that represent a record type, a starting address, a length, data, and a checksum. There are non-data lines in there, but much of it will be data. So if your program has a few bytes at address 0x1000 and a few bytes at 0x80000000, then a .bin file would at its smallest be 0x80000000 - 0x1000 plus a few bytes, but would typically be 0x80000000 plus a few bytes (right, 2 gigabytes), whereas an ihex or srec would be dozens of bytes total. The ihex and srec formats also have built-in checksums to help protect against corrupt files; not perfect, of course, but better than nothing at all...
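To make that concrete, here is a small illustrative sketch in Python that builds a single Intel HEX data record (record type 00) with its checksum; the ihex_record helper is just for demonstration:

# Hypothetical helper: build one Intel HEX data record (type 00).
# Record layout is ":LLAAAATTDD...CC", where CC is the two's-complement
# checksum of every byte after the leading ':'.
def ihex_record(address, data):
    record = [len(data), (address >> 8) & 0xFF, address & 0xFF, 0x00] + list(data)
    checksum = (-sum(record)) & 0xFF
    return ":" + "".join("%02X" % b for b in record + [checksum])

# A few bytes placed at address 0x1000 cost one short ASCII line,
# not a multi-gigabyte memory image.
print(ihex_record(0x1000, b"\x12\x34\x56\x78"))  # :0410000012345678D8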
Since then, ELF and COFF and other formats have become popular. These are also based on blocks of data rather than a complete memory image. They are binary formats, not ASCII, but they are not just a memory image: chunks of data with address, type, etc. are provided.
Because the ihex and srec formats are so simple to create and parse, they will continue to be used for a long time; it does not take a lot of resources in a bootloader, for example, to handle receiving an ihex or srec file. (The same is true of a binary, of course, but the binary has a lot of fill data in it, costing a lot of unnecessary transmission time.)
I am using a SIM900 GSM module connected to my AVR microcontroller.
I tested it with an FT232 to see the transmitted data.
First the micro sends AT and the module responds with OK:
AT OK
AT+CMGF=1 OK
AT+CMGS="+9893XXXXXX" returns ERROR and doesn't show ">"
Could anybody advise me what to do?
The command AT+CSCS? will tell you what type of SMS encoding is used. The proper answer is "GSM"; if it is not, you should set it with the command AT+CSCS="GSM".
And remember to finish the SMS text with "Ctrl+Z" (not "Enter").
You aren't passing all the parameters to the command.
The command format is:
AT+CMGS=<number><CR><message><CTRL-Z>
Where:
<CR> = ASCII character 13
<CTRL-Z> = ASCII character 26
You have passed only the number, and without the <CR> you won't see the > prompt for the message.
Example:
AT+CMGS="+9893XXXXXX"
> This is the message.<CTRL-Z>
The response is:
+CMGS:<mr>
OK
Where <mr> is the message reference.
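As a rough sketch of the same sequence from code, here is how it might look in Python with pyserial; the port name, baud rate and number are placeholders for whatever your setup uses:

import time
import serial  # pyserial

# Placeholder port settings; adjust for your wiring to the SIM900.
ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)

ser.write(b"AT+CMGF=1\r")                # text mode
print(ser.read(64))                      # expect b'...OK...'

ser.write(b'AT+CMGS="+9893XXXXXX"\r')    # number followed by <CR> (ASCII 13)
time.sleep(1)                            # give the module time to print the '>' prompt
ser.write(b"This is the message.")       # message body
ser.write(bytes([26]))                   # <CTRL-Z> (ASCII 26) terminates the SMS
print(ser.read(64))                      # expect b'+CMGS: <mr>' followed by OK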
If the AT+CSCS? command returns UCS2, then many arguments need to be encoded as a hex string of their UTF-16 encoding, so the phone number would become "002B0039003800390033...", and the SMS text would need to be encoded in the same way. If you don't need UCS2 encoding, then the easiest thing to do is to switch to GSM encoding (or another encoding from the available set, as shown by the AT+CSCS=? command).
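If you do have to stay in UCS2, the hex string is simply the UTF-16 big-endian encoding of the text, which a couple of lines of Python can confirm:

# UCS2 mode: arguments are sent as the hex of their UTF-16 big-endian encoding.
number = "+9893XXXXXX"
print(number.encode("utf-16-be").hex().upper())
# -> 002B0039003800390033005800580058005800580058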
Sometimes the issue is the text mode you are in. Enter AT+CMGF? and you should receive +CMGF: 1. If instead you receive +CMGF: 0, enter AT+CMGF=1. This changes the message format from PDU mode to Text mode. I'm not sure what either of those mean exactly, but this fixed my issue.
SIM 800 AT command manual
ISO/IEC 2022 defines the C0 and C1 control codes. The C0 set are the familiar codes between 0x00 and 0x1f in ASCII, ISO-8859-1 and UTF-8 (e.g. ESC, CR, LF).
Some VT100 terminal emulators (e.g. screen(1), PuTTY) support the C1 set, too. These are the values between 0x80 and 0x9f (so, for example, 0x84 moves the cursor down a line).
I am displaying user-supplied input. I do not wish the user input to be able to alter the terminal state (e.g. move the cursor). I am currently filtering out the character codes in the C0 set; however, I would like to conditionally filter out the C1 set too, if the terminal will interpret them as control codes.
Is there a way of getting this information from a database like termcap?
The only way to do it that I can think of is using C1 requests and testing the return value:
$ echo `echo -en "\x9bc"`
^[[?1;2c
$ echo `echo -e "\x9b5n"`
^[[0n
$ echo `echo -e "\x9b6n"`
^[[39;1R
$ echo `echo -e "\x9b0x" `
^[[2;1;1;112;112;1;0x
The above ones are:
CSI c Primary DA; request Device Attributes
CSI 5 n DSR; Device Status Report
CSI 6 n CPR; Cursor Position Report
CSI 0 x DECREQTPARM; Request Terminal Parameters
The terminfo/termcap that ESR maintains (link) has a couple of these requests in user strings 7 and 9 (user7/u7, user9/u9):
# INTERPRETATION OF USER CAPABILITIES
#
# The System V Release 4 and XPG4 terminfo format defines ten string
# capabilities for use by applications, .... In this file, we use
# certain of these capabilities to describe functions which are not covered
# by terminfo. The mapping is as follows:
#
# u9 terminal enquire string (equiv. to ANSI/ECMA-48 DA)
# u8 terminal answerback description
# u7 cursor position request (equiv. to VT100/ANSI/ECMA-48 DSR 6)
# u6 cursor position report (equiv. to ANSI/ECMA-48 CPR)
#
# The terminal enquire string should elicit an answerback response
# from the terminal. Common values for u9 will be ^E (on older ASCII
# terminals) or \E[c (on newer VT100/ANSI/ECMA-48-compatible terminals).
#
# The cursor position request (u7) string should elicit a cursor position
# report. A typical value (for VT100 terminals) is \E[6n.
#
# The terminal answerback description (u8) must consist of an expected
# answerback string. The string may contain the following scanf(3)-like
# escapes:
#
# %c Accept any character
# %[...] Accept any number of characters in the given set
#
# The cursor position report (u6) string must contain two scanf(3)-style
# %d format elements. The first of these must correspond to the Y coordinate
# and the second to the X coordinate. If the string contains the sequence %i, it is
# taken as an instruction to decrement each value after reading it (this is
# the inverse sense from the cup string). The typical CPR value is
# \E[%i%d;%dR (on VT100/ANSI/ECMA-48-compatible terminals).
#
# These capabilities are used by tack(1m), the terminfo action checker
# (distributed with ncurses 5.0).
Example:
$ echo `tput u7`
^[[39;1R
$ echo `tput u9`
^[[?1;2c
Of course, if you only want to prevent display corruption, you can use the approach less(1) takes and let the user switch between displaying and not displaying control characters (the -r and -R options in less). Also, if you know your output charset, the ISO-8859 charsets have the C1 range reserved for control codes (so they have no printable chars in that range).
Actually, PuTTY does not appear to support C1 controls.
The usual way of testing this feature is with vttest, which provides menu entries for changing the input and output separately to use 8-bit controls. PuTTY fails the sanity check for each of those menu entries, and if the check is disabled, the result confirms that PuTTY does not honor those controls.
I don't think there's a straightforward way to query whether the terminal supports them. You can try nasty hacky workarounds (like print them and then query the cursor position) but I really don't recommend anything along these lines.
I think you could just filter out these C1 codes unconditionally. Unicode declares the U+0080..U+009F range as control characters anyway; I don't think you should ever use them for anything else.
(Note: you used the example 0x84 for cursor down. It's in fact U+0084 encoded in whichever encoding the terminal uses, e.g. 0xC2 0x84 for UTF-8.)
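A minimal sketch of that unconditional filtering in Python; which C0 characters you keep (tab and newline here) is your choice:

import re

# Strip C0 controls (except tab and newline), DEL, and the C1 range U+0080-U+009F.
_CTRL = re.compile(r"[\x00-\x08\x0b-\x1f\x7f\x80-\x9f]")

def sanitize(user_input: str) -> str:
    return _CTRL.sub("", user_input)

print(repr(sanitize("hello\x84world\x1b[2J\n")))  # 'helloworld[2J\n'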
Doing it 100% automatically is challenging at best. Many, if not most, Unix interfaces are smart (xterms and whatnot), but you don't actually know whether you're connected to an ASR33 or a PC running MSDOS.
You could try some of the terminal interrogation escape sequences and timeout if there is no reply. But then you might have to fall back and maybe ask the user what kind of terminal they are using.
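Here is a hedged sketch of that probe-and-timeout idea in Python: send the 8-bit CSI "c" (Device Attributes) request and see whether anything comes back within a short window. The raw-mode handling is simplified; a real program needs more care.

import os
import select
import sys
import termios
import tty

def probe_c1_da(timeout=0.3):
    # Send a C1 Device Attributes request (8-bit CSI followed by 'c') and
    # report whatever reply arrives within the timeout. Assumes stdin is a tty.
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setcbreak(fd)                        # no echo or line buffering for the reply
        os.write(sys.stdout.fileno(), b"\x9bc")  # same request as the echo -en "\x9bc" above
        ready, _, _ = select.select([fd], [], [], timeout)
        reply = os.read(fd, 64) if ready else b""
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)
    return reply

if __name__ == "__main__":
    answer = probe_c1_da()
    print("reply:", answer or "none (terminal probably ignores C1 controls)")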