Boxing sensor with Arduino Nano 33 BLE: problem sending data to Excel - arduino

(sorry for my bad English)
I am making a project for school: a boxing sensor built with the Arduino Nano 33 BLE Sense. I only use the accelerometer and gyroscope included on the board. I want to measure the acceleration of the boxer's hit in order to deduce the punch power of the boxer. The inclination of the punching bag will also be useful.
The sensor will be mounted on the punching bag.
The simple accelerometer program works, but when I try to modify it to transfer the data to Excel I get the error message "Error: DATA < ASCII 10 or > ASCII 200 with PLX-DAQ....". I can't fix it.
Can you help me please?
Thank you for your help!
The code is here.

The error says that PLX-DAQ won't accept ASCII characters < 10 or > 200.
\t (horizontal tab) is decimal 9. Make sure you only send characters that are decimal 10-200 in the ASCII table.
https://en.wikipedia.org/wiki/ASCII
From the PLX-DAQ manual:
For simple error checking, PLX-DAQ will indicate an error anytime that a string containing characters < ASCII 10 or > ASCII 200 is received. Values of ASCII 10 (Line Feed) are replaced with ASCII 13 (Carriage Return) prior to processing.
I suggest you use a comma instead of a horizontal tab to separate the values.
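For example, a minimal sketch for the Nano 33 BLE Sense that streams the on-board accelerometer in PLX-DAQ's comma-separated format could look like the following. This is only a sketch: the CLEARDATA/LABEL/DATA/TIME keywords are the usual PLX-DAQ directives and the column names are placeholders, so check them against your version of the macro.

#include <Arduino_LSM9DS1.h>   // IMU library for the Nano 33 BLE Sense

void setup() {
  Serial.begin(9600);
  while (!Serial);
  if (!IMU.begin()) {
    while (1);                           // stop here if the IMU does not start
  }
  Serial.println("CLEARDATA");           // reset the PLX-DAQ sheet
  Serial.println("LABEL,Time,aX,aY,aZ"); // column headers, comma-separated
}

void loop() {
  float x, y, z;
  if (IMU.accelerationAvailable()) {
    IMU.readAcceleration(x, y, z);
    // Only digits, letters, commas, dots and minus signs are sent, so every
    // character stays inside the ASCII 10-200 range that PLX-DAQ accepts.
    Serial.print("DATA,TIME,");
    Serial.print(x);
    Serial.print(",");
    Serial.print(y);
    Serial.print(",");
    Serial.println(z);
  }
}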

Related

Selecting character code table in ESC/POS command

I need to print non-English characters on receipts using a thermal POS receipt printer. The Xprinter XP-58III thermal POS receipt printer supports generic ESC/POS commands.
As far as I know, this should be done by setting the character code table. In my case, the target code page is 21.
The ESC/POS command for setting the code page is 'ESC t n' (ASCII) or '1B 74 n' (hex), where 'n' is page n of the character code table.
In case I use the hex form of the command: should I convert '21' to a hex value, or should I use the number without converting, i.e. '1B 74 21'?
Also, where should this command be added? Right after the initialization code?
0x1B 0x40 0x1B 0x74 0x21
I use a hex editor to add/edit ESC/POS codes inside the binary file.
EDIT: I solved the issue myself. In order to print any non-English characters on the POS receipt printer, we have to fulfil two conditions: 1) set the correct Code Page, and 2) set the corresponding encoding in the receipt file or POS software (the same encoding as the Code Page). The correct Code Page for this POS printer model is 25 [WPC1257].
Page 21 would be "Thai Character Code 11". The 21 is a decimal value, so as a byte you need to send it as 0x15 in hex; the command will then look like "0x1B 0x74 0x15".
Regarding the command position: ESC/POS commands are executed in place and, in general, affect everything that follows. There should be no problem if you put it just after the initialization command. Just try it.
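As an illustration of the full byte sequence discussed above (initialize, then select code table 21 decimal = 0x15), here is a minimal C++ sketch; writing to stdout is only a stand-in for however you actually send bytes to the printer:

#include <cstdint>
#include <cstdio>

int main() {
    const uint8_t cmd[] = {
        0x1B, 0x40,        // ESC @   : initialize the printer
        0x1B, 0x74, 0x15   // ESC t n : select character code table, n = 21 decimal = 0x15
    };
    // Placeholder: send the bytes to the printer (serial port, raw spooler, ...).
    std::fwrite(cmd, 1, sizeof cmd, stdout);
    return 0;
}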

Handle UTF-8 characters in Unix

I was trying to find a solution for my problem, and after looking at the forums I couldn't find one, so I'll explain my problem here.
We receive a csv file from a client with some special characters and encoded as unknown-8bit. We convert this csv file to xml using an awk script. With the xml file we make an API call to our system using utf-8 as the default encoding. The response is an error with the following information:
org.apache.xerces.impl.io.MalformedByteSequenceException: Invalid byte 1 of 1-byte UTF-8 sequence
The content of the file is as below:
151215901579-109617744500,sandra,sandra,Coesfeld,,Coesfeld,48653,DE,1,2.30,ASTRA 16V CAVALIER CALIBRA TURBO BLUE 10,53.82,GB,,.80,3,ASTRA 16V CAVALIER CALIBRA TURBO BLUE 10MM 4CORE IGNITION HT LEADS WIRES MLR.CR,,sandra#online.de,parcel1,Invalid Request,,%004865315500320004648880276,INTL,%004865315500320004648880276,1,INTL,DPD,180380,INTL,2.30,Send A2B Ltd,4th Floor,200 Gray’s Inn Road,LONDON,,WC1X8XZ,GBR,
I think the problem is in the field "200 Gray’s Inn Road", because when I use utf-8 encoding the "’" character gets automatically converted to an 0x92 value.
Does anybody know how I can handle this?
Thanks in advance,
Sandra
Find out the actual encoding first; the best way would be asking the sender.
If you cannot do so, and also for sanity-checking, the Unix command file is very useful for that (the linked page shows more options).
Next step: convert to UTF-8.
As it is obviously an ASCII-based encoding, you could just discard all non-ASCII characters, or replace them during conversion, if that loss is acceptable.
As an alternative, open it in the editor of your choice and flip the encoding used for interpreting the data until you get something useful. My guess is you'll have either Latin-1 or Windows-1252, but check it for yourself.
Last step: do what you wanted to do, in the comforting knowledge that you now have valid UTF-8.
Obviously, don't pretend it's UTF-8 if it isn't. Find out what the encoding is, or replace all non-ASCII characters with the UTF-8 REPLACEMENT CHARACTER sequence 0xEF 0xBF 0xBD.
Since you are able to view this particular sample just fine, you apparently already know which encoding it is (even if you don't know that you know -- it would be whatever your current set-up is using) -- I would guess Windows-1252 which uses 0x92 for a curvy right single quote.
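If you take the "replace what you cannot interpret" route from the answers above, a minimal byte-wise sketch (no attempt to guess the real encoding, so accented characters are simply lost) could look like this:

#include <fstream>
#include <string>

// Copy input to output, passing ASCII bytes through unchanged and replacing
// every non-ASCII byte with the UTF-8 REPLACEMENT CHARACTER sequence 0xEF 0xBF 0xBD.
int main(int argc, char** argv) {
    if (argc < 3) return 1;
    std::ifstream in(argv[1], std::ios::binary);
    std::ofstream out(argv[2], std::ios::binary);
    const std::string replacement = "\xEF\xBF\xBD";
    char c;
    while (in.get(c)) {
        if (static_cast<unsigned char>(c) < 0x80)
            out.put(c);
        else
            out << replacement;
    }
    return 0;
}

If the file really is Windows-1252 or Latin-1, a proper transcoding (for example with iconv) keeps the characters instead of throwing them away.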

How to view all special characters

I am having a hard time removing the special characters from a csv file.
I have done a head -1, so I am only comparing one row.
wc filename shows it has a byte count of 1396.
If I go to the end of the file, the cursor ends at 1394.
In vi I do :set list (to check for control characters) and I see a $ (nothing after that), so I now know that is the 1395th byte.
Can someone please tell me where the 1396th byte is?
I am trying to compare 2 files using diff and it's giving me a lot of trouble.
Please help.
The last 2 bytes of your line are \r\n, which is a Windows line ending. dos2unix converts this into a Unix line ending, which is \n alone, hence the line is shortened by 1 byte after conversion.
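To see this for yourself, you can dump the numeric values of the last few bytes of the file. A small sketch (the file name is hard-coded purely for illustration):

#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

int main() {
    std::ifstream f("filename", std::ios::binary);
    std::vector<unsigned char> data((std::istreambuf_iterator<char>(f)),
                                     std::istreambuf_iterator<char>());
    // For a Windows-style file the last two values printed are 13 and 10 (\r\n);
    // after dos2unix only the 10 (\n) remains, which is why the count drops by one.
    std::size_t start = data.size() >= 4 ? data.size() - 4 : 0;
    for (std::size_t i = start; i < data.size(); ++i)
        std::printf("byte %zu: %u\n", i + 1, static_cast<unsigned>(data[i]));
    return 0;
}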

GS1 support in a QR encoder/decoder?

Very few QR encoders/decoders have (explicit) support for so-called GS1 encoding. Zint is one of the exceptions (under QR select GS-1 Data Mode), but its license prevents me from using it. Commercial offers, mainly from Tec-It, are costly, especially because I'm not interested in all other kinds of barcodes they support.
Is there a way to add GS1 support to any QR encoder/decoder without changing its source? For example, could I apply some algorithm to transform textual GTIN AI data into compatible binary? I think it should be possible, because after all, it's still QR. Please note that I am not a data coding expert - I'm just looking for a way to deal with this standard without paying a small fortune. So far, I found postscriptbarcode which does have support for it, and seems to use its own QR engine, but output quality is so-so and my PostScript skills are far too limited to figure out the algorithm.
As long as the library supports decoding of the FNC1 special character, it can be used to read GS1 codes. The FNC1 character is not a byte in the data-stream, but more of a formatting symbol.
The specification says that a leading FNC1-character is used to identify GS1 barcodes, and should be decoded as "]d2" (GS1 DataMatrix), "]C1" (GS1-128), "]e0" (GS1 DataBar Omnidirectional) or "]Q3" (GS1 QR Code). Any other FNC1-characters should be decoded as ASCII GS-characters (byte value 29).
Depending on the library, the leading FNC1 might be missing, or decoded as GS (not critical), or the embedded FNC1-characters might be missing (critical). The embedded FNC1-characters are used to delimit variable-length fields.
You can read the full specification here (pdf). The algorithm for decoding the data can be found under heading 7.9 Processing of Data from a GS1 Symbology using GS1 Application Identifiers (page 426).
The algorithm goes something like this (a rough code sketch follows the outline):
Peek at the first character.
If it is ']',
    If the string does not start with ']C1' or ']e0' or ']d2' or ']Q3',
        Not a GS1 barcode.
        Stop.
    Consume the characters.
Else if it is <GS>,
    Consume the character.
Else,
    No symbology identifier, assume GS1.
While not end of input,
    Read the first two digits.
    If they are in the table of valid codes,
        Look up the length of the AI code.
        Read the rest of the code.
        Look up the length of the field.
        If it is variable-length,
            Read until the next <FNC1> or <GS>.
        Else,
            Read the rest of the field.
        Peek at the next character.
        If it is <FNC1> or <GS>, consume it.
        Save the read field.
    Else,
        Error: Invalid AI.
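Here is a rough C++ sketch of that parsing loop, assuming the decoder has already handed you the data with FNC1 turned into the ASCII GS character (29), and using only a tiny illustrative subset of the AI table (the full table is in the GS1 General Specifications, and real AI prefixes can be longer than two digits):

#include <iostream>
#include <map>
#include <string>

int main() {
    // AI prefix -> fixed field length; 0 means variable-length (terminated by GS or end of data).
    const std::map<std::string, int> aiTable = {
        {"01", 14},  // GTIN, fixed 14 digits
        {"15", 6},   // best-before date, fixed 6 digits
        {"30", 0},   // variable count
        {"10", 0},   // batch/lot, variable
    };

    const char GS = 29;                                   // embedded FNC1 decoded as ASCII GS
    std::string data = "01049123451234591597033130128";
    data += GS;
    data += "10ABC123";

    std::size_t pos = 0;
    while (pos < data.size()) {
        std::string ai = data.substr(pos, 2);             // read the two-digit AI
        auto it = aiTable.find(ai);
        if (it == aiTable.end()) {
            std::cerr << "Error: Invalid AI\n";
            return 1;
        }
        pos += ai.size();

        std::string field;
        if (it->second == 0) {                            // variable length: read until GS or end
            std::size_t end = data.find(GS, pos);
            field = data.substr(pos, end == std::string::npos ? std::string::npos : end - pos);
        } else {                                          // fixed length
            field = data.substr(pos, it->second);
        }
        pos += field.size();
        if (pos < data.size() && data[pos] == GS)         // consume the separator if present
            ++pos;

        std::cout << "(" << ai << ") " << field << "\n";  // save/emit the field
    }
    return 0;
}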
The binary data in the QR Code is encoded as 4-bit tokens, with embedded data.
0111 -> Start Extended Channel Interpretation (ECI) Mode (special encodings).
0001, 0010, 0100, 1000 -> start numeric, alphanumeric, raw 8-bit, kanji encoded data.
0011 -> structured append (combine two or more QR Codes to one data-stream).
0101 -> FNC1 initial position.
1001 -> FNC1 other positions.
0000 -> End of stream (can be omitted if not enough space).
After an encoding specification comes the data length, followed by the actual data. The meaning of the data bits depends on the encoding used. In between the data blocks, you can squeeze in FNC1 characters.
The QR Code specification (ISO/IEC 18004) unfortunately costs money (210 francs). You might find some pirate version online though.
To create GS1 QR Codes, you need to be able to specify the FNC1-characters in the data. The library should either recognize the "]Q3" prefix and GS-characters, or allow you to write FNC1 tokens via some other method.
If you have some way to write the FNC1-characters, you can encode GS1 data as follows:
Write initial FNC1.
For each field,
    Write the AI-code as decimal digits.
    Write field data.
    If the code is a variable-length field,
        If not the last field,
            Write FNC1 to terminate the field.
If possible, you should order the fields such that a variable-length field comes last.
As noted by Terry Burton in the comments: the FNC1 symbol in a GS1 QR Code can be encoded as % in alphanumeric data, and as GS in byte mode. To encode an actual percent symbol, you write it as %%. (A small code sketch of the whole encoding procedure follows the worked example below.)
To encode (01) 04912345123459 (15) 970331 (30) 128 (10) ABC123, you first combine it into the data string 01049123451234591597033130128%10ABC123 (% indicator is the encoded FNC1 symbol). This string is then written as
0101 - Initial FNC1, GS1 mode indicator
0001 - QR numeric mode
0000011101 - Data length (29)
<data bits for "01049123451234591597033130128">
0010 - QR alphanumeric mode
000001001 - Data length (9)
<data bits for "%10ABC123">
(Example from the ISO 18004:2006 specification)
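Putting the encoding steps together, here is a sketch that builds the FNC1-delimited element string for that same example. How the FNC1 itself is passed to your encoder depends on the library; using ASCII GS (29) as a stand-in is a common convention, and the encoder is then responsible for turning it into the real FNC1 token (or the % escape in alphanumeric mode):

#include <iostream>
#include <string>
#include <vector>

struct Field {
    std::string ai;      // application identifier
    std::string value;   // field data
    bool variable;       // true for variable-length AIs
};

int main() {
    // The (01) (15) (30) (10) example from above, with the variable-length field (10) kept last.
    std::vector<Field> fields = {
        {"01", "04912345123459", false},
        {"15", "970331",         false},
        {"30", "128",            true},
        {"10", "ABC123",         true},
    };

    const char FNC1 = 29;   // stand-in for FNC1 (ASCII GS)
    std::string out;
    for (std::size_t i = 0; i < fields.size(); ++i) {
        out += fields[i].ai + fields[i].value;
        if (fields[i].variable && i + 1 < fields.size())
            out += FNC1;    // terminate a variable-length field that is not last
    }
    // out is now 01049123451234591597033130128<GS>10ABC123
    std::cout << out << "\n";
    return 0;
}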

Please help identify multi-byte character encoding scheme on ASP Classic page

I'm working with a 3rd party (Commidea.com) payment processing system and one of the parameters being sent along with the processing result is a "signature" field. This is used to provide a SHA1 hash of the result message wrapped in an RSA encrypted envelope to provide both integrity and authenticity control. I have the API from Commidea but it doesn't give details of encoding and uses artificially created signatures derived from Base64 strings to illustrate the examples.
I'm struggling to work out what encoding is being used on this parameter and hoped someone might recognise the quite distinctive pattern. I initially thought it was UTF-8 but, having looked at the individual characters, I am less sure.
Here is a short sample of the content which was created by the following code where I am looping through each "byte" in the string:
sig = Request.Form("signature")
For x = 1 To LenB(sig)
    s = s & AscB(MidB(sig, x, 1)) & ","
Next
' Print s to a debug log file
When I look in the log I get something like this:
129,0,144,0,187,0,67,0,234,0,71,0,197,0,208,0,191,0,9,0,43,0,230,0,19,32,195,0,248,0,102,0,183,0,73,0,192,0,73,0,175,0,34,0,163,0,174,0,218,0,230,0,157,0,229,0,234,0,182,0,26,32,42,0,123,0,217,0,143,0,65,0,42,0,239,0,90,0,92,0,57,0,111,0,218,0,31,0,216,0,57,32,117,0,160,0,244,0,29,0,58,32,56,0,36,0,48,0,160,0,233,0,173,0,2,0,34,32,204,0,221,0,246,0,68,0,238,0,28,0,4,0,92,0,29,32,5,0,102,0,98,0,33,0,5,0,53,0,192,0,64,0,212,0,111,0,31,0,219,0,48,32,29,32,89,0,187,0,48,0,28,0,57,32,213,0,206,0,45,0,46,0,88,0,96,0,34,0,235,0,184,0,16,0,187,0,122,0,33,32,50,0,69,0,160,0,11,0,39,0,172,0,176,0,113,0,39,0,218,0,13,0,239,0,30,32,96,0,41,0,233,0,214,0,34,0,191,0,173,0,235,0,126,0,62,0,249,0,87,0,24,0,119,0,82,0
Note that every other value is a zero, except occasionally where it is 32 (0x20). I'm familiar with UTF-8, where characters above 127 are represented using two bytes, but if this were UTF-8 encoding then I would expect the "32" value to be more like 194 (0xC2) or 195 (0xC3), and the other value would be greater than 0x80.
Ultimately what I'm trying to do is convert this signature parameter into a hex-encoded string (e.g. "12ab0528...") which is then used by the RSA/SHA1 function to verify that the message is intact. This part is already working, but I can't for the life of me figure out how to get the signature parameter decoded.
For historical reasons we are having to use classic ASP and the SHA1/RSA functions are javascript based.
Any help would be much appreciated.
Regards,
Craig.
Update: I tried looking into UTF-16 encoding on Wikipedia and other sites. I can't find anything to explain why I am seeing only 0x20 or 0x00 in the (assumed) high-order byte positions. I don't think this is relevant any more, as the example below shows other values in these high-order positions.
Tried adding some code to log the values using Asc instead of AscB (Len,Mid instead of LenB,MidB too). Got some surprising results. Here is a new stream of byte-wise characters followed by the equivalent stream of word-wise (if you know what I mean) characters.
21,0,83,1,214,0,201,0,88,0,172,0,98,0,182,0,43,0,103,0,88,0,103,0,34,33,88,0,254,0,173,0,188,0,44,0,66,0,120,1,246,0,64,0,47,0,110,0,160,0,84,0,4,0,201,0,176,0,251,0,166,0,211,0,67,0,115,0,209,0,53,0,12,0,243,0,6,0,78,0,106,0,250,0,19,0,204,0,235,0,28,0,243,0,165,0,94,0,60,0,82,0,82,0,172,32,248,0,220,2,176,0,141,0,239,0,34,33,47,0,61,0,72,0,248,0,230,0,191,0,219,0,61,0,105,0,246,0,3,0,57,32,54,0,34,33,127,0,224,0,17,0,224,0,76,0,51,0,91,0,210,0,35,0,89,0,178,0,235,0,161,0,114,0,195,0,119,0,69,0,32,32,188,0,82,0,237,0,183,0,220,0,83,1,10,0,94,0,239,0,187,0,178,0,19,0,168,0,211,0,110,0,101,0,233,0,83,0,75,0,218,0,4,0,241,0,58,0,170,0,168,0,82,0,61,0,35,0,184,0,240,0,117,0,76,0,32,0,247,0,74,0,64,0,163,0
And now the word-wise data stream:
21,156,214,201,88,172,98,182,43,103,88,103,153,88,254,173,188,44,66,159,246,64,47,110,160,84,4,201,176,251,166,211,67,115,209,53,12,243,6,78,106,250,19,204,235,28,243,165,94,60,82,82,128,248,152,176,141,239,153,47,61,72,248,230,191,219,61,105,246,3,139,54,153,127,224,17,224,76,51,91,210,35,89,178,235,161,114,195,119,69,134,188,82,237,183,220,156,10,94,239,187,178,19,168,211,110,101,233,83,75,218,4,241,58,170,168,82,61,35,184,240,117,76,32,247,74,64,163
Note how the second pair of byte-wise characters (83,1) seems to be interpreted as 156 in the word-wise stream. We also see (34,33) as 153, (120,1) as 159 and (220,2) as 152. Does this give any clues as to the encoding? Why are these 15[2369] values apparently being treated differently from the other values?
What I'm trying to figure out is whether I should use the byte-wise data and carry out some post-processing to get back to the intended values or if I should trust the word-wise data with whatever implicit decoding it is apparently performing. At the moment, neither seem to give me a match between data content and signature so I need to change something.
Thanks.
Quick observation tells me that you are likely dealing with UTF-16. Start from there.
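To see why that is a reasonable reading: the byte-wise dump pairs up naturally into little-endian 16-bit code units. A minimal sketch that reinterprets a few of those pairs as UTF-16LE code units (the values are the first four pairs from the question's first dump, plus the first pair whose second byte is 32):

#include <cstdio>
#include <vector>

int main() {
    // Byte values taken from the question's byte-wise dump.
    std::vector<unsigned char> raw = {129, 0, 144, 0, 187, 0, 67, 0, 19, 32};
    for (std::size_t i = 0; i + 1 < raw.size(); i += 2) {
        // Little-endian: low byte first, high byte second.
        unsigned code = raw[i] | (raw[i + 1] << 8u);
        std::printf("U+%04X\n", code);   // e.g. the pair 19,32 comes out as U+2013 (EN DASH)
    }
    return 0;
}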

Resources