How to read an ESC/POS guide? - hex

For instance, Epson's TM-T88V ESC/POS manual specifies commands as sequences of hex bytes. I need to supply my printer with a buffer that contains the FS C command to change the code page. Neither of these attempts works; the code number just gets interpreted literally:
\x1C \x43 0
\x1C \x430
How do you read ESC/POS manuals?

An ESC/POS printer accepts data as a series of raw bytes. In your example, FS C (1C 43) selects the Kanji character code system; you need to send the three bytes 1C 43 00, where the final parameter is the byte value 0x00, not the ASCII character '0'. The printer will then execute the command.
However, a command is normally sent as part of a sequence: you initialize the printer first and end with a cut command.
For example:
Initialize the printer: 1B 40
Switch to standard mode: 1B 53
Your command: 1C 43 00
Your data.
Print (line feed): 0A
Finally, cut the paper: 1D 56 00
Your printer's programming manual should detail these steps.
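As a concrete illustration, here is a minimal Python sketch that builds such a buffer and writes it straight to a printer device (the device path /dev/usb/lp0 and the sample text are assumptions; adapt them to your setup):

# Build an ESC/POS byte buffer and send it to the printer (sketch)
buf = b""
buf += b"\x1b\x40"              # ESC @  : initialize the printer
buf += b"\x1b\x53"              # ESC S  : select standard mode
buf += b"\x1c\x43\x00"          # FS C   : parameter is the raw byte 0x00, not ASCII "0"
buf += "Hello".encode("ascii")  # the data to print
buf += b"\x0a"                  # LF     : print the buffered line
buf += b"\x1d\x56\x00"          # GS V 0 : full cut

with open("/dev/usb/lp0", "wb") as printer:  # assumed device path
    printer.write(buf)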

Related

Unicode normalization of Hebrew in Tcl, SQLite, or whatever else will work in Linux OS

I'm trying to perform joins in SQLite on Hebrew words including vowel points and cantillation marks and it appears that the sources being joined built the components in different orders, such that the final strings/words appear identical on the screen but fail to match when they should. I'm pretty sure all sources are UTF-8.
I don't see a built-in method of Unicode normalization in SQLite, which would be the easiest solution. I found this link on Tcl and Unicode, but it looks a bit old, using Tcl 8.3 and Unicode 1.0. Is this the most up-to-date method of normalizing Unicode in Tcl, and is it appropriate for Hebrew?
If Tcl doesn't have a viable method for Hebrew, is there a preferred scripting language for handling Hebrew that could be used to generate normalized strings for joining? I'm using Manjaro Linux but am a bit of a novice at most of this.
I'm capable enough with JavaScript, browser extensions, and the SQLite C API to pass the data from C to the browser to be normalized and back again to be stored in the database; but I figured there is likely a better method. I refer to the browser because I assume browsers are kept most up to date for obvious reasons.
Thank you for any guidance you may be able to provide.
I used the following code in an attempt to make the procedure provided by @DonalFellows a SQLite function, so as to come close to not bringing the data into Tcl at all. I'm not sure how SQLite functions really work in that respect, but that is why I tried it. I used the foreach loop solely to print some indication that the query was running and progressing, because it took about an hour to complete.
However, that's probably pretty good for my ten-year-old machine, considering that in that hour it ran on 1) the Hebrew with vowel points, 2) the Hebrew with vowel points and cantillation marks, and 3) the Septuagint translation of the Hebrew, for all thirty-nine books of the Old Testament, and then on two different manuscripts of Koine Greek for all twenty-seven books of the New Testament.
I still have to run the normalization on the other two sources to know how effective this is overall; however, after running it on this one, which is the most involved of the three, I ran the joins again and the number of matches nearly doubled.
proc normalize {string {form nfc}} {
exec uconv -f utf-8 -t utf-8 -x "::$form;" << $string
}
# Arguments are: dbws function NAME ?SWITCHES? SCRIPT
dbws function normalize -returntype text -deterministic -directonly { normalize }
for {set b 1} {$b <= 66} {incr b} {
puts "Working on book $b"
dbws eval { update src_original set uni_norm = normalize(original) where book_no=$b }
puts "Completed book $b"
}
If you're not in a hurry, you can pass the data through uconv. You'll need to be careful when working with non-normalized data though; Tcl's pretty aggressive about format conversion on input and output. (Just… not about normalization; the normalization tables are huge and most code doesn't need to do it.)
proc normalize {string {form nfc}} {
exec uconv -f utf-8 -t utf-8 -x "::$form;" << $string
}
The code above only really works on systems where the system encoding is UTF-8… but that's almost everywhere that has uconv in the first place.
Useful normalization forms are: nfc, nfd, nfkc and nfkd. Pick one and force all your text to be in it (ideally on ingestion into the database… but I've seen so many broken DBs in this regard that I suggest being careful).
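If you end up doing the normalization outside Tcl, Python's standard unicodedata module does the same job. A minimal sketch with an illustrative Hebrew example (the sample word is made up for demonstration):

import unicodedata

# Shin with dagesh (U+05BC) and qamats (U+05B8): the same visible glyph,
# but with the combining marks stored in different orders.
a = "\u05e9\u05bc\u05b8"   # shin, dagesh, qamats
b = "\u05e9\u05b8\u05bc"   # shin, qamats, dagesh

print(a == b)  # False: byte-wise different, though visually identical
print(unicodedata.normalize("NFC", a) == unicodedata.normalize("NFC", b))  # True

This is exactly the failure mode described in the question: joins compare code-point sequences, not rendered glyphs, so both sides must be forced into the same normalization form first.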

HTBasic for receiving data from an RS232 device

I am not a coder and don't understand much about programming, but I have to run an experiment in the laboratory, and I have to use HTBasic to receive data from two GPIB devices (IEEE 488) and one RS232 device (a high-precision lab scale).
I am changing/adding to an old script that someone else wrote, which only received data from the two GPIB devices.
I need data only every 15-30 minutes (the experiment will run for a month). I do successfully receive data from the lab scale (device interface select code = 12), but they only arrive correctly when I loop every ~10 ms. If I loop every 1 second, the data are "old": I remove the item from the scale and, instead of showing zero, it still shows the weight. Imagine what happens if I ask for a reading every 15 minutes.
It seems the received data arrive in order, one by one, and are displayed in that order; probably an internal buffer stores them. Does anyone know how to OPEN and CLOSE the communication with a serial device on demand? For the GPIB devices I send commands like TALK (talk) and UNT (untalk) each time through the loop, but I can't find out how to do this with the serial device.
I tried CONTROL 12,100;0 and CONTROL 12,100;1 (XOFF/XON), but it didn't work.
Here is one of the scripts I tried; it gives the correct weight values, but only when looping every 0.01 seconds.
LOOP
  ENTER 12 USING "10D";W
  PRINT TABXY(70,20),"Weight is:";W
  WAIT 0.01
END LOOP
END
I would suggest trying handshake control.
You can control the serial interface using the HTBasic CONTROL statement.
For example:
CONTROL 9,5;0 ! use DTR and RTS
CONTROL 9,12;0 ! read DSR, CD, and CTS
You should also use interface handles, like so:
ASSIGN #Serial TO 9 ! Opens the Serial Port, and clears buffer
ASSIGN #Serial TO * ! Closes the Serial Port
This should work for a 15-minute cycle (900 s):
ON CYCLE 900 GOSUB Get_serial
LOOP
END LOOP
STOP
Get_serial: !
  ASSIGN #Serial TO 12    ! open the port; clears any stale buffered data
  ENTER #Serial USING "10D";W
  ASSIGN #Serial TO *     ! close the port again until the next cycle
  RETURN
END
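For comparison, the same open-on-demand idea in Python using the third-party pyserial package (a sketch; the port name, baud rate, and line-oriented protocol of the scale are assumptions):

import serial  # third-party pyserial package

def read_scale_once(port_name="/dev/ttyS0"):
    # Open the port only when a reading is actually needed
    port = serial.Serial(port_name, baudrate=9600, timeout=2)
    try:
        port.reset_input_buffer()  # discard stale buffered readings
        return port.readline()     # take one fresh reading
    finally:
        port.close()               # close again until the next cycle

Opening the port just before the measurement and flushing the input buffer is what guarantees that the value read is current rather than the oldest entry in the buffer.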
Hi guys, and thanks for your answers (they came a bit late, though).
Most probably both of your suggestions would work (I haven't tried them; maybe in the near future).
What I did back then to solve my problem was basically this: LOOP continuously and print the serial device values (weight in grams) on a specific area of the screen (CRT); ON DELAY of a specific time (e.g. every 15 minutes), go to a new loop (called Loooop in the code) which tells the program to grab the RS232 lab-scale value from the screen (not from the device directly), and of course the two GPIB devices; after that, repeat the continuous LOOP that shows the live lab-scale values on the CRT, to prevent the buffer from filling up. And so everything worked smoothly.
I understand that this is not a GOOD way of coding but, as I said, I am a rookie in this field... BUT IT WORKED.
So the code I wrote was something like:
[....]
33 ASSIGN #Scale TO 12
52 ENTER #Scale USING "10D";Weight
54 PRINT TABXY(70,20),"Captured LabScaLE Weight=";Weight;" g"
55 A=Weight
90 ON DELAY T GOTO Loooop
92 LOOP
93 ENTER #Scale USING "10D";A
94 A=A
95 PRINT TABXY(65,35);A;TABXY(65,35);
96 !
97 END LOOP
98 !
99 Loooop: GOTO 100 ! Write the line that follows, e.g. 171
100 !
101 !
102 ENTER CRT;A
116 !==============================================START LISTENING FROM RS232 labscale (DISPLAY CONTINUOUS DATA ON CRT)======
117 !ENTER CRT;Weight
118 Weight=A
119 PRINT TABXY(70,20),"Captured LabScaLE Weight=";Weight;" g"
120 !
121 !==============================================START LISTENING FROM GDS CTRL=======
122 SEND 9;UNL UNT MLA TALK 14 DATA CHR$(255)
123 ENTER 9 USING "#,B,4D,6D";S,Pressurea,Volumea
124 SEND 9;UNT DATA CHR$(255)
128 !=============================================START LISTENING FROM GDS CTRL=======
129 !
130 SEND 8;UNL UNT MLA TALK 13 DATA CHR$(255)
131 ENTER 8 USING "#,B,4D,6D";S,Pressureb,Volumeb
132 SEND 8;UNT DATA CHR$(255)
[.....]
150 GOTO 92 !

Parsing Hex dump

I recently came across Kaitai Struct for dealing with arbitrary binary formats. The thing is, I have a hex dump: a file I want to parse whose contents are hex-encoded text. When I use the visualizer in the Kaitai Web IDE to map the data, it converts the hex data into hex again. Is there any way to decode the file from hex first, so that the visualizer shows the actual bytes?
For example, the file contains
3335363330
and the visualizer maps it to 33 33 33 35 33 36 33 33 33 30.
Thanks in advance.
Currently, neither the Kaitai Web IDE nor the console visualizer (ksv) supports reading hex-encoded files, only raw binary files.
The solution is to convert the hex-encoded (text) file to a binary one first and then load the binary file into Kaitai.
You can do this by calling xxd -r -p <input_file >output_file on Linux, or e.g. by running this small Python script: python -c "open('output_file','wb').write(open('input_file','r').read().strip().decode('hex'))". The latter works on any machine where Python 2 is installed.
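Since str.decode('hex') exists only in Python 2, a Python 3 equivalent would be the following sketch (bytes.fromhex skips ASCII whitespace as of Python 3.7, so newlines in the dump are fine):

# Convert a hex-encoded text file to a raw binary file (Python 3)
with open("input_file", "r") as src:
    hex_text = src.read()

with open("output_file", "wb") as dst:
    dst.write(bytes.fromhex(hex_text))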

QCryptographicHash - what is SHA3 here in reality?

I have this piece of code:
void SHAPresenter::hashData(QString data)
{
QCryptographicHash* newHash = new QCryptographicHash(QCryptographicHash::Sha3_224);
newHash->addData(data.toUtf8());
QByteArray hashResultByteArray = newHash->result();
setHashedData(QString(hashResultByteArray.toHex()));
delete newHash;
}
According to the Qt documentation, QCryptographicHash::Sha3_224 should "generate an SHA3-224 hash sum. Introduced in Qt 5.1". I wanted to compare the result of that code against another source, to check whether I am feeding the data in correctly. I found this site: https://emn178.github.io/online-tools/sha3_224.html
So we have SHA3-224 in both cases. The problem is that the first generates this digest for "test":
3be30a9ff64f34a5861116c5198987ad780165f8366e67aff4760b5e
And the second:
3797bf0afbbfca4a7bbba7602a2b552746876517a7f9b7ce2db0ae7b
Not similar at all. But there is also a site that does "Keccak-224":
https://emn178.github.io/online-tools/keccak_224.html
And here result is:
3be30a9ff64f34a5861116c5198987ad780165f8366e67aff4760b5e
I know that SHA3 is based on the Keccak function - but what is the issue here? Which of these two implementations follows NIST FIPS 202 properly, and how do we know that?
I'm writing a Keccak library for Java at the moment, so I had the toys handy to test an initial suspicion.
First a brief summary. Keccak is a sponge function which can take a number of parameters (bitrate, capacity, domain suffix, and output length). SHA-3 is simply a subset of Keccak where these values have been chosen and standardised by NIST (in FIPS PUB 202).
In the case of SHA3-224, the parameters are as follows:
bitrate: 1152
capacity: 448
domain suffix: "01"
output length: 224 (hence the name SHA3-224)
The important thing to note is that the domain suffix is a bitstring which gets appended after the input message and before the padding. The domain suffix is an optional way to differentiate different applications of the Keccak function (such as SHA3, SHAKE, RawSHAKE, etc). All SHA3 functions use "01" as a domain suffix.
Based on the documentation, I get the impression that Keccak initially had no domain suffix concept, and the known-answer tests provided by the Keccak team require that no domain suffix is used.
So, to your problem. If we take the String "test" and convert it to a byte array using ASCII or UTF-8 encoding (because Keccak works on binary, so text must be converted into bytes or bits first, and it's therefore important to decide on which character encoding to use) then feed it to a true SHA3-224 hash function we'll get the following result (represented in hexadecimal, 16 bytes to a line for easy reading):
37 97 BF 0A FB BF CA 4A 7B BB A7 60 2A 2B 55 27
46 87 65 17 A7 F9 B7 CE 2D B0 AE 7B
SHA3-224 can be summarised as Keccak[1152, 448](M || "01", 224) where the M || "01" means "append 01 after the input message and before multi-rate padding".
However, without a domain suffix we get Keccak[1152, 448](M, 224) where the lonesome M means that no suffix bits are appended, and the multi-rate padding will begin immediately after the input message. If we feed your same input "test" message to this Keccak function which does not use a domain suffix then we get the following result (again in hex):
3B E3 0A 9F F6 4F 34 A5 86 11 16 C5 19 89 87 AD
78 01 65 F8 36 6E 67 AF F4 76 0B 5E
So this result indicates that the function is not SHA3-224.
Which all means that the difference in output you are seeing is explained entirely by the presence or absence of a domain suffix of "01" (which was my immediate suspicion on reading your question). Anything which claims to be SHA3 must use a "01" domain suffix, so be very wary of tools which behave differently. Check the documentation carefully to make sure that they don't require you to specify the desired domain suffix when creating/using the object or function, but anything which claims to be SHA3 really should not make it possible to forget the suffix bits.
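You can reproduce both digests from this answer with a few lines of Python, as a sanity check (hashlib.sha3_224 is in the standard library since Python 3.6; the keccak module comes from the third-party pycryptodome package):

import hashlib
from Crypto.Hash import keccak  # pycryptodome

# True SHA3-224: Keccak with the "01" domain suffix
print(hashlib.sha3_224(b"test").hexdigest())
# -> 3797bf0afbbfca4a7bbba7602a2b552746876517a7f9b7ce2db0ae7b

# Raw Keccak-224: no domain suffix, as in the original Keccak submission
k = keccak.new(digest_bits=224)
k.update(b"test")
print(k.hexdigest())
# -> 3be30a9ff64f34a5861116c5198987ad780165f8366e67aff4760b5e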
This is a bug in Qt; it was reported and fixed in Qt 5.9.

Newline while writing a text file in Ada

I am opening a text file in Ada with the following code:
Open (File => out_parcial_variante1, Name => "c.txt", Mode => Append_File);
Put (File => out_parcial_variante1, Item => "r");
Close (out_parcial_variante1);
The file has a structure like this inside:
01 #510.00:1003.00,512.04:1110.00,515.00:998.00,-98.00,-100.00
<second empty line, this text is not in the file>
Note that besides the initial line, the cursor sits on a second line with nothing written on it.
Whenever my code writes in the file, this happens:
01 #510.00:1003.00,512.04:1110.00,515.00:998.00,-98.00,-100.00
r
It creates another newline instead of appending on the 2nd line like this:
01 #510.00:1003.00,512.04:1110.00,515.00:998.00,-98.00,-100.00
r
How do I fix this?
EDIT: It's a file-position problem, since I read the whole line beforehand; but I tried closing and reopening the file, and the position remains on the second line instead of going back to the beginning.
I threw together a quick test program with GNAT 2012 on Windows and it works as expected.
Code:
with Ada.Text_IO;
use Ada.Text_IO;
procedure Append_Test is
OPV: File_Type;
begin
Open (OPV, Append_File, "c.txt");
Put (OPV, "r");
Close (OPV);
end Append_Test;
I programmatically created the c.txt file, using Put_Line to output the text; this was the contents of the file:
01 #510.00:1003.00,512.04:1110.00,515.00:998.00,-98.00,-100.00
I used Cygwin's od -t x1 to dump the file, and saw that it terminated with a 0d 0a EOL sequence, i.e. CR/LF.
Running the above code resulted in a file containing the expected output:
01 #510.00:1003.00,512.04:1110.00,515.00:998.00,-98.00,-100.00
r
Again dumping with od showed the file ending with 0d 0a 72 0d 0a. That's the original EOL, to which is appended the 'r' and another EOL.
If this isn't happening for you, then it's not clear what you're actually doing. (Note that on Linux the 0d 0a sequences would instead be simply 0a.)
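If od isn't handy, a short Python sketch performs the same check of the trailing bytes (the file name c.txt matches the question; everything else is illustrative):

# Print the last few bytes of the file in hex (rough equivalent of od -t x1)
with open("c.txt", "rb") as f:
    tail = f.read()[-8:]
print(" ".join(f"{b:02x}" for b in tail))  # expect it to end 0d 0a 72 0d 0a on Windows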
