I am not a programmer and don't understand much about coding, but I have to run an experiment in the laboratory and have to use HTBasic to receive data from 2 GPIB devices (IEEE 488) and one RS232 device (this one is a high-precision lab scale).
I am changing/adding to an old script that someone else wrote. It originally only received data from the 2 GPIB devices.
I only need to get data every 15-30 minutes (the experiment will run for a month), and even though I successfully receive data from the lab scale (device interface select code = 12), the readings are only "in sync" when the loop runs about every 10 ms (milliseconds). If I make the loop run every 1 second the data are "old": e.g. I removed the item from the scale and instead of showing ZERO "0" it still shows the old weight. Imagine what happens if I ask for a loop every 15 minutes.
It seems that received data arrive one by one and are displayed in that order; probably there is an internal buffer or something that stores them. Does anyone know how to OPEN and CLOSE the communication with a serial device on DEMAND? E.g. for the GPIB devices I send a command like TALK (talk) and UNT (untalk) every time the loop runs, but I can't find out how to do this with the serial device.
I tried CONTROL 12,100;0 and CONTROL 12,100;1 (XOFF/XON) but it didn't work.
Here is one of the scripts I tried; it gives me the correct weight values, but only when the loop runs every 0.01 seconds.
LOOP
  ENTER 12 USING "10D";W           ! Read one weight value from the scale at select code 12
  PRINT TABXY(70,20),"WEIGHT IS:";W
  WAIT 0.01                        ! Works at 10 ms; longer waits return stale, buffered readings
END LOOP
END
I would suggest trying handshake control.
You can control the serial interface using the HTBasic CONTROL statement.
For example, you can control the handshake lines:
CONTROL 9,5;0 ! use DTR and RTS
CONTROL 9,12;0 ! read DSR, CD, and CTS
You should also use an I/O path name (interface handle), like this:
ASSIGN @Serial TO 9   ! Opens the serial port and clears the buffer
ASSIGN @Serial TO *   ! Closes the serial port
This should work for a 15-minute cycle (900 s):
ON CYCLE 900 GOSUB Get_serial    ! Every 900 seconds, branch to the read routine
LOOP
END LOOP
STOP
Get_serial:!
  ASSIGN @Serial TO 12           ! Opening the path on demand clears any stale, buffered data
  ENTER @Serial USING "10D";W
  ASSIGN @Serial TO *            ! Close the path again until the next cycle
RETURN
END
Hi guys, and thanks for your answers (they came a bit late though).
Both of your suggestions will most probably work (I haven't tried them yet... maybe in the near future).
What I did in the meantime to solve my problem was basically something like this: LOOP continuously and print the serial device values (weight in grams) to a specific area of the screen (CRT); after an ON DELAY of a specific time (e.g. every 15 minutes) go to a new loop (called Loooop in the code) which grabs the RS232 lab scale value from the screen (not from the device directly) and of course reads the 2 GPIB devices; after that, return to the continuous LOOP to show the real-time lab scale values on the CRT screen and keep the buffer from filling up... and so everything worked smoothly.
I understand that this is not a GOOD way to code, but as I said I am a rookie in this field... BUT IT WORKED.
So the code I wrote was something like:
[....]
33 ASSIGN @Scale TO 12
52 ENTER @Scale USING "10D";Weight
54 PRINT TABXY(70,20),"Captured LabScale Weight=";Weight;" g"
55 A=Weight
90 ON DELAY T GOTO Loooop
92 LOOP
93 ENTER @Scale USING "10D";A
94 A=A
95 PRINT TABXY(65,35);A;TABXY(65,35);
96 !
97 END LOOP
98 !
99 Loooop: GOTO 100 !Write the line that follows, e.g. 171
100 !
101 !
102 ENTER CRT;A
116 !==============================================START LISTENING FROM RS232 labscale (DISPLAY CONTINUOUS DATA ON CRT)======
117 !ENTER CRT;Weight
118 Weight=A
119 PRINT TABXY(70,20),"Captured LabScale Weight=";Weight;" g"
120 !
121 !==============================================START LISTENING FROM GDS CTRL=======
122 SEND 9;UNL UNT MLA TALK 14 DATA CHR$(255)
123 ENTER 9 USING "#,B,4D,6D";S,Pressurea,Volumea
124 SEND 9;UNT DATA CHR$(255)
128 !=============================================START LISTENING FROM GDS CTRL=======
129 !
130 SEND 8;UNL UNT MLA TALK 13 DATA CHR$(255)
131 ENTER 8 USING "#,B,4D,6D";S,Pressureb,Volumeb
132 SEND 8;UNT DATA CHR$(255)
[.....]
150 GOTO 92 !
I'm trying to perform joins in SQLite on Hebrew words including vowel points and cantillation marks and it appears that the sources being joined built the components in different orders, such that the final strings/words appear identical on the screen but fail to match when they should. I'm pretty sure all sources are UTF-8.
I don't see a built-in method of Unicode normalization in SQLite, which would be the easiest solution; I found this link about Tcl Unicode, but it looks a bit old, using Tcl 8.3 and Unicode 1.0. Is this the most up-to-date method of normalizing Unicode in Tcl, and is it appropriate for Hebrew?
If Tcl doesn't have a viable method for Hebrew, is there a preferred scripting language for handling Hebrew that could be used to generate normalized strings for joining? I'm using Manjaro Linux but am a bit of a novice at most of this.
I'm capable enough with JavaScript, browser extensions, and the SQLite C API to pass the data from C to the browser to be normalized and back again to be stored in the database; but I figured there is likely a better method. I refer to the browser because I assume browsers are kept most up to date for obvious reasons.
Thank you for any guidance you may be able to provide.
I used the following code in an attempt to make the procedure provided by @DonalFellows a SQLite function, so that it came close to not bringing the data into Tcl at all. I'm not sure how SQLite functions really work in that respect, but that is why I tried it. I used the foreach loop solely to print some indication that the query was running and progressing, because it took about an hour to complete.
However, that's probably pretty good for my ten-year-old machine, and in that hour it ran on 1) the Hebrew with vowel points, 2) the Hebrew with vowel points and cantillation marks, and 3) the Septuagint translation of the Hebrew, for all thirty-nine books of the Old Testament, and then on two different manuscripts of Koine Greek for all twenty-seven books of the New Testament.
I still have to run the normalization on the other two sources to know how effective this is overall; however, after running it on this one, which is the most involved of the three, I ran the joins again and the number of matches nearly doubled.
proc normalize {string {form nfc}} {
exec uconv -f utf-8 -t utf-8 -x "::$form;" << $string
}
# Arguments are: dbws function NAME ?SWITCHES? SCRIPT
dbws function normalize -returntype text -deterministic -directonly { normalize }
foreach { b } { 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 } {
    puts "Working on book $b"
    dbws eval { update src_original set uni_norm = normalize(original) where book_no=$b }
    puts "Completed book $b"
}
If you're not in a hurry, you can pass the data through uconv. You'll need to be careful when working with non-normalized data though; Tcl's pretty aggressive about format conversion on input and output. (Just… not about normalization; the normalization tables are huge and most code doesn't need to do it.)
proc normalize {string {form nfc}} {
exec uconv -f utf-8 -t utf-8 -x "::$form;" << $string
}
The code above only really works on systems where the system encoding is UTF-8… but that's almost everywhere that has uconv in the first place.
Useful normalization forms are: nfc, nfd, nfkc and nfkd. Pick one and force all your text to be in it (ideally on ingestion into the database… but I've seen so many broken DBs in this regard that I suggest being careful).
I am going to run a load test using JMeter on Amazon AWS, and I need to know before starting my test how much traffic it is going to generate over the network.
The criterion in Amazon's policy is:
traffic that sustains, in aggregate, for more than 1 minute, over 1 Gbps (1 billion bits per second) or 1 Gpps (1 billion packets per second). If my test is going to exceed this criterion, we need to submit a form before starting the test.
So how can I know whether the test is going to exceed this number or not?
Run your test with 1 virtual user and 1 iteration in command-line non-GUI mode like:
jmeter -n -t test.jmx -l result.csv
To get an approximate figure, open the result.csv file using the Aggregate Report listener; there you will have 2 columns: Received KB/sec and Sent KB/sec. Multiply them by the duration of your test in seconds and you will get the number you're looking for.
Alternatively, you can open the result.csv file in MS Excel, LibreOffice Calc, or an equivalent, where you can sum the bytes and sentBytes columns and get the traffic with 1-byte precision.
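If you prefer to script it, here is a minimal sketch (Python, used purely as an illustration; the result.csv file name and the bytes/sentBytes column names follow the answer above, and a header row in the CSV is assumed):

import csv

# Sum the bytes and sentBytes columns of a JMeter CSV result file.
received = 0
sent = 0
with open("result.csv", newline="") as f:
    for row in csv.DictReader(f):
        received += int(row["bytes"])
        sent += int(row["sentBytes"])

total = received + sent
print(f"received: {received} B, sent: {sent} B, total: {total} B")

Scale the single-user, single-iteration total by your planned number of users and iterations, and divide by the test duration, to estimate whether the sustained rate could approach the 1 Gbps threshold.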
For instance, Epson's TM-T88V ESC/POS manual lists the command like this: [manual excerpt showing the FS C command]
I need to supply my printer with a buffer that contains the FS C code to change the code page.
# doesn't work, the actual code page number just gets interpreted literally
\x1C \x43 0
\x1C \x430
How do you read ESC/POS manuals?
An ESC/POS printer accepts data as a series of bytes. In the example you show, to select Kanji characters you need to send the three bytes 1C 43 00; the printer will then execute the command.
However, before you send this command to an ESC/POS printer you need to send a series of commands first, and then end with the cut command.
For example:
Initialize the printer with 1B 40.
Switch to standard mode: 1B 53.
Your command: 1C 43 00.
Your data.
Print command: 0A.
Last command, cut paper: 1D 56 00.
Your printer's programming manual should detail these steps.
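To make the byte-versus-character distinction concrete, here is a minimal sketch (Python, used only as an illustration; the /dev/usb/lp0 device path and the "Hello" data are assumptions, not from the original answer) that assembles the sequence above, sending the parameter as the raw byte \x00 rather than the character "0":

ESC = b"\x1b"
FS = b"\x1c"
GS = b"\x1d"

buffer = (
    ESC + b"@"             # 1B 40: initialize the printer
    + ESC + b"S"           # 1B 53: switch to standard mode
    + FS + b"C" + b"\x00"  # 1C 43 00: the FS C command; 00 is a raw byte, not the character "0"
    + b"Hello"             # your data
    + b"\x0a"              # 0A: print (line feed)
    + GS + b"V" + b"\x00"  # 1D 56 00: cut the paper
)

with open("/dev/usb/lp0", "wb") as printer:
    printer.write(buffer)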
I'm new to Processing and serial communication and my problem seems very elementary. I'm trying to send data over from Processing to an Arduino but it seems that something gets lost in translation.
On Arduino I'm running this super simple sketch:
void setup()
{
  Serial.begin(9600);
}
void loop()
{
  if (Serial.available() > 0) Serial.println(Serial.read());
}
The intention there is to read a byte from serial and then write it right back so I can see what is going on. Testing this with the included serial monitor behaves as I'd expect: typing in "0" returns "48". So far so good.
Things start to go wrong when I run this Processing sketch:
import processing.serial.*;
Serial myPort;
void setup()
{
  //frameRate(10);
  myPort = new Serial(this, Serial.list()[4], 9600);
}
void draw()
{
  myPort.write("0");
}
I would expect this code to return an endless stream of "48" at a rate of ten entries per second, since I understand that is the default frame rate. What really happens is something like this:
48
48
488
48
48
48
48
48
48
48
48
48
48
488
48
48
48
It seems like every 10th (give or take a few) byte has a good chance of being messed up. Instead of "48" I get back stuff like " ", "488", "4848" or "488". What's even more interesting is that if I uncomment the frameRate(10); line in my Processing script I would expect absolutely nothing to happen, since I'm setting the fps from ten to ten. Instead I start to see stuff like this:
4
44
4848844
444448444844
4
44444444
844
444
844444
8
88
8
4488
84
48
4448444844
444
So basically the numbers make no sense anymore.
It took me quite some time to narrow the problem down to this serial communication and a few hours of Googling around related topics has given me no hints about what might be going on. Any pointers toward further reading or things to try would be greatly appreciated.
I'm using the latest version of Processing downloaded today and my system is a MBP running Mountain Lion with all updates installed.
After some further testing, it seems that having the serial monitor open while sending bytes from Processing messes both up, for an as-yet-unknown reason. I assume there is some sort of fighting over serial bus priority and the data ends up broken.
Solution: don't try to use multiple programs to read serial data simultaneously.
As you have figured out, if you have multiple programs trying to read data from Serial, it can result in such problems.
You can also try out the SoftwareSerial Arduino library, which allows you to use other digital pins for serial communication.
Try replacing:
myPort.write("0");
with:
myPort.write('0' - '0');
because:
ASCII '0' --> 48
ASCII '1' --> 49
.
.
.
ISO/IEC 2022 defines the C0 and C1 control codes. The C0 set are the familiar codes between 0x00 and 0x1f in ASCII, ISO-8859-1 and UTF-8 (eg. ESC, CR, LF).
Some VT100 terminal emulators (eg. screen(1), PuTTY) support the C1 set, too. These are the values between 0x80 and 0x9f (so, for example, 0x84 moves the cursor down a line).
I am displaying user-supplied input. I do not wish the user input to be able to alter the terminal state (eg. move the cursor). I am currently filtering out the character codes in the C0 set; however, I would like to conditionally filter out the C1 set too, if the terminal will interpret them as control codes.
Is there a way of getting this information from a database like termcap?
The only way to do it that I can think of is using C1 requests and testing the return value:
$ echo `echo -en "\x9bc"`
^[[?1;2c
$ echo `echo -e "\x9b5n"`
^[[0n
$ echo `echo -e "\x9b6n"`
^[[39;1R
$ echo `echo -e "\x9b0x" `
^[[2;1;1;112;112;1;0x
The above ones are:
CSI c Primary DA; request Device Attributes
CSI 5 n DSR; Device Status Report
CSI 6 n CPR; Cursor Position Report
CSI 0 x DECREQTPARM; Request Terminal Parameters
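Here is a minimal sketch of that probe-and-reply idea (Python, used purely as an illustration, and the 0.2-second timeout is an arbitrary assumption, not from the original answer): it sends the 8-bit CSI form of Primary DA and checks whether the terminal replies. A terminal that does not treat 0x9B as a control code will simply not answer.

import os, select, sys, termios, tty

def responds_to_8bit_csi(timeout=0.2):
    # Send 0x9B 'c' (8-bit CSI + Primary DA request) and wait briefly for a reply.
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)                            # raw mode: no echo, no line buffering
        os.write(sys.stdout.fileno(), b"\x9bc")
        ready, _, _ = select.select([fd], [], [], timeout)
        if not ready:
            return False                          # no answer: 0x9B was not taken as CSI
        reply = os.read(fd, 64)
        return reply.startswith(b"\x1b[?") or reply.startswith(b"\x9b")
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)

if __name__ == "__main__":
    print("8-bit C1 controls interpreted:", responds_to_8bit_csi())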
The terminfo/termcap that ESR maintains (link) has a couple of these requests in user strings 7 and 9 (user7/u7, user9/u9):
# INTERPRETATION OF USER CAPABILITIES
#
# The System V Release 4 and XPG4 terminfo format defines ten string
# capabilities for use by applications, .... In this file, we use
# certain of these capabilities to describe functions which are not covered
# by terminfo. The mapping is as follows:
#
# u9 terminal enquire string (equiv. to ANSI/ECMA-48 DA)
# u8 terminal answerback description
# u7 cursor position request (equiv. to VT100/ANSI/ECMA-48 DSR 6)
# u6 cursor position report (equiv. to ANSI/ECMA-48 CPR)
#
# The terminal enquire string should elicit an answerback response
# from the terminal. Common values for u9 will be ^E (on older ASCII
# terminals) or \E[c (on newer VT100/ANSI/ECMA-48-compatible terminals).
#
# The cursor position request (u7) string should elicit a cursor position
# report. A typical value (for VT100 terminals) is \E[6n.
#
# The terminal answerback description (u8) must consist of an expected
# answerback string. The string may contain the following scanf(3)-like
# escapes:
#
# %c Accept any character
# %[...] Accept any number of characters in the given set
#
# The cursor position report (u6) string must contain two scanf(3)-style
# %d format elements. The first of these must correspond to the Y coordinate
# and the second to the X coordinate. If the string contains the sequence %i, it is
# taken as an instruction to decrement each value after reading it (this is
# the inverse sense from the cup string). The typical CPR value is
# \E[%i%d;%dR (on VT100/ANSI/ECMA-48-compatible terminals).
#
# These capabilities are used by tack(1m), the terminfo action checker
# (distributed with ncurses 5.0).
Example:
$ echo `tput u7`
^[[39;1R
$ echo `tput u9`
^[[?1;2c
Of course, if you only want to prevent display corruption, you can use the less(1) approach and let the user switch between displaying and not displaying control characters (the -r and -R options in less). Also, if you know your output charset, the ISO-8859 charsets have the C1 range reserved for control codes (so they have no printable characters in that range).
Actually, PuTTY does not appear to support C1 controls.
The usual way of testing this feature is with vttest, which provides menu entries for changing the input- and output- separately to use 8-bit controls. PuTTY fails the sanity-check for each of those menu entries, and if the check is disabled, the result confirms that PuTTY does not honor those controls.
I don't think there's a straightforward way to query whether the terminal supports them. You can try nasty hacky workarounds (like printing them and then querying the cursor position) but I really don't recommend anything along those lines.
I think you could just filter out these C1 codes unconditionally. Unicode declares the U+0080..U+009F range as control characters anyway; I don't think you should ever use them for anything else.
(Note: you used the example 0x84 for cursor down. It's in fact U+0084 encoded in whichever encoding the terminal uses, e.g. 0xC2 0x84 for UTF-8.)
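As an illustration of that unconditional approach (a sketch, not from the original answer; Python is used only for demonstration), filtering on Unicode code points removes the C1 range no matter how it was encoded on the wire:

def strip_controls(text, keep="\n\t"):
    # Drop C0 (0x00-0x1F, 0x7F) and C1 (U+0080-U+009F) characters,
    # keeping only the whitelisted whitespace.
    return "".join(
        ch for ch in text
        if ch in keep
        or not (ord(ch) < 0x20 or ch == "\x7f" or 0x80 <= ord(ch) <= 0x9f)
    )

print(strip_controls("abc\x84def\n"))   # prints "abcdef" (the \x84 is removed, the newline kept)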
Doing it 100% automatically is challenging at best. Many, if not most, Unix interfaces are smart (xterms and whatnot), but you don't actually know whether you are connected to an ASR33 or a PC running MS-DOS.
You could try some of the terminal interrogation escape sequences and time out if there is no reply. But then you might have to fall back and maybe ask the user what kind of terminal they are using.