Connect and print with two or more identical thermal printers using the same JavaPOS/OPOS driver?

REQUEST
I created a Java-based POS app that needs to be connected to at least two identical thermal printers using the same driver. The printers should behave as follows:
at application start >> each printer is open(), claim(), setDeviceEnabled(true) once
at application stop >> each printer is setDeviceEnabled(false), release(), close()
food items are printed on POSPrinter1
drink items are printed on POSPrinter2
WHAT I TRIED SO FAR
I created two POSPrinter objects:
private POSPrinter posprinter = initUSBPrinter("POSPrinter1");
private POSPrinter posprinter2 = initUSBPrinter("POSPrinter2");

private static POSPrinter initUSBPrinter(String printerName) {
    POSPrinter ptr = new POSPrinter();
    try {
        ptr.open(printerName);       // bind to the logical device name from jpos.xml
        ptr.claim(1000);             // request exclusive access (1 s timeout)
        ptr.setDeviceEnabled(true);  // bring the physical device online
    } catch (JposException e) {
        e.printStackTrace();
    }
    return ptr;
}
NO ERROR during application start when both printers were initialized and claimed
WORKS FINE while printing food items with POSPrinter1
ERROR OCCURRED (shown below) while trying to print drink items with POSPrinter2:
jpos.JposException: 103
at com.sewoo.jpos.POSPrinterService.printNormal(POSPrinterService.java:4130)
at jpos.POSPrinter.printNormal(Unknown Source)
at util.PrintManager.printOrderingHeaderByPrinter(PrintManager.java:628)
at util.PrintManager.printDrinkByPrinter(PrintManager.java:1359)
at util.PrintManager.printOrdering(PrintManager.java:1931)
at util.PrintManager.lambda$print$17(PrintManager.java:1668)
at ...
ISSUE
So although both printers were found during init/claim, POSPrinter1 prints perfectly fine while POSPrinter2 throws a JposException with error code 103 from printNormal. If the Sewoo service uses the standard JposConst codes, 103 is JPOS_E_NOTCLAIMED, i.e. the service believes POSPrinter2 does not hold a valid claim at the moment printNormal is called. I suspect that since POSPrinter1 was claimed before POSPrinter2, the JavaPOS driver is only connected to POSPrinter1. So is it possible that a single JavaPOS driver can only talk to a single device?
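To narrow this down, it may help to check, right after initialization, which physical device each logical name resolved to and whether the claim is still held. This diagnostic sketch is not from the original post and uses only standard UnifiedPOS properties (posprinter and posprinter2 are the fields defined above):

// Diagnostic sketch: if both logical names report the same physical device,
// the two jpos.xml entries point at the same port, and only one claim can win.
private static void dumpPrinterState(POSPrinter ptr, String logicalName) {
    try {
        System.out.println(logicalName
                + ": physical=" + ptr.getPhysicalDeviceName()
                + ", claimed=" + ptr.getClaimed()
                + ", enabled=" + ptr.getDeviceEnabled());
    } catch (JposException e) {
        System.out.println(logicalName + ": state query failed, errorCode=" + e.getErrorCode());
    }
}

// called once after both printers are initialized:
// dumpPrinterState(posprinter, "POSPrinter1");
// dumpPrinterState(posprinter2, "POSPrinter2");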
To be able to talk to two printing devices simultaneously, do I need to have two JavaPOS drivers installed? If so, how can I configure my app to do so?
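For what it's worth, the usual answer is not a second driver installation but a second logical device entry in jpos.xml, with each entry bound to a different physical port. The sketch below is hypothetical: the factory and service class names and the port property key are vendor-specific (compare them against the entries your Sewoo installer generated; vendor/product elements are omitted for brevity, and both entries belong inside the JposEntries root element). The point is only that the two logicalName values and the two port values must differ:

<JposEntry logicalName="POSPrinter1">
    <creation factoryClass="com.sewoo.jpos.service.SewooInstanceFactory"
              serviceClass="com.sewoo.jpos.POSPrinterService"/>
    <jpos category="POSPrinter" version="1.13"/>
    <prop name="portName" type="String" value="USB001"/>
</JposEntry>
<JposEntry logicalName="POSPrinter2">
    <creation factoryClass="com.sewoo.jpos.service.SewooInstanceFactory"
              serviceClass="com.sewoo.jpos.POSPrinterService"/>
    <jpos category="POSPrinter" version="1.13"/>
    <prop name="portName" type="String" value="USB002"/>
</JposEntry>

If both existing entries point at the same port, the second claim may appear to succeed while the service later fails with JPOS_E_NOTCLAIMED at print time, which would match the behavior described above.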
THIS WORKS BUT SLOWS THE PRINTING PROCESS
connect to either printer via open(), claim(), setDeviceEnabled(true) before the print job
and disconnect the printer via setDeviceEnabled(false), release(), close() after the print job completes
BUT connecting/disconnecting around every print job slows down printing considerably. I usually have to wait 3-5 seconds after the print job is sent before the printer finally prints the slip.
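If reopening per job is what makes it work, a middle ground may recover most of the speed: keep both printers open() for the application's lifetime and move only claim()/release() around each job, serialized so the two devices never hold the port at the same time. A sketch under that assumption (claim() on an already-open device is normally much cheaper than a full open()/close() cycle):

private final Object printLock = new Object();

// Sketch: the printers stay open() for the app's lifetime; each print job
// takes the claim only while it prints. setDeviceEnabled(true) is re-asserted
// because releasing an exclusive-use device typically disables it again.
private void printJob(POSPrinter ptr, String data) throws JposException {
    synchronized (printLock) {      // serialize jobs across both printers
        ptr.claim(1000);            // exclusive access for this job only
        ptr.setDeviceEnabled(true);
        try {
            ptr.printNormal(POSPrinterConst.PTR_S_RECEIPT, data);
        } finally {
            ptr.release();          // frees the port for the other printer
        }
    }
}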

Related

Is it possible in Erlang to use open_port in Windows?

Background:
For a college assignment I have to implement an elevator controller on an Arduino UNO with a user-interface shield that contains a 7-segment display and a total of 8 buttons: one for each floor plus two to control the doors. The floor buttons also have LEDs.
The hardware is only for human input; the brains are implemented entirely in Erlang. Before implementing this project in hardware, we used a wxWidgets-based simulation with the following UI: a window that displays the elevator doors, 6 more windows, one per floor, and a window for the elevator's internal button array. This button array is what I'm trying to implement on an Arduino.
Some code:
Our teacher gave us some confusing notes about the use of erlang:open_port/2 that I can't make sense of yet. The test code we're using is the following:
-module(proUSB).
-export([start/1, order/2, loop/1, exit/2]).

start(Port_alias) ->
    Port = open_port(Port_alias, []),
    Pid = spawn(proUSB, loop, [Port]),
    port_connect(Port, Pid),
    {Port, Pid}.

order(Port, Value) ->
    port_command(Port, Value).

loop(P) ->
    receive
        {P, {data, A}} ->
            io:format("Received: ~p~n", [A]),
            loop(P);
        {'EXIT', P, _Reason} ->
            port_close(P),
            io:format("Unexpected finalization~n", []);
        exit ->
            io:format("Process finalization~n", []);
        Other ->
            io:format("End ~p~n", [Other])
    end.

exit(Pid, Port) ->
    port_close(Port),
    Pid ! exit.
As I understand it, and as I have successfully tested in Windows Subsystem for Linux (WSL) with the code above, all I have to execute in the Erlang shell is
> {Port, Pid} = proUSB:start("/dev/ttyS6").
{#Port<0.7>,<0.82.0>}
This opens the port. I can now send orders to the Arduino USART in the protocol we designed. To light up the number three on the 7-segment display, we write
> proUSB:order(Port,"D3").
true
Now, if we press floor button 1, the terminal shows
Received: "B1\n"
Received: "\n"
Received: "B1\n"
Received: "\n"
My question:
If I want to make the same serial connection from Windows PowerShell, my teacher told us to write "COM6" instead of "/dev/ttyS6", but when I test that, I don't get a connection; I get this error:
> {Port, Pid} = proUSB:start("COM6").
** exception exit: einval
>
What am I missing? If WSL works with "/dev/ttyS6", why doesn't the same work on Windows, given that I can use the serial port with "RealTerm: Serial Capture Program"?

SIM800L lag/delay before incoming calls are visible to Arduino

I use a SIM800L GSM module to detect incoming calls, and generally it works fine. The only problem is that it sometimes takes up to 8 rings before the GSM module tells the Arduino that someone is calling (before RING appears on the serial connection). It looks like GSM network congestion, but I do not have such issues with normal calls (calls between people). It happens too often, so it cannot be network/provider overload. Has anybody else had this problem?
ISP/provider: Plus GSM in Poland
I'm not posting any code, because I think the problem is in a different layer.
Sorry that I didn't answer earlier. I've tested it, and it turned out that the bare-minimum code below worked OK: I can see 'RING' on the serial monitor immediately after dialing the number. So it's not a hardware issue!
// bare minimum code:
void loop() {
    if (serialSIM800.available()) {
        Serial.write(serialSIM800.read());
    }
    if (Serial.available()) {
        serialSIM800.write(Serial.read());
    }
}
In my real code I need to compare the calling number with a trusted list. To do that, I saved all trusted numbers in the contact list on the SIM card (with the common name prefix 'mytrusted'). So, in the main loop, there's an if statement:
while (mySerial.available()) {
    incomingByte = mySerial.read();
    inputString += incomingByte;
}
if (inputString.indexOf("mytrusted") > 0) {
    isTrusted = 1;
    Serial.println("A TRUSTED NUMBER IS CALLING");
}
After adding this if condition, the Arduino sometimes recognizes a trusted number after the 1st call, and sometimes only after the 4th or 5th. I don't suspect the if statement itself, but rather the preceding while loop, where incoming bytes are combined into one string.
Any ideas what could be improved in this simple code?
It seems I found a workaround for my problem: I just send a simple 'AT' command to the SIM800L every 20 seconds (it replies with 'OK'). I use a timer to count the 20-second interval (instead of the delay function):
TimerObject *timer2 = new TimerObject(20000); // AT command interval
....
timer2->setOnTimer(&SendATCMD);
....
void SendATCMD() {
    mySerial.println("AT");
    timer2->Stop();
    timer2->Start();
}
With this simple modification, the Arduino always sees the incoming call immediately (after the first ring).
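For what it's worth, the accumulation loop in the question can also be tightened: inputString grows without bound and is never cleared, so indexOf scans an ever longer String, and indexOf(...) > 0 misses a match at position 0 (indexOf returns -1 when nothing is found). A minimal sketch; mySerial and the 'mytrusted' lookup are taken from the question, while the function name and buffer handling are illustrative:

String inputString;

// Read one character at a time; evaluate and reset the buffer at each
// newline so leftover data can never produce a stale match later.
void pollSim800() {
    while (mySerial.available()) {
        char c = (char) mySerial.read();  // cast to char: String += int appends digits
        if (c == '\n') {
            if (inputString.indexOf("mytrusted") >= 0) {
                Serial.println("A TRUSTED NUMBER IS CALLING");
            }
            inputString = "";
        } else {
            inputString += c;
        }
    }
}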

BLE write private characteristic to UUID

I use an RN4020 BLE module to communicate with the V.ALRT BT button: https://vsnmobil.com/products/v-alrt/specs
The problem: within 30 seconds after connecting, I need to write "80BEF5ACFF" to the private characteristic with UUID "FFFFFFF5-00F7-4000-B000-000000000000" (see reference: https://github.com/HoyosIntegrity/V.ALRT-bluetooth-spec), but I always get "ERR" back from the RN4020.
Here is my initialization code (which works):
sf,2               // factory reset
+                  // echo on
sr,92000000        // configure as master
r,1                // reboot
F                  // search for devices
X                  // stop searching
E,0,001EC026C931   // connect to the device with MAC 001EC026C931, which is my device
B                  // bond
Get a "Connected" back and the Button quit it with a Beep.
Then I tried to write
CUWV,FFFFFFF5-00F7-4000-B000-000000000000,80BEF5ACFF
with and without the "-" characters, but I always get an error back. On GitHub there are samples for Android and iOS, but it's not clear to me what I have to send.
I think I have forgotten a preparatory step, but I don't know which one.
What's strange: when I connect and send "LC", I get this back:
180A
2A23,0012,02
2A24,0014,02
2A25,0016,02
2A26,0018,02
2A27,001A,02
2A28,001C,02
2A29,001E,02
2A2A,0020,02
1803
2A06,0025,0A
1802
2A06,0028,04
1804
2A07,002B,02
2A07,002C,10
180F
2A19,002F,02
2A19,0030,10
FFFFFFA000F74000B000000000000000
FFFFFFA100F74000B000000000000000,0034,0A
FFFFFFA200F74000B000000000000000,0037,02
FFFFFFA300F74000B000000000000000,003A,00
FFFFFFA300F74000B000000000000000,003B,10
FFFFFFA400F74000B000000000000000,003E,00
FFFFFFA400F74000B000000000000000,003F,10
FFFFFFA500F74000B000000000000000,0042,00
FFFFFFA500F74000B000000000000000,0043,10
END
Those are some of the services, but not all of them; notably, the FFFFFFF5 characteristic I need to write to does not appear in this list.

System.IO.Directory::GetFiles() Polling from AX 2009, Only Seeing New Files Every 10 Seconds

I wrote code in AX 2009 to poll a directory on a network drive every second, waiting for a response file from another system. I noticed that in a file-explorer window I could see the file appear, yet my code did not see and process the file until several seconds later, up to 9 seconds (and 9 polls) after the file appeared!
The AX code calls System.IO.Directory::GetFiles() using ClrInterop:
interopPerm = new InteropPermission(InteropKind::ClrInterop);
interopPerm.assert();
files = System.IO.Directory::GetFiles(#POLLDIR,'*.csv');
// etc...
CodeAccessPermission::revertAssert();
After much experimentation, it emerged that the first call to ::GetFiles() in my program's lifetime starts a notional "ticking clock" with a period of 10 seconds. Only the calls that land on those 10-second ticks find any new files that may have appeared, though intermediate calls still report files that were found on an earlier tick.
If the file is not there when I start the program, then the calls to ::GetFiles() 1 second after the first call, 2 seconds after, and so on up to 9 seconds after, simply do not see the file, even though it may have been sitting there since half a second after the first call!
Then, reliably and repeatably, the call 10 seconds after the first call finds the file. No calls from 11 to 19 seconds see any new file that might have appeared, yet the call 20 seconds after the first call reliably sees any new files. And so on, every 10 seconds.
Further investigation revealed that if the polled directory is on the AX AOS machine itself, this does not happen: the file is found immediately, as one would expect, on the first poll after it appears.
But this figure of 10 seconds is reliable and repeatable no matter which network drive I poll, and no matter which server it's on.
Our network certainly doesn't have 10 seconds of latency for file visibility; as I said, a file-explorer window on the polled directory sees the file immediately.
What is going on?
Sounds like your issue is due to SMB client-side caching. From this TechNet page:

Name, type, and ID: Directory Cache [DWORD] DirectoryCacheLifetime
Registry key the cache setting is controlled by: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters

This is a cache of recent directory enumerations performed by the client. Subsequent enumeration requests made by client applications, as well as metadata queries for files in the directory, can be satisfied from the cache. The client also uses the directory cache to determine the presence or absence of a file in the directory, and uses that information to prevent clients from repeatedly attempting to open files which are known not to exist on the server. This cache is likely to affect distributed applications running on multiple computers accessing a set of files on a server, where the applications use an out-of-band mechanism to signal each other about modification/addition/deletion of files on the server.
In short, try setting the registry value
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters\DirectoryCacheLifetime
to 0.
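For example (a one-liner sketch; run it from an elevated prompt on the machine doing the polling, and note that a reboot, or at least disconnecting and reconnecting the share, is typically needed before the new value takes effect):

reg add "HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v DirectoryCacheLifetime /t REG_DWORD /d 0 /f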
Thanks to Jan B. Kjeldsen, I have been able to solve my problem using FileSystemWatcher. Here is my implementation in X++:
class SelTestThreadDirPolling
{
}

public server static Container SetStaticFileWatcher(str _dirPath, str _filenamePattern, int _timeoutMs)
{
    InteropPermission              interopPerm;
    System.IO.FileSystemWatcher    fw;
    System.IO.WatcherChangeTypes   watcherChangeType;
    System.IO.WaitForChangedResult res;
    Container                      cont;
    str                            fileName;
    str                            oldFileName;
    str                            changeType;
    ;
    interopPerm = new InteropPermission(InteropKind::ClrInterop);
    interopPerm.assert();

    fw = new System.IO.FileSystemWatcher();
    fw.set_Path(_dirPath);
    fw.set_IncludeSubdirectories(false);
    fw.set_Filter(_filenamePattern);

    watcherChangeType = ClrInterop::parseClrEnum('System.IO.WatcherChangeTypes', 'Created');
    res = fw.WaitForChanged(watcherChangeType, _timeoutMs);

    if (res.get_TimedOut())
        return conNull();

    fileName = res.get_Name();
    // changeType can be: Created, Deleted, Renamed or Changed
    changeType = System.Enum::GetName(watcherChangeType.GetType(), res.get_ChangeType());
    fw.Dispose();
    CodeAccessPermission::revertAssert();

    if (changeType == 'Renamed')
        oldFileName = res.get_OldName();

    cont += fileName;
    cont += changeType;
    cont += oldFileName;
    return cont;
}

void waitFileSystemWatcher(str _dirPath, str _filenamePattern, int _timeoutMs)
{
    container cResult;
    str       filename, changeType, oldFilename;
    ;
    cResult = SelTestThreadDirPolling::SetStaticFileWatcher(_dirPath, _filenamePattern, _timeoutMs);
    if (cResult)
    {
        [filename, changeType, oldFilename] = cResult;
        info(strfmt("filename=%1, changeType=%2, oldFilename=%3", filename, changeType, oldFilename));
    }
    else
    {
        info("TIMED OUT");
    }
}

void run()
{;
    this.waitFileSystemWatcher(#'\\myserver\mydir', 'filepattern*.csv', 10000);
}
I should acknowledge the following for forming the basis of my X++ implementation:
https://blogs.msdn.microsoft.com/floditt/2008/09/01/how-to-implement-filesystemwatcher-with-x/
I would guess DAXaholic's answer is correct, but you could try other solutions such as EnumerateFiles.
In your case, though, I would rather wait for the files than poll for them.
With FileSystemWatcher there is minimal delay from file creation until your process wakes up. It is trickier to use, but avoiding polling is a good thing. I have never used it over a network, though.

Memory leak while sending response from Rebus handler

I have seen some very strange behavior in my Rebus handler, which is self-hosted in an exe. Right after sending a response using the bus.Send method, the memory consumed by the process goes up. I looked at the object graph using a memory profiler and found that Rebus is holding the response message in serialized form somewhere.
The object graph showed the following hierarchy to the root:
System.Message --> CachedBodyMessage --> stream
Give me some pointers if anybody is aware of this.
I understand that a memory leak is a grave concern, but I believe it is unlikely that Rebus contains one.
This belief is rooted in the fact that I have been running Windows Service-hosted Rebus endpoints in production for 1.5 years now, and several of them (e.g. the timeout managers) have sometimes been running for several months without being restarted.
I'd like to be absolutely bulletproof sure though, so I'm willing to investigate the issue you're reporting.
You mention "CachedBodyMessage": judging by the names of the fields inside System.Messaging.Message, it sounds like something within MSMQ. To try to reproduce your issue, I coded the following test:
[Test, Ignore("Only works in RELEASE mode because otherwise object references are held on to for the duration of the method")]
public void DoesNotLeakMessages()
{
    // arrange
    const string inputQueueName = "test.leak.input";
    var queue = new MsmqMessageQueue(inputQueueName);
    disposables.Add(queue);

    var body = Encoding.UTF8.GetBytes(new string('*', 32768));
    var message = new TransportMessageToSend
    {
        Headers = new Dictionary<string, object> { { Headers.MessageId, "msg-1" } },
        Body = body
    };

    var weakMessageRef = new WeakReference(message);
    var weakBodyRef = new WeakReference(body);

    // act
    queue.Send(inputQueueName, message, new NoTransaction());
    message = null;
    body = null;
    GC.Collect();
    GC.WaitForPendingFinalizers();

    // assert
    Assert.That(weakMessageRef.IsAlive, Is.False, "Expected the message to have been collected");
    Assert.That(weakBodyRef.IsAlive, Is.False, "Expected the body bytes to have been collected");
}
This verifies that the sent transport message is collected as it should be (it only does so in RELEASE mode though, because of the way DEBUG mode holds on to object references within scope).
I'll try to run the TimePrinter sample now and leave it running for a while to see if I can reproduce the issue. If you stumble upon more information, e.g. exactly which objects are leaking, that would be very helpful.
Thanks again for taking the time to report your worries to me :)
Followup:
I've modified the TimePrinter sample so that it sends 50 msg/s and includes a 64 KB random string payload with each message, and I've tracked the memory usage for almost four hours now. As you can see, it does not look like memory is being leaked.
I'll leave it running the rest of the day, just to be sure.
Maybe you can tell me some more about why you suspected there was a memory leak in the first place?
Update:
As you can see from the trace, it has now been running for 7 hours, and thus more than 1,200,000 messages containing more than 70 GB of data have been sent and consumed by the same process. If cached message bodies were leaking, I am pretty sure we would have been able to see something rising on the graph.
