TWAIN driver concurrent requests

Is it possible to use one TWAIN driver to manage concurrent requests to two different multifunction printers?
I mean, if I have two MFPs, can I run two scan requests in parallel using the same TWAIN driver?

It depends on whether your driver supports it.
From the TWAIN Spec page 125:
If an application attempts to connect to a Source that only supports a single connection when the source is already opened, the Source should respond with TWRC_FAILURE and TWCC_MAXCONNECTIONS.
Also from the spec on page 212:
The Source is responsible for managing this, not the Source Manager (the Source Manager does not know in advance how many connections the Source will support).
I tested this with a Fujitsu fi-7260 scanner and Twacker, and got the TWCC_MAXCONNECTIONS error.
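For reference, that failure surfaces through the TWAIN entry point roughly as in the fragment below. This is only a sketch: appIdentity and dsIdentity are assumed to be already filled in and the Source Manager already opened; the constants and the DSM_Entry signature come from twain.h.

#include "twain.h"  // TWAIN toolkit header

// Try to open the Source a second time while it is already open.
TW_UINT16 rc = DSM_Entry(&appIdentity, NULL,
                         DG_CONTROL, DAT_IDENTITY, MSG_OPENDS,
                         (TW_MEMREF)&dsIdentity);
if (rc == TWRC_FAILURE) {
    // Ask the Source Manager why the operation failed.
    TW_STATUS status;
    DSM_Entry(&appIdentity, NULL,
              DG_CONTROL, DAT_STATUS, MSG_GET, (TW_MEMREF)&status);
    if (status.ConditionCode == TWCC_MAXCONNECTIONS) {
        // The Source is already open and supports only one connection.
    }
}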

It could be possible, since TWAIN just sits between the application and the images fed to it.
Imagine a scenario along the following lines:
1) The user clicks the scan button.
2) You initiate the network-layer calls to start the scan job.
3) Instead of starting a scan job on one printer, you start scan jobs on two printers from two threads.
4) Each of those threads populates the raw BMP data into a single, shared data structure.
5) Once both threads are complete, iterate over that shared data structure and pass the images to the application via the XFERIMAGE call.
The basic idea is to create an abstraction of the two printers behind the scenes.
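A minimal C++ sketch of this idea, with hypothetical scan_from_device() and deliver_to_application() helpers standing in for the network layer and the TWAIN transfer side:

#include <mutex>
#include <string>
#include <thread>
#include <vector>

// Hypothetical placeholder for one scanned page's raw BMP data.
struct RawBmp { std::vector<unsigned char> bytes; };

// Hypothetical stub: real code would drive the device's network protocol.
std::vector<RawBmp> scan_from_device(const std::string& address) { return {}; }

// Hypothetical stub: real code would perform the TWAIN image transfer.
void deliver_to_application(const RawBmp& image) {}

void scan_two_printers(const std::string& mfp1, const std::string& mfp2)
{
    std::vector<RawBmp> shared;   // the shared structure from step 4
    std::mutex shared_mutex;      // protects concurrent access to it

    auto job = [&](const std::string& address) {
        std::vector<RawBmp> pages = scan_from_device(address);
        std::lock_guard<std::mutex> lock(shared_mutex);
        shared.insert(shared.end(), pages.begin(), pages.end());
    };

    std::thread t1(job, mfp1);    // step 3: one scan job per thread
    std::thread t2(job, mfp2);
    t1.join();
    t2.join();

    // Step 5: both jobs are done, pass the images on to the application.
    for (const RawBmp& image : shared)
        deliver_to_application(image);
}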
Please let me know if I misunderstood your question or if you need further clarification.

If you implement it in the described way, it usually works only with two different MFPs, as the majority of TWAIN drivers do not support two identical USB devices at the same time.

Related

openHAB for two or more home

I started exploring openHAB for my home automation, and it looks like a great application for it. I want to automate two homes and run openHAB on one centrally placed server. Is it possible to segregate the data for my two homes and provide use-based access for each home?
Or will I have to have two instances running on my server?
Please let me know if anyone has done this before.
You can (I believe) provide different sitemaps, but the most important question is how a central openHAB instance will communicate with the "other" home,
especially if you're going to use bindings which require a piece of hardware, like Z-Wave etc.
You could potentially play with MQTT and have a small Raspberry Pi running in the "other" home feeding the MQTT broker.
Assuming there is no hardware or range-based issue with using openHAB for two homes (e.g. a Z-Wave USB dongle whose radio cannot reach the second home) and there is network connectivity between the two houses, there are a number of ways you can accomplish this. Here is one.
The easiest would probably be to use a naming convention for your items and groups so you can easily tell which house an item comes from. You would probably want to set up a separate sitemap for each house as well. If I understand your question correctly, this segregates the data by name and provides use-based access for each home.
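For example (with hypothetical item and sitemap names), the convention could look something like this in the .items and .sitemap files:

// *.items: prefix every item with the house it belongs to
Switch House1_LivingRoom_Light "Living room light"
Switch House2_LivingRoom_Light "Living room light"

// house1.sitemap: expose only the House1_* items
sitemap house1 label="House 1" {
    Frame {
        Switch item=House1_LivingRoom_Light
    }
}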
If you want to segregate the data even more thoroughly, you can configure your persistence to save all the items from one house to one DB and all the others to a different one, though you will need two different persistence bindings set up (e.g. one uses rrd4j and the other uses db4o). I'm not sure this provides any advantage.
The final step is getting the data from the remote house into openHAB. How this is accomplished will depend on the nature of the sensors and triggers in the other house. You can use the HTTP binding, the TCP/IP binding, or an MQTT broker. I've personally exposed a couple of my Raspberry Pi based sensors to openHAB using a Python script and the paho library, publishing the sensor data read from the GPIO pins to an MQTT broker, and it works great.
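For the MQTT route, here is a minimal sketch of the publisher side in C++ using the mosquitto client library instead of Python/paho; the broker address, topic, and payload are made up for illustration:

#include <cstring>
#include <mosquitto.h>

int main()
{
    mosquitto_lib_init();
    // "sensor-pub" is an arbitrary client id.
    struct mosquitto* client = mosquitto_new("sensor-pub", true, nullptr);

    // Hypothetical broker address; the central openHAB server could host it.
    mosquitto_connect(client, "192.168.1.10", 1883, 60);

    // Hypothetical topic/payload: a temperature read from a GPIO sensor.
    const char* payload = "21.5";
    mosquitto_publish(client, nullptr, "house2/livingroom/temperature",
                      static_cast<int>(std::strlen(payload)),
                      payload, 0, false);

    mosquitto_disconnect(client);
    mosquitto_destroy(client);
    mosquitto_lib_cleanup();
    return 0;
}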
Centralization vs. segregation: you have to decide which one has more advantages and less risk.
The two houses will store data on the server (openHAB 2, MQTT, DB/rrd4j) and each one will have access to it; that must be clarified.
The network connectivity requirement is obvious: it should be stable between the two sites. Security is another issue, and not only digital security but safety of life (what happens to HVAC control or safety appliances during a network outage?).
Configuration is well supported either way: separate config files (items, rules, persistence, etc.) and connectivity in a hierarchy offer endless approaches and capabilities.
In the newest version of the Android app you can add multiple openHAB servers. Why not just use two instances of openHAB?

Sending broadcast with Chrome Extensions

I'm coding an extension for a customer, and one of the requirements is that the extension also works offline, because internet services are not that reliable; my customer's business can't stop, but it can deal with "stale" data. That's a nice tradeoff, I guess.
Therefore, I want to code some kind of distributed cache as an extension to synchronize local data among the N nodes that will be connected running the same application and thus synchronize with the real database, hosted on the internet.
To achieve that, I imagined I would need to send a network broadcast and listen for incoming broadcasts; then every node that starts running my application will broadcast its IP address and become available as a new node for the distributed cache. Failover is very important here.
I googled the possibilities I initially thought of, but none of them will work, I guess. The first was to do it just with HTTP; the second was to use Google Native Client to write C++ code that could run network code and thus do the broadcast, but it has limitations. Right now I'm thinking of using Java applets, but I don't really know if they have limitations related to networking, or if Chrome extensions have any limitations with Java applets.
Any ideas on how to do it? Using some of the stuff I suggested or another approach?
You could create an NPAPI extension, which would not be restricted by Chrome at all.

Flex streaming data

I have a streaming server (for pushing data not video) setup with GraniteDS and it works great.
I have to include multiple SWF files in a web page. Each of these SWF files has a data table which includes streaming data (this is a specific requirement, so I really can't combine all the data tables into one huge data table/SWF file). All the SWF files, however, connect to the same Gravity channel/streaming endpoint.
How many connections are there from the web page to the streaming server? Does each SWF file start a new streaming connection, or do they all share the same connection since they are just connecting to a single channel?
Regards,
Ravi.
Ah, very good question, grasshoppa.
Essentially, each one of them has its own dedicated connection. So if you have 6 SWFs, each one would have a connection to the streaming server: 6 connections. The problem is that if you're using RTMPT, your browser might block (or cycle) the extra connections, since there's a per-domain limit (IE used to have a limit of 2 connections per domain; FF's is 10, I believe).
The question, however, is: are they all getting streaming data at the same time? Is the data different from SWF to SWF? One possible solution would be to have one of the SWFs be the 'main' SWF which connects to the service, gets all the data, and sends it to the other SWFs, either with JavaScript or using LocalConnection.
But I don't know enough about your specs, or why you have multiple SWFs in the first place...

Let two UDP-servers listen on the same port?

I have a client which sends data via UDP broadcast (to, let's say, 127.0.0.255:12345).
Now I want to have multiple servers listening to this data. To do so on a local machine, they need to share port 12345 for listening.
My question is whether that is possible, whether there are any disadvantages, and whether there could be problems with this approach.
There is one alternative, which unfortunately brings a lot of overhead:
implement some kind of registration process. On startup, each server tells the client its port. The client then sends the messages to each port (the data has to be sent multiple times, some kind of handshaking needs to be implemented...).
Do you know any better alternative?
If that matters:
I'm using C++ with Boost::Asio. The software should be portable (mainly Linux and Windows).
You will have to bind the socket in both processes with the SO_REUSEPORT option. If you don't specify this option in the first process, binding in the second will fail. Likewise, if you specify this option in the first but not the second, binding in the second will fail. This option effectively specifies both a request ("I want to bind to this port even if it's already bound by another process") and a permission ("other processes may bind to this port too").
See section 4.12 of this document for more information.
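As a minimal sketch (plain BSD sockets, on systems where SO_REUSEPORT is available), every process that wants to share the port would run something like:

#include <cstring>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int open_shared_udp_socket(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    // Both the request and the permission: set before bind() in every process.
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on));

    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    // Fails if another process bound the port without SO_REUSEPORT.
    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}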
This answer refers to the answer by cdhowie, who linked a document stating that SO_REUSEPORT would have the effect I'm trying to achieve.
I've researched how and whether this option is implemented, focusing mainly on Boost::Asio and Linux.
Boost::Asio only sets this option if the OS is BSD or Mac OS X. The code for that is contained in the file boost/asio/detail/reactive_socket_service.hpp (Boost version 1.40; in newer versions, the code has been moved into other files).
I've wondered why Asio does not define this option for platforms like Linux and Windows.
There are several references discussing the fact that this option is not implemented on Linux:
https://web.archive.org/web/20120315052906/http://kerneltrap.org/mailarchive/linux-netdev/2008/8/7/2851754
http://kerneltrap.org/mailarchive/linux-kernel/2010/6/23/4586155
There is also a patch which should add this functionality to the kernel:
https://web-beta.archive.org/web/20110807043058/http://kerneltrap.org/mailarchive/linux-netdev/2010/4/19/6274993
I don't know whether this option exists for Windows, but since I defined portable software as software which runs on Linux too, this means that SO_REUSEPORT is OS-specific and there is no portable solution for my question.
In one of the discussions I've linked, it is recommended for UDP to implement a master-listener which then provides the incoming data to multiple slave-listeners (a rough sketch follows below).
I will mark this answer as accepted (though I feel kind of bad accepting my own answer), because it points out why the approach of using SO_REUSEPORT will fail with portable software.
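For illustration, a rough sketch of that master-listener idea with Boost::Asio, forwarding each received datagram to a fixed list of hypothetical local slave ports:

#include <boost/array.hpp>
#include <boost/asio.hpp>

int main()
{
    using boost::asio::ip::udp;
    boost::asio::io_service io;

    // The master is the only socket bound to the shared port.
    udp::socket master(io, udp::endpoint(udp::v4(), 12345));

    // Hypothetical: each slave listens on its own local port.
    const unsigned short slave_ports[] = { 20001, 20002 };

    boost::array<char, 2000> buffer;
    udp::endpoint sender;
    for (;;) {
        // Receive one datagram, then re-send it to every slave on loopback.
        std::size_t n = master.receive_from(boost::asio::buffer(buffer), sender);
        for (unsigned short port : slave_ports) {
            udp::endpoint slave(boost::asio::ip::address_v4::loopback(), port);
            master.send_to(boost::asio::buffer(buffer, n), slave);
        }
    }
}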
Several sources explain that you should use SO_REUSEADDR on Windows, but none mention that it is possible to receive UDP messages both with and without binding the socket.
The code below binds the socket to a local listen_endpoint. That is essential, because without it you can and will still receive your UDP messages, but by default you will have exclusive ownership of the port.
However, if you set reuse_address(true) on the socket (or on the acceptor when using TCP) and bind the socket afterwards, multiple applications, or multiple instances of your own application, can do the same, and everyone will receive all messages.
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

// Create the socket so that multiple processes may bind to the same address.
boost::asio::ip::udp::endpoint listen_endpoint(
    listen_address, multicast_port);

// == important part ==
socket_.open(listen_endpoint.protocol());
// SO_REUSEADDR: allow other sockets to bind to the same port.
socket_.set_option(boost::asio::ip::udp::socket::reuse_address(true));
socket_.bind(listen_endpoint);
// == important part ==

// Note: the buffer must outlive the asynchronous receive, so in real
// code it should be a member next to socket_ rather than a local.
boost::array<char, 2000> recvBuffer;
socket_.async_receive_from(
    boost::asio::buffer(recvBuffer), m_remote_endpoint,
    boost::bind(&SocketReader::ReceiveUDPMessage, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));

Which method is better for sending a stream of images between two processes, local TCP/IP connection or Interprocess communication?

Assuming that I have to copy each image on the stream (I cannot simply access that data even with mutex protection; it must be copied anyway), which method is better, and what are the pros/cons?
I would also like to know how much performance is lost by this, compared to using the images in the same process.
Thanks
For images, IPC through shared memory would be the best option.
At least Windows' firewalls can interfere even with local TCP/IP connections. Therefore I would prefer shared memory.
In terms of performance, IPC through shared memory is the best option, but IMHO, even if sockets consume a little more processing, they will give you a better result in terms of the evolvability of your software.
Google "Memory Mapped Files"
I would take the VCAM example of a DirectShow capture device (available at http://tmhare.mvps.org/downloads/vcam.zip).
This driver appears to the OS as a video capture device and would run in the destination process. The source process would use shared memory buffers to feed it the frames to inject.
While more complicated than a minimal shared-memory IPC scheme, it gives you an incredible advantage: your video pipes can connect to most media player programs, capture and editing tools, etc.
I have done this several times, including features like sinks, mixers, and FreeFrame effect plugins. It should take a day or two to hack together.
