How does the service bus close a cursor opened by a database adapter? - oracle-service-bus

I need to make sure that the service bus closes the cursor. I cannot close the cursor in the PL/SQL procedure where the client opens it, as it would be closed before it is read from; it needs to happen afterwards, so it has to be the service bus that handles closing it.

Related

How should I really use QLocalSocket and QLocalServer?

The Qt example about using QLocalSocket/QLocalServer for IPC is not as clear as it should be.
I have arrived (after a lot of what-if ideas) at the following conclusions:
Sockets for Windows named pipes are designed to deliver one message at a time. They cannot be used continuously for a never-ending data stream, because the pipe stores all data sent into it regardless of whether it was read, causing huge memory waste. So the basic use-case looks like: receiver connects to the pipe -> sender sends one frame of data -> one of them closes the socket handle until the receiver wants more (and a new connection must be established each time).
A QLocalSocket cannot be kept around as a reusable instance. This is confusing. The first connection works perfectly: I create a receiver class with a QLocalSocket inside, connect this socket to the named pipe and read data. I close the socket (on the sender side) and connect it again to the pipe to read the next data frame. The server sees the new connection, but once I try to write into it, a "Socket is closing" error appears. It looks like some state remains (maybe the HANDLE to the Windows socket is kept from the previous connection).
With Process Explorer I found that the socket stays connected only while execution is inside my class's slot (which contains only the `connectToServer' call); once control returns to the Qt event loop, the connection to the named pipe disappears before the server has any chance to write into it. Once again: this happens only the second time, after the socket has been closed once. If I delete the socket and create a new one (on the heap), it works as it should.
There are problems detecting whether a QLocalSocket is still valid when one side closes unexpectedly.
This happens when I force-close the receiver executable after the connection has been established. The server has already stored a pointer to the socket, but it is no longer valid. It looks like isValid() or other QLocalSocket methods should help me, but they fail too. In practice, when the client is force-closed, the QLocalSocket instance on the sender side still exists but has a corrupted private implementation: in qiodevice.cpp:1607, where CHECK_WRITABLE(write, qint64(-1)); is performed, d_ptr is no longer a valid pointer and holds something like 0xafafafaf. It looks like the private implementation is manipulated outside the Qt event loop, causing this.
So, after all that, it looks like I am doing something wrong. How should QLocalSocket really be used for a never-ending data stream when the client must receive data as soon as it exists?
UPD: please find a code example here. Also note the socket_by_instance branch to understand my second point.
Some notes about the code: let's say I have a data stream arriving once every 10 ms that I must process on the sender side (in any case) and send to the receiver(s) if any exist.
What I do not like in this code is that on the receiver side I need to do new/deleteLater for every socket connection (100 times per second). Knowing that the receiver always has one connection, which is always open and always connected to the same socket, I think doing new/delete here is a waste of time. Please look at the socket_by_instance branch to see my attempt to avoid this issue.
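For reference, a minimal sketch of the one-frame-per-connection pattern described in these notes (not the poster's actual code, which is behind the link above), assuming a hypothetical pipe name "frames" and using the new/deleteLater-per-connection workaround from the third point:

    // receiver.cpp - fetches one frame per connection, then schedules the socket for deletion
    #include <QCoreApplication>
    #include <QLocalSocket>
    #include <QTimer>
    #include <QByteArray>
    #include <QDebug>

    static void fetchOneFrame()
    {
        // A fresh socket per frame: reusing a closed instance reportedly
        // fails with "Socket is closing" on Windows named pipes.
        QLocalSocket *socket = new QLocalSocket;
        QObject::connect(socket, &QLocalSocket::readyRead, [socket]() {
            QByteArray frame = socket->readAll();     // whatever the sender wrote for this frame
            qDebug() << "got frame of" << frame.size() << "bytes";
            socket->disconnectFromServer();
        });
        QObject::connect(socket, &QLocalSocket::disconnected,
                         socket, &QLocalSocket::deleteLater);
        socket->connectToServer("frames");            // hypothetical pipe name; error handling omitted
    }

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);
        QTimer timer;
        QObject::connect(&timer, &QTimer::timeout, &fetchOneFrame);
        timer.start(10);                              // one frame every 10 ms, as in the question
        return app.exec();
    }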

Remote server push notification to arduino (Ethernet)

I want the server to actively send a message, for example over UDP/TCP-IP, to a client running on an Arduino. This is known to be possible if the user has port-forwarded the specific port to the device on the local network. However, I don't want the user to set up port forwarding manually; would this be possible, perhaps using another protocol?
1 Arduino Side
I think the closest you can get to this is opening a connection to the server from the Arduino, then using available() to wait for the server to stream some data to the Arduino. Your code will be polling the open connection, but you avoid all the back-and-forth of opening and closing the connection, passing headers back and forth, etc.
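A rough sketch of that idea on the Arduino side using the standard Ethernet library; the server address and port are hypothetical. The client opens one long-lived connection and then just polls available() for anything the server pushes, reconnecting only if the link drops:

    #include <SPI.h>
    #include <Ethernet.h>

    byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
    IPAddress server(192, 168, 1, 100);   // hypothetical push server
    EthernetClient client;

    void setup() {
      Serial.begin(9600);
      Ethernet.begin(mac);                // DHCP
      client.connect(server, 8080);       // open the long-lived connection once
    }

    void loop() {
      if (!client.connected()) {          // re-open if the server or network dropped us
        client.stop();
        client.connect(server, 8080);
      }
      while (client.available()) {        // data the server pushed down the open socket
        char c = client.read();
        Serial.print(c);
      }
      delay(10);
    }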
2 Server Side
This means the bulk of the work will be on the server side, where you will need to manage open connections so you can instantly write to them when a user triggers some event which requires a message to be pushed to the arduino. How to do this varies a bit depending on what type of server application you are running.
2.1 Node.js "walk-through" of main issues
In Node.js, for example, you can res.write() on a connection without closing it - this should give a similar effect to having an open serial connection to the Arduino. That leaves you with the issue of managing the connection: should the server periodically check a database for messages for the Arduino? That merely removes one link from the Arduino -> server -> database polling chain, so we should be able to do better.
We can attach a function triggered by the event of a message being added to the database. Node-orm2 is a database Object Relational Model driver for node.js, and it offers hooks such as afterSave and afterCreate which you can utilize for this type of thing. Depending on your application, you may be better off not using a database at all and simply using javascript objects.
The only remaining issue, then, is: once the hook is activated, how do we get the correct connection into scope so we can write to it? You can save all the relevant data you have about the request in some global data structure, maybe a dictionary with an Arduino ID as the index; in the triggered function you fetch that data, i.e. the request context, and write to it.
See this blog post for a great example, including node.js code which manages open connections, closing them properly and clearing them from memory on timeout, etc.
3 Conclusion
I haven't tested this myself - but I plan to, since I already have an existing application using Arduino and node.js which is currently implemented with normal polling. Hopefully I will get around to it soon and return here with results.
Typically in long polling (from what I've read) the connection is closed once data is sent back to the client (the Arduino), although I don't see why this would be necessary. I plan to try keeping the same connection open for multiple messages, closing it only after a fixed time interval to re-establish the connection - and I hope to set this interval fairly high, maybe 5-15 minutes.
We use Pubnub to send notifications to a client web browser so a user can know immediately when they have received a "message" and stuff like that. It works great.
This seems to have the same constraints that you are looking at: no static IP, no port forwarding. The user can theoretically just plug the thing in...
It looks like Pubnub has an Arduino library:
https://github.com/pubnub/arduino

NSStream does not always call its delegate when closed

I have a pair of NSStreams (input and output, over TLS) to a server. I can send and receive data through them just fine, but after a while, maybe 5 minutes without any traffic, the connection seems to get closed on its own. My delegate is not called with NSStreamEventEndEncountered, and I only get NSStreamEventErrorOccurred AFTER I try to send something.
The connection shouldn't close on its own in the first place because:
- the app is still active
- the device is not locked
- the Wi-Fi it's using does not disconnect
- the remote server has a long TCP lifetime and the SO_KEEPALIVE flag active; on the iPhone side SO_KEEPALIVE is also active on its native socket handles.
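For illustration, a minimal sketch of enabling keepalive on a native socket descriptor, assuming the descriptor has already been pulled out of the stream (e.g. via its kCFStreamPropertySocketNativeHandle property); TCP_KEEPALIVE here is the Darwin-specific idle-time option:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    // fd: the native socket handle obtained from the stream
    static void enableKeepAlive(int fd)
    {
        int on = 1;
        setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));   // turn keepalive on

        int idleSeconds = 60;                                        // start probing after 60 s idle
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPALIVE, &idleSeconds, sizeof(idleSeconds));
    }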
Still, I'm more concerned about why my delegate doesn't get called than about the connection being closed.
Any ideas?
Thanks

How should I work out this threading issue in Qt?

I have a GUI program with a QLocalServer inside; each time it gets a connection from a client, it pops up a dialog asking the user what to do.
But when multiple connections are received simultaneously, a bunch of dialogs pop up all at once. Is there a way to queue them?
I tried to use QMutex, but that blocked the whole GUI thread.
What's the common/correct solution to this?
Just use a queue data structure, i.e. put the incoming connections into a queue, and then whenever a dialog is closed check whether there are more connections on the queue; if yes, process the next one. When a connection comes in and the queue is empty, process it immediately. QMutex blocks the GUI thread because you most likely haven't spawned any additional threads; it is actually a callback from QLocalServer, not a new thread, that notifies you of an inbound connection.
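A minimal sketch of that queueing approach, assuming the dialog is shown non-modally (show() rather than exec()) so the GUI event loop keeps running; the class name and dialog text are made up for illustration:

    #include <QLocalServer>
    #include <QLocalSocket>
    #include <QMessageBox>
    #include <QQueue>
    #include <QObject>

    class ConnectionQueue : public QObject
    {
    public:
        explicit ConnectionQueue(QLocalServer *server)
        {
            QObject::connect(server, &QLocalServer::newConnection, this, [this, server]() {
                while (QLocalSocket *socket = server->nextPendingConnection())
                    m_pending.enqueue(socket);          // queue everything that arrived
                if (!m_dialogOpen)
                    showNext();                         // idle, so handle the first one now
            });
        }

    private:
        void showNext()
        {
            if (m_pending.isEmpty())
                return;
            QLocalSocket *socket = m_pending.dequeue();
            m_dialogOpen = true;

            auto *dialog = new QMessageBox(QMessageBox::Question, "Incoming connection",
                                           "A client connected. Accept?",
                                           QMessageBox::Yes | QMessageBox::No);
            QObject::connect(dialog, &QMessageBox::finished, this, [this, dialog, socket](int) {
                socket->deleteLater();                  // or keep it, depending on the answer
                dialog->deleteLater();
                m_dialogOpen = false;
                showNext();                             // pop the next queued connection, if any
            });
            dialog->show();                             // non-modal: GUI stays responsive
        }

        QQueue<QLocalSocket *> m_pending;
        bool m_dialogOpen = false;
    };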

Resume the same TCP connection

I have a multi-process TCP server which creates (via fork() on Linux) one child process per client request, while it keeps listening for other connection requests. So I have a one-to-one mapping between client and server process.
Suppose that one client crashes... is it possible to reconnect it to the same child server process? In other terms, is it possible to restore a pre-existing connection that has failed, or do the attempts to reconnect create a new connection (and hence a new child server process)? Thank you...
Without some knowledge (by the forker) of the internal session-related details (of the forkee), you have to assume that external details are enough to determine which remote connections get re-associated with which local connection end-points.
You could change the way things work in your application, though. Oracle SQL*Net does this on some platforms (due to platform limitations).
The initial connection to the TCPServer causes a fork; the child then opens up a new listening socket and sends back a redirection instruction to connect to the new listening socket, plus identifying details (to avoid someone else connecting and impersonating the original connector). The client then connects to the new socket and uses it for any re-connections after premature disconnects.
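A rough sketch of that redirection handshake on the server side in plain POSIX sockets; the wire format (a one-line "port token" reply) and the token itself are invented for illustration:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    // Called in the child after fork(): open a private listening socket,
    // tell the client where to go, then accept (re)connections on it.
    static void serveRedirected(int initialFd)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = 0;                              // let the kernel pick an ephemeral port
        bind(listener, (sockaddr *)&addr, sizeof(addr));
        listen(listener, 1);

        socklen_t len = sizeof(addr);
        getsockname(listener, (sockaddr *)&addr, &len); // find out which port we got

        // Hypothetical redirect message: "<port> <token>\n".  The token lets the
        // child reject strangers who guess the port.
        char token[] = "s3cr3t-token";
        char msg[64];
        snprintf(msg, sizeof(msg), "%u %s\n", (unsigned)ntohs(addr.sin_port), token);
        write(initialFd, msg, strlen(msg));
        close(initialFd);                               // the original connection is done

        for (;;) {                                      // every reconnect lands on the same child
            int fd = accept(listener, nullptr, nullptr);
            if (fd < 0)
                break;
            // ... verify the token sent by the client, then serve the session on fd ...
            close(fd);
        }
    }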
I have done something very similar to this on the .NET platform. This can be done if you have something that is unique for every connection (for example, the IMEI of the connecting device). You keep a global two-dimensional array combining process ID and IMEI. So when a device disconnects and then reconnects, you simply search this array for its IMEI and you have the process for that device. You should be very careful with this global variable.
Edited:
I gave an example of a unique identifier. In my case that was the IMEI of the devices. In your case it could be something else which you know is unique.
I had to do this because I had a very big problem with devices breaking the connection. Every reconnecting device meant a new connection, so eventually I ended up with very high CPU usage.
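A minimal sketch of that lookup table translated into C++ terms (the original was .NET); the DeviceId key and Connection struct are placeholders, and a mutex guards the shared global, which is the "be very careful" part:

    #include <mutex>
    #include <string>
    #include <unordered_map>

    using DeviceId = std::string;          // e.g. the IMEI from the answer above

    struct Connection {                    // placeholder for whatever identifies the
        int processId;                     // child process / live socket for the device
        int socketFd;
    };

    static std::mutex g_tableMutex;
    static std::unordered_map<DeviceId, Connection> g_connections;

    // Remember which process/socket currently serves a device.
    void registerDevice(const DeviceId &imei, Connection conn)
    {
        std::lock_guard<std::mutex> lock(g_tableMutex);
        g_connections[imei] = conn;
    }

    // On reconnect, look up the existing entry instead of spawning a new child.
    bool findDevice(const DeviceId &imei, Connection &out)
    {
        std::lock_guard<std::mutex> lock(g_tableMutex);
        auto it = g_connections.find(imei);
        if (it == g_connections.end())
            return false;
        out = it->second;
        return true;
    }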
Maybe you can refer to https://eternalterminal.dev/howitworks/. Both the client and the server need changes.
