Robolectric: Tests are run on the UI thread (id=1), which blocks my BroadcastReceiver - deadlock

I have a problem with Robolectric. I don't understand how it works.
The tests seem to run on Android's UI thread (thread ID 1). However, if I send a broadcast from the test code and then wait for it to arrive in a BroadcastReceiver in the test, the BroadcastReceiver is blocked because it also runs on thread ID 1. How can I test such behaviour?
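For illustration, here is a minimal sketch of how such a test is typically structured so it does not block (assumptions: Robolectric 4.x with its shadow-Looper API, the AndroidX test runner, and a made-up broadcast action). Instead of waiting on the test thread for onReceive(), the test sends the broadcast and then drains the main looper, so the pending delivery runs on that same thread:

```
import static org.junit.Assert.assertTrue;
import static org.robolectric.Shadows.shadowOf;

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.Looper;
import androidx.test.core.app.ApplicationProvider;
import androidx.test.ext.junit.runners.AndroidJUnit4;
import java.util.concurrent.atomic.AtomicBoolean;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class ReceiverTest {

    @Test
    public void broadcastIsDelivered() {
        Context context = ApplicationProvider.getApplicationContext();
        final AtomicBoolean received = new AtomicBoolean(false);

        BroadcastReceiver receiver = new BroadcastReceiver() {
            @Override
            public void onReceive(Context c, Intent i) {
                received.set(true);
            }
        };
        // "com.example.PING" is a made-up action, purely for illustration.
        context.registerReceiver(receiver, new IntentFilter("com.example.PING"));

        context.sendBroadcast(new Intent("com.example.PING"));

        // Don't sleep or spin-wait here: that blocks thread 1 and onReceive() never runs.
        // Drain the pending tasks on the main looper instead; delivery then happens inline.
        shadowOf(Looper.getMainLooper()).idle();

        assertTrue(received.get());
    }
}
```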

Related

Sleeping or blocking in interrupt handler

Why is sleeping or blocking not allowed in an interrupt handler?
Assume I have the following setup:
Single core system.
Developing a bare-metal application using FreeRTOS.
There are many FreeRTOS APIs that cannot be called from ISR context because they may block waiting for events to occur. So this means we cannot put an ISR into the blocked state.
If you block in an interrupt handler, it commonly cannot be triggered again, and all other interrupts of the same and lower priority, as well as the non-interrupt part of your program, are blocked too.
Bottom line: don't do it.

Client Reconnection

My understanding of the (JavaScript) hub client is that if a connection is lost, it enters a 'Reconnecting...' phase which attempts to reconnect. If it can't do so, it will enter a 'Disconnected' state which is where it'll stay until asked to start again.
How long is the 'Reconnecting...' phase meant to last before it gives up? I've read 40 seconds before, but my client seems to take much less time - about 10, maybe less. [EDIT: Nevermind this part, I had configured a 10-second disconnect on the server as a test... and forgot. I understand this is set by the server during negotiation. Makes sense!] ... I'd prefer to have the client continually retry until it is told to abort - can this be done, and would it cause issues?
Another question; during the Reconnecting... phase, if I attempt to call a hub method (again, in JS) it never seems to complete. I'm using the returned Deferred to check for 'done' and 'fail' events, but neither seems to get called. Is this by design?
Thanks.
You can definitely have it continually reconnect.
Handle the disconnected event on the client and call connection.start:
$.connection.hub.disconnected(function() {
    setTimeout(function() {
        $.connection.hub.start();
    }, 5000); // Re-start connection after 5 seconds
});
The only issue this could cause is that client machines could end up triggering endless requests against a server that isn't there. This becomes even more troublesome when you introduce the mobile market into the situation (it drains the battery like crazy).
When you attempt to call a hub method while reconnecting, SignalR will try to send your command. Since there are two channels, one for receiving data and one for sending (for all transports except WebSockets), in some cases it can still be possible to send requests while you're offline. Therefore SignalR does not know that a request has failed until the browser reports that it could not be made successfully.
Hope this helps!
I might have a clue... Touching the Web.config produces an appPool recycle, meaning that a new worker process is created for new requests while the existing process continues for a while until the remaining requests end or the timeout is reached. Requests that do not finish within the timeout period are terminated.
The SignalR client reconnects to the new process while the long-running task is still running in the old process, so when, inside the long-running task, you do
GlobalHost.ConnectionManager.GetHubContext<ForceHub>();
you actually get a reference to the "old" hub while the client is connected to the "new" hub.
That's why the test performed by Wasp worked: he was making a new request to publish on the SignalR hub, which was processed in the newly created worker process.
You could try to configure a SignalR backplane (https://www.asp.net/signalr/overview/performance/scaleout-in-signalr); it's really easy to configure using SQL Server (https://www.asp.net/signalr/overview/performance/scaleout-with-sql-server). The backplane should be capable of connecting the two worker processes, and hopefully you will get the notification on the client.
If this is the problem, notifications generated by new requests will work even without the backplane. Note that the real purpose of the backplane is to scale out SignalR, that is, to connect the web servers of a farm to one another.
Also keep in mind that running long-running tasks inside IIS is hard to get right because, among other things, IIS does regular appPool recycles and has timeout limits for requests to execute. I recommend that you read the following post: http://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx
“If you think you can just write a background task yourself, it's likely you'll get it wrong. I'm not impugning your skills, I'm just saying it's subtle. Plus, why should you have to?”
Hope this helps

QThread::start: Thread termination error

I'm using an OpenSSL library in a multi-threaded application.
For various reasons I'm using a blocking SSL connection, and there is a situation where the client hangs in the
SSL_connect
function.
I moved the connection procedure to another thread and created a timer. On timeout, the connection thread is terminated using:
QThread::terminate()
The thread is terminable, but on the next attempt to start the thread I get:
QThread::start: Thread termination error:
I checked the "max thread issue" and that's not the case.
I'm working on CentOS 6.0 with Qt 4.5 and OpenSSL 1.0.
The question is how to completely terminate a thread.
The Qt Documentation about terminate() tells:
The thread may or may not be terminated immediately, depending on the operating system's scheduling policies. Use QThread::wait() after terminate() for synchronous termination.
but also:
Warning: This function is dangerous and its use is discouraged. The thread can be terminated at any point in its code path. Threads can be terminated while modifying data. There is no chance for the thread to clean up after itself, unlock any held mutexes, etc. In short, use this function only if absolutely necessary.
Assuming you didn't reimplement QThread::run() (which is usually not necessary), or if you did reimplement run() and called exec() yourself, the usual way to stop a thread would be:
_thread->quit();
_thread->wait();
The first line asynchronously tells the thread to stop executing, which usually means the thread will finish whatever it is currently doing and then return from its event loop. However, quit() always returns immediately, which is why you need to call wait() so the main thread blocks until _thread has actually ended. After that, you can safely start() the thread again.
If you really want to get rid of the thread as quickly as possible, you can also call wait() after terminate(), or at least before you call start() again.

QTimer, QThread, and TCP messaging

Qt 4.8, Windows XP:
I have a thread that manages my TCP messages and opens / maintains / closes the socket at the appropriate times.
This same thread starts a 200 ms QTimer, defined in my thread's data, that pumps an event in my thread's class once (and only if) the socket is open. So the timer and its event belong to the thread, as best I understand the idea.
The QTimer timeout event sends a TCP message through the port belonging to the thread; it's a keep-alive message for this particular hardware item that has to be sent regularly or the device "goes away", which won't do.
When the message is sent, I get this error:
"QSocketNotifier: socket notifiers cannot be enabled from another thread"
As far as I can tell, I am sending the message from the same thread and would expect any signals, etc. to be owned and handled by it.
Can anyone tell me what I'm missing here?
PS: The message is sent, the device does stay alive... it's just that I'm getting this runtime error on the Qt error console and I'm very concerned that there are internal problems lurking because of it.
The message does NOT occur running under OS X 10.6. I don't know why.
Ok, here's the scoop. QTimer, for reasons known only to the designers of Qt, inherits the context of the parent of the thread, not the context of the thread it's launched from. So when the timer goes off and you send a message from the slot it called, you're not in the thread's context, you're in the parent's context.
You also can't launch a thread that is a child of THAT thread so that you can fire a timer that will actually be in the thread you want; Qt won't let it run.
So, spend some memory, make a queue, load the message into the queue from elsewhere, watch the queue in the thread that owns the TCP port, and send em when ya got em. That works.

How can I keep my JVM from exiting while a Netty client connection is open?

I have an API which uses netty to open client connection to a tcp server. The server may send data to the client at any time. I'm facing the following scenario:
Client connects to server
Sends data to server
Disconnects, and the JVM exits (not sure which happens first)
This is what I expect:
Client connects to server
Sends data to server
Client simply keeps the connections open, waiting to receive data or for the user of client API to send data.
This is an outline of my connection method (obviously there is a much larger API around it):
```
public FIXClient connect(String host, int port) throws Throwable {
    ...
    ChannelPipeline pipe = org.jboss.netty.channel.Channels.pipeline(...);

    ChannelFactory factory = new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(),
            Executors.newCachedThreadPool());

    ClientBootstrap bootstrap = new ClientBootstrap(factory);
    bootstrap.setPipeline(pipe);
    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);

    ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port));

    // forcing the connect call to block
    // don't want clients to deal with async connect calls
    future.awaitUninterruptibly();

    if (future.isSuccess()) {
        this.channel = future.getChannel();
        //channel.getCloseFuture(); //TODO notifies whenever channel closes
    } else {
        throw future.getCause(); // wrap this in a more specific exception
    }

    return this;
}
```
That has nothing to do with Netty... You need to make sure your "main" method does not exit if you call it from there. Otherwise it's the job of the container.
There are a couple of ways you can do this, but one thing I have observed is that with this code:
ChannelFactory factory = new NioClientSocketChannelFactory(
        Executors.newCachedThreadPool(),
        Executors.newCachedThreadPool());
... if you make a successful connection, your JVM will not shut down of its own accord for some time until you force it (like a kill) or you call releaseExternalResources() on your channel factory. This is because:
The threads created by Executors.newCachedThreadPool() are non-daemon threads.
At least 1 thread would be created once you submit your connection request.
The cached thread pool threads have a keep-alive time of 60 seconds, meaning they don't go away until they've been idle for 60 seconds, so that would be 60 seconds after your connect and send (assuming that they both completed).
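As a quick way to see that last point in isolation (plain java.util.concurrent, no Netty involved; the class name is made up):

```
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CachedPoolKeepAlive {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newCachedThreadPool();
        // The submitted task finishes immediately, but the worker thread that ran it
        // is non-daemon and idles for ~60 seconds before being reclaimed.
        pool.submit(new Runnable() {
            @Override
            public void run() {
                System.out.println("task done");
            }
        });
        System.out.println("main done");
        // No pool.shutdown() on purpose: the JVM only exits once the idle worker
        // times out, roughly a minute after the last task completed.
    }
}
```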
So I'm not sure if you're diagnosing the issue correctly. Having said that, I recommend you handle the task this way:
Once you have booted in your main method (in the main thread),
launch all your actual useful work in new threads.
Once the useful threads have been launched, in the main thread, call Thread.currentThread().join(). Since main is always non-daemon, you have made sure the JVM will not shut down until you're good and ready.
At some point, unless you want to kill -9 the JVM as a shutdown strategy, you will want a controlled shutdown, so you can add a shutdown hook to shut down Netty and then interrupt the main thread (see the sketch below).
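For illustration, a minimal sketch of that outline (only the join/shutdown-hook wiring is shown concretely; the connection and cleanup lines are left as comments because they depend on your own API - the question's FIXClient, channel, and factory):

```
public final class ClientMain {

    public static void main(String[] args) throws Throwable {
        final Thread mainThread = Thread.currentThread();

        // 1. Boot: open the Netty client connection here, e.g. via the connect(...)
        //    method from the question. Its cached-thread-pool workers are non-daemon,
        //    but their 60-second idle timeout is nothing to rely on for staying up.
        // FIXClient client = new FIXClient().connect("localhost", 9876); // hypothetical wiring

        // 2. Controlled shutdown: when the JVM is asked to stop (Ctrl-C, SIGTERM),
        //    release Netty's resources and interrupt main so the join() below returns.
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            @Override
            public void run() {
                // channel.close().awaitUninterruptibly(); // hypothetical, see the question's code
                // factory.releaseExternalResources();     // as discussed above
                mainThread.interrupt();
            }
        }));

        // 3. Keep the JVM alive: main is non-daemon, so joining on itself blocks here
        //    until the shutdown hook interrupts it.
        try {
            mainThread.join();
        } catch (InterruptedException expectedOnShutdown) {
            // Interrupted by the shutdown hook - fall through and let the JVM exit.
        }
    }
}
```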
I hope that's helpful.

Resources