Error communicating with impalad: TSocket read 0 bytes - runtime-error

When I interact with Impala, I randomly get this error:
Error communicating with impalad: TSocket read 0 bytes.
The error almost always involves tables that are truncated and repopulated every day. I've tried refreshing the tables, but that doesn't seem to fix the problem.
Any suggestions? Many thanks in advance.
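For what it's worth, "TSocket read 0 bytes" generally means the impalad closed the connection mid-request, so one common mitigation, separate from any REFRESH or INVALIDATE METADATA fix, is to reconnect and retry the query. Below is a minimal Python sketch of that retry pattern; flaky_query is a made-up stand-in for your real query call, and the exception types will differ depending on your Impala client:

```python
import time

def retry_on_connection_error(fn, attempts=3, delay=2.0):
    """Call fn(); on a connection-style error, wait and retry.

    Reconnecting before each retry matters for Impala, because a dropped
    coordinator connection ("TSocket read 0 bytes") is not recoverable
    on the same socket.
    """
    last_exc = None
    for attempt in range(attempts):
        try:
            return fn()
        except (ConnectionError, OSError) as exc:
            last_exc = exc
            time.sleep(delay * (attempt + 1))  # simple linear backoff
    raise last_exc

# Toy demonstration: a call that fails twice, then succeeds.
calls = {"n": 0}

def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("TSocket read 0 bytes")
    return "rows"

print(retry_on_connection_error(flaky_query, delay=0.01))  # → rows
```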

Related

How to reverse engineer buffer data received from a serial port

I am trying to decode some buffer data that I have received from a serial port. The data seems to make no sense to me; I am not even sure I am splitting the messages up correctly.
The data comes from a concrete crusher, and while the concrete is being crushed we get an almost continuous stream of data. I get about ten "messages" a second (though each may actually contain multiple messages), and I split them by waiting 50 ms after each one. The data looks like this:
[0,0,0,224,0,224,0,0,224,0,0,224,0,0,0,0,0,0,0,224,0,0,0,0,224,0,0,0,224,0,224,0,0,224,0,0,0,224,0,0,0,0,0,0,0,224,0,0,0,224,0,0,224,0,0,224,0,224,0,0,224,0,0,0,0,0,0,0,0,224,224,0]
[0,0,0,224,0,224,0,0,224,0,0,224,0,0,0,0,0,0,0,224,0,0,0,0,0,0,0,224,0,224,0,0,0,0,0,0,0,0,0,0,0,0,0,224,0,0,0,224,0,0,224,0,0,224,0,0,0,0,224,0,0,0,0,0,0,0,0,224,224,0]
[0,0,0,224,0,224,0,0,224,0,0,224,0,0,0,0,0,0,0,224,0,224,0,0,224,0,0,0,224,0,224,224,224,0,0,0,224,0,0,0,0,0,0,0,0,224,0,0,0,224,0,224,224,224,0,0,224,0,0,0,224,0,0,0,224,0,0,0,0,0,224,224,0]
As you can see, there are no values at all other than 0 and 224...
The last message is:
[0,0,0,0,0,0,224,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,224,0,0,224,0,0,0,0,0,0,0,0,0,0,0,0,0,0,224,0,0,0,0,224,224,0,0,224,0,0,224,0,0,0,224,0,0,0,0,0,0,0,0,0,0,0,224,0,0,0,0,0,224,0,224,224,0,0,224,0,0,0,224,224,0]
The value displayed on the machine was 427.681 kN, but I can't see any way this data could produce that.
Each message ends with 224,224,0, so I am wondering if that is the split sequence?
I am getting this data with Node-RED, and this is the format I can copy from its debug panel.
I am very lost so any guidance or directions that I can look in would be much appreciated.
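If 224,224,0 really is a terminator, one thing worth trying is splitting on that byte sequence rather than on a 50 ms pause. Here is a minimal Python sketch of that idea; the terminator choice is only a guess from the samples above:

```python
def split_on_terminator(stream, terminator=(224, 224, 0)):
    """Split a flat list of byte values into messages ending in `terminator`.

    Bytes after the last complete terminator are returned as a remainder,
    so partial messages can be carried over to the next read.
    """
    term = list(terminator)
    messages, current = [], []
    for b in stream:
        current.append(b)
        if current[-len(term):] == term:
            messages.append(current[:-len(term)])  # drop the terminator itself
            current = []
    return messages, current  # `current` holds incomplete trailing data

msgs, rest = split_on_terminator([0, 224, 5, 224, 224, 0, 9, 224, 224, 0, 7])
# msgs == [[0, 224, 5], [9]], rest == [7]
```

Splitting on the terminator rather than on timing also tells you whether your "one debug message" actually contains several protocol messages.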

rnoaa station data pull times out

I'm trying to pull NOAA data using R. I've done this pull before, but all of a sudden it's not working. I have the rnoaa package loaded and an authorization key from NOAA. I try to run this command:
station_data <- ghcnd_stations()
And I get this error:
Error in curl::curl_fetch_memory(x$url$url, handle = x$url$handle) : Timeout was reached: Operation timed out after 10000 milliseconds with 0 out of 0 bytes received
I found pages online suggesting I update everything. First I updated all my packages, then I updated to a new version of R. With all that done, it still gives a timeout with this simple command. I know the ghcnd pull sometimes takes a while, but it's timing out after about ten seconds. Is this just a NOAA issue (as is sometimes the case), and should I try again tomorrow? Or is there something I can actually do to make this work? Can I change the timeout period so that it waits longer? Is NOAA just overloaded because of the hurricane?
It turned out to be something on NOAA's end. It took a day, but it's finally back up and running properly.

Issue in using pagination in R3 Corda

We are querying the vault and fetching records, more than 200 per node, but we get the following error:
Please specify a PageSpecification as there are more results [201] than the default page size [200]
After increasing the page size from the default 200 to 400, I get a Java heap out-of-memory error.
Can you please help?
You can increase the heap size available to the node by following the instructions here: https://docs.corda.net/running-a-node.html#starting-an-individual-corda-node.
If you continue to get an out-of-memory exception, you should either increase the memory allocated to the Java process further, or inspect your node's database to see if the transaction objects are abnormally large.
You can always add pagination logic for fetching a large number of records. Please see this link:
https://docs.corda.net/docs/corda-os/4.5/api-vault-query.html
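Rather than raising the page size (which loads all the states into the heap at once), loop over the pages one at a time. Here is a language-agnostic sketch of that loop in Python; `fetch_page` is a hypothetical stand-in for Corda's `vaultQueryBy(criteria, PageSpecification(pageNumber, pageSize))` call:

```python
def fetch_all_pages(fetch_page, page_size=200):
    """Drain a paged query one page at a time, keeping memory bounded.

    `fetch_page(page_number, page_size)` is a hypothetical stand-in for a
    vault query with a PageSpecification; it returns (records, total_available).
    """
    page_number = 1  # note: Corda's DEFAULT_PAGE_NUM is 1, not 0
    results = []
    while True:
        records, total = fetch_page(page_number, page_size)
        results.extend(records)
        if page_number * page_size >= total:
            break
        page_number += 1
    return results

# Toy backing store of 450 records, so the loop touches three pages.
data = list(range(450))

def fetch_page(page_number, page_size):
    start = (page_number - 1) * page_size
    return data[start:start + page_size], len(data)

assert fetch_all_pages(fetch_page) == data
```

Processing (or discarding) each page before fetching the next keeps the memory footprint at one page, which is usually what resolves the heap error.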

Kill turtle which causes runtime error

I'm curious whether there is a way of reporting which turtle causes a runtime error.
I have a model which includes many agents and will run fine for hours; however, sometimes a runtime error occurs. I have tried a few different things to fix it, but an error always seems to happen eventually, and I can't spare the time to track it down due to deadlines.
As the occurrence is so rare, the easiest solution is just to type ask turtle X [die] in the command center, after which I click GO and the problem is 'fixed'.
However, I was wondering if anyone knows a way to automatically kill the turtle producing the error every time a runtime error occurs, to save me entering this manually.
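NetLogo's carefully primitive can catch a runtime error per turtle, so something like ask turtles [ carefully [ go-step ] [ die ] ] may do exactly this (worth testing on your model, since it will also hide genuine bugs). The same catch-and-remove pattern, sketched in Python with made-up agent names:

```python
class Turtle:
    def __init__(self, who, divisor):
        self.who = who
        self.divisor = divisor

    def step(self):
        return 1 / self.divisor  # raises ZeroDivisionError for a "bad" turtle

def run_tick(turtles):
    """Run every turtle's step; remove (and report) any that error out."""
    survivors, killed = [], []
    for t in turtles:
        try:
            t.step()
            survivors.append(t)
        except Exception:
            killed.append(t.who)  # record the offender instead of crashing
    return survivors, killed

turtles = [Turtle(0, 1), Turtle(1, 0), Turtle(2, 2)]
alive, dead = run_tick(turtles)
# dead == [1]: turtle 1 is the one whose step raised
```

Logging the `who` of each killed turtle (NetLogo's error-message reporter is available in carefully's second block) also gives you a trail for fixing the real bug later.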

Is it possible that both TA1 and 999 are missing in BizTalk when inbounding a badly formatted X12 file?

This is the first time I have come across this.
Normally when we receive an inbound X12 file, a 999 is always generated (by configuration in BizTalk), and if an interchange-level error occurs, a TA1 is created.
But today I got an X12 file with some formatting errors; the error pop-up in BizTalk is:
Delimiters are not unique, field and component separator are the same.
The sequence number of the suspended message is 1.
I expected a 999 or TA1 to be generated to reject the inbound file, but neither of these two files was created.
My questions:
1. What file should I expect to be created for this kind of error: 999 or TA1?
2. Is this a bug or normal behavior for BizTalk?
3. If this is normal, what is the best mechanism to catch this error and respond back to the trading partner?
You should definitely not expect a 999 (which would be transaction set specific), because this error prevents BizTalk from parsing the transaction set at all - it doesn't have a reliable way to determine what kind of transaction it is.
A TA1 could be appropriate, but this seems like a grey area; it might be worth contacting Microsoft support about it. The documentation indicates that an invalid ISA should result in a negative TA1, but the error codes for TA1 don't list this particular scenario as one that's supported (or at all).
A possible workaround would be capturing this kind of message, generating a TA1 for it, and routing it back to the trading partner. However, having non-unique delimiters may make it impossible to determine the trading partner from the message itself, even though you might be able to determine it from context (but maybe not if multiple trading partners use the same ports/locations). My guess is that's why BizTalk isn't handling it out of the box. To be honest, unless this happens fairly frequently, it would probably be easier and more reliable to deal with it on an exception basis with human intervention.
As far as capturing the message goes, I think you'd need a custom pipeline component - perhaps even subclassing the EdiDisassembler so you can catch this particular exception and deal with it.
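For context, the delimiters X12 uses are declared in the fixed-width, 106-character ISA segment: the element separator is the character at index 3, the component separator is field ISA16 (index 104), and the segment terminator is the last character (index 105). A small Python sketch of the uniqueness check BizTalk is enforcing; the sample ISA contents are made up:

```python
def build_isa(component_sep=":", element_sep="*", segment_term="~"):
    """Build a made-up, fixed-width (106-char) X12 ISA segment for illustration."""
    fields = ["00", " " * 10, "00", " " * 10, "ZZ", "SENDER".ljust(15),
              "ZZ", "RECEIVER".ljust(15), "200101", "1253", "U", "00401",
              "000000905", "1", "T", component_sep]
    return "ISA" + "".join(element_sep + f for f in fields) + segment_term

def check_isa_delimiters(isa):
    """Check delimiter uniqueness in an ISA segment.

    Per X12, ISA is exactly 106 characters: the element separator sits at
    index 3, the component separator (field ISA16) at index 104, and the
    segment terminator at index 105.
    """
    if len(isa) != 106:
        raise ValueError("ISA segment must be exactly 106 characters")
    element_sep, component_sep, segment_term = isa[3], isa[104], isa[105]
    unique = len({element_sep, component_sep, segment_term}) == 3
    return unique, (element_sep, component_sep, segment_term)

print(check_isa_delimiters(build_isa())[0])                   # → True
print(check_isa_delimiters(build_isa(component_sep="*"))[0])  # → False
```

A check like this in a custom receive component could catch the clash before the EDI disassembler suspends the message, leaving you free to generate a TA1 or raise an alert.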