Why is this app blocking? - http

I just tried some code from the internet and ran it, but it blocked my emulator. The code is:
public void getcontents()
{
    HttpConnection c = null;
    InputStream is = null;
    StringBuffer sb = new StringBuffer();
    try
    {
        c = (HttpConnection) Connector.open("http://www.java-samples.com", Connector.READ_WRITE, true);
        c.setRequestMethod(HttpConnection.GET); // default
        is = c.openInputStream(); // transition to connected!
        int ch = 0;
        for (int ccnt = 0; ccnt < 150; ccnt++) { // get the title.
            ch = is.read();
            if (ch == -1) {
                break;
            }
            sb.append((char) ch);
        }
    }
    catch (IOException x)
    {
        x.printStackTrace();
    }
    finally
    {
        try
        {
            // guard against a NullPointerException if open() failed
            if (is != null) {
                is.close();
            }
            if (c != null) {
                c.close();
            }
        }
        catch (IOException x)
        {
            x.printStackTrace();
        }
    }
    System.out.println(sb.toString());
}
I called the function with an OK command.
The emulator got blocked until I killed the process.
How do I solve this?

Try stepping through the code in the debugger, or at the very least add some log statements. My guess is that the stream is waiting on data from the HTTP connection and isn't getting flushed, but I haven't run the code to verify that assertion.

The only loop I can see in your code is the for loop, which is finite (no more than 150 iterations), so that would not make the code execute indefinitely.
What I would suggest is placing a number of debug output statements (output to a console, or even dialog-box alerts) at various points in the code. This will help you work out which line of code is causing the problem. For instance, if you put one line before the for loop and one after it, and only the first is displayed when you run the code, you know your problem is somewhere within the loop. You can then narrow it down by putting debug lines within the loop (including the loop counter) to find out exactly which line is causing your problem, as in the sketch below.
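For example, a minimal instrumentation of the read loop from the question might look like this (the placement and wording of the debug messages are of course up to you):

System.out.println("**Debug**: before read loop");
for (int ccnt = 0; ccnt < 150; ccnt++) {
    int ch = is.read(); // this call can block if no data arrives
    System.out.println("**Debug**: iteration " + ccnt + ", read " + ch);
    if (ch == -1) {
        break;
    }
    sb.append((char) ch);
}
System.out.println("**Debug**: after read loop");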

Try checking the response code before attempting to read the response body from the server. This will either confirm that the connection succeeds or print out the error response. Place the following after the call to Connector.open():
if (c.getResponseCode() != HttpConnection.HTTP_OK) {
    throw new IOException("HTTP response code: " + c.getResponseCode());
} else {
    System.out.println("**Debug** : HTTP_OK received, connection established");
}
If running the code then gives no output at all, neither the exception nor the HTTP confirmation, you are likely blocking on the connection attempt (check your emulator's connectivity to the internet). If you do get the HTTP_OK, you are likely blocking on the server's HTTP response, or the lack thereof. Posting a comment with your results would be a good idea.
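One more thing worth ruling out, although I haven't reproduced your setup: if getcontents() is invoked directly from the command handler, the network call runs on the UI event thread, and the emulator will appear frozen for as long as the read blocks. A minimal sketch of moving the work onto its own thread:

public void commandAction(Command cmd, Displayable d) {
    if (cmd.getCommandType() == Command.OK) {
        // Run the network code off the UI event thread so the display
        // stays responsive even while the read blocks.
        new Thread(new Runnable() {
            public void run() {
                getcontents();
            }
        }).start();
    }
}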

Related

Confluent batch consumer: consumer not working if a timeout is specified

I am trying to consume a maximum of 1000 messages from Kafka at a time (I am doing this because I need to batch-insert into MSSQL). I was under the impression that Kafka keeps an internal queue that fetches messages from the brokers, and that when I use the consumer.Consume() method it just checks whether there are any messages in the internal queue and returns if it finds something; otherwise it blocks until the internal queue is updated or until the timeout expires.
I tried to use the solution suggested here: https://github.com/confluentinc/confluent-kafka-dotnet/issues/1164#issuecomment-610308425
But when I specify TimeSpan.Zero (or any other timespan up to 1000 ms) the consumer never consumes any messages. If I remove the timeout it does consume messages, but then I am unable to exit the loop if there are no more messages left to be read.
I also saw another question on Stack Overflow which suggested reading the offset of the last message sent to Kafka and then reading messages until that offset is reached, then breaking from the loop. But currently I only have one consumer and 6 partitions for the topic. I haven't tried it yet, but I think managing offsets for each of the partitions might make the code messy.
Can someone please tell me what to do?
static List<RealTime> getBatch()
{
    var config = new ConsumerConfig
    {
        BootstrapServers = ConfigurationManager.AppSettings["BootstrapServers"],
        GroupId = ConfigurationManager.AppSettings["ConsumerGroupID"],
        AutoOffsetReset = AutoOffsetReset.Earliest,
    };
    List<RealTime> results = new List<RealTime>();
    List<string> malformedJson = new List<string>();
    using (var consumer = new ConsumerBuilder<Ignore, string>(config).Build())
    {
        consumer.Subscribe("RealTimeTopic");
        int count = 0;
        while (count < batchSize)
        {
            var consumerResult = consumer.Consume(1000);
            if (consumerResult?.Message is null)
            {
                break;
            }
            Console.WriteLine("read");
            try
            {
                RealTime item = JsonSerializer.Deserialize<RealTime>(consumerResult.Message.Value);
                results.Add(item);
                count += 1;
            }
            catch (Exception e)
            {
                Console.WriteLine("malformed");
                malformedJson.Add(consumerResult.Message.Value);
            }
        }
        consumer.Close();
    }
    Console.WriteLine(malformedJson.Count);
    return results;
}
I found a workaround.
For some reason the consumer first needs to be called without a timeout; that call waits until it receives at least one message. After that, calling Consume with a timeout of zero fetches the rest of the messages one by one from the internal queue. This seems to work out for the best.
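For illustration, here is the same blocking-first-then-drain pattern sketched with the Apache Kafka Java client (the Confluent .NET client wraps librdkafka, but the idea is client-agnostic; the consumer is assumed to be subscribed to the topic already, as in the question):

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

static List<String> pollBatch(KafkaConsumer<String, String> consumer, int batchSize) {
    List<String> batch = new ArrayList<String>();
    // The first poll blocks (up to a generous timeout) until at least
    // one message has actually been fetched from the brokers.
    ConsumerRecords<String, String> first = consumer.poll(Duration.ofSeconds(30));
    for (ConsumerRecord<String, String> r : first) {
        batch.add(r.value());
    }
    // Subsequent zero-timeout polls only drain what is already buffered
    // in the client's internal queue, so they never block.
    while (batch.size() < batchSize) {
        ConsumerRecords<String, String> more = consumer.poll(Duration.ZERO);
        if (more.isEmpty()) {
            break; // nothing buffered locally; the batch ends here
        }
        for (ConsumerRecord<String, String> r : more) {
            batch.add(r.value());
        }
    }
    return batch;
}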
I had a similar problem; updating the Confluent.Kafka and librdkafka libraries from version 1.8.2 to 2.0.2 helped.

Consume one message at a time in .NET Core and Kafka

I'm trying to make a Kafka consumer using .NET Core 2.1. This consumer should read one message, compare its timestamp, and either commit or not, so it can stay on the same message until the validation passes. See my code:
while (true)
{
    try
    {
        var cr = consumer.Consume(TimeSpan.FromMilliseconds(4000));
        if (cr == null)
        {
            Console.WriteLine("Exiting ... no messages to process");
            break;
        }
        double totalSeconds = (DateTime.Now - cr.Timestamp.UtcDateTime).TotalSeconds;
        Console.WriteLine($"TotalSeconds = {totalSeconds} , Resume = {resumeTimeSeconds}");
        if (totalSeconds > resumeTimeSeconds)
        {
            Console.WriteLine($"Message = {cr.Value}");
            consumer.Commit();
        }
        else
        {
            Console.WriteLine($"Skipping... {cr.Value}");
            continue;
        }
    }
    catch (ConsumeException e)
    {
        Console.WriteLine($"Error occurred: {e.Error.Reason}");
    }
}
So, I have 10 messages in my topic and the lag is 2. I want the next message to be fetched only after I Commit() the previous one, but the consumer.Consume() method always returns the next message.
Committed offsets come into play only when your consumer starts (or recovers from a crash). While it is running, your consumer internally keeps track of the last received offset for each partition, so Consume() always returns the next message regardless of whether you committed the previous one.
What you can do is use Seek() to go back to the offset of the message you just tried to process, and then retry.
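For illustration, here is that seek-and-retry idea sketched with the Apache Kafka Java client (the Confluent .NET client exposes an equivalent Seek(TopicPartitionOffset) method); consumer and resumeTimeSeconds are assumed to come from the question's surrounding code:

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.TopicPartition;

// Inside the poll loop: if a record is not old enough yet, seek back to
// its own offset so that the next poll re-delivers the same record.
ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(4000));
for (ConsumerRecord<String, String> cr : records) {
    long ageSeconds = (System.currentTimeMillis() - cr.timestamp()) / 1000;
    if (ageSeconds > resumeTimeSeconds) {
        consumer.commitSync(); // done with this message, move on
    } else {
        consumer.seek(new TopicPartition(cr.topic(), cr.partition()), cr.offset());
    }
}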
Yannick

How to use a try/catch block in the JMeter WebDriver Sampler

I want to handle exceptions in the JMeter WebDriver Sampler.
Please tell me how to use a try/catch block in the WebDriver Sampler.
You can do this via a normal JavaScript try block; here is an example of taking a screenshot when an error occurs:
var pkg = JavaImporter(org.openqa.selenium)
var support_ui = JavaImporter(org.openqa.selenium.support.ui.WebDriverWait)
var conditions = org.openqa.selenium.support.ui.ExpectedConditions
var wait = new support_ui.WebDriverWait(WDS.browser, 5)
var exception = null

WDS.sampleResult.sampleStart()
try {
    WDS.browser.get('http://example.com')
    wait.until(conditions.presenceOfElementLocated(pkg.By.linkText('Not existing link')))
} catch (err) {
    WDS.log.error(err.message)
    var screenshot = WDS.browser.getScreenshotAs(pkg.OutputType.FILE)
    screenshot.renameTo(new java.io.File('screenshot.png'))
    exception = err
} finally {
    if (exception != null) { // only re-throw if something actually failed
        throw (exception)
    }
}
WDS.sampleResult.sampleEnd()
Don't forget to "throw" the error after you handle it otherwise it will be "swallowed" and you get a false positive result.
See The WebDriver Sampler: Your Top 10 Questions Answered article for more tips and tricks
Surround the code with a try block and add a catch block at the end, giving a variable name to capture the exception (in the example below, it is exc).
Try it as follows:
try {
    WDS.sampleResult.sampleStart()
    WDS.browser.get('http://jmeter-plugins.org')
    var pkg = JavaImporter(org.openqa.selenium)
    WDS.browser.findElement(pkg.By.id('what')) // there is no such element with id 'what'
    WDS.sampleResult.sampleEnd()
}
catch (exc) { // exc is the variable name
    WDS.log.error("element not found" + exc)
}
In the JMeter log, you can see the complete trace of the NoSuchElementException, which is raised when trying to find the element by the id what, which is not present in the HTML.
Note: use View Results in Table to see the Sampler response time.
Reference:
https://jmeter-plugins.org/wiki/WebDriverSampler/
It is the same as what you would do in other IDEs like Eclipse. See the code below:
// try block starts here
try {
    wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("element"))).click();
}
catch (Exception e)
{
    WDS.log.info("Exception is : " + e); // you can print the exception in the JMeter log
}
Double quotes should be replaced with single quotes if you are using JavaScript. Since BeanShell is easy and similar to Java, use BeanShell as much as possible.

Asynchronous hive query execution : OperationHandle gets cleaned up at server side as soon as the query initiator client disconnects

Is it possible to execute a query asynchronously in Hive server?
For example, how can I (or is it possible to) do something like this from the client:
QueryHandle handle = executeAsyncQuery(hiveQuery);
Status status = handle.checkStatus();
if (status.isCompleted()) {
    QueryResult result = handle.fetchResult();
}
I also had a look at How do I make an async call to Hive in Java?, but it did not help. The answers were mostly about the thrift clients taking a callback argument.
Any help would be appreciated. Thanks!
[EDIT 1]
I went through HiveConnection.java in hive-jdbc. hive-jdbc uses the async thrift APIs by default: it submits a query and then polls for result sets (look at HiveStatement.java). I am now able to write a piece of code that is purely non-blocking, but the problem is that as soon as the client disconnects, the footprint of the query is lost.
Client 1
final TCLIService.Client client = new TCLIService.Client(
        createBinaryTransport(host, port, loginTimeout, sessConf, false)); // from HiveConnection.java
TSessionHandle sessionHandle = openSession(client); // from HiveConnection.java
TExecuteStatementReq execReq = new TExecuteStatementReq(sessionHandle, sql);
execReq.setRunAsync(true);
execReq.setConfOverlay(sessConf);
// ExecuteStatement returns a TExecuteStatementResp that carries the operation handle
final TOperationHandle handle = client.ExecuteStatement(execReq).getOperationHandle();
writeHandleToFile("~/handle", handle);
Client 2
final TOperationHandle handle = readHandleFromFile("~/handle");
final TCLIService.Client client = new TCLIService.Client(
        createBinaryTransport(host, port, loginTimeout, sessConf, false));
while (true) {
    TGetOperationStatusReq statusReq = new TGetOperationStatusReq(handle);
    System.out.println(client.GetOperationStatus(statusReq).getOperationState());
    Thread.sleep(1000);
}
Client 2 keeps printing FINISHED_STATE as long as Client 1 is alive. But if the Client 1 process completes or gets killed, Client 2 starts printing null, which means HiveServer2 cleans up the query's resources as soon as its client disconnects.
Is it possible to configure HiveServer2 so that this clean-up is based on a timeout or something similar?
Thanks!
Did some research and figured out that this happens only with the binary transport (TCP):
@Override
public void deleteContext(ServerContext serverContext,
        TProtocol input, TProtocol output) {
    Metrics metrics = MetricsFactory.getInstance();
    if (metrics != null) {
        try {
            metrics.decrementCounter(MetricsConstant.OPEN_CONNECTIONS);
        } catch (Exception e) {
            LOG.warn("Error Reporting JDO operation to Metrics system", e);
        }
    }
    ThriftCLIServerContext context = (ThriftCLIServerContext) serverContext;
    SessionHandle sessionHandle = context.getSessionHandle();
    if (sessionHandle != null) {
        LOG.info("Session disconnected without closing properly, close it now");
        try {
            cliService.closeSession(sessionHandle);
        } catch (HiveSQLException e) {
            LOG.warn("Failed to close session: " + e, e);
        }
    }
}
The above stub (from ThriftBinaryCLIService) gets executed through this piece of code from TThreadPoolServer, which is used by ThriftBinaryCLIService:
eventHandler.deleteContext(connectionContext, inputProtocol, outputProtocol);
Apparently the HTTP transport (ThriftHttpCLIService) has a different strategy for cleaning up operation handles (not as greedy as TCP).
Will check with the Hive community to understand this a bit more and see if there is already an issue addressing it.

WAS Liberty Profile won't run external process (using Runtime.getRuntime().exec(cmd))

As the title suggests, WLP won't run the process: it returns nothing on the process input stream or the error stream.
If anyone knows about a configuration that needs to take place, I would love to know.
(Note the process can run by executing the command manually; in addition, the whole thing runs smoothly on Tomcat 8.)
EDIT 1:
The problem was not the command execution under WLP, as you guys stated, so I accepted the answer.
The problem is different: I sent a media file to a multipart servlet and stored it in a file on disk using the following code:
InputStream is = request.getInputStream();
String currentTime = new Long(System.currentTimeMillis()).toString();
String fileName = PATH + currentTime + "." + fileType;
File file = new File(fileName);
// write the image to a temporary location
FileOutputStream os = new FileOutputStream(file);
byte[] buffer = new byte[BUFFER_SIZE];
while (true) {
    int numRead = is.read(buffer);
    if (numRead == -1) {
        break;
    }
    os.write(buffer, 0, numRead);
    os.flush();
}
is.close();
os.close();
and the file gets saved with an unwanted prefix prepended (shown in a screenshot in the original post), while this does not happen on Tomcat 8 (using the same client).
Something is not trivial about the received input stream. (Note it's a multipart servlet set up via @MultipartConfig only.)
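If the prefix turns out to be the MIME boundary and part headers of the multipart body, reading the upload through the Servlet 3.0 Part API instead of the raw request stream should avoid it, since the container strips the multipart framing for you. A minimal sketch, assuming a form field named "file" (the field name, URL pattern, and target path are hypothetical):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import javax.servlet.ServletException;
import javax.servlet.annotation.MultipartConfig;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.Part;

@MultipartConfig
@WebServlet("/upload")
public class UploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // getPart() hands back the decoded part body, without the multipart
        // boundary and headers that the raw request input stream contains.
        Part part = request.getPart("file"); // "file" is a hypothetical field name
        try (InputStream in = part.getInputStream()) {
            Files.copy(in, Paths.get("/tmp/" + System.currentTimeMillis() + ".bin"));
        }
    }
}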
Hope this post will help others.
Guys, thanks for your help!
This will work in Liberty. I was able to test out the following code in a servlet and it printed the path of my current directory just fine:
String line;
Process p = Runtime.getRuntime().exec("cmd /c cd");
BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream()));
while ((line = input.readLine()) != null) {
    System.out.println(line);
}
input.close();
Start with a simple command like this, and when you move up to more complex commands or scripts, make sure you are not burying exceptions that may come back. Always at least print the stack trace!
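For example, here is a slightly more defensive variant of the snippet above that also drains the error stream and surfaces any exception (the command is just the same Windows example used in the answer):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ExecDemo {
    public static void main(String[] args) {
        try {
            Process p = Runtime.getRuntime().exec("cmd /c cd");
            // Drain both stdout and stderr; an undrained buffer can make
            // the child process appear to hang.
            try (BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
                 BufferedReader err = new BufferedReader(new InputStreamReader(p.getErrorStream()))) {
                String line;
                while ((line = out.readLine()) != null) {
                    System.out.println(line);
                }
                while ((line = err.readLine()) != null) {
                    System.err.println(line);
                }
            }
            System.out.println("exit code: " + p.waitFor());
        } catch (Exception e) {
            // Never bury the exception; at least print the stack trace.
            e.printStackTrace();
        }
    }
}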
