How to simulate connection timeout scenario with Wiremock [duplicate] - integration-testing

There are two scenarios in a single feature file. Scenario 1 executes without any issues, but I get the error below while executing scenario 2:
ERROR com.intuit.karate - **java.net.SocketTimeoutException**: Read timed out, http call failed after 31237 milliseconds for URL: projectURL
com.intuit.karate.exception.KarateException: javascript function call failed:
java.net.SocketTimeoutException: Read timed out

Try increasing the timeouts: https://github.com/intuit/karate#configure
* configure readTimeout = 10000
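For example, in a feature's Background (a sketch; the values here are arbitrary — pick a readTimeout comfortably above your slowest expected response, which in this case means more than the ~31 seconds the call took):

```
Background:
  * configure connectTimeout = 5000
  * configure readTimeout = 60000
```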

Related

paws access to S3 timed out

I am trying to use the paws library in R to access an S3 bucket from my Windows machine:
svc <- paws::s3(config = list(credentials = list(profile = "my_profile"), region = "us-east-1"))
svc$list_objects(Bucket = "my-bucket")
However, this results in a timeout:
Error in curl::curl_fetch_memory(url, handle = handle): Timeout was
reached: [my-bucket.s3amazonaws.com] Operation timed out after 10003
milliseconds with 0 out of 0 bytes received.
This is perplexing because the CLI works fine:
aws --profile=my_profile s3 ls s3://my-bucket
What would cause paws/curl to timeout if the AWS CLI works?
I tried to limit the number of keys returned with no beneficial effect:
svc$list_objects(Bucket = "my-bucket", MaxKeys = 10)
The list_objects method actually worked for another bucket, that contained no objects, in the same account:
svc$list_objects(Bucket="empty-bucket")
Therefore, I hypothesize that the timeout occurs because the original bucket returns more object descriptors than can be fetched within the internal 10-second limit set by paws.

SocketTimeoutException when calling load for DynamoDBMapper

I sometimes get this error when calling load on DynamoDBMapper:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1593)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1498)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:82)
at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.getToken(InstanceMetadataServiceResourceFetcher.java:91)
at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.readResource(InstanceMetadataServiceResourceFetcher.java:69)
at com.amazonaws.internal.EC2ResourceFetcher.readResource(EC2ResourceFetcher.java:66)
at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsEndpoint(InstanceMetadataServiceCredentialsFetcher.java:58)
at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsResponse(InstanceMetadataServiceCredentialsFetcher.java:46)
at com.amazonaws.auth.BaseCredentialsFetcher.fetchCredentials(BaseCredentialsFetcher.java:112)
at com.amazonaws.auth.BaseCredentialsFetcher.getCredentials(BaseCredentialsFetcher.java:68)
at com.amazonaws.auth.InstanceProfileCredentialsProvider.getCredentials(InstanceProfileCredentialsProvider.java:166)
at com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper.getCredentials(EC2ContainerCredentialsProviderWrapper.java:75)
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1251)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:827)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:777)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:764)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:738)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:698)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:680)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:544)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:524)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:5110)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:5077)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeGetItem(AmazonDynamoDBClient.java:2197)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.getItem(AmazonDynamoDBClient.java:2163)
at com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper.load(DynamoDBMapper.java:431)
at com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper.load(DynamoDBMapper.java:448)
at com.amazonaws.services.dynamodbv2.datamodeling.AbstractDynamoDBMapper.load(AbstractDynamoDBMapper.java:80)
I get two timeouts on PUT /latest/api/token, and then I get a response. I am not sure what exactly is wrong or why this happens intermittently, but it adds latency to my application.
Do I need to modify something in the settings? Is it related to DynamoDBMapper? Should I use the low-level DynamoDB API?
These issues can occur when:
You call a remote API that takes too long to respond or that is unreachable.
Your API call doesn't get a response within the socket timeout.
Your API call doesn't get a response within the timeout period of your Lambda function.
If you make an API call using an AWS SDK and the call fails, the SDK automatically retries it (https://aws.amazon.com/premiumsupport/knowledge-center/lambda-function-retry-timeout-sdk/). How long and how many times the SDK retries is determined by settings that vary between SDKs. For the default values, see the SDK client configuration documentation: https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html
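As one way to tune these settings with the Java SDK v1 (a sketch, not the asker's exact setup; the timeout and retry values below are placeholder assumptions), the DynamoDB client backing the mapper can be built with a custom ClientConfiguration:

```java
import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;

// Placeholder values -- tune them to your latency budget.
ClientConfiguration config = new ClientConfiguration()
        .withConnectionTimeout(2000)  // ms to establish the TCP connection
        .withSocketTimeout(5000)      // ms to wait for data on the socket
        .withMaxErrorRetry(3);        // SDK-level retries on failure

AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
        .withClientConfiguration(config)
        .build();
DynamoDBMapper mapper = new DynamoDBMapper(client);
```

Lower socket timeouts make the credential/metadata fetch fail fast and retry sooner, rather than hanging for the full default wait.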

Error in curl::curl_fetch_memory(url, handle = handle) : Timeout was reached: Send failure: Connection was reset

I am trying to extract comments from 5000 videos from Youtube's API.
My code works perfectly when I download 5 videos or so, but when I plug in the entire list of videos I want, it throws the error below after running for hours. I am not sure what it means or whether it is a memory problem. Thank you!
Line of code in R:
comments_1 <- CollectDataYoutube(video1,key,writeToFile = FALSE)
video1 is a list of 5,000 video codes, so that the API extracts comments from them directly.
key is my API key stored in an object.
Error message:
Error in curl::curl_fetch_memory(url, handle = handle) :
Timeout was reached: Send failure: Connection was reset
RCurl can only keep an internet connection open for so long while streaming data in. It's an internet latency problem, not a memory problem. You'd be better off splitting your requests into smaller batches and then combining the results.
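A minimal sketch of that batching idea, assuming the CollectDataYoutube() call and the video1/key objects from the question (the batch size of 100 is an arbitrary assumption):

```r
# Split the 5,000 video codes into batches of 100 (hypothetical size),
# collect each batch separately, then combine the results.
batches <- split(video1, ceiling(seq_along(video1) / 100))
results <- lapply(batches, function(ids) {
  CollectDataYoutube(ids, key, writeToFile = FALSE)
})
comments_all <- do.call(rbind, results)
```

Each request then stays well under the connection's lifetime, and a single batch failing doesn't cost you hours of work.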

Mongolite Error: Failed to read 4 bytes: socket error or timeout

I was trying to query a mongo database for all of the id's included in the database so I could compare the list to a separate data frame. However, when I attempt to find all sample_id fields I'm presented with:
Error: Failed to read 4 bytes: socket error or timeout
An example of the find query:
library(mongolite)
mongo <- mongo(collection, url = paste0("mongodb://", user, ":", pass, "@", mongo_host, ":", port, "/", db))
mongo$find(fields = '{"sample_id":1,"_id":0}')
# Error: Failed to read 4 bytes: socket error or timeout
As the error indicates, this is probably an internal socket timeout caused by the large amount of data. However, the mongo documentation says the default is to never time out.
socketTimeoutMS:
The time in milliseconds to attempt a send or receive on a socket before the attempt times out. The default is never to timeout, though different drivers might vary. See the driver documentation.
So my question was: why does this error occur when using mongolite? I think I've solved it, but I'd welcome any additional information or input.
The simple answer is that, as the mongo documentation quoted above indicates, "different drivers might vary". In this case the default for mongolite is 5 minutes, as noted in this GitHub issue; I'm guessing it's inherited from the C driver.
The default socket timeout for connections is 5 minutes. This means
that if your MongoDB server dies or becomes unavailable it will take 5
minutes to detect this. You can change this by providing
sockettimeoutms= in your connection URI.
Also noted in the GitHub issue is the solution: increase sockettimeoutms in the URI. At the end of the connection URI, add ?sockettimeoutms=1200000 as an option to raise the socket timeout (to 20 minutes in this case). Modifying the original example code:
library(mongolite)
mongo <- mongo(collection, url = paste0("mongodb://", user, ":", pass, "@", mongo_host, ":", port, "/", db, "?sockettimeoutms=1200000"))
mongo$find(fields = '{"sample_id":1,"_id":0}')
Laravel: add 'sockettimeoutms' => '1200000' to the connection options in your database.php and enjoy the ride.

Meteor Method/Call Timed Out after 30 seconds

I am running into an issue with a very complex aggregate on a slow database setup that I have running.
Sometimes if it is complex enough it takes over 30 seconds, and I get:
Exception while invoking method 'methodName' MongoError: connection 3 to 'IP.IP.IP.IP' timed out
at Object.Future.wait
I know that it's not great to have something that takes over 30 seconds, but that's what I'm working with. Is there any way to make the Meteor call wait longer than 30 seconds before timing out?
I found the answer after digging into the problem a bit more. When specifying the Mongo URL for my Meteor app, I needed to add this to it:
socketTimeoutMS=XXXXX
My url now looks like:
MONGO_URL=mongodb://localhost:27017/dbName?socketTimeoutMS=45000 meteor
This thread got me in the right direction:
"Server x timed out" during MongoDB aggregation
I had also tried .noCursorTimeout() at the end of my aggregate as a guess; that did nothing.
