I am running into an issue with a very complex aggregation on a slow database setup that I have running.
Sometimes, when it is complex enough, it takes over 30 seconds, and I get:
Exception while invoking method 'methodName' MongoError: connection 3 to 'IP.IP.IP.IP' timed out
at Object.Future.wait
I know it's not great to have something that takes over 30 seconds, but that's what I'm working with. Is there any way to make the Meteor call wait longer than 30 seconds before timing out?
I found the answer to this after digging into the problem a bit more. In the connection to my Meteor app, when I specify the URL, I needed to add this to my Mongo URL:
socketTimeoutMS=XXXXX
My url now looks like:
MONGO_URL=mongodb://localhost:27017/dbName?socketTimeoutMS=45000 meteor
This thread got me in the right direction:
"Server x timed out" during MongoDB aggregation
I had also tried .noCursorTimeout() at the end of my aggregation as a guess; that did nothing.
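For reference, the same socketTimeoutMS option can also be set programmatically if you connect with the plain Node Mongo driver rather than through the MONGO_URL environment variable. A minimal sketch, assuming the standard mongodb npm package (v3-style API); the URL and timeout values are the same placeholders as above:

// Sketch: same timeout passed as a driver option rather than a URL parameter
// (assumes the standard `mongodb` npm driver, not Meteor's wrapper).
const { MongoClient } = require("mongodb");

MongoClient.connect("mongodb://localhost:27017/dbName", {
  socketTimeoutMS: 45000, // allow slow aggregations up to 45 seconds
}).then((client) => {
  const db = client.db("dbName");
  // ... run the slow aggregation here ...
});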
I am providing a timeout of one second; however, when the URL is down it takes 120+ seconds for the response to come. Is there some variable or setting that overrides the timeout in dp:url-open?
Update: I was calling dp:url-open in the request transformation as well as in the response transformation. The overriding timeout is 60 seconds per call, so adding both sides it became 120 seconds.
Here's how I am calling this (I am storing the time before and after dp:url-open calls, and then returning them in the response):
Case 1: When the url is reachable I am getting a result like:
Case 2: When url is not reachable:
Update: FIXED: It seems the port I was using was being timed out by the firewall first; that is where it spent the minute. I was originally trying to hit an application running on port 8077; after I changed that to 8088, I started seeing the same timeout that I was passing.
The dp:url-open() timeout only affects the operation done in the script, not the service itself. It depends on how you have built the solution, but the timeout from dp:url-open() should be honored.
You can check this by setting logs to debug and adding a <xsl:message>Before url-open</xsl:message> and one after, to see in the log whether it is your url-open call or the service that waits 120+ seconds.
If it is the url-open, you most likely have an error in the script; if it is the service that holds the response, you need to return from the script (or throw an error, depending on your needs) to halt the service.
You can set the time-out for the service itself or set a time-out in the User Agent for the specific URL you are calling as well.
Please note that if you set the time-out at the service level, it will terminate the service after that time, so 1 second would not be recommended!
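To illustrate the logging suggestion, here is a minimal sketch of a transform that brackets dp:url-open with xsl:message calls. The backend URL is a placeholder and the dp namespace is the standard DataPower extension namespace; timeout is in seconds:

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:dp="http://www.datapower.com/extensions"
    extension-element-prefixes="dp">
  <xsl:template match="/">
    <xsl:message>Before url-open</xsl:message>
    <!-- placeholder backend URL; response="responsecode" returns the HTTP code -->
    <dp:url-open target="http://backend.example.com:8088/service"
                 response="responsecode"
                 timeout="1">
      <xsl:copy-of select="."/>
    </dp:url-open>
    <xsl:message>After url-open</xsl:message>
  </xsl:template>
</xsl:stylesheet>

If the gap between the two messages in the debug log is only ~1 second, the url-open timeout is being honored and the extra delay is elsewhere in the service.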
I am trying to play my QTP11 scripts in UFT14 (trial), but for some reason .Exist doesn't wait for the given timeout. Instead, if the object doesn't exist, it waits as per the project's object synchronization timeout setting. Any reason why?
For example, my project's object synchronization timeout is set at 60 seconds. When I use something like If ErrorObject.Exist(10) Then ErrorObject.Close -- this should wait for 10 seconds only, but UFT14 waits the full 60 seconds. Is it a bug, or is there some extra setting I have to apply in UFT14 for Exist to wait for the given timeout only?
Edit - On further inspection I found out that this is an issue with Java objects only, so it might be a bug in the Java add-in. Can anyone verify or provide a workaround?
Edit - HP acknowledged that this is an issue. Here is the link if anyone is interested.
https://softwaresupport.hpe.com/group/softwaresupport/search-result/-/facetsearch/document/KM02764499
This is because of the default timeout in UFT. You can change that default timeout as below:
Test Settings -> Run -> Object synchronization timeout
Change the "Object synchronization timeout" in seconds.
Or you can do this directly through VBScript code:
Setting("DefaultTimeout") = 5000(This value is in milliseconds)
Even with debug enabled for RemoteConfig, I still managed to get the following:
Error fetching remote config values Optional(Error Domain=com.google.remoteconfig.ErrorDomain Code=8002 "(null)"
UserInfo={error_throttled_end_time_seconds=1483110267.054194})
Here is my debug code:
let debug = FIRRemoteConfigSettings(developerModeEnabled: true)
FIRRemoteConfig.remoteConfig().configSettings = debug!
Shouldn't the above prevent throttling?
How long will the throttle error remain in effect?
I've experienced the same error due to throttling. I was calling FIRRemoteConfig.remoteConfig().fetchWithExpirationDuration with an expiry that was less than 60 seconds.
To immediately get around this issue during testing, use an alternative device. The throttling occurs against a particular device. e.g. move from your simulator to a device.
The intention is not to have a single client flooding the server with fetch requests every second. Make sensible use of the caching it offers out of the box and fetch only when necessary.
When you receive this error, plug the value of error_throttled_end_time_seconds into an epoch converter (like this one at https://www.epochconverter.com) and it will tell you the time when throttling ends. I've tested this myself, and the throttling remains in effect for 1 hour from the first moment you are throttled. So either wait an hour or try some of the other recommendations given here.
UPDATE: Also, if you continue making config requests and receive the throttle error, the expire timeout does not increase (i.e. "you are not further penalized").
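If you would rather do the epoch conversion in code instead of an online converter, the timestamp can be read straight off the error object. A sketch, assuming error is the NSError shown in the log above:

// Sketch: recover the time at which throttling ends from the fetch error
// (key name taken from the log output above).
if let seconds = error.userInfo["error_throttled_end_time_seconds"] as? Double {
    let throttleEnds = Date(timeIntervalSince1970: seconds)
    print("Remote Config throttled until \(throttleEnds)")
}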
The quick and easy hack to get your app running is to delete the application and reinstall it. Firebase identifies your device as a new device on reinstall.
Hope it helps and saves you time.
In short, we are sometimes seeing that a small number of Cloud Bigtable queries fail repeatedly (tens or even hundreds of times in a row) with the error rpc error: code = 13 desc = "server closed the stream without sending trailers" until (usually) the query finally works.
In detail, our setup is as follows:
We are running a collection (< 10) of Go services on Google Compute Engine. Each service leases tasks from a pair of PULL task queues. Each task contains an ID of a bigtable row. The task handler executes the following query:
row, err := tbl.ReadRow(ctx, <my-row-id>,
    bigtable.RowFilter(bigtable.ChainFilters(
        bigtable.FamilyFilter(<my-column-family>),
        bigtable.LatestNFilter(1))))
If the query fails, the task handler simply returns. Since we lease tasks with a lease time between 10 and 15 minutes, a little while later the lease on that task will expire, it will be leased again, and we'll retry. The tasks have a max retry count of 1000, so they can be retried many times over a long period. In a small number of cases, a particular task will fail with the grpc error above. The task will typically fail with this same error every time it runs, for hours or days on end, before (seemingly out of the blue) eventually succeeding (or the task runs out of retries and dies).
Since this often takes so long, it seems unrelated to server load. For example, right now on a Sunday morning these servers are very lightly loaded, and yet I see plenty of these errors when I tail the logs. From this answer, I had originally thought that this might be due to querying for a large amount of data, perhaps near the max limit that Cloud Bigtable will support. However, I now see that this is not the case; I can find many examples where tasks that failed many times finally succeed and report that only a small amount of data (e.g. <1 MB) was retrieved.
What else should I be looking at here?
Edit: From further testing I now know that this is completely machine (client) independent. If I tail the log on one of the task-leasing machines, wait for a "server closed the stream without sending trailers" error, and then try a one-off ReadRow query for the same rowId from another, unrelated, totally unused machine, I get the same error repeatedly.
This error is typically caused by having more than 256MB of data in your reply.
However, there is currently a bug in our server side error handling code that allows some invalid characters in HTTP/2 trailers which is not allowed by the spec. This means that some error messages that have invalid characters will be seen as this kind of error. This should be fixed early next year.
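Until that fix ships, if a stuck task needs to make progress sooner than the lease-expiry cycle allows, one option is a short in-process retry loop with exponential backoff around the read. A minimal sketch, assuming the same Go client as in the question; the column family value and retry parameters are placeholders:

import (
    "context"
    "time"

    "cloud.google.com/go/bigtable"
)

// readRowWithRetry retries a failed ReadRow a few times in-process instead of
// relying on task-lease expiry to drive retries.
func readRowWithRetry(ctx context.Context, tbl *bigtable.Table, rowID string) (bigtable.Row, error) {
    backoff := time.Second
    var lastErr error
    for attempt := 0; attempt < 5; attempt++ {
        row, err := tbl.ReadRow(ctx, rowID,
            bigtable.RowFilter(bigtable.ChainFilters(
                bigtable.FamilyFilter("my-column-family"),
                bigtable.LatestNFilter(1))))
        if err == nil {
            return row, nil
        }
        lastErr = err
        time.Sleep(backoff)
        backoff *= 2 // exponential backoff: 1s, 2s, 4s, ...
    }
    return nil, lastErr
}

Note that this only helps with transient occurrences; queries hitting the trailer bug or the reply-size limit described above will keep failing until the underlying cause is addressed.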
I am currently trying to get search working in my Tridion 2011 installation. I read in another article that I should run the TcmReIndex.exe tool in the Tridion/bin folder to re-index all my sites. So I tried this, and it failed with a message box giving the following details:
Unable to get list of Publication items.
Unable to Intialize TDSE object.
The wait operation timed out
Connection Timeout Expired. The timeout period elapsed while attempting to consume the pre-login handshake acknowledgement. This could be because the pre-login handshake failed or the server was unable to respond back in time. The duration spent while attempting to connect to this server was - [Pre-Login] initialization=21054; handshake=35;
The wait operation timed out
A database error occurred while executing Stored Procedure "EDA_TRUSTEES_GETTRUSTEEETOKEN"
I have four fairly large publications (100 000+ items in total) which I am trying to index.
Any ideas?
Whenever I get "Unable to Intialize TDSE object." errors, I typically write a small test script using VBScript, and try running it on the CMS server. Whilst this does not directly solve the problem, it often gives some insight into the issue by logging information in the event viewer. Try creating a test.vbs file as follows and running it:
' Create the TDSE COM object and initialize a session
Set tdse = CreateObject("TDS.TDSE")
tdse.Initialize()
' If initialization succeeded, show the current user's description
MsgBox (tdse.User.Description)
Set tdse = Nothing
If it throws any errors, please let me know, and it may help us solve the problem. If it gives you a popup with your user description, then I am completely barking up the wrong tree.
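Given that the underlying error mentions a pre-login handshake timeout against SQL Server, it can also be worth testing the database connection on its own from the CMS server. A sketch along the same lines as the script above; the server and database names are placeholders, and you may need different authentication:

' Sketch: test raw SQL Server connectivity from the CMS server
' (Data Source / Initial Catalog are placeholders).
Set conn = CreateObject("ADODB.Connection")
conn.ConnectionTimeout = 60
conn.Open "Provider=SQLOLEDB;Data Source=MY_DB_SERVER;" & _
          "Initial Catalog=Tridion_cm;Integrated Security=SSPI;"
MsgBox "Connected, state = " & conn.State
conn.Close

If this also hangs for tens of seconds before failing, the problem is between the CMS server and the database rather than in Tridion itself.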
I haven't come to anything conclusive, but it seems my issue may have been a temporary one, as it just started working. I did increase all the timeouts in Tridion MMC > Timeout Settings to 100 times their original amounts, but I suspect this wasn't the issue; when it works, the connection is almost instant.
If anyone else has this issue:
Restart the computer the content manager is installed on, try again.
Wait an hour or two, try again.
Increase timeouts, try again.
I've run the process a few more times and it seems to be working correctly.