I am using Python to download some data from Bloomberg. It works most of the time, but sometimes it pops up a 'Time Out Issue'. After that, the responses and requests no longer match up.
The code I use in the for loop is as follows:
result_IVM=con.bdh(option_name,'IVOL_MID',date_string,date_string,longdata=True)
volatility=result_IVM['value'].values[0]
When I set up the connection, I used the following code:
con = pdblp.BCon(debug=True, port=8194, timeout=5000)
If I increase the timeout parameter (currently 5,000), will it help with this issue?
I'd suggest increasing the timeout to 5000 or even 10000 and then testing a few times. The default value of timeout is 500 milliseconds, which is small!
The TIMEOUT event is triggered by blpapi when no Event arrives within that many milliseconds.
The author of pdblp defines timeout as:
timeout: int Number of milliseconds before timeout occurs when
parsing response. See blp.Session.nextEvent() for more information.
Ref: https://github.com/matthewgilbert/pdblp/blob/master/pdblp/pdblp.py
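As a rough sketch of that suggestion (the 10000 ms value and the retry count are illustrative choices, not pdblp defaults), you could recreate the connection with a larger timeout and retry a request that comes back empty:
import pdblp

con = pdblp.BCon(debug=True, port=8194, timeout=10000)  # 10 s instead of the 500 ms default
con.start()

def fetch_vol(option_name, date_string, retries=3):
    # retry a few times in case a TIMEOUT event left an empty response behind
    for _ in range(retries):
        result_IVM = con.bdh(option_name, 'IVOL_MID',
                             date_string, date_string, longdata=True)
        if not result_IVM.empty:
            return result_IVM['value'].values[0]
    return None  # still nothing after the retries

volatility = fetch_vol(option_name, date_string)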
In a MariaDB table with the TokuDB engine, I am encountering the error below, either on a delete statement while there is a background insert load, or vice versa.
Lock wait timeout exceeded; try restarting transaction
Does TokuDB use a setting that can be updated to determine how long it waits before it times out a statement?
I couldn't find the answer in the TokuDB documentation. The MariaDB variable is still at its default value: 'lock_wait_timeout', '31536000' -- but my timeout is coming back in quite a bit less than a year. The timeouts come during a load test, and I haven't spotted a time value in the error, but it feels like a few seconds, or minutes at the most, before the timeout is thrown.
Thanks,
Brent
TokuDB has its own timeout variable, tokudb_lock_timeout. It is measured in milliseconds and has a default value of 4000 (4 seconds), which fits your observations. It can be modified at both the session and global levels, and can also be configured in the .cnf file.
Remember that when you set a global value for a variable that has both scopes, it only affects future sessions (connections), not existing ones.
-- for the current session
SET SESSION tokudb_lock_timeout = 60000;
-- for future sessions
SET GLOBAL tokudb_lock_timeout = 60000;
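To check what is currently in effect, and for the .cnf form mentioned above (the 60000 value is just an example):
-- check the value currently in effect
SHOW SESSION VARIABLES LIKE 'tokudb_lock_timeout';
# in my.cnf, under the server section
[mysqld]
tokudb_lock_timeout = 60000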
I wonder if there is any callback that fires when the session expires (I'm using SimpleLogin with $authWithPassword()). I already searched with Google and stumbled upon this: https://groups.google.com/forum/#!topic/firebase-talk/btaE-hCVQdk
But I don't understand how the callback of the auth method listens for "session expired", since it only gets executed once (when a user logs in). Or is there actually an event listener on its callbacks?
I tried testing the login by using the options parameter with expires: ((new Date()).getTime() + 1000) / 1000 (it says it needs a timestamp in seconds, not milliseconds), but I don't get a result.
Any help is appreciated.
My solution for this (in pseudo-code steps; I can help with full JavaScript):
1. get timeOffset (server/client) by doing:
1.1. on login ref.set() clientTime (Date.now()) and serverTime (Firebase.ServerValue.TIMESTAMP) in firebase-object (i.e. online-list)
1.2. on success read both and get timeOffset in ms
2. window.setTimeout() with your controlled logout-function (i.e. unauth()) and following timeout-value:
2.1. milliseconds to timeout: with login via auth() you get authData.expires; use this to calculate the expire-timeout value:
authData.expires*1000 - Date.now()+that.serverTimeOffset - 2000
use *1000 because authData.expires comes in seconds.
use -2000 because you have to be faster with unauth() than Firebase is at disconnecting you :-)
I'm very pleased with this solution. It works perfectly for my multiplayer browser game.
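For reference, a rough JavaScript version of those steps using the legacy Firebase API from the question (the ref path, the credential variables and the 2000 ms safety margin are illustrative):
var ref = new Firebase('https://your-app.firebaseio.com');
ref.authWithPassword({ email: email, password: password }, function (error, authData) {
  if (error) { return console.log(error); }
  // step 1: write client time and server time, then read them back to get the offset
  var onlineRef = ref.child('online').child(authData.uid);
  onlineRef.set({ clientTime: Date.now(), serverTime: Firebase.ServerValue.TIMESTAMP }, function () {
    onlineRef.once('value', function (snap) {
      var serverTimeOffset = snap.val().serverTime - snap.val().clientTime;
      // step 2: schedule our own logout shortly before the token expires
      // (authData.expires is in seconds, hence the *1000)
      var msUntilLogout = authData.expires * 1000 - (Date.now() + serverTimeOffset) - 2000;
      window.setTimeout(function () {
        ref.unauth();  // controlled logout, slightly before Firebase disconnects us
      }, msUntilLogout);
    });
  });
});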
Imagine that you click on an element using RSelenium on a page and would like to retrieve the results from the resulting page. How does one check to make sure that the resulting page has loaded? I can insert Sys.sleep() in between processing the page and clicking the element but this seems like a very ugly and slow way to do things.
Set ImplicitWaitTimeout and then search for an element on the page. From ?remoteDriver
setImplicitWaitTimeout(milliseconds = 10000)
Set the amount of time
the driver should wait when searching for elements. When searching for
a single element, the driver will poll the page until an element is
found or the timeout expires, whichever occurs first. When searching
for multiple elements, the driver should poll the page until at least
one element is found or the timeout expires, at which point it will
return an empty list. If this method is never called, the driver will
default to an implicit wait of 0ms.
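A minimal sketch of that approach (the server address, port and the #results selector are illustrative):
library(RSelenium)
remDr <- remoteDriver(remoteServerAddr = "localhost", port = 4444, browserName = "firefox")
remDr$open()
remDr$setImplicitWaitTimeout(milliseconds = 10000)
remDr$navigate("http://www.example.com")
# findElement now polls for up to 10 seconds before giving up instead of failing immediately
webElem <- remDr$findElement(using = "css selector", value = "#results")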
In the RSelenium reference manual (http://cran.r-project.org/web/packages/RSelenium/RSelenium.pdf), you will find the method setTimeout() for the remoteDriver class:
setTimeout(type = "page load", milliseconds = 10000)
Configure the amount of time that a particular type of operation can execute for before they are aborted and a |Timeout| error is returned to the client.
type: The type of operation to set the timeout for. Valid values are: "script" for script timeouts, "implicit" for modifying the implicit wait timeout and "page load" for setting a page load timeout. Defaults to "page load"
milliseconds: The amount of time, in milliseconds, that time-limited commands are permitted to run. Defaults to 10000 milliseconds.
This seems to suggest that remDr$setTimeout() after remDr$navigate("...") would actually wait for the page to load, or return a timeout error after 10 seconds.
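A sketch of how that call fits in (here the timeout is set before navigating; the URL is illustrative):
remDr$setTimeout(type = "page load", milliseconds = 10000)
remDr$navigate("http://www.example.com")  # errors with a Timeout if loading takes longer than 10 s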
You can also try out this code, which waits for the browser to report whether the page has loaded or not.
JavascriptExecutor objExecutor = (JavascriptExecutor) objDriver;
// poll until the browser reports that the document has finished loading
while (!objExecutor.executeScript("return document.readyState").toString()
        .equalsIgnoreCase("complete")) {
    Thread.sleep(1000);
}
You can simply put it in your base page so you won't need to write it in every page object. I have never tried it out with any AJAX-enabled sites, but this might help you, and your scenario dependency will also go away.
Consider:
A query is taking more than a minute to retrieve data (due to the large volume of data) from the database.
I know that we can set the "timeout" attribute in the select tag (for a single query) or the "defaultStatementTimeout" attribute in the settings tag (SqlMapConfig.xml, for all queries) to forcibly terminate a query that is executing.
<select id='uniqueName' parameterClass='java.util.Map' resultClass = "java.lang.String" timeout="60">
or
<settings useStatementNamespaces="false" defaultStatementTimeout="60"/>
With the above configuration, iBatis will throw a "User cancelled request" error and terminate the execution.
Do we have any other way to terminate the execution?
My Scenario is:
When a user requests 3 years of data, it takes more than a minute to fetch it from the database.
In the meantime, when the user requests 1 day's data or sends a "cancel" request, I have to forcibly terminate the previous execution (the 3-year data retrieval) because it affects performance even with a limited number of users.
NOTE
I didn't use any of the settings above.
Please provide me with a solution for this. Thanks in advance.
You can set a resource limit by modifying the Profile associated with the database user.
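For example, assuming an Oracle database (the profile name and the 60-second CPU budget are illustrative):
-- profile resource limits are only enforced when this is enabled
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
-- cap each call at roughly 60 seconds of CPU (CPU_PER_CALL is in hundredths of a second);
-- app_profile is whatever profile is assigned to your application user
ALTER PROFILE app_profile LIMIT CPU_PER_CALL 6000;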
guys!
I'm developing an online auction with a time limit.
The ending time applies to the single auction that is currently open.
After logging into the site I show the time left for the open auction. The time is calculated in this way:
EndDateTime = Date and Time of end of auction;
DateTime.Now() = current Date and Time
timeLeft= (EndDateTime - DateTime.Now()).Seconds().
In javascript, I update the time left by:
timeLeft=timeLeft-1
The problem is that when I log in from different browsers at the same time, the browsers show different countdowns.
Help me, please!
I guess there will always be differences of a few seconds because of the server processing time and the time needed to download the page.
The best way would be to actually send the end time to the browser and calculate the time remaining in javascript. That way the times should be the same (on the same machine of course).
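Something along those lines (the element id and the way the server injects the timestamp are illustrative):
// endTime: the auction's end, rendered by the server as a Unix timestamp in milliseconds
var endTime = 1285933128000; // value injected by the server when the page is generated
setInterval(function () {
    var secondsLeft = Math.max(0, Math.round((endTime - new Date().getTime()) / 1000));
    document.getElementById('countdown').innerHTML = secondsLeft + ' seconds left';
}, 1000);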
Roman,
I had a little look at eBay (they know a thing or two about this stuff :)) and noticed that once the item is inside the last 90 seconds, a GET request gets fired every 2 seconds to update the variables in the JavaScript via a JSON response. You can look at this inside Firebug/Fiddler to see what it does.
Here is an example of the JSON it pulls down:
{
  "ViewItemLiteResponse": {
    "Item": [
      {
        "IsRefreshPage": false,
        "ViewerItemRelation": "NONE",
        "EndDate": {
          "Time": "12:38:48 BST",
          "Date": "01 Oct, 2010"
        },
        "LastModifiedDate": 1285932821000,
        "CurrentPrice": {
          "CleanAmount": "23.00",
          "Amount": 23,
          "MoneyStandard": "£23.00",
          "CurrencyCode": "GBP"
        },
        "IsEnded": false,
        "AccessedDate": 1285933031000,
        "BidCount": 4,
        "MinimumToBid": {
          "CleanAmount": "24.00",
          "Amount": 24,
          "MoneyStandard": "£24.00",
          "CurrencyCode": "GBP"
        },
        "TimeLeft": {
          "SecondsLeft": 37,
          "MinutesLeft": 1,
          "HoursLeft": 0,
          "DaysLeft": 0
        },
        "Id": 160485015499,
        "IsFinalized": false,
        "ViewerItemRelationId": 0,
        "IsAutoRefreshEnabled": true
      }
    ]
  }
}
You could do something similar inside your code.
[edit] - on further looking at the eBay code, although it only runs the intensive GET requests in the last 90 seconds, the same JSON as above is also added when the page is initially loaded. Then, at around 3 minutes to go, the GET request runs every 10 seconds. I therefore assume the same JavaScript is run against that structure whether it is inside the last 90 seconds or not.
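A stripped-down version of that idea (the /auction/123/timeleft endpoint is made up; yours would return whatever your server exposes):
// periodically re-sync the local countdown with the server's authoritative value
function syncCountdown() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/auction/123/timeleft', true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      var t = JSON.parse(xhr.responseText);  // e.g. {"SecondsLeft":37,"MinutesLeft":1,...}
      timeLeft = t.SecondsLeft + 60 * (t.MinutesLeft + 60 * (t.HoursLeft + 24 * t.DaysLeft));
    }
  };
  xhr.send();
}
setInterval(syncCountdown, 10000);  // every 10 s, more often as the auction nears its end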
This may be a problem with the JavaScript loading at different speeds,
or with setInterval triggering at slightly different times depending on the loop.
I would look into those two.