I am using Python to download some data from Bloomberg. It works most of the time, but sometimes it pops up a 'Time Out' issue, and after that the responses and requests no longer match.
The code I use in the for loop is as follows:
result_IVM=con.bdh(option_name,'IVOL_MID',date_string,date_string,longdata=True)
volatility=result_IVM['value'].values[0]
When I set up the connection, I used the following code:
con = pdblp.BCon(debug=True, port=8194, timeout=5000)
If I increase the timeout parameter (currently 5,000), will it help with this issue?
I'd suggest increasing the timeout to 5,000 or even 10,000 milliseconds and then testing a few times. The default timeout is 500 milliseconds, which is small!
The TIMEOUT event is triggered by blpapi when no event arrives within the specified number of milliseconds.
The author of pdblp defines timeout as:
timeout: int Number of milliseconds before timeout occurs when
parsing response. See blp.Session.nextEvent() for more information.
Ref: https://github.com/matthewgilbert/pdblp/blob/master/pdblp/pdblp.py
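For example, a minimal sketch with a larger timeout (this assumes a running local Bloomberg session on the default port; the ticker and dates are made-up placeholders for the question's option_name and date_string):

import pdblp

option_name = 'SPX US 12/20/24 C4500 Equity'  # hypothetical placeholder ticker
date_string = '20240102'                      # hypothetical placeholder date

# 10,000 ms instead of the 500 ms default
con = pdblp.BCon(debug=False, port=8194, timeout=10000)
con.start()
try:
    result_IVM = con.bdh(option_name, 'IVOL_MID', date_string, date_string, longdata=True)
    volatility = result_IVM['value'].values[0]
finally:
    con.stop()  # close the session so a timed-out request cannot desynchronize later ones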
I wrote code in AX 2009 to poll a directory on a network drive, every 1 second, waiting for a response file from another system. I noticed that using a file explorer window, I could see the file appear, yet my code was not seeing and processing the file for several seconds - up to 9 seconds (and 9 polls) after the file appeared!
The AX code calls System.IO.Directory::GetFiles() using ClrInterop:
interopPerm = new InteropPermission(InteropKind::ClrInterop);
interopPerm.assert();
files = System.IO.Directory::GetFiles(#POLLDIR,'*.csv');
// etc...
CodeAccessPermission::revertAssert();
After much experimentation, it emerges that the first time in my program's lifetime, that I call ::GetFiles(), it starts a notional "ticking clock" with a period of 10 seconds. Only calls every 10 seconds find any new files that may have appeared, though they do still report files that were found on an earlier 10s "tick" since the first call to ::GetFiles().
If, when I start the program, the file is not there, then all the other calls to ::GetFiles(), 1 second after the first call, 2 seconds after, etc., up to 9 seconds after, simply do not see the file, even though it may have been sitting there since 0.5 s after the first call!
Then, reliably, and repeatably, the call 10s after the first call, will find the file. Then no calls from 11s to 19s will see any new file that might have appeared, yet the call 20s after the first call, will reliably see any new files. And so on, every 10 seconds.
Further investigation revealed that if the polled directory is on the AX AOS machine, this does not happen, and the file is found immediately, as one would expect, on the call after the file appears in the directory.
But this figure of 10s is reliable and repeatable, no matter what network drive I poll, no matter what server it's on.
Our network certainly doesn't have 10s of latency to see files; as I said, a file explorer window on the polled directory sees the file immediately.
What is going on?
Sounds like your issue is due to SMB caching; from this TechNet page:
Name, type, and ID: Directory Cache [DWORD] DirectoryCacheLifetime
Registry key the cache setting is controlled by: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters
This is a cache of recent directory enumerations performed by the
client. Subsequent enumeration requests made by client applications as
well as metadata queries for files in the directory can be satisfied
from the cache. The client also uses the directory cache to determine
the presence or absence of a file in the directory and uses that
information to prevent clients from repeatedly attempting to open
files which are known not to exist on the server. This cache is likely
to affect distributed applications running on multiple computers
accessing a set of files on a server – where the applications use an
out of band mechanism to signal each other about
modification/addition/deletion of files on the server.
In short, try setting the registry key
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters\DirectoryCacheLifetime
to 0.
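If you want to script that change, here is a minimal sketch using Python's standard winreg module (my own addition, not from the TechNet page; it must run as Administrator on the client machine, and existing SMB sessions may need to reconnect before the change takes effect):

import winreg

key_path = r"System\CurrentControlSet\Services\Lanmanworkstation\Parameters"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    # 0 disables the client-side directory enumeration cache
    winreg.SetValueEx(key, "DirectoryCacheLifetime", 0, winreg.REG_DWORD, 0)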
Thanks to @Jan B. Kjeldsen, I have been able to solve my problem using FileSystemWatcher. Here is my implementation in X++:
class SelTestThreadDirPolling
{
}
public server static Container SetStaticFileWatcher(str _dirPath,str _filenamePattern,int _timeoutMs)
{
InteropPermission interopPerm;
System.IO.FileSystemWatcher fw;
System.IO.WatcherChangeTypes watcherChangeType;
System.IO.WaitForChangedResult res;
Container cont;
str fileName;
str oldFileName;
str changeType;
;
interopPerm = new InteropPermission(InteropKind::ClrInterop);
interopPerm.assert();
fw = new System.IO.FileSystemWatcher();
fw.set_Path(_dirPath);
fw.set_IncludeSubdirectories(false);
fw.set_Filter(_filenamePattern);
watcherChangeType = ClrInterop::parseClrEnum('System.IO.WatcherChangeTypes', 'Created');
res = fw.WaitForChanged(watcherChangeType,_timeoutMs);
if (res.get_TimedOut()) return conNull();
fileName = res.get_Name();
//ChangeTypeName can be: Created, Deleted, Renamed and Changed
changeType = System.Enum::GetName(watcherChangeType.GetType(), res.get_ChangeType());
fw.Dispose();
CodeAccessPermission::revertAssert();
if (changeType == 'Renamed') oldFileName = res.get_OldName();
cont += fileName;
cont += changeType;
cont += oldFileName;
return cont;
}
void waitFileSystemWatcher(str _dirPath,str _filenamePattern,int _timeoutMs)
{
container cResult;
str filename,changeType,oldFilename;
;
cResult=SelTestThreadDirPolling::SetStaticFileWatcher(_dirPath,_filenamePattern,_timeoutMs);
if (cResult != conNull())
{
[filename,changeType,oldFilename]=cResult;
info(strfmt("filename=%1, changeType=%2, oldFilename=%3",filename,changeType,oldFilename));
}
else
{
info("TIMED OUT");
}
}
void run()
{;
this.waitFileSystemWatcher(@'\\myserver\mydir','filepattern*.csv',10000);
}
I should acknowledge the following for forming the basis of my X++ implementation:
https://blogs.msdn.microsoft.com/floditt/2008/09/01/how-to-implement-filesystemwatcher-with-x/
I would guess DAXaholic's answer is correct, but you could try other solutions like EnumerateFiles.
In your case I would rather wait for the files than poll for them.
With FileSystemWatcher there will be a minimal delay from file creation until your process wakes up. It is more tricky to use, but avoiding polling is a good thing. I have never used it over a network.
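Outside AX, the same wait-instead-of-poll idea can be sketched in Python with the third-party watchdog package (the UNC path and *.csv pattern mirror the question; the 10 s timeout matches the X++ example, and the rest is illustrative):

import time
from watchdog.observers import Observer
from watchdog.events import PatternMatchingEventHandler

class CsvCreatedHandler(PatternMatchingEventHandler):
    def __init__(self):
        super().__init__(patterns=['*.csv'])
        self.created = None

    def on_created(self, event):
        # Remember the first matching file that appears
        self.created = event.src_path

handler = CsvCreatedHandler()
observer = Observer()
observer.schedule(handler, r'\\myserver\mydir', recursive=False)
observer.start()
try:
    deadline = time.time() + 10
    while handler.created is None and time.time() < deadline:
        time.sleep(0.1)
finally:
    observer.stop()
    observer.join()
print(handler.created or 'TIMED OUT')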
Imagine that you click on an element using RSelenium on a page and would like to retrieve the results from the resulting page. How does one check to make sure that the resulting page has loaded? I can insert Sys.sleep() in between processing the page and clicking the element but this seems like a very ugly and slow way to do things.
Set ImplicitWaitTimeout and then search for an element on the page. From ?remoteDriver
setImplicitWaitTimeout(milliseconds = 10000)
Set the amount of time
the driver should wait when searching for elements. When searching for
a single element, the driver will poll the page until an element is
found or the timeout expires, whichever occurs first. When searching
for multiple elements, the driver should poll the page until at least
one element is found or the timeout expires, at which point it will
return an empty list. If this method is never called, the driver will
default to an implicit wait of 0ms.
In the RSelenium reference manual (http://cran.r-project.org/web/packages/RSelenium/RSelenium.pdf), you will find the method setTimeout() for the remoteDriver class:
setTimeout(type = "page load", milliseconds = 10000)
Configure the amount of time that a particular type of operation can execute for before they are aborted and a |Timeout| error is returned to the client.
type: The type of operation to set the timeout for. Valid values are: "script" for script timeouts, "implicit" for modifying the implicit wait timeout and "page load" for setting a page load timeout. Defaults to "page load"
milliseconds: The amount of time, in milliseconds, that time-limited commands are permitted to run. Defaults to 10000 milliseconds.
This seems to suggest that remDr$setTimeout() after remDr$navigate("...") would actually wait for the page to load, or return a timeout error after 10 seconds.
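For comparison, a rough sketch of the same two timeouts in Python's selenium bindings, which drive the same WebDriver protocol (the URL and element ID are made up):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.implicitly_wait(10)        # like setImplicitWaitTimeout(10000)
driver.set_page_load_timeout(10)  # like setTimeout("page load", 10000)

driver.get('http://example.com')                # raises TimeoutException after 10 s
element = driver.find_element(By.ID, 'result')  # polls up to 10 s for the element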
You can also try out this code, which waits by asking the browser whether the page has loaded or not.
JavascriptExecutor objExecutor = (JavascriptExecutor) objDriver;
// Poll until the browser reports document.readyState == "complete".
while (!objExecutor.executeScript("return document.readyState").toString()
        .equalsIgnoreCase("complete")) {
    Thread.sleep(1000);
}
You can simply put it in your base page so you won't need to write it in every page object. I have never tried it with AJAX-enabled sites, but it might help you, and your scenario dependency will also go away.
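If you are driving the browser from Python rather than Java, the same readyState poll is a one-liner with WebDriverWait (a sketch, assuming a driver object like the one above):

from selenium.webdriver.support.ui import WebDriverWait

# Block up to 10 s until the browser reports the document is fully loaded
WebDriverWait(driver, 10).until(
    lambda d: d.execute_script('return document.readyState') == 'complete')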
I am running a simple query to get data out of my database and display it. I'm getting an error that says Response Buffer Limit Exceeded.
The error is: Response object error 'ASP 0251 : 80004005'
Response Buffer Limit Exceeded
/abc/test_maintenanceDetail.asp, line 0
Execution of the ASP page caused the Response Buffer to exceed its configured limit.
I have also tried Response.Flush in my loop and Response.Buffer = False at the top of the page, but I am still not getting any data.
My database contains 5,600 records for this query. Please give me some steps or code to solve the issue.
I know this is way late, but for anyone else who encounters this problem: If you are using a loop of some kind (in my case, a Do-While) to display the data, make sure that you are moving to the next record (in my case, a rs.MoveNext).
Here is what a Microsoft support page says about this:
https://support.microsoft.com/en-us/help/944886/error-message-when-you-use-the-response-binarywrite-method-in-iis-6-an.
But it’s easier in the GUI:
In Internet Information Services (IIS) Manager, click on ASP.
Change Behavior > Limits Properties > Response Buffering Limit from 4 MB to 64 MB.
Apply and restart.
The reason this is happening is because buffering is turned on by default, and IIS 6 cannot handle the large response.
In Classic ASP, at the top of your page, after <%@ Language="VBScript" %>, add:
<%Response.Buffer = False%>
In ASP.NET, you would add Buffer="False" to your Page directive.
For example:
<%@ Page Language="C#" Buffer="False" %>
I faced the same kind of issue; my IIS version is 8.5. Increasing the Response Buffering Limit under ASP > Limits Properties solved the issue.
In IIS 8.5, select your site, and you can see the options on the right-hand side. Under IIS, you will see the ASP option.
In the options window, increase the Response Buffering Limit to 40194304 (approximately 40 MB).
Navigate away from the option; at the top right you will see the Actions menu. Select Apply. It solved my problem.
If you are not allowed to change the buffer limit at the server level, you will need to use the <%Response.Buffer = False%> method.
HOWEVER, if you are still getting this error and have a large table on the page, the culprit may be the table itself. By design, some versions of Internet Explorer will buffer the entire content between the opening and closing table tags before rendering it to the page. So even if you are telling the page not to buffer the content, the table element may be buffered and causing this error.
Some alternate solutions may be to paginate the table results, but if you must display the entire table and it has thousands of rows, throw this line of code into the middle of the table-generation loop: <% Response.Flush %>. For speed, you may also want to add a basic counter so that the flush only happens every 25 or 100 rows or so, as sketched below.
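The same flush-every-N-rows idea can be sketched outside Classic ASP as well; here is a loose Python/Flask analogy in which each yielded chunk plays the role of a Response.Flush (the route name and row count are illustrative, not from the question):

from flask import Flask, Response

app = Flask(__name__)

@app.route('/big-table')
def big_table():
    def generate():
        yield '<table>'
        rows = []
        for i in range(5600):  # stands in for the recordset loop
            rows.append('<tr><td>%d</td></tr>' % i)
            if len(rows) >= 100:  # emit a chunk every 100 rows, like Response.Flush
                yield ''.join(rows)
                rows = []
        yield ''.join(rows) + '</table>'
    # Returning a generator streams the response instead of buffering it all
    return Response(generate(), mimetype='text/html')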
Drawbacks of not buffering the output:
slowdown of overall page load
tables and columns will adjust their widths as content is populated (table appears to wiggle)
Users will be able to click on links and interact with the page before it is fully loaded. So if you have some JavaScript at the bottom of the page, you may want to move it to the top to ensure it is loaded before some of your faster-moving users click on things.
See this KB article for more information http://support.microsoft.com/kb/925764
Hope that helps.
Thank you so much!
<%Response.Buffer = False%> worked like a charm!
My ASP/HTML table was returning a blank page at about 2,700 records. The following debugging lines helped expose the buffering problem: I replaced the Do While loop as follows and played with the limit numbers to see what was happening:
Replace
Do While not rs.EOF
'etc .... your block of code that writes the table rows
rs.moveNext
Loop
with
recount = 0
Do While recount < 2500
    If rs.EOF Then Exit Do
    'etc .... your block of code that writes the table rows
    recount = recount + 1
    rs.MoveNext
Loop
Response.Write "recount = " & recount
Raise or lower the 2500 limit to see if it is a buffer problem. For my recordset, I could see that the blank page (blank table) was happening at about 2,700 records. Good luck to all, and thank you again for solving this problem! Such a simple, great solution!
You can increase the limit as follows:
Stop IIS.
Locate the file %WinDir%\System32\Inetsrv\Metabase.xml
Modify the AspBufferingLimit value. The default value is 4194304, which is about 4 MB.
For example, change it to 20 MB (20971520).
Restart IIS.
One other answer to the same error message (this just fixed my problem) is that the system drive was low on disk space, meaning about 700 KB free. Deleting a lot of unused files on this really old server and then restarting IIS and the website (probably only IIS was necessary) caused the problem to disappear for me.
I'm sure the other answers are more useful for most people, but for a quick fix, just make sure that the system drive has some free space.
I rectified the error 'ASP 0251 : 80004005' Response Buffer Limit as follows:
To increase the buffering limit in IIS 6, follow these steps:
Click Start, click Run, type cmd, and then click OK.
Type the following command, and then press ENTER:
cd /d %systemdrive%\inetpub\adminscripts
Type the following command, and then press ENTER:
cscript.exe adsutil.vbs SET w3svc/aspbufferinglimit LimitSize
Note LimitSize represents the buffering limit size in bytes. For example, the number 67108864 sets the buffering limit size to 64 MB.
To confirm that the buffer limit is set correctly, follow these steps:
Click Start, click Run, type cmd, and then click OK.
Type the following command, and then press ENTER:
cd /d %systemdrive%\inetpub\adminscripts
Type the following command, and then press ENTER:
cscript.exe adsutil.vbs GET w3svc/aspbufferinglimit
Reference: https://support.microsoft.com/en-us/kb/944886
If you are looking for the reason and don't want to fight the system settings, these are the two main situations I have faced:
You may have an infinite loop that is missing a rs.MoveNext.
Your text data is much larger than you think it is. A common cause is copy-pasting an image from Microsoft Word directly into the editor, so the server translates the image into data objects and saves them in your text field. This can easily occupy the database resources and cause buffer problems when you read the data back.
In my case, I just added this line before rs.Open:
Response.flush
rs.Open query, conn
It can also be due to the cursor type. In my scenario, the initial value was CursorTypeEnum.adOpenStatic (3).
After changing it to the default, CursorTypeEnum.adOpenForwardOnly (0), it went back to normal.
I'm working on a content-dripper custom plugin in WordPress that my client asked me to build. He wants it to catch a page-view event and, if it's the right time of day (24 hours since the last post), pull from a resource file and output another post. He also needs it to raise a flag and prevent other sessions from firing that same snippet of code: raise some kind of flag saying, "I'm posting that post, go away other process," then make the post and release the flag again.
However, the strangest thing occurs when the site is placed under load with multiple sessions hitting it with page views. Instead of firing one post, it randomly makes 1, 2, or 3 extra posts, each one thinking it was the right time to post because it was 24 hours past the time of the last post. Because it's somewhat random, I'm guessing the problem is some kind of write caching where the other sessions don't see the raised flag until a few microseconds have passed.
The plugin was raising the "flag" by simply writing to the wp_options table with the update_option() API in WordPress. The other user sessions were supposed to read that value with get_option() and see the flag, and then not run that piece of code that creates the post because a given session was already doing it. Then, when done, I lower the flag and the other sessions continue as normal.
But what it's doing is letting those other sessions in.
To make this work, I was using add_action('loop_start','checkToAddContent'). The odd thing about that hook, though, is that it's called more than once on a page, and in fact some plugins may call it. I don't know if there's a better event to hook. Even then, if I find an event that only runs once per page view, I still have multiple sessions to contend with (different users who may view the page at the same time), and I want only one session to trigger the content post when the post is due on the schedule.
I'm wondering if any WordPress plugin devs out there could suggest another event hook to latch on to, and another way to raise a flag that all sessions would see. I could use the shared memory API in PHP, but many hosting plans have it disabled. I can't use a cookie or session var because those are visible to only one session. About the only thing that might work across hosting plans would be to drop a file as a flag instead: if the file is present, one session has the flag; if the file is not present, other sessions can attempt to get the flag. Sure, I could use the file route, but it seems kind of immature in my opinion, and I was wondering if there's something in WordPress I could do.
The key may be to create a semaphore record in the database for the "drip" event.
Warning - consider the following pseudocode - I'm not looking up the functions.
When the post is queried, use a SQL statement like:
$ts = get_time_now(); // or whatever the function is
$sid = session_id();
INSERT INTO table (postcategory, timestamp, sessionid)
SELECT "$category", $ts, "$sid"
WHERE NOT EXISTS (SELECT 1 FROM table WHERE postcategory = "$category"
                  AND timestamp > $ts - 24 hours)
Database integrity will make this atomic, so only one record can be inserted, and the insertion will only take place if the timespan has been exceeded.
Then immediately check to see if the current session_id() and timestamp are yours. If they are, drip.
SELECT sessionid FROM table
WHERE postcategory = "$postcategory"
AND timestamp = $ts
AND sessionid = "$sid"
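Here is a runnable toy of that insert-if-no-recent-row semaphore, sketched with Python's sqlite3 rather than WordPress's MySQL (the table and column names are made up; 86400 is 24 hours in seconds):

import sqlite3, time, uuid

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE drip (postcategory TEXT, ts REAL, sessionid TEXT)')

def try_acquire(category):
    ts, sid = time.time(), str(uuid.uuid4())
    cur = conn.execute(
        '''INSERT INTO drip (postcategory, ts, sessionid)
           SELECT ?, ?, ?
           WHERE NOT EXISTS (SELECT 1 FROM drip
                             WHERE postcategory = ? AND ts > ? - 86400)''',
        (category, ts, sid, category, ts))
    conn.commit()
    return cur.rowcount == 1  # True only for the session that won the race

print(try_acquire('news'))  # True  -> this session gets to drip the post
print(try_acquire('news'))  # False -> a row within the last 24 h already exists

In WordPress the equivalent statement would go through $wpdb->query(), whose affected-row count plays the same role as rowcount here.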
The problem occurs with page requests even from the same session (same visitor), but it can also occur with page requests from separate visitors. It works like this:
If you are doing content dripping, then a page request is probably what you intercept with add_action('wp','myPageRequest'). From there, if a scheduled post is due, then you create the new post.
The post takes a little bit of time to write to the database. In that time, a query on get_posts() may not see that new record yet. It may actually trigger your piece of code to create a new post when one has already been placed.
The fix that appears to force WordPress to flush the write cache is this:
try {
$asPosts = array();
$asPosts = @wp_get_recent_posts(1);
foreach ($asPosts as $asPost) {break;}
@delete_post_meta($asPost['ID'], '_thwart');
@add_post_meta($asPost['ID'], '_thwart', '' . date('Y-m-d H:i:s'));
} catch (Exception $e) {}
$asPosts = array();
$asPosts = @wp_get_recent_posts(1);
foreach ($asPosts as $asPost) {break;}
$sLastPostDate = '';
@$sLastPostDate = $asPost['post_date'];
$sLastPostDate = substr($sLastPostDate, 0, strpos($sLastPostDate, ' '));
$sNow = date('Y-m-d H:i:s');
$sNow = substr($sNow, 0, strpos($sNow, ' '));
if ($sLastPostDate != $sNow) {
// No post today, so go ahead and post your new blog post.
// Place that code here.
}
The first thing we do is get the most recent post, but we don't really care whether it's actually the most recent one or not. All we're getting it for is a single post ID, so we can add a hidden custom field (thus the underscore it begins with) called
_thwart
...as in, thwart the write cache by posting some data to the database that's not too CPU heavy.
Once that is in place, we use wp_get_recent_posts(1) again to see whether the most recent post has today's date. If it does not, then we are clear to drip some content in. (Or, if you want to drip only every 72 hours, etc., you can adjust this here.)