R Package IBrokers placeOrder() function fails

I'm using the IBrokers package. It works well when I request historical data, and the call to reqAccountUpdates() also works well.
I am having problems with this script:
# myscript.r
.libPaths("rpackages")
library(IBrokers)
tws2 = twsConnect(2)
print('Attempting BUY')
mytkr = twsFuture("ES","GLOBEX","201412")
myorderid = sample(1001:3001, 1)
IBrokers:::.placeOrder(tws2, mytkr, twsOrder(myorderid, "BUY", "1", "MKT"))
twsDisconnect(tws2)
Sometimes the above script works okay, but usually it fails. Even when it fails, the connection itself seems to succeed.
Then I see this in my TWS console:
03:47:45:581 JTS-EServerSocket-290: [2:47:71:1:0:0:0:ERR] Message type -1. Socket I/O error -
03:47:45:581 JTS-EServerSocket-290: Anticipated error
jextend.d: Socket I/O error -
at jextend.sc.b(sc.java:364)
at jextend.ch.sb(ch.java:1534)
at jextend.ch.run(ch.java:1390)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:196)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at jextend.xh.d(xh.java:45)
at jextend.sc.c(sc.java:579)
at jextend.sc.r(sc.java:227)
at jextend.af.a(af.java:232)
at jextend.sc.f(sc.java:650)
at jextend.pd.a(pd.java:822)
at jextend.sc.b(sc.java:358)
... 3 more
03:47:45:583 JTS-EServerSocket-290: [2:47:71:1:0:0:0:ERR] Socket connection for client{2} has closed.
03:47:45:583 JTS-EWriter14-291: [2:47:71:1:0:0:0:ERR] Unable write to socket client{2} -
03:47:45:584 JTS-EServerSocketNotifier-288: Terminating
Can you offer any ideas on how you would wrestle with this issue?
One other piece of info:
I think the call to reqIds() may be necessary. Sometimes reqIds() would return an id that was not high enough; then I'd use it and placeOrder() would fail. So I still call reqIds(), but then use Sys.time() to give me an id which is larger than the last id I used.
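In code form, the scheme I describe is roughly this (a sketch in plain R; the reqIds() call is kept per the above, but its return value is not used):
reqIds(tws2)                          # still requested, per the above
myorderid <- as.integer(Sys.time())   # epoch seconds: larger than any id from an earlier run
This assumes at most one order per second, since two orders placed in the same second would collide.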
Another problem may have been some code I copied out of a PowerPoint; some of the characters may have been corrupt.

The main problem was the orderid; I need to be careful about how I generate it.
Also, I may have moused in bad characters from a PowerPoint presentation which had an example.
I posted some code which works in a comment in this thread.
Dan

Related

Adding a column to MongoDB from R via mongolite gives persistent error

I want to add a column to a MongoDB collection via R. The collection has a tabular format and is already relatively big (14,000,000 entries, 140 columns).
The function I am currently using is
function(collection, name, value) {
  mongolite::mongo(collection)$update(
    "{}",
    paste0("{\"$set\":{\"", name, "\": ", value, "}}"),
    multiple = TRUE
  )
  invisible(NULL)
}
It does work so far. (It takes about 5-10 minutes, which is OK, although it would be nice if the speed could be improved somehow.)
However, it also persistently gives me the following error, which interrupts the execution of the rest of the script.
The error message reads:
Error: Failed to send "update" command with database "test": Failed to read 4 bytes: socket error or timeout
Any help on resolving this error would be appreciated. (If there are ways to improve the performance of the update itself, I'd also be more than happy for any advice.)
The default socket timeout is 5 minutes; since your update takes about 5-10 minutes, it can exceed that limit.
You can override the default by setting sockettimeoutms directly in your connection URI:
mongoURI <- paste0("mongodb://", user, ":", pass, "@", mongoHost, ":", mongoPort, "/", db, "?sockettimeoutms=<something large enough in milliseconds>")
mcon <- mongo(mongoCollection, url=mongoURI)
mcon$update(...)

Slackr: x Problem with `id` - Cannot send messages

I am not an admin, so I can't change the scopes. I can send slackr_bot messages to the channel I set up when creating the app in the UI, but the code below does not work. Has anyone found a solution to this?
I created a txt file called: test.txt
Within that txt file it looks like this:
api_token: xxxxxxxxxxxx
channel: #channel_name
username: myusername
incoming_webhook_url: https://hooks.slack.com/services/xxxxxxxxxxx/xxxxxxxxxxxxx
Then I want to simply send a message, but eventually I would like to run the function:
ggslackr(qplot(mpg, wt, data=mtcars))
slackr_setup(config_file = "test.txt")
my_message <- paste("I'm sending a Slack message at", Sys.time(), "from my R script.")
slackr_msg(my_message, channel = "#channel_name", as_user=F)
Here is the error message:
Error: Join columns must be present in data.
x Problem with `id`.
Run `rlang::last_error()` to see where the error occurred.
In addition: Warning message:
In structure(vars, groups = group_vars, class = c("dplyr_sel_vars", :
Calling 'structure(NULL, *)' is deprecated, as NULL cannot have attributes.
Consider 'structure(list(), *)' instead.
Edit #2:
Okay, I learned some things about packages. If I had to do this over, I'd have gone to the package's GitHub repo and read the issue tracker first.
The reason is that slackr appears to have a few issues related to changes in Slack's API.
Also, since the major R update (version 4.x), a lot of packages got broken.
My sense is that our issue is with a line of code inside a slackr function (slackr_util.r, iirc) that calls a dplyr join looking for a particular id that does not exist.
So, I'm going to watch the issue tracker and see what comes of it.
Edit: trying slackr_bot(my_message, channel = "#general")
worked as advertised!
But ggslackr continues to fail.
I'm having the same issue. I found a debugging starting point in another thread:
`rlang::last_error()`
When I run that, I get:
Backtrace:
1. slackr::slackr_msg(my_message, channel = "#general")
5. slackr::slackr_chtrans(channel)
6. slackr::slackr_ims(api_token)
8. dplyr:::left_join.data.frame(users, ims, by = "id", copy = TRUE)
9. dplyr:::join_mutate(...)
10. dplyr:::join_cols(...)
11. dplyr:::standardise_join_by(by, x_names = x_names, y_names = y_names)
12. dplyr:::check_join_vars(by$y, y_names)
So, step 8 there is a join by id, which I suppose implies that 'id' is missing from one of the tables being joined.
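For what it's worth, that class of error is easy to reproduce outside slackr whenever the right-hand table lacks the join column (toy data below, not slackr's internals; the error text is what dplyr 1.0.x prints):
library(dplyr)
users <- data.frame(id = c("U1", "U2"), name = c("alice", "bob"))
ims   <- data.frame(user = c("U1"))   # note: no 'id' column
left_join(users, ims, by = "id")
#> Error: Join columns must be present in data.
#> x Problem with `id`.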
Yet, if I run slackr::slackrSetup(echo=TRUE), as suggested in the GitHub issue tracker, I get the following:
{
"SLACK_CHANNEL": ["#general"],
"SLACK_USERNAME": ["slackr_brian"],
"SLACK_ICON_EMOJI": ["NA"],
"SLACK_INCOMING_URL_PREFIX": ["https://hooks.xxxxxxx"],
"SLACK_API_TOKEN": ["token secret"]
}
I'm not sure where to go from here, as the issue tracker conversation mentions confirming that webhooks go to the correct channel and then becomes very user-specific.
So, that's as far as I have gotten.

First token could not be read or is not the keyword 'FoamFile' in OpenFOAM

I am a beginner to programming. I am trying to run a simulation of a combustion chamber using reactingFoam.
I have modified the counterflow2D tutorial.
For those who maybe don't know OpenFOAM, it is a program built in C++, but it does not require C++ programming, just properly defining the variables in the needed files.
In one of my first tries I made a very simple model, but since I wanted to check it very thoroughly I set it to 60 seconds with a 1e-6 timestep.
My computer is not very powerful, so it took approximately a day (by this I mean I'd like to find a solution rather than repeat the simulation).
I executed the solver reactingFoam in parallel on 4 processors using
mpirun -np 4 reactingFoam -parallel > log
The log does not show any evidence of error.
The problem is that reconstructPar works perfectly, but when I try to view the results with paraFoam this error is shown:
From function bool Foam::IOobject::readHeader(Foam::Istream&)
in file db/IOobject/IOobjectReadHeader.C at line 88
Reading "mypath/constant/reactions" at line 1
First token could not be read or is not the keyword 'FoamFile'
I have read that this can happen when files that are not supposed to be empty are empty, but I have not found that problem.
My 'reactions' file has not been modified from the tutorial and has always worked.
Edit: sorry for the vague question; I have modified it a bit.
A typical OpenFOAM dictionary file always begins with a header entry named FoamFile. An example from a typical system/controlDict file can be seen below:
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      controlDict;
}
When the dictionary header is constructed, if this entry is absent, OpenFOAM stops and raises the error message that you have experienced:
First token could not be read or is not the keyword 'FoamFile'
The header presumably supports OpenFOAM's abstraction mechanisms, which would be difficult to implement otherwise.
As mentioned in the comments, adding the header entity almost always solves this problem.
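For this case, that would mean prepending a header along these lines to the constant/reactions file (a sketch; the class and format entries are assumptions based on typical tutorial files and may differ for your setup):
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "constant";
    object      reactions;
}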

Qt error is printed on the console; how to see where it originates from?

I'm getting this on the console in a QML app:
QFont::setPointSizeF: Point size <= 0 (0.000000), must be greater than 0
The app is not crashing so I can't use the debugger to get a backtrace for the exception. How do I see where the error originates from?
If you know the function the warning occurs in (in this case, QFont::setPointSizeF()), you can put a breakpoint there. Following the stack trace will lead you to the code that calls that function.
If the warning doesn't include the name of the function and you have the source code available, use git grep with part of the warning to get an idea of where it comes from. This approach can involve some trial and error, as the code may span more than one line, etc., so you might have to try different parts of the string.
If the warning doesn't include the name of the function, you don't have the source code available and/or you don't like the previous approach, use the QT_MESSAGE_PATTERN environment variable:
QT_MESSAGE_PATTERN="%{function}: %{message}"
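For example, you could set it in the environment when launching the app (the binary name here is hypothetical):
QT_MESSAGE_PATTERN="%{function}: %{message}" ./myqmlapp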
For the full list of variables at your disposal, see the qSetMessagePattern() docs:
%{appname} - QCoreApplication::applicationName()
%{category} - Logging category
%{file} - Path to source file
%{function} - Function
%{line} - Line in source file
%{message} - The actual message
%{pid} - QCoreApplication::applicationPid()
%{threadid} - The system-wide ID of current thread (if it can be obtained)
%{qthreadptr} - A pointer to the current QThread (result of QThread::currentThread())
%{type} - "debug", "warning", "critical" or "fatal"
%{time process} - time of the message, in seconds since the process started (the token "process" is literal)
%{time boot} - the time of the message, in seconds since the system boot if that can be determined (the token "boot" is literal). If the time since boot could not be obtained, the output is indeterminate (see QElapsedTimer::msecsSinceReference()).
%{time [format]} - system time when the message occurred, formatted by passing the format to QDateTime::toString(). If the format is not specified, the format of Qt::ISODate is used.
%{backtrace [depth=N] [separator="..."]} - A backtrace with the number of frames specified by the optional depth parameter (defaults to 5), and separated by the optional separator parameter (defaults to "|"). This expansion is available only on some platforms (currently only platforms using glibc). Names are only known for exported functions. If you want to see the name of every function in your application, use QMAKE_LFLAGS += -rdynamic. When reading backtraces, take into account that frames might be missing due to inlining or tail call optimization.
On an unrelated note, the %{time [format]} placeholder is quite useful to quickly "profile" code by qDebug()ing before and after it.
I think you can use qInstallMessageHandler (Qt5) or qInstallMsgHandler (Qt4) to specify a callback which will intercept all qDebug() / qInfo() / etc. messages (example code is in the link). Then you can just add a breakpoint in this callback function and get a nice callstack.
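A minimal sketch of that approach might look like this (Qt 5; the handler body and app skeleton are illustrative assumptions, the API is qInstallMessageHandler()):
#include <QGuiApplication>
#include <QDebug>
#include <cstdio>

// Every qDebug()/qInfo()/qWarning()/... message lands here, so a
// debugger breakpoint inside this function yields a full call stack.
static void breakableHandler(QtMsgType type, const QMessageLogContext &context,
                             const QString &msg)
{
    fprintf(stderr, "%s (%s:%d)\n", qPrintable(msg),
            context.file ? context.file : "?", context.line);
    // put your breakpoint on the line above
}

int main(int argc, char *argv[])
{
    qInstallMessageHandler(breakableHandler);
    QGuiApplication app(argc, argv);
    // ... load your QML here ...
    return app.exec();
}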
Aside from the obvious, searching your code for calls to setPointSize[F], you can try the following depending on your environment (which you didn't disclose):
If you have the debugging symbols of the Qt libs installed and are using a decent debugger, you can set a conditional breakpoint on the first line in QFont::setPointSizeF() with the condition set to pointSize <= 0. Even if conditional breakpoints don't work you should still be able to set one and step through every call until you've found the culprit.
On Linux there's the tool ltrace which displays all calls of a binary into shared libs, and I suppose there's something similar in the M$ VS toolbox. You can grep the output for calls to setPointSize directly, but of course this won't work for calls within the lib itself (which I guess could be the case when it handles the QML internally).

Out-of-memory error in parfor: kill the slave, not the master

When an out-of-memory error is raised in a parfor, is there any way to kill only one Matlab slave to free some memory instead of having the entire script terminate?
Here is what happens by default when an out-of-memory error occurs in a parfor: the entire script terminates.
I wish there was a way to just kill one slave (i.e. remove a worker from the parpool), or to stop using it, so as to release as much memory as possible from it.
If you get an out-of-memory error in the master process, there is no way to fix it. For an out-of-memory error on a slave, this should do it:
The simple idea of the code: restart the parfor again and again with the missing data until you get all results. If one iteration fails, a flag (file) is written, which makes all other iterations throw an error as soon as the first error has occurred. This way we get out of the loop without wasting time producing further out-of-memory errors.
%Your intended iterator
iterator = 1:10;
%flags which indicate which iterations succeeded
succeeded = false(size(iterator));
%result array
result = nan(size(iterator));
FLAG = 'ANY_WORKER_CRASHED';
while ~all(succeeded)
    fprintf('Another try\n')
    %determine which iterations still need to be done
    todo = iterator(~succeeded);
    %initialize array for the remaining results
    partresult = nan(size(todo));
    %initialize flags which indicate which of the remaining iterations
    %succeeded (we cannot simply throw errors, that would throw away results)
    partsucceeded = false(size(todo));
    %the flag file indicates that some worker crashed; a file is used
    %because the workers have no other shared state
    if exist(FLAG, 'file')
        delete(FLAG);
    end
    try
        parfor falseindex = 1:sum(~succeeded)
            realindex = todo(falseindex);
            try
                % The flag lets all other workers jump out of the loop
                % as soon as one calculation has crashed.
                if exist(FLAG, 'file')
                    error('some other worker crashed');
                end
                % insert your code here
                %dummy code which randomly throws an exception
                if rand < .5
                    error('hit out of memory')
                end
                partresult(falseindex) = realindex*2;
                % End of user code
                partsucceeded(falseindex) = true;
                fprintf('trying to run %d and succeeded\n', realindex)
            catch ME
                % catch errors within workers to preserve the finished work
                partresult(falseindex) = nan;
                partsucceeded(falseindex) = false;
                fprintf('trying to run %d but it failed\n', realindex)
                fclose(fopen(FLAG, 'w'));
            end
        end
    catch
        %reduce pool size by 1
        newsize = matlabpool('size') - 1;
        matlabpool close
        matlabpool(newsize)
    end
    %put the results of this pass into the full result
    result(~succeeded) = partresult;
    succeeded(~succeeded) = partsucceeded;
end
After quite a bit of research, and a lot of trial and error, I think I may have a decent, compact answer. What you're going to do is:
Declare some max memory value. You can set it dynamically using the MATLAB function memory, but I like to set it directly.
Call memory inside your parfor loop, which returns the memory information for that particular worker.
If the memory used by the worker exceeds the threshold, cancel the task that worker was working on. Now, here it gets a bit tricky. Depending on the way you're using parfor, you'll either need to delete or cancel either the task or the worker. I've verified that the code below works when there is one task per worker, on a remote cluster.
Insert the following code at the beginning of your parfor contents. Tweak as necessary.
memLimit = 280000000; %// This doesn't have to be in the parfor. Everything else does.
memData = memory;
if memData.MemUsedMATLAB > memLimit
    task = getCurrentTask();
    cancel(task);
end
Enjoy! (Fun question, by the way.)
One other option to consider: since R2013b, you can open a parallel pool with 'SpmdEnabled' set to false - this allows MATLAB worker processes to die without the whole pool being shut down - see the doc here: http://www.mathworks.co.uk/help/distcomp/parpool.html. Of course, you still need to arrange somehow to shut down the workers.
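A minimal sketch of that option (the profile name and pool size are placeholders):
% Open a pool whose workers may die individually without taking the
% whole pool down ('SpmdEnabled' false; available since R2013b).
p = parpool('local', 4, 'SpmdEnabled', false);
parfor i = 1:10
    % memory-hungry work goes here
end
delete(p);   % shut down whatever workers remain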
