Py2exe - PyOpenSSL Error: maximum recursion depth exceeded - python-3.4

I am a bit confused:
I am using Python 3.4 and py2exe to compile a program into a standalone executable that is used by another person. I installed the PyOpenSSL package via pip, but I don't use it in the program. After installing PyOpenSSL, compiling the program fails with a "maximum recursion depth exceeded in comparison" error. As soon as I uninstalled the PyOpenSSL package, the error was gone.
How can I fix this?
I know that Python 3.4 is outdated. I will move to Python 3.6 soon.

When we go into a recursion, there is a risk of stack overflow, and the CPython working under the hood does not take it upon itself to optimize tail recursion, so if you go too deep you will move closer towards a stack overflow. Different CPython/Python flavors generally permit different recursion depths. When you use PyOpenSSL, it overrides the limit set via sys.setrecursionlimit with an even lower value, so the Python stack you can grow becomes even more constrained.
You can read a bit more about the limit, including how to change it (not recommended), here: https://docs.python.org/3/library/sys.html#sys.setrecursionlimit
It would also be better to replace the recursive code with an iterative version if possible; Python stack frames tend to grow very large, which is no fun for the memory-management routines.
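To make the difference concrete, here is a small, self-contained sketch (the linked-list shape is invented for illustration): the recursive version trips CPython's recursion limit on a deep structure, while the iterative rewrite uses constant Python stack space:

```python
import sys

# Inspect the current limit (commonly 1000 by default in CPython)
limit = sys.getrecursionlimit()

# Recursive version: one Python stack frame per node
def depth_recursive(node):
    if node is None:
        return 0
    return 1 + depth_recursive(node["next"])

# Iterative version: constant Python stack space
def depth_iterative(node):
    depth = 0
    while node is not None:
        depth += 1
        node = node["next"]
    return depth

# Build a linked list deeper than the recursion limit
head = None
for _ in range(limit + 100):
    head = {"next": head}

print(depth_iterative(head) == limit + 100)  # True
try:
    depth_recursive(head)
except RecursionError:
    print("recursive version blew the stack")
```

Raising the limit with sys.setrecursionlimit only postpones the problem (and risks crashing the interpreter if the C stack overflows first); the iterative rewrite removes it.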
Hope that helps.

Related

Can I check OpenCL kernel syntax at compilation time?

I'm working on some OpenCL code within a larger project. The code only gets compiled at run time, but I don't want to deploy a version and start it up just for that. Is there some way for me to have the syntax of those kernels checked, or even have them compiled, at least under some restrictions, to make it easier to catch errors earlier?
I will be targeting AMD and/or NVIDIA GPUs.
The type of program you are looking for is an "offline compiler" for OpenCL kernels; knowing this term will hopefully help with your search. They exist for many OpenCL implementations, so you should check availability for the specific implementation you are using. Otherwise, a quick web search suggests there are some generic open-source ones which may or may not fit the bill for you.
If your build machine is also your deployment machine (i.e. your target OpenCL implementation is available on your build machine), you can of course also put together a very basic offline compiler yourself by simply wrapping clBuildProgram() and friends in a basic command line utility.
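If you want to roll your own, a minimal sketch of such a wrapper might look like this (assumes an installed OpenCL runtime with at least one device; compile with -lOpenCL; most error handling is elided):

```c
/* Minimal OpenCL "offline compiler": reads a kernel source file, builds it
 * with clBuildProgram, and prints the build log if compilation fails. */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s kernel.cl\n", argv[0]); return 2; }

    /* Read the kernel source into memory */
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 2; }
    fseek(f, 0, SEEK_END); long n = ftell(f); fseek(f, 0, SEEK_SET);
    char *src = malloc(n + 1);
    fread(src, 1, n, f); src[n] = '\0'; fclose(f);

    /* Grab the first available platform and device */
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_ALL, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);

    /* Build, and dump the compiler's diagnostics on failure */
    cl_program prog = clCreateProgramWithSource(ctx, 1, (const char **)&src, NULL, NULL);
    cl_int err = clBuildProgram(prog, 1, &dev, "", NULL, NULL);
    if (err != CL_SUCCESS) {
        size_t len;
        clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG, 0, NULL, &len);
        char *log = malloc(len);
        clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG, len, log, NULL);
        fprintf(stderr, "%s\n", log);
        return 1;
    }
    puts("build OK");
    return 0;
}
```

Note this checks kernels against whatever implementation is installed on the build machine, so diagnostics may differ from your AMD/NVIDIA deployment targets.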

Is there R command(s) making Keras Tensorflow-GPU to run on CPU?

I'm running Keras in R with the TensorFlow-GPU backend. Is it possible to force Keras to run on the CPU without re-installing the backend?
Let me give you 2 answers.
Answer #1 (normal answer)
No, unfortunately not. For Keras, CPU and GPU are two different versions, between which you select at install time.
It seems you remember that you selected GPU at install time. I guess you were hoping you had only set a minor option rather than picked a version of the program. Unfortunately, you were selecting which version of Keras to install.
Answer #2 (ok, maybe you can "trick" keras)
It seems you can use environment variables to trick Keras into not using your GPU, so that it falls back to the CPU.
This may have unexpected results, but it seemed to work for these Python users.
I wouldn't worry about the fact that they are using Python; they are just using their language to set environment variables, so you can do the same in R or directly within your OS.
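For instance, one widely used trick is to hide the GPU from TensorFlow entirely via the CUDA_VISIBLE_DEVICES environment variable, which makes it fall back to the CPU. A sketch in R (it must be set before keras/tensorflow is loaded, and the exact behavior depends on your TensorFlow build):

```r
# Must run before keras/tensorflow is loaded in the session:
# "-1" hides all CUDA devices, so TensorFlow falls back to the CPU.
Sys.setenv("CUDA_VISIBLE_DEVICES" = "-1")

library(keras)
# ... build and train your model as usual; it should now run on the CPU.
```

The same variable can be exported in your shell (export CUDA_VISIBLE_DEVICES=-1) before starting R, which is the "directly within your OS" variant.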

How to find a memory leak in Meteor App

I made a little app and deployed it into a Ubuntu server using Meteor Up.
There are very few users each day (<10), but after a few days a lot of the server's memory is used.
So I think that there is a memory leak somewhere in my code.
How can I find it?
Thanks a lot!
We actually had a memory leak these last days, and it was relatively easy to find using a package called heapdump; you can find it here: https://www.npmjs.com/package/heapdump
It is not made for Meteor specifically, but for Node.js. Just read through the README carefully to install it. Afterwards, find a good moment to take the first heapdump by running kill -USR2 <pid_of_meteor_app> on the server. A good moment is when there is not much going on on the server, but enough so that the memory is leaking.
After a while, when you have observed a good amount of memory growth without a logical explanation, take another heapdump and download both.
Hit F12 to open the web-dev console in your browser (Chrome, Firefox, Edge, ...) and go to the Memory tab there. Import both heapdumps afterwards.
Now you need to find what changed between those two heapdumps, what actually helped me there was this article to understand how to do that: https://www.useanvil.com/blog/engineering/isolating-memory-leak-in-node/
Remember that you are most probably looking for memory reservations of the same size, sometimes just tiny amounts of kB as in our case, but hundreds of thousands of them, so sorting by space is a good idea.
In our case it was an outdated package called tslib which reserved all the memory after a day or so. We were on 2.3.1, so we went to https://github.com/microsoft/tslib/releases/tag/2.4.0 and read there:
This release includes the __classPrivateFieldIn helper as well as an
update to __createBinding to reduce indirection between multiple
re-exports.
We updated the package, which was a dependency of another package and that fixed it.
Kadira, Monti APM and the like are often absolutely useless in such cases; more often than not you cannot really track down the source with them.
There is a Kadira package to check your app; have a look: https://kadira.io/

Julia compiles the script every time?

Julia compiles the script every time; can't we compile binaries with Julia instead?
I tried a small hello-world script with the println function, and it took 2-3 seconds for Julia to show the output! It would be better if we could make binaries instead of compiling every time.
Update: There have been some changes in Julia since I asked this question. I'm not following Julia's updates anymore, so if you're looking for something similar, look into the answers and comments below from people who are following Julia.
Also, it's good to know that it now takes around 150 ms to load a script.
Keno's answer is spot on, but maybe I can give a little more detail on what's going on and what we're planning to do about it.
Currently there is only an LLVM JIT mode:
- There's a very trivial interpreter for some simple top-level statements.
- All other code is jitted into machine code before execution. The code is aggressively specialized using the run-time types of the values that the code is being applied to, propagated through the program using dynamic type inference.
This is how Julia gets good performance even when code is written without type annotations: if you call f(1) you get code specialized for Int64 — the type of 1 on 64-bit systems; if you call f(1.0) you get a newly jitted version that is specialized for Float64 — the type of 1.0 on all systems. Since each compiled version of the function knows what types it will be getting, it can run at C-like speed. You can sabotage this by writing and using "type-unstable" functions whose return type depends on run-time data, rather than just types, but we've taken great care not to do that in designing the core language and standard library.
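A small illustration of what specialization and type instability mean in practice (the function names here are made up for the example):

```julia
f(x) = x + 1
f(1)      # jitted specialization for Int64
f(1.0)    # a second, separate specialization for Float64

# A "type-unstable" function: its return type depends on a run-time
# value, not just on argument types, so the compiler cannot pin down
# one concrete return type and the calling code loses speed.
g(x) = x > 0 ? 1 : 1.0
```

In the REPL you can inspect the type-inferred, specialized code for each call with @code_typed, e.g. @code_typed f(1) versus @code_typed f(1.0).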
Most of Julia is written in itself, then parsed, type-inferred and jitted, so bootstrapping the entire system from scratch takes some 15-20 seconds. To make it faster, we have a staged system where we parse, type-infer, and then cache a serialized version of the type-inferred AST in the file sys.ji. This file is then loaded and used to run the system when you run julia. No LLVM code or machine code is cached in sys.ji, however, so all the LLVM jitting still needs to be done every time julia starts up, which therefore takes about 2 seconds.
This 2-second startup delay is quite annoying and we have a plan for fixing it. The basic plan is to be able to compile whole Julia programs to binaries: either executables that can be run or .so/.dylib shared libraries that can be called from other programs as though they were simply shared C libraries. The startup time for a binary will be like any other C program, so the 2-second startup delay will vanish.
Addendum 1: Since November 2013, the development version of Julia no longer has a 2-second startup delay since it precompiles the standard library as binary code. The startup time is still 10x slower than Python and Ruby, so there's room for improvement, but it's pretty fast. The next step will be to allow precompilation of packages and scripts so that those can startup just as fast as Julia itself already does.
Addendum 2: Since June 2015, the development version of Julia precompiles many packages automatically, allowing them to load quickly. The next step is static compilation of entire Julia programs.
At the moment Julia JIT compiles its entire standard library on startup. We are aware of the situation and are currently working on caching the LLVM JIT output to remedy the situation, but until then, there's no way around it (except for using the REPL).

R 2.14 byte compile - why not?

Why wouldn't I byte-compile all the packages I install? Is there some consequence of byte-compile making it a decision to think about?
One negative is that you can't debug byte-compiled code. On the flip side, once the code is production-ready, in theory you wouldn't need to (and you could reinstall it without byte compilation if you needed to).
In R version 2.14, a major downside of byte-compiling was that it could slow down certain functions. Two other downsides were increased package size and longer installation time.
For the current version of R (3.3.X), I have yet to find a downside for byte-compiling.
Currently the development version of R already byte-compiles all packages by default, so one does not have to turn byte-compilation on in the DESCRIPTION file. A related answer mentions overheads of byte-compilation - it is possible but rare that byte-compilation would harm performance (it can happen when code is loaded that will never be used - the JIT won't compile it, but the loader still loads it; hopefully this can be addressed in the future).
browser() and debugging with byte-compiled code work, from the user's perspective, the same way as with non-compiled code. Internally the debugger runs on the AST of the program (thus bypassing the byte code), but this is not visible to the user.
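For reference, the two usual ways to turn byte-compilation on, for R versions where it is not yet the default: a ByteCompile field in the package's DESCRIPTION file, or the built-in compiler package for individual functions. A sketch:

```r
# In a package's DESCRIPTION file:
#   ByteCompile: yes

# Interactively, with the built-in 'compiler' package:
library(compiler)
f <- function(x) { s <- 0; for (v in x) s <- s + v; s }
fc <- cmpfun(f)   # byte-compiled version of f
fc(1:10)          # same result as f(1:10), often faster in tight loops
```

disassemble(fc) shows the generated byte code if you are curious about what the compiler produced.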
