How to find a memory leak in a Meteor app

I made a little app and deployed it to an Ubuntu server using Meteor Up.
There are very few users each day (<10), but after a few days a lot of the server's memory is used up.
So I think there is a memory leak somewhere in my code.
How can I find it?
Thanks a lot!

We actually had a memory leak in the last few days, and it was relatively easy to find using a package called heapdump; you can find it here: https://www.npmjs.com/package/heapdump
It is not made specifically for Meteor, but for Node.js in general. Just read through the README carefully to install it. Afterwards, find a good moment to take the first heap dump by running kill -USR2 <pid_of_meteor_app> on the server. A good moment is when there is not much going on on the server, but enough so that the memory is leaking.
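For reference, here is a minimal sketch of wiring this up in your server code (the file location and snapshot path are just examples; requiring heapdump is what registers the SIGUSR2 handler that makes the kill command above work):

    // server/heapdump.js (hypothetical location in your project)
    // Requiring heapdump registers a SIGUSR2 handler, so running
    // `kill -USR2 <pid>` writes a heapdump-<pid>.<time>.heapsnapshot
    // file into the process's working directory.
    const heapdump = require('heapdump');

    // You can also take a snapshot programmatically:
    heapdump.writeSnapshot('/tmp/' + Date.now() + '.heapsnapshot', (err, filename) => {
      if (err) console.error('heapdump failed:', err);
      else console.log('Heap snapshot written to', filename);
    });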
After a while, once you have seen a good amount of memory growth without a logical explanation, take another heap dump and download both.
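To recognize that growth in the first place, one option is to log the process's memory usage periodically; a minimal sketch (the interval length is arbitrary):

    // Log heap usage every 5 minutes so steady growth stands out in the logs:
    setInterval(() => {
      const { rss, heapUsed } = process.memoryUsage();
      console.log(`rss=${(rss / 1048576).toFixed(1)} MB, heapUsed=${(heapUsed / 1048576).toFixed(1)} MB`);
    }, 5 * 60 * 1000);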
Hit F12 to open the developer tools in your browser (Chrome, Firefox, Edge, ...), go to the Memory tab there, and import both heap dumps.
Now you need to find out what changed between those two heap dumps; what actually helped me understand how to do that was this article: https://www.useanvil.com/blog/engineering/isolating-memory-leak-in-node/
Remember that you are most probably looking for memory reservations of the same size, sometimes just tiny amounts of kB as in our case, but hundreds of thousands of them, so sorting by size is a good idea.
In our case it was an outdated package called tslib which reserved all the memory after a day or so. We were on 2.3.1, so we went to https://github.com/microsoft/tslib/releases/tag/2.4.0 and read there:
This release includes the __classPrivateFieldIn helper as well as an
update to __createBinding to reduce indirection between multiple
re-exports.
We updated the package, which was a dependency of another package, and that fixed it.
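If the offending module is pulled in transitively, as it was for us, one way to pin it without forking the intermediate package is an override in your app's package.json (npm 8.3+; yarn's analogous field is "resolutions"); the version below is only an example:

    {
      "overrides": {
        "tslib": "^2.4.0"
      }
    }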
Kadira, Monti APM and the like are often useless in such cases; more often than not you cannot really track down the source with them.

There is a Kadira package to check your app; have a look: https://kadira.io/

Related

R and RStudio: Docker vs Binder

My problem is that I can't use RStudio at my workplace, as IT does not support it. I want to use the R and RStudio installed on my personal laptop from my company laptop (using a modern browser, which is behind a firewall). I am thinking of two options:
Should I build a Docker image for R and RStudio (I see base images are already available)? I am mostly interested in basic R and the dplyr (haven, xporter, and reticulate) packages.
Or should I use Binder? I am not a technical person and my programming skills are very limited; can anyone suggest a way?
What exactly are the differences between using the Docker option vs Binder?
I know I can use RStudio online and get my work done, but with the new paid account I am running out of project hours and it is very slow sometimes. Thanks in advance.
Here are some examples beyond the modern RStudio MyBinder example:
https://github.com/fomightez/pythonista_skewedf
https://github.com/fomightez/r_phylogenetics_worshop
https://github.com/fomightez/chapter7/tree/master/binder
The modern RStudio MyBinder example has been set up as a template on GitHub, so you can use it as a starting point for your own repository.
The first one is for a special use of a package not on conda, and I started that one from square one.
The other two were converted from content by others to aid in making them Binder-ready.
You essentially list everything you need from conda in the environment.yml along with the appropriate channels. If you need special stuff not on conda, you need the other configuration files included there.
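As an illustrative sketch only (check that each package actually exists on conda-forge before relying on this), an environment.yml for a session with the packages mentioned above might look like:

    channels:
      - conda-forge
    dependencies:
      - r-base
      - r-dplyr
      - r-haven
      - r-reticulate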
Getting everything working can take some iterations of adding things, letting the image get built, and testing that your libraries are available, although your situation does not sound overly complex.
The binder launch badges you see are just images where you modify the URL to point the MyBinder federation site at your repository. Look at the URL and you should see the pattern where you put rstudio at the end of the URL pointing at your repo. The form at the MyBinder.org site can help with this; however, it is most often easier to just adapt a working launch badge's code copied from elsewhere, since the form isn't set up at this time for making the URLs that launch into RStudio.
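For example, with hypothetical placeholders for the user and repository, a launch URL that opens RStudio typically looks like:

    https://mybinder.org/v2/gh/<username>/<repository>/HEAD?urlpath=rstudio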
Download anything useful you create in a running session. The sessions time out after 10 minutes of inactivity, although RStudio usually keeps them active.
Lack of persistence and limited memory, storage, and power can be drawbacks. The inherent reproducibility and portability are advantages.
MyBinder.org doesn't work with private repos. If you have code you don't want to share, you can upload it to the temporary session, using the repo only for specifying the environment. You could host a private BinderHub that does allow the use of private git repositories; however, that is probably overkill for your use case and would exceed your ability level at this time.
GitHub isn't the only place to host repositories that can be pointed at the MyBinder system. If you go to the MyBinder.org page and click where it says 'GitHub' on the left side of the top line of the form, you can see a list of the sources where you can host a repository and have the system build an image and launch a container from it.
Building the image from a source repository takes some minutes the first time. Once the image is built on the service, though, launch is typically less than 30 seconds. Each time you make a change to the source repo, a new build is necessary. Some changes don't make the new build take as long as the initial one, as some optimizing is done to build only what is necessary after a change. Keep in mind that there are several members of the federation around the world, and if traffic on the internet gets sent to one where the built image isn't yet available, it will be built from scratch again first.
The Holepunch project is out there to offer some help for users working in the R ecosystem; however, with the R-Conda system that is now integrated into MyBinder, it is pretty much as easy to do it the way I described. Last I knew, the Holepunch route makes a Dockerfile, which isn't as easy to troubleshoot as using the current R-Conda route. Dockerfiles are essentially a last-ditch configuration file that MyBinder can handle; the other configuration files are much easier and don't require knowing Dockerfile syntax. MyBinder aims to offer the ability to take advantage of Docker's containers with a specified environment without users needing to know anything about Docker.
There is a Binder Help category for posting to get help at the Jupyter Discourse Forum. Some other examples of posts already there may help you troubleshoot.
Notice of a common pitfall
Most of the configuration files for making a repository Binder-ready are simply text and can be edited right in the GitHub browser interface, without needing git or even cloning the repo locally.
Last I knew, there are two exceptions to this. The postBuild and start configuration files have settings that allow them to be run as scripts, and these get altered in a way that they no longer work if you edit them via the GitHub browser interface. (This was my experience when I last tried; your mileage may vary, or things may have changed by now.) To edit those, you have to have git available on a system of your own and pull one from some other source. Then edit it on your machine that has git working, add it to your repo, and push it back up from your local computer.
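As a sketch of that local-edit workflow (repository name is a placeholder; the chmod step covers the executable setting, which is presumably what the browser editor loses):

    git clone https://github.com/<you>/<your-repo>.git
    cd <your-repo>
    # edit postBuild and/or start locally, then make sure they stay executable
    chmod +x postBuild start
    git add postBuild start
    git commit -m "Update Binder postBuild/start scripts"
    git push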
(If this is a problem, you can post in the Jupyter Discourse Forum Binder help category, and you and I could coordinate: I fork and edit those files in your repo to your specifications, then make a pull request so you can update your source from the fork with those changes.)
If you are using Jupyter notebooks extensively, then it may make sense to use Binder.
But if you simply want to use R and RStudio, then all you need is Docker. A good resource is
https://github.com/rocker-org/rocker
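For instance, a typical way to run the rocker RStudio image locally (the password is your choice; 8787 is the port RStudio Server listens on):

    docker run --rm -p 8787:8787 -e PASSWORD=yourpassword rocker/rstudio

Then browse to http://localhost:8787 and log in as user rstudio with the password you set.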

Anyone able to make the numtel:pg package work with Meteor 1.8?

I've been using the numtel:pg package for several projects in Meteor. Since Meteor version 1.8 the package isn't working correctly anymore. Can anyone point me to a solution?
The package seems to be abandoned, since there has been no update in 4 years(!).
Trying to fix a package that is this outdated is usually not worth the effort. Your best options in this case are:
find an alternative package for PostgreSQL integration
find a fork of the package that has fixed the compatibility issues
fork the package yourself and update the npm versions, or transform the package to run without hard-wiring to a specific npm version (see the sketch after the resource links below).
Resources to achieve that:
https://guide.meteor.com/writing-atmosphere-packages.html#peer-npm-dependencies
https://github.com/tmeasday/check-npm-versions
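A minimal sketch of that peer-npm-dependency approach, assuming a fork published under a placeholder name (the package name and version range are examples only):

    // In your fork's main server module:
    import { checkNpmVersions } from 'meteor/tmeasday:check-npm-versions';

    // Warn the app developer if the app does not provide a compatible
    // `pg` npm package (the range here is only an example):
    checkNpmVersions({ pg: '^7.0.0' }, 'yourname:pg');

    // Then load the peer dependency from the app's node_modules
    // instead of hard-wiring it via Npm.depends in package.js:
    const pg = require('pg');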
General Readings:
https://guide.meteor.com/atmosphere-vs-npm.html
https://guide.meteor.com/writing-atmosphere-packages.html
What to do if none of this applies to you, because
The alternatives require a lot of refactoring or even changes in the app architecture
There is no fork that keeps the package maintained
You are not skilled enough to fork and update the package yourself
First you should definitely open an issue on the repo and describe your problem in as much detail as possible:
the Meteor version and PostgreSQL version you are on
the Meteor version and PostgreSQL version where everything still worked
what errors exactly you get; best is to add a stack trace, if possible
if the "error" is rather undesired behavior (not reacting, things disappearing, etc.), give a very detailed description of what you did, what you expected, and what did (not) happen
add screenshots if possible
create a minimal repository that reproduces the error/issue, upload it to GitHub, and link it in your issue description
Note that the points above also apply on Stack Overflow as criteria for a "good question". If the repo owner does not respond after a week, you may get her attention by using @nameOfOwner in the comments.
More resources can be found here:
https://stackoverflow.com/help/how-to-ask
https://stackoverflow.com/help/mcve
By making all these efforts you raise the chance that some community member picks up your error (because there is less effort to reproduce when the error is well documented) and fixes the issue or forks the repo.
Last but not least, the golden way would be to deal with the issue yourself: read about the package and how it works, check the code, and try to fix it. Write some tests, document the fix, and finally open a pull request in order to share the improvements with all the other package users.

What can be done to improve the speed of Meteor development tool?

I am new to Meteor and just started using it for the first time. It seems a bit slow. I am developing on a MacBook Pro with an SSD, using Firefox and Vim. I tend to save files frequently, causing the browser to refresh, and it takes a few seconds for the browser to refresh even though I do not have much code yet. Are you experiencing this slowness? How can it be improved? Does MDG have any plan to improve this? If I make a change to a file, can the browser just reload that file rather than doing a full build and reloading all the files? Am I missing something? Thank you!
Unfortunately, you can't do much about it. However, the complete rebuild takes place only when you change the server-side code. For client-only changes, the refresh is almost instantaneous.
You can try some workarounds by removing certain packages from Meteor, but they will affect debugging.
There is a great thread on this issue in Meteor's GitHub issues section. You can find it here.
Also, you may consider using WebStorm, as it is the only IDE with built-in Meteor support. It may help you speed up your dev time.
UPDATE: You may try using the import functionality in Meteor 1.4 to define file dependencies; see the sketch below. Also, have a look at this package.
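A minimal sketch of the imports approach, with hypothetical file paths; files under imports/ are not eagerly loaded, so they are only processed when something imports them:

    // imports/api/helpers.js
    export const greet = (name) => `Hello, ${name}!`;

    // client/main.js
    import { greet } from '/imports/api/helpers';
    console.log(greet('Meteor'));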

Automatically log changes to system files and allow revert

I'm trying to learn about the guts of Unix right now, mostly through experimentation. When I was first starting, I found myself looking through forum posts, copying and pasting bash code. When I broke something, I often had to do a fresh install because I couldn't remember what exactly I had changed where. Now, the simple solution is to keep a log of all the system files I've changed and original copies of all the default files, so I can revert if necessary. It would be great if there were a command-line tool which did this for me automatically. It would be even greater if I could step back through changes. Basically, I'm looking to version-control my entire OS.
Does anything like this exist? I would also accept alternative strategies for spelunking through Unix without causing permanent damage, if you think I'm going about this wrong.
I'm using Debian, if it matters.

How can I uninstall Win32 assemblies and cleanup WinSxS?

After a lot of trial and error (mostly due to a lack of documentation and examples) I have managed to create MSI installers that install custom DLLs to WinSxS as side-by-side assemblies. There is only one problem: uninstalling leaves all the files (DLLs, manifests and catalogs) in the WinSxS directory. How can or should I best clean that up? I know for sure that nothing else references them.
I have read somewhere that WinSxS has a self-scavenging process that cleans it up over time, but I could not find more information about that. Can you invoke this manually to clean up stuff?
The only other way I see is manually deleting those bits. First you have to change the owner of all the files (assembly, catalog, manifest and their respective directories) from SYSTEM to an administrator account, adjust the permissions and delete them. There are also pieces left in the registry (I think HKLM\COMPONENTS\DerivedData\Components may be one place), but since WinSxS should be treated as opaque, it is hard to find any information.
Scavenging isn't exposed anywhere that I know of. I'm not even sure when it is kicked off automatically. Maybe on uninstall of a service pack? Maybe by some tool admins can run? I really forget.
Anyway, my suggestion is: don't fight it. There are so many twisty turns down there that it just isn't worth trying to get the disk space back. Once uninstalled, the bits still in the SxS cache will not be activated, so they are just wasting space.
It's a dumb design, but blame Microsoft and don't try to overcompensate.
Here is an article; it's a fairly complete guide to WinSxS.
So, in short, you can only uninstall some components (all of their versions are in this folder), and you can run the Service Pack bridge-burning utility (in Vista it is named VSP1CLN.EXE and ships with SP1). Note that after running it, you will no longer be able to uninstall the SP or roll any components back to the state they were in prior to the SP release.
No-one is convinced you can - short of a complete reinstall, your bloaty WinSxS directory is there to stay.
There's been a long "discussion" of the problem on TechNet.
There is no documentation of the format, nor any instructions on how to remove files that are no longer needed - MS seems to think that disk space is cheap. There is a self-scavenging feature, but no-one's convinced it works, or if it does, it is very conservative (as you'd hope, since you don't want it to break your OS).
You can tell if the scavenger is working by checking the C:\Windows\winsxs\Temp\PendingDeletes folder, as this is where Windows Update or an installer moves files to - the scavenger just deletes the files in there.
You'll notice that after you uninstall your assembly, the files are still there but can no longer be bound to - so they are just "staged", or cached, not really installed.
Rob & gbjbaanb are correct - you cannot manually invoke a scavenge yourself. Don't try to delete the files yourself either - there are multiple places in the registry where they are registered, DerivedData\Components being only one of the many references.
I think the rule for Vista is that scavenging is kicked off by the TrustedInstaller service after 10 minutes of machine inactivity, after the last servicing operation (service pack, hotfix, etc.). But it's very fickle, so it doesn't run as often as it should. So just be patient, and the files will disappear on their own.
Well, I was having some issues, as I have an 80 GB SSD for my Windows installation and the WinSxS folder was about 12 GB.
I was searching the net and I found this command:
DISM.exe /online /Cleanup-Image /spsuperseded
And now my WinSxS is 7 GB, which is wonderful news.
There are a few updates regarding the cleanup method that apply to newer OS versions. Check http://www.karafilis.net/winsxs-cleanup
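For reference, on Windows 8 / Server 2012 and later the built-in equivalent is the component cleanup option of DISM, run from an elevated prompt; adding /ResetBase also removes all superseded versions but makes existing updates uninstallable:

    Dism.exe /online /Cleanup-Image /StartComponentCleanup
    Dism.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase

On newer versions you can first check how much space is reclaimable with Dism.exe /online /Cleanup-Image /AnalyzeComponentStore.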
