What are the specifications of the free Meteor server? - meteor

What are the specifications of the free server where my Meteor app runs when I do this:
meteor deploy myapp.meteor.com
Specifications in terms of:
Storage size
Max bandwidth
Max Connections
Processing limits

At the moment, from what they're saying on the meteor-talk group, there aren't any enforced limits of any sort. Your app simply scales alongside all the others hosted there.
There is only one thing, though: if your app isn't used/has no visits for a few consecutive hours, it's 'killed'. When someone visits it the next time it's put back up (of course the end user won't notice this; to them it's as if it were up all along).
What this means is that background processes/cron-type tasks don't work well, because the meteor deploy server will silently kill your app's tasks until the next web request comes along.
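Since in-app timers and cron-style packages die along with the idle app, one workaround (a minimal sketch only, not anything the Meteor hosting docs promise) is to put the scheduled work behind an HTTP endpoint and let an external scheduler hit it, so the work always rides on a web request. The '/run-nightly-job' path, JOB_SECRET variable and runNightlyJob helper below are made-up names:

import { WebApp } from 'meteor/webapp';

function runNightlyJob(): void {
  // Placeholder for the real scheduled work.
  console.log('nightly job ran at', new Date().toISOString());
}

// Expose the job behind an HTTP endpoint so an external scheduler
// (cron-job.org, a GitHub Actions cron, etc.) can trigger it with a web request.
WebApp.connectHandlers.use('/run-nightly-job', (req, res) => {
  // Reject callers that don't present the shared secret.
  if (req.headers['x-job-secret'] !== process.env.JOB_SECRET) {
    res.writeHead(403);
    res.end();
    return;
  }
  runNightlyJob();
  res.writeHead(200);
  res.end('ok');
});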

Related

Firebase Functions, Cold Starts and Slow Response

I am developing a new React web site using Firebase hosting and firebase functions.
I am using a MySQL database (SQL is required for heavy data reporting) in GCP Cloud SQL, and GCP Secret Manager to house the database username/password.
The Firebase functions are used to pull data from the database and send the results back to the React app.
In my local emulator everything works correctly and it's responsive.
When it's deployed to Firebase, I'm noticing that the 1st and sometimes the 2nd request to a function takes about 6 seconds to respond. After that they respond in less than 1 second. For the slow responses I can see in the logs that the database pool is initialized.
So the slow responses are the first hit to an instance. I'm assuming that in my case two instances are being created.
Note that the functions that do not require a database respond quickly regardless of whether it is the 1st or 2nd call.
After about 15 minutes of not using a service I have the same problem. I'm assuming the instances are being reclaimed and a new instance is being created.
The problem is that each function will have its own independent db pool, so each function will initially give a slow response (maybe twice, for the second call).
The site will see low traffic, meaning most users will experience this slow response.
By removing the reference to Secret Manager and hard-coding the username/password, the response time has dropped to less than 3 seconds, but this is still not acceptable.
Is there a way to:
Increase the time before a function is reclaimed when not used?
Tag an instance so that it is not reclaimed?
Is there a way to create a global db pool that does not get shut down between recycles?
Is there an approach to working with db connections in Firebase Functions that avoids re-initializing the db pool?
Is this the nature of functions, and am I limited to this behavior?
Since I am in early development, would moving to App Engine/Node.js (the Flexible plan) resolve the recycling issues?
First of all, the issues you have been experiencing with the 1st and the 2nd requests taking the longest time are called cold starts.
This totally makes sense because new instances are spun up. You may have a cold start when:
Your function has been deployed but not yet triggered.
Your function has been idle (not processing requests) long enough that it has been recycled for resources.
Your function is auto-scaling to handle capacity and creating new instances.
I understand that your five questions are intended to work around the issue of Cloud Functions recycling the instances.
The straight answer to questions 1 to 4 is no, because Cloud Functions implements the serverless paradigm.
This means that one function invocation should not rely on in-memory state (such as a database pool) set by a previous invocation.
Now this does not mean that you cannot improve the cold start boot time.
Generally, the number one contributor to cold start boot time is the number of dependencies.
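One common mitigation, shown below as a minimal sketch (assuming the firebase-functions and mysql2 packages; the report function name, environment variables and getDbConfig helper are placeholders, not your code), is to create the pool lazily in global scope so that warm invocations reuse it and only a cold start pays the initialization and Secret Manager cost:

import * as functions from 'firebase-functions';
import * as mysql from 'mysql2/promise';

// Kept in global scope: reused while this instance stays warm,
// recreated only after a cold start.
let pool: mysql.Pool | undefined;

async function getDbConfig() {
  // Placeholder for your Secret Manager / environment lookup.
  return {
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASS,
    database: process.env.DB_NAME,
    connectionLimit: 5,
  };
}

async function getPool(): Promise<mysql.Pool> {
  if (!pool) {
    pool = mysql.createPool(await getDbConfig()); // paid only on cold starts
  }
  return pool;
}

export const report = functions.https.onRequest(async (_req, res) => {
  const db = await getPool(); // fast on warm instances
  const [rows] = await db.query('SELECT 1 AS ok');
  res.json(rows);
});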
This video from the Google Cloud Tech channel goes over exactly the issue you have been experiencing and describes in more detail the practices used to tune up Cloud Functions.
If, after going through the best practices from the video, your cold start times are still unacceptable, then, as you have already suggested, you would need to use a product that allows you to keep a minimum set of instances spun up, such as App Engine Standard.
You can further improve the readiness of App Engine Standard instances by implementing warmup requests later on.
Warmup requests load your app's code into a new instance before any live requests reach that instance. The linked document also discusses loading requests, which are similar to cold starts: they cover the time when your app's code is being loaded into a newly created instance.
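For reference, on App Engine Standard (Node.js) a warmup request is just a GET to /_ah/warmup that App Engine sends before routing live traffic to a new instance. A minimal Express sketch, assuming the express package and that warmup requests are enabled in app.yaml, might look like this:

import express from 'express';

const app = express();

// Enable warmup requests in app.yaml with:
//   inbound_services:
//   - warmup
app.get('/_ah/warmup', (_req, res) => {
  // Do the expensive one-time setup here (open DB pools, prime caches, ...)
  // so the first live request does not pay for it.
  res.status(200).send('warmed up');
});

app.get('/', (_req, res) => {
  res.send('hello');
});

app.listen(Number(process.env.PORT) || 8080);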
I hope you find this useful.

502 BAD GATEWAY GET / TASK TIMED OUT AFTER 10.01 SECONDS

I am using Next.js and Express as the front-end and back-end servers. The Next.js app is hosted on Zeit Now and the Express app on Heroku.
If I go to the Express app directly, I can confirm that it's working correctly and that its connection to MongoDB works fine as well.
When I hit the index page of the Next.js app through Zeit, it seems to hang on the GET / task for more than 10 seconds.
I am only calling 3 endpoints (just GET methods) from index.js of the Next.js app. This shouldn't hang the whole application.
If I hit my server independently, it only takes 3 seconds or so to return the JSON data.
I also looked at the Functions tab Zeit provides, but it won't show exactly which serverless function was failing.
So it is hard for me to debug this. I also whitelisted all IPs in Mongo, so the database should be fine.
If anyone dealt with this before, please let me know.
My site is https://www.yaobaiyang.com
The issue happens unexpectedly; you may or may not see this error.
I had the same problem on my website:
Check the limits of your plan: https://vercel.com/docs/v2/platform/limits (especially if you're on the free plan, you will have some limits).
In my case the problem was an uncaught error: the lambda crashed and then waited out the timeout.
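To make the uncaught-error case concrete: if an exception or rejected promise escapes your handler, the lambda can sit there until the platform's 10-second limit instead of answering. A minimal sketch, written as a Next.js API route purely for illustration (the BACKEND URL is a placeholder, and a global fetch from Node 18+ or a polyfill is assumed), catches everything and responds immediately:

import type { NextApiRequest, NextApiResponse } from 'next';

// Placeholder URL for the Express backend on Heroku.
const BACKEND = 'https://my-express-app.herokuapp.com';

export default async function handler(_req: NextApiRequest, res: NextApiResponse) {
  try {
    const upstream = await fetch(`${BACKEND}/api/posts`);
    if (!upstream.ok) {
      // Fail fast with a real status code instead of letting the
      // lambda hang until the platform's 10 s limit.
      res.status(502).json({ error: `backend returned ${upstream.status}` });
      return;
    }
    res.status(200).json(await upstream.json());
  } catch (err) {
    // Any thrown error or rejected promise still produces a response,
    // so the function terminates promptly instead of timing out.
    res.status(502).json({ error: (err as Error).message });
  }
}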
On further investigation, the problem may also be on the Heroku side. Heroku's free plan gives 1000 dyno hours, and if a free dyno gets no traffic for 30 minutes it goes to sleep. Waking it up again adds a delay, which usually takes longer than a request while it is active. If this is the problem, the solutions are:
Use something like Pingdom or a cron job to hit my Heroku app regularly (at intervals shorter than 30 minutes) to keep it awake; a minimal sketch of such a keep-alive ping follows this list.
Use a VPS such as DigitalOcean or Vultr that runs 24/7, and deploy my Node, Nginx, HTTP/2, etc. there.
Upgrade Heroku's plan to remove the 30-minute sleep.
Upgrade the Zeit plan to get a longer function timeout.
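A minimal sketch of that keep-alive ping (the target URL and 25-minute interval are placeholders, and a global fetch from Node 18+ is assumed; hosted services like Pingdom, cron-job.org or UptimeRobot do the same thing without you running anything):

// keep-alive.ts - run this somewhere that is always on (not on the dyno you are keeping awake).
const TARGET = 'https://my-express-app.herokuapp.com/'; // placeholder URL
const INTERVAL_MS = 25 * 60 * 1000; // comfortably under Heroku's 30-minute sleep threshold

async function ping(): Promise<void> {
  try {
    const res = await fetch(TARGET);
    console.log(new Date().toISOString(), 'ping ->', res.status);
  } catch (err) {
    console.error(new Date().toISOString(), 'ping failed:', (err as Error).message);
  }
}

ping();
setInterval(ping, INTERVAL_MS);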

Understanding App Insights end to end for occasional long response times

Background: I have an ASP.NET Core App and have an API method that takes a file name of a blob that the frontend has uploaded to Azure Blob. It then needs to create a thumbnail version of the blob and return the name of the newly uploaded thumbnail Blob. Sometimes, for exactly the same file size it can take up to 40 seconds to complete. Mostly, it's around 400ms.
Below is the end to end from App Insights, I have a few things I don't understand:
1) The request duration is 37.5 s, yet the other operations add up to nowhere near this time
2) Why are there calls to master db? We are using EF6 with multiple contexts
3) The app is using an Azure App Service and SQL Azure. I don't understand why the response time is so inconsistent.
Any help would be much appreciated!
I've noticed multiple times that the first request after an application is deployed to Azure, or after a long period in which no requests were made to the application, takes significantly longer to get a response.
As far as I remember, it was related to the start-up time of the site (if you're using an App Service on a Windows-based underlying VM, it still uses IIS as a reverse proxy).
I solved the issue by configuring health checks that occasionally perform requests to the app.
Also, in addition to Application Insights (which logs information only after the application has started), you can try the tools listed here to see more information.
Hope it helps!
1.
The way the request timeline is displayed gives you only the time-span for the whole request (37.5s) and the individual time-spans for each dependency.
A dependency is another call that reports its run time to Application Insights.
In your example, each call to the database is automatically tracked as a dependency. The code running after each database call is not, though.
So, for example, requesting a database entry that takes 200ms, then issuing a Thread.Sleep of 2 seconds, then requesting another database entry that takes 300ms would result in a 2-second gap between the two database-call dependencies, which will be listed with 200ms and 300ms respectively.
You can use TelemetryClient.TrackDependency to wrap parts of your own code in their own dependencies. This way you will see your own code as entries on the request timeline.
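The question is ASP.NET Core, where you would call TelemetryClient.TrackDependency from C#; purely to illustrate the pattern (not the author's code), here is the same idea sketched with the Node.js applicationinsights SDK, with the operation names, connection-string variable and resizeImage helper all made up:

import * as appInsights from 'applicationinsights';

appInsights.setup(process.env.APPINSIGHTS_CONNECTION_STRING).start();
const client = appInsights.defaultClient;

async function resizeImage(_blobName: string): Promise<void> {
  // Placeholder for the slow thumbnail work you want to see on the timeline.
}

export async function createThumbnail(blobName: string): Promise<void> {
  const started = Date.now();
  let success = true;
  try {
    await resizeImage(blobName);
  } catch (err) {
    success = false;
    throw err;
  } finally {
    // Appears as its own bar on the request timeline, closing the "gap".
    client.trackDependency({
      target: 'thumbnail-generation',
      name: 'resizeImage',
      data: blobName,
      duration: Date.now() - started,
      resultCode: success ? 0 : 1,
      success,
      dependencyTypeName: 'InProc',
    });
  }
}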
2.
Depending on your Entity Framework database initializer, EF will connect to the master db on context creation (e.g. to create the database if it does not exist).
3.
Try tracking your own code to find out which parts of it are slow. EF has a few performance issues to consider; try to understand the performance caveats of the libraries you use. If your calls are inconsistently slow, it might be an issue with resources being over-utilized or caches being emptied too early (as with EF warm vs. cold queries).

Need Firebase Database behaviour clarification when inside a Service

I am testing a feature which requires a Firebase database write to happen at midnight everyday. Now it is possible that at this particular time, the client app might not be connected to the internet.
I have been using Firebase with persistence off as that can potentially cause issues of stale data in another feature of mine.
From my observation, if I disconnect the app before the write and keep it this way for a minute or so, Firebase eventually reconnects when I turn on the connectivity again and performs the write.
My main questions are:
Will this behaviour be consistent even if the connectivity is lost for quite a few hours?
Will Firebase timeout?
Since it is inside a forever running service, does it still need persistence to ensure that writes are not lost? (assume that the service does not restart).
If the service does restart, will the writes get lost?
I have some experience with this exact case, and I actually do NOT recommend the use of a background service for managing your Firebase requests. In fact, I wouldn't recommend managing Firebase requests at all (explained later).
Services, even though we can make them run forever, tend to get killed by the system quite a lot, actually (unless you set their CPU priority to a higher level, but even then the system might still kill them).
If you issue a Firebase write call (of any kind) and your service gets killed, the write will get lost, as you said. Unless you create a sophisticated manager in which you store requests that haven't been committed to internal storage and load them up each time the service restarts - but that is very dirty work to do, considering that the Firebase developers took care of us and made .setPersistenceEnabled(true) :)
I know you mentioned you don't want to use it, but I STRONGLY advise you to do so. It works like a charm, no services required, and you don't have to worry at all about managing your write requests. Perhaps it would be better to solve the other issue you have in order to make this possible.
To sum up, here's what I would do in your case:
I would call .setPersistenceEnabled(true) somewhere at the beginning (extending the Application class and calling it from onCreate() is recommended)
I would use Android's AlarmManager and register a BroadcastReceiver to receive an alarm at midnight (repetitive or not - you decide)
Inside the BroadcastReceiver, I'd simply call a write function of Firebase and worry about nothing :)
To make sure I covered all of your questions:
will this behaviour be consistent....
No. Case scenario: at midnight your service has successfully received the call and is now trying to write to Firebase. If, for example, the user has no connection until 6 AM (just a case scenario), there is a very high chance that the system will kill the service during those 6 hours and your write will get lost. Flight time, or staying in an area with no internet coverage, are both examples of risky scenarios that could break your app's consistency.
Will Firebase Timeout?
It definitely could, as mentioned. I wouldn't take the risk and ship an 80-90% working app. Use persistence and have a 100% working app :)
I believe I have covered the rest of the questions.
Good luck!

ISessionFactory recreates after app pool recycles

My shared hosting provider set up IIS to recycle the app pool after 3 minutes of idle time.
So my session factory is often recreated (at application startup). As I have about 70-100 entities, it takes about 2-5 seconds to construct the factory, so the cold start of my application is rather long. I don't have access to the IIS settings.
You can offset a lot of the cost of setting up your factory by generating your proxies at build time instead of at runtime. This article explains the steps.
Being realistic, the simplest change is to ask that the app-pool isn't recycled so frequently (since this is an expensive operation for your application). I'm sure they've set the timeout very low as a "performance" setting, but really this is generating work and slowing things down.
You might not have access to the IIS settings directly, but this shouldn't stop you from contacting your supplier's technical support and getting it resolved.
If you are in a full-trust environment (doubtful, but the provider may be willing to work with you on this), you can try serializing your configuration so it doesn't need to be rebuilt each time. Merging all your entity mappings into a single XML doc can also help (just do this as a build step so it's not a nightmare to work with the mappings).
More info here: http://nhibernate.info/blog/2009/03/13/an-improvement-on-sessionfactory-initialization.html
Have you tried to stop your site from going idle in the first place? I use Uptime Robot, which is free and pings your site every 5 minutes. The benefit of this service is that it only requests the headers of the page you set up as a monitor and therefore does not affect logging such as Google Analytics.
That said, you will need to test this to see when your app does indeed recycle and whether Uptime Robot works with your shared hosting provider. The best way is to log every time the session factory is rebuilt.
There is not much you can do; an app pool recycle shuts down your app.
I guess you could try to fool the recycler by having the application do something every 2 minutes 45 seconds.
