I've read about fast-render, but does it apply to the hello-world example that meteor create generates? I've deployed a blank app to *.meteor.com and the initial load time is 2-3 seconds when nothing is cached.
Related
I have a Shiny application that is published on shinyapps.io. The application has a filter, Reporter, and it loads with a default Reporter selected. How can I cache the expensive work only the first time the application loads for the default Reporter, so that every later visit uses the cache to load it?
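One approach that can work here, assuming shiny >= 1.6 (and R >= 4.1 for the native pipe), is bindCache with an app-level disk cache: the first session that renders the default Reporter fills the cache, and every later session reads from it. A minimal sketch, where make_report_data() is a hypothetical stand-in for the expensive step:

    # Sketch only: assumes shiny >= 1.6, R >= 4.1, and a hypothetical
    # expensive function make_report_data().
    library(shiny)

    # One disk cache shared by all sessions: the first visit warms it,
    # later visits read from it.
    shinyOptions(cache = cachem::cache_disk("./app-cache"))

    ui <- fluidPage(
      selectInput("reporter", "Reporter", choices = c("Default", "Other")),
      plotOutput("report_plot")
    )

    server <- function(input, output, session) {
      output$report_plot <- renderPlot({
        plot(make_report_data(input$reporter))  # hypothetical expensive step
      }) |>
        bindCache(input$reporter)  # result cached per Reporter value
    }

    shinyApp(ui, server)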
I have a Shiny application that needs to load some fairly large data sets into memory. To save users some time when browsing to the dashboard, I set app_idle_timeout to zero (using the community version of Shiny Server), as suggested in the docs. This works as expected.
However, the underlying data needs to be refreshed daily. What I would like to do, therefore, is set up a cron job that reboots the Shiny server (or stops the relevant sessions) every day at 3am and then automatically initiates a new R session, so that the data in global.R is loaded into memory and the dashboard is ready to consume instantly.
What I do not understand is how to initiate a particular Shiny application from the terminal, i.e. how to mimic what happens when someone browses to the app's URL on the Shiny server.
Any suggestion would be greatly appreciated.
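Browsing to the app is just an HTTP request, so one common workaround, sketched below under the assumptions that Shiny Server runs under systemd and the app is served at /myapp on the default port, is to restart the server from cron and then "visit" the app with curl:

    # crontab entries for root (sketch): restart Shiny Server at 3am,
    # then warm the app a few minutes later
    0 3 * * * systemctl restart shiny-server
    5 3 * * * curl -s http://localhost:3838/myapp/ > /dev/null

The curl request is what makes Shiny Server spawn the R process and source global.R, so the data is loaded before any real user arrives.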
I have a locally hosted Shiny application. The application pulls data from a data feed and allows plotting and elementary stats. However, I am having trouble getting this app to refresh. I tried setting Task Scheduler to run the following every 6 hours:
ip="10.30.27.127"
runApp("//COB-LOGNETD-01/LoggerNet/R_Scripts/shiny",host=ip,port=4438)
However, although this executes, the app does not refresh. Is there a way to get the Shiny app to refresh automatically?
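Running runApp again from Task Scheduler starts a second server; it does not reload data inside the session that is already running. The usual fix is to re-read the feed from within the app on a timer, e.g. with reactivePoll. A sketch, where read_feed() is a hypothetical stand-in for the data-feed import:

    library(shiny)

    ui <- fluidPage(plotOutput("feed_plot"))

    server <- function(input, output, session) {
      # Re-read the feed every 6 hours inside the running app.
      feed_data <- reactivePoll(
        intervalMillis = 6 * 60 * 60 * 1000,
        session = session,
        checkFunc = function() Sys.time(),  # timestamp always differs, so
        valueFunc = function() read_feed()  # valueFunc re-runs each interval
      )

      output$feed_plot <- renderPlot(plot(feed_data()))
    }

    shinyApp(ui, server)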
Can anyone share a strategy for performing a bulk import of posts on WordPress that may take about 10-15 minutes? I have tried inserting the posts in a loop, but it gets interrupted halfway through with a 500 Internal Server Error.
I have tried the same script on our local host and it works fine.
The problem occurs on shared hosting, where the provider has put limits on resources.
I am looking for a strategy that does the batch processing from a single click, where each batch is processed after a specified time gap so that it does not put a continuous load on server resources. If you have a better idea, please share.
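If the host allows SSH and WP-CLI, one way to sidestep the web-request limits entirely is to run the loop outside PHP's request cycle and pause between inserts. A sketch, assuming a hypothetical posts.csv with one title per line:

    # Sketch: insert posts one at a time with a pause, so no single
    # web request has to survive the 10-15 minute import.
    while read -r title; do
      wp post create --post_title="$title" --post_status=publish
      sleep 2  # gap between inserts to stay under shared-hosting limits
    done < posts.csv

Each wp invocation is its own short-lived PHP process, so the 500 error from one long-running request never comes into play.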
I have been using Meteor for quite a while and have been deploying apps to *.meteor.com. However, recently, after updating my app to Meteor v0.8 and the new collectionFS, the terminal states that the app has been deployed to whatever.meteor.com, but when I go to the site I see Meteor's "Site is down. Try again later." message. I have narrowed it down to the new collectionFS package causing the problem, since my old app with the old collectionFS deploys fine. Any thoughts?
EDIT
The problem was due to the long startup time caused by my collectionFS path: definition.
There are several reasons why your site may not load when being deployed.
Site Inactivity
The meteor deploy service shuts your site down if it hasn't been accessed in a while, and it takes a while to start back up when it is next requested; during that window you'll see that message.
Within a few minutes of the first request, you should see the site come back up.
For more information, see this answer: https://stackoverflow.com/a/19072230/586086
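If you need the site to stay warm, one crude workaround (a sketch; yoursite is a placeholder) is to request it on a schedule so it never idles out:

    # crontab entry (sketch): ping the deployment hourly to keep it warm
    0 * * * * curl -s http://yoursite.meteor.com/ > /dev/null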
Excessive Resource Use
Another reason your site can refuse to deploy is excessive resource use: if your app takes more than 4 minutes to start or uses an excessive amount of CPU, it will get killed. Is it doing anything resource-intensive like that? For initializing really big databases, do the initialization locally and copy the contents across using the URL from meteor mongo -U yoursite.meteor.com.
I had to do this for the demo app for meteor-autocomplete. See the file upload-db.sh.
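The general shape of that approach is roughly the following sketch (not the actual script; host, port, user, password, and db are placeholders taken from the temporary mongodb:// URL that meteor mongo -U prints):

    # Dump the local dev database (meteor's mongod listens on port 3001)...
    mongodump --host 127.0.0.1:3001 --db meteor --out ./dump
    # ...then restore it to the deployed site using the credentials parsed
    # from the mongodb://user:pass@host:port/db URL (valid only briefly).
    mongorestore --host <host>:<port> -u <user> -p <pass> --db <db> ./dump/meteor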
I had the same error, but the following steps deployed the app successfully:
    meteor login
    meteor deploy <available-meteor-subdomain>
I know this is old, but I was just having the same issue, and removing the collectionFS package solved my problem immediately, in case that helps anyone.