I plan to host a Next.js website on DigitalOcean's App Platform and recently found out that hosting a static site on DigitalOcean is free (up to three websites). Would this also apply to websites that use Incremental Static Regeneration?
Unfortunately, no. Incremental Static Regeneration requires a running Next.js server (which is based on Node.js) that knows when to regenerate (recreate) your pages. Without that server it's impossible to create new files or update old ones.
Hosting just a static site usually means you only serve static files, with no server running at all. In Next.js terms, that's the next export command, which generates static files, but those won't be regenerated unless you run a server.
If your site does not change frequently, you could just rebuild it every couple of days manually, or write a script to do it, as sketched below.
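As a minimal sketch (assuming the site can be exported with next build && next export, and with the deploy step left as a placeholder for whatever command uploads the out folder to your host), such a rebuild script could look like this, run for example from cron:

// rebuild.js - minimal scheduled-rebuild sketch; the deploy command is a placeholder
// Note: newer Next.js versions replace `next export` with output: "export" in next.config.js.
const { execSync } = require("child_process");

const run = (cmd) => {
  console.log(`> ${cmd}`);
  execSync(cmd, { stdio: "inherit" });
};

run("npx next build");
run("npx next export");                                  // writes the static site to ./out
run("echo 'deploy ./out with your own tooling here'");   // placeholder deploy step

// cron example: rebuild every two days at 03:00
// 0 3 */2 * * node /path/to/rebuild.js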
Related
I came across a task that requires dynamic generation of sitemap.xml files on the server side in Next.js. This can be done, and there are libraries for it, such as https://www.npmjs.com/package/sitemap, and even a video https://www.youtube.com/watch?v=rIh-VelVzgc&t=2s. However, those create the sitemap on the fly, meaning no physical files are created in the project folder, and I need PHYSICAL FILES to exist after each generation. If I'm not mistaken, there is no access to the file system on the server side, so we would not be able to generate files and place them in the public folder. Still, maybe someone has come across such a task; I would be grateful for any ideas. Internet searches have turned up nothing useful.
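For reference, a minimal sketch of writing a physical sitemap.xml with that sitemap package, assuming the process actually has write access to the project's public folder (which is not the case on read-only or serverless hosts); the hostname and URL list are placeholders:

// generate-sitemap.js - sketch using the sitemap package to write a physical file
const fs = require("fs");
const path = require("path");
const { SitemapStream, streamToPromise } = require("sitemap");

async function writeSitemap(urls) {
  const stream = new SitemapStream({ hostname: "https://example.com" }); // placeholder hostname
  urls.forEach((url) => stream.write({ url, changefreq: "daily" }));
  stream.end();
  const xml = await streamToPromise(stream); // Buffer containing the generated XML
  fs.writeFileSync(path.join(process.cwd(), "public", "sitemap.xml"), xml);
}

writeSitemap(["/", "/about", "/blog/first-post"]).catch(console.error);

Running a script like this at build time (or on a host with a writable file system) produces an actual file in public/, whereas the on-the-fly approach only streams the XML per request.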
I'm developing a website backed by a static site generator which will build about 100K static HTML pages. Currently, my workflow is to build the project on my local machine and use an FTP tool to upload the output folder (about 40 GB) to a remote production server, a long and painful process that can take about 24 hours.
I'm wondering if there's a recommended way to set up a better build & deployment process to make it faster and more automated?
With an extremely large number of pages, build times have been the main pain point of static-site generators. The solution is to defer generating some pages from build time to request time.
For example, you can statically generate only the top 10,000 most requested pages at build time to keep your build times low. Then, at request time, you can use Next.js and Incremental Static Regeneration to build the remaining static pages.
Let's say a request comes in for one of the other 90,000 pages you haven't statically generated. Instead of serving a pre-built page, the first request hits the server to fetch the data and generate the static page. Then that page is cached. When another person visits the page, they will see the static version (which is much faster than talking directly to the server).
You can also invalidate the cache using the revalidate flag. For example, you could make your page fetch new information every minute using revalidate: 60.
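As a sketch, the relevant pieces of such a page might look like the following; getTopSlugs and getPost are hypothetical stand-ins for your real data source, and the 10,000 cutoff is just the example number from above:

// pages/posts/[slug].js - minimal Incremental Static Regeneration sketch
// getTopSlugs and getPost are hypothetical helpers standing in for your data layer.
const getTopSlugs = async (limit) => ["hello-world"].slice(0, limit);
const getPost = async (slug) => ({ title: slug });

export async function getStaticPaths() {
  const slugs = await getTopSlugs(10000); // pre-build only the most requested pages
  return {
    paths: slugs.map((slug) => ({ params: { slug } })),
    fallback: "blocking", // other pages are generated on their first request, then cached
  };
}

export async function getStaticProps({ params }) {
  const post = await getPost(params.slug);
  return {
    props: { post },
    revalidate: 60, // allow the cached page to be regenerated at most once per minute
  };
}

export default function Post({ post }) {
  return <article>{post.title}</article>;
}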
Deploying a Next.js app to Vercel using this setup should reduce your build & deploy times to less than 10 minutes, while still creating a performant static site. Simply git push to your repository and the GitHub integration will build and deploy your application for you. No more FTP!
Gatsby recently added a feature to address this (or at least try to), called incremental builds. Basically, it caches all your data and redeploys only the changed files and code changes.
Actually, it seems to be a feature restricted to Gatsby Cloud and similar platforms, not available for a custom deploy server. Here's an example of how to achieve it on Netlify.
Before these recent changes in Gatsby's policy, you only needed to add an environment variable to your deploy command:
GATSBY_EXPERIMENTAL_PAGE_BUILD_ON_DATA_CHANGES=true gatsby build
Keep in mind that this option is not suitable for all deployment servers: if your /public folder is regenerated from scratch on each build, it won't work.
Another option is to use a huge AWS machine for the build and kill it once the deploy is done.
The Issue
I am currently in the process of integrating a pre-rendering service for SEO optimization; however, we use an Azure App Service Plan that scales up or down when necessary.
One of the steps for setting up the proper configuration requires placing an applicationHost.xdt file in the /site/ directory, which is one level above the /site/wwwroot directory where the application itself gets deployed to.
What steps should I take in order to have the applicationHost.xdt file persist to new instances spawned by the scaling process?
Steps I have taken to solve the issue
So far I have been Googling a lot, but haven't succeeded in finding much documentation on using an applicationHost.xdt file in combination with an Azure App Service Plan.
I am able to upload the file to an instance manually, but I assume that when we then scale up to more instances, the manually uploaded file will not be present on the new instance(s).
Etcetera
We are using Prerender.io as pre-rendering service.
Should an easier-to-set-up and similarly priced service be available, we would be open to suggestions, as we are in an exploratory phase regarding pre-rendering.
This shouldn't be a problem, because all files under an Azure App Service are shared between all your instances. You can check this in the Kudu wiki: Persisted files. In my test, all instances kept the file.
As for uploading the applicationHost.xdt file, you don't have to do it manually; there is an IIS Manager Site Extension that lets you very easily create XDT files, and it provides some sample XDTs for you.
I want to deploy a PHP application from a Git repository to the AWS OpsWorks service.
I've set up an App and configured Chef cookbooks so it runs the database schema creation, dumps assets, etc.
But my application has some user-generated files in a subfolder under the web root. The Git repository has a .gitignore file in that folder, so an empty folder is there when I run the deploy command.
My problem is: after generating some files (by using the site) in that folder, if I run the 'deploy' command again, OpsWorks adds a new release under the 'site_name/releases/xxxx' folder and symlinks to it from the 'site_name/current' folder.
This makes my previously generated user files inaccessible. What is the best solution for this kind of situation?
Thanks in advance for your kind answers.
You have a few different options. Listed below in order of personal preference:
Use Simple Storage Service (S3) to store the files.
Add an Elastic Block Store (EBS) volume to your server and save files to the volume.
Save files to a database (this is something I would not do myself, but the option is there).
When using OpsWorks think of replicable/disposable servers.
What I mean by this is that if you can create one server (call it server A) and then switch to a different one in the same stack (call it server B), the result of using server A or server B should not impact how your application works.
While it may seem like a good idea to save your user-generated files in a directory that is shared between different versions of your app (every time you deploy, a new release directory is generated), when you destroy your server you run the risk of destroying your files.
Benefits and downsides of using S3?
Benefits:
S3 gives you high redundancy and availability for your files.
S3 is external to your application server, so if your server dies or you decide to move it to a different region, you can continue using the same S3 bucket.
Easy to scale: you could add multiple application servers that all read and write files to S3.
Downsides:
You need extra code in your application. You will have to use the AWS API in order to store and retrieve the files. Using the S3 API is not hard, but it may require an extra step to get where you need. Take a look at the "Using an Amazon S3 Bucket" walkthrough for reference; it shows the code used to upload files to the S3 bucket, and a rough sketch follows below.
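As a hypothetical illustration (not the walkthrough's own code), storing an uploaded file in S3 with the AWS SDK for JavaScript might look roughly like this; the bucket name, key, and file path are placeholders, and the PHP SDK exposes equivalent calls:

// upload-to-s3.js - illustrative sketch, not the walkthrough's code
const fs = require("fs");
const AWS = require("aws-sdk"); // AWS SDK for JavaScript v2

const s3 = new AWS.S3(); // credentials come from the environment or the instance role

async function storeUserFile(localPath, key) {
  const result = await s3
    .upload({
      Bucket: "my-user-uploads",            // placeholder bucket name
      Key: key,                             // e.g. "uploads/avatar-123.png"
      Body: fs.createReadStream(localPath),
    })
    .promise();
  return result.Location; // keep this URL/key in your database instead of the file itself
}

storeUserFile("./tmp/avatar-123.png", "uploads/avatar-123.png")
  .then((url) => console.log("Stored at", url))
  .catch(console.error);

The point is that the application keeps only a reference (the S3 key or URL) and never relies on files surviving on the instance's local disk.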
Benefits and downsides of using EBS?
Benefits:
EBS is an "external hard drive" that you can easily mount to your machine using the OpsWorks Resource Manager.
EBS volumes can be backed up and restored.
It may be the fastest option to implement and integrate to your application.
Downsides:
You need to assign it to an instance before the instance is running.
It could be time-consuming to move from server A to server B (downtime may be required).
You cannot scale your application horizontally. While you can create copies of the EBS volume and assign them to different instances, the volumes will not be shared.
Downside of using a database?
Just do a Google search on "storing files in a database".
Take a look at Storing Images in DB - Yea or Nay?
My preferred choice would be to use S3, but ultimately this is your decision.
Good luck!
EDIT:
Take a look at this repository, opsworks-chef-cookbooks; it contains some recipes to deploy a Symfony2 application on OpsWorks. I have been using it for over a year and it works quite well.
Use Chef templates, and apply them in a recipe in the OpsWorks deploy lifecycle event.
Hello, I am an AppFog beginner, and I want to ask what happens if I upload pictures/plugins/themes via the WordPress admin. Because AppFog does not currently support a persistent file system, all the plugins/pictures/themes not in the source code will be lost. Is there any way to back up the current live system and include these files in the source code that I upload? The download source code button and the "af pull" command will only download the last source code I uploaded, not changes that were made, for example, when I installed a plugin.
You can add a helper php script to your app like this:
https://gist.github.com/4134750
You can manually download single files using af files <appname> /app/<filename>, but this would be painful for your purposes.
You would be much better served by setting up your WordPress installation to run locally using MAMP or XAMPP. Pull your app as it is from AppFog, host it locally using MAMP, make your file system changes, then push those changes to AppFog.
Here are a few reasons why making changes locally and then updating your AppFog apps is better:
If you're running multiple instances of your WordPress app, only one of them will get the installed plugin. Installing the plugin locally and pushing ensures all instances get the plugin.
It's much faster to develop and test locally, and you can see the results of your changes before impacting your live site.
Your live production site will not go down if your plugin install fails or somehow makes an unintended change. This is also true for WordPress updates: do them locally, then push to production.
If you have the changes on your local box, you can use version control to track and tag releases before updating production.
Blue-green deployments become trivial. Have two production apps, a primary and a slave app. Update your code locally, then update the slave, test it, and promote it to primary by mapping the domain to it. Then demote the previous primary to slave by unmapping the domain. The slave is always one update older, and you can switch back to it if you discover an issue with your primary.
Curating your WordPress apps this way will allow you to take advantage of the power the AppFog platform provides.
I found this "zipit" script even better than the "ls" script Sea Comet provided. It will zip up the entire live app directory, which you can then download. This way, you can make changes via the WordPress admin, get everything working the way you want, then use zipit, unzip the file, and push it to your app on AppFog, and the state is fully saved across restarts.
https://github.com/zeroecco/zipit/blob/master/zipit.php
You can find more info in this blog post over on the old PhpFog blog:
http://blog.phpfog.com/2012/11/16/how-to-download-your-entire-application-not-just-code-from-php-fog/