Generate site when headless CMS modifies database - Firebase

I've been reading about how Nuxt can generate a static site when a client makes a request to view the website. We are planning to build a headless CMS to populate the database with the data the website needs. This data will only be changed when you save it in the headless CMS.
My question is: since this data will only change when it is changed in the headless CMS, isn't it possible to just generate the site when it is modified from the headless CMS, and then serve that site to the client, to reduce server costs?
Is it possible to do this with Nuxt? Or is there any other way to do this?
We are planning on using Firebase as a backend.

There's nothing explicitly preventing Nuxt from being rebuilt each time you change an item in your DB. The part that matters is how you tell your app to rebuild itself.
By far the simplest way is using some sort of "build hook". See Netlify's docs here for a quick overview of what they are. However, this only really works if you're using a headless CMS that can send hooks on save, and a build provider that can trigger builds using those hooks.
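For example, with Firebase as the backend, a Cloud Function can listen for Firestore writes and POST to the build hook. Here's a minimal sketch, assuming Firebase Cloud Functions on a Node 18+ runtime and a hypothetical "content" collection; the hook URL is a placeholder you'd copy from your build provider's dashboard.

    // Rebuild the static site whenever CMS content changes in Firestore.
    import * as functions from "firebase-functions";

    // Placeholder: copy the real URL from your build provider's dashboard.
    const BUILD_HOOK_URL = "https://api.netlify.com/build_hooks/YOUR_HOOK_ID";

    export const triggerRebuild = functions.firestore
      .document("content/{docId}") // hypothetical collection holding the CMS data
      .onWrite(async () => {
        // An empty POST to a build hook queues a fresh build of the site.
        await fetch(BUILD_HOOK_URL, { method: "POST" });
      });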
You will absolutely save on server costs using this sort of method, but beware: if you have a lot of user generated content triggering builds, your build cost can easily outweigh the server costs. You also need to be aware that builds generally take a few minutes, so you won't see instant changes on your site.
The other option you have is forgoing static site generation in favour of SSR, which can dynamically load and render your content, completely avoiding the need to rebuild every time a DB change is made. This is what I'd consider the best alternative if you do indeed have a lot of user generated content.
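In Nuxt 3 the switch between the two modes is just configuration plus the build command (a sketch - in Nuxt 2 the equivalent is the target option):

    // nuxt.config.ts
    export default defineNuxtConfig({
      ssr: true, // render pages on each request instead of at build time
    });

With ssr: true you deploy the output of "nuxt build" behind a Node server; the prerendered static alternative comes from running "nuxt generate" instead.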
It's hard to give any further advice without knowing the specifics of the CMS or build provider though.

Related

Use different data for production and develop Firebase sites

I have a CI/CD pipeline with Google Cloud Build triggers that deploy my code to different sites depending on which branch I push to. The develop site is a live test - the final check before I merge to master, which triggers a deploy of master to the production site.
Currently, both sites use the same Firebase Firestore database, and any document changed on the develop site will also be changed on the production site.
What I want to avoid is creating another Firebase project to push the develop code to with a different database, because that means I need a separate set of credentials and would have to copy the same functions over to the new project every time I change them. That's not maintainable and is a lot of work.
What I would like is some way for the develop site to only have access to part of the firestore database, and the production site to have access to another part.
How do people do this? Is it even possible? Is there a better way? One alternative I can think of is using authentication and creating separate accounts for testing with different access permissions, but this seems like a workaround and not the ideal solution.
What you're trying to do sounds like a lot more hassle than using multiple projects, which is the documented and strongly preferred solution. Putting everything in one project is a huge anti-pattern in Firebase and Google Cloud, and it will cause you more problems in the long run, in addition to increasing the risk of catastrophic failure if you manage to misconfigure something in that one project.
It's perfectly maintainable to have multiple projects like this, if you apply some scripting to automate the work. This is very common, and I strongly suggest thinking through how this would work for you.
Your CI/CD pipeline could definitely check out your updates from source control and deploy them to whatever other project environments you have set up. It's very common to manage different credentials and configurations for use in CI/CD.
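As a sketch of how little duplication this needs with the Firebase CLI: project aliases live in .firebaserc (the project IDs below are placeholders for your own dev and prod projects), and the same codebase deploys to either one.

    {
      "projects": {
        "develop": "my-app-dev",
        "production": "my-app-prod"
      }
    }

    firebase deploy --project develop      # run by the develop-branch trigger
    firebase deploy --project production   # run by the master-branch trigger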

How do you make an R-coded program running on AWS accessible from your website?

In the creation of a simulation for our company, we coded the entire thing in R. It runs on AWS, and the consumers have been given links that route to the AWS page. Our website, however, is currently running off of Wordpress. In order for our customers to be able to access the product, we need to find a way to connect the product to the website. We would hence like to replace the current site with a new one that allows users to access our simulation from the website.
The only option we’ve come up with is to create a separate domain that has the interface built into the R program, and have a link to that domain from the current website. However, we would prefer to have a more direct solution.
Do any of you have any suggestions as to how we might achieve this?
Thanks for your time!
This answer very much depends on your code, but I think you have several options.
Run on external website
Pros:
Full control over code, easy to update without risking changes to main site
Easily accessible either by directly linking to it, or by embedding it with an <iframe> (HTML) on your main website (see the snippet after this list) - no Wordpress support required!
Cons:
Separate domain, some extra costs (?)
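A minimal sketch of that embed, with a placeholder URL standing in for wherever the app is hosted:

    <iframe src="https://simulation.example.com" width="100%" height="800" style="border: none;"></iframe>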
Shinyapps.io
Pros:
Easily publishable, often free
Cons:
Publicly accessible to more or less everyone, which might not be ideal in a business situation
Less control over the platform
EDIT: I wanted to add that you can host your own Shiny applications and build the front-end using HTML. This gives you some more control.
AWS
Pros:
You should be able to set up an instance where the simulation is run on a subdomain that is not directly tied to Wordpress, e.g. outside of the main Wordpress folder.
As I said, the ideal solution depends on your code. Does it take user input, does it need to save files often? What kind of access control do you need?

CMS - How to work with multiple environments? Do I really need them?

I've never worked with any CMS and I simply wanted to play with one. As I originally come from .NET roots, I was thinking about choosing Orchard Core CMS.
Let's imagine a very simple scenario: together with my colleague, I'd like to create a blog. As I'm used to working with web-based systems and applications for a business, for me it's kind of normal to work with a code repository, have multiple environments (dev/test/stage/prod), implement CI/CD, and adjust the database via migrations or scripts.
Now the question is: do I need all of this when working on our blog with a CMS?
To be more specific, I can ask a few questions:
Shall I create the blog using the CMS locally (my PC), create a few articles and then deploy it to the web, or should I create the blog over the internet and add articles in the prod environment directly?
How do I synchronize databases between environments (dev/prod)?
I can add that, as I do not expect many visitors on the website, I was thinking of using Orchard Core CMS together with SQLite. Also, I expect that I can customize code, add new modules, extend existing ones, etc. - not only add content (articles). Please take that into consideration when answering the question.
So basically my question is: what should be the workflow of a person who wants to create, administer and maintain a CMS (let it be a blog), as a single person or as a team?
Shall I work and create content locally, then publish it and somehow synchronize both the application and the database (the database is my main question mark - also in the context of how to do that properly using SQLite)?
Or should all the changes - code + content - simply be managed directly on a server, let's call it the production environment?
Excuse me if the question is silly or hard to understand, but I'm looking for any advice, as I really didn't find any good examples or information about this - or maybe I'm searching in totally the wrong direction.
Thanks in advance.
Great question, not at all silly ;)
When dealing with a CMS, you need to think about the data/content in very different terms from the code/modules, despite the fact that the boundary between them is not always completely obvious.
For Orchard, the recommendation is not to install modules in production, but to have a dev - staging - production type of environment: install new modules on a dev environment, test them in staging, and then deploy to production when it's safe to do so. Depending on the scale of the project, the staging may be skipped for a more agile dev to prod setting but the idea remains the same, and is not very different from any modular application.
Then you have the activation and configuration of the settings of the modules you deploy. Because in a CMS like Orchard, those settings are considered data and stored in the database, they should be handled like content. This includes metadata such as the very shape of the content of your site: content types are data.
Data is typically not deployed like code is, with staging and prod environments (although it can be, to a degree; more on that in a moment). One reason for this is that a CMS will often feature user-provided data, such as reviews, ratings, comments or usage stats. Synchronizing all that in both directions is very impractical. Another, even more important reason is that the whole point of using a CMS is to let non-technical owners of the site manage content themselves in a fast and direct manner.
The difference between code and data is also visible in the way you secure their changes: for code, usual source control is still the rule, whereas for the content, you'll setup database backups.
Also important to mention is the structure of the database. You typically don't have to worry about this until you write your own modules: Orchard comes with a rich data migration feature that makes sure the database structure gets updated with the code that uses it. So don't worry about that, the database will just update itself as you deploy code to production.
Finally, I must mention that some CMS sites do need to be able to stage content and test it before exposing it to end-users. There are variations of that: in some cases, being able to draft and preview content items is enough. Orchard supports that out of the box: any content type can be marked draftable. When that is not enough, there is an optional feature called Deployments that enables rich content deployment workflows that can be repeated, scheduled and validated. An important point concerning that module is that the deployment only applies to the subset of the site's content you decide it should apply to (and excludes, obviously, stuff like user-provided content).
So in summary, treat code and modules as something you deploy in a one-way fashion from the dev box all the way to production, with ordinary source control and deployment methods, and treat data depending on the scenario, from simple direct in production database instances with a good backup policy, to drafts stored in production, and then all the way to complex content deployment rules.

Modular Application Design

We currently have an application that is usable by several clients; it is used to download and store data from our application that they have in their environment.
We have a need to pass this application over to a developer, but at the same time we need to protect our code. The way that I see it working is that we would like to somehow consider our current app a framework, allowing another app to be created on top of it; that app may have its own screens, but re-use some of the built-in screens.
Is it possible to protect our app in such a way without rewriting everything into protected DLLs? Or should we just suck it up and share our code with consulting firms that want to build these types of apps for our clients?
If your proprietary code is entirely focused on downloading and storing data, you could create an online REST API that returns the data over the internet. The other developer could then just request the data from your servers using an HTTP call.
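A minimal sketch of that approach, assuming Node.js with Express; the route and the fetchClientData helper are hypothetical stand-ins for your own logic:

    import express from "express";

    const app = express();

    // Hypothetical stand-in for the proprietary download/store logic,
    // which never leaves your servers.
    async function fetchClientData(clientId: string): Promise<unknown> {
      return { clientId, items: [] }; // placeholder payload
    }

    // The other developer's app calls this endpoint over HTTP and only
    // ever sees the returned data, never the code that produced it.
    app.get("/api/data/:clientId", async (req, res) => {
      res.json(await fetchClientData(req.params.clientId));
    });

    app.listen(3000);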
However, if your code needs to be client-side, the only real thing you can do is compile a DLL, and even then it can be decompiled.

What is the best architecture for file management website?

Here is what I think my website should be able to provide to the user.
Ability to upload a file to the system. It should not block: the user should be able to browse other pages of the website while the upload is ongoing. Once the upload is done, the user will be notified about it.
The user should be able to view his/her uploaded files on the website.
Ability to edit files in web browsers using third-party APIs.
The number of users is going to be around 5000, and all of them might upload files at the same time, so performance should not degrade.
Where should I store these files? How do I make sure that reads and writes of files in this directory can handle concurrent user requests?
Considering the above points, what would be the best way to architect this website?
Are there any existing web frameworks that play along with this type of architecture, like Rails or Express?
If you want to have the ability to browse the site while a file is uploading, you'll want to use something on the front end that uploads asynchronously, overrides anchor tags, and fetches the next page asynchronously - there might be a library for this, but it should be easy to implement yourself with jQuery.
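As a sketch of the asynchronous part: a browser-side upload using fetch and FormData against a hypothetical /upload endpoint, which keeps the page responsive while it runs.

    // Uploads without navigating away; await the promise to notify the user.
    async function uploadFile(file: File): Promise<void> {
      const form = new FormData();
      form.append("file", file);
      const response = await fetch("/upload", { method: "POST", body: form });
      if (!response.ok) {
        throw new Error(`Upload failed with status ${response.status}`);
      }
      // On success, update a status element or show a notification here.
    }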
To make this easier (and for many other reasons), you'll almost definitely want to structure your site with an MVC (Model View Controller) architecture. Rails is structured this way, as is almost any web framework. It doesn't sound like what you're describing is better suited to Rails over PHP or Python etc., so just use whatever language or framework you (or your developers) feel most comfortable with. You might want to do some research into available plugins for editing files (it really depends on what type of files you want to edit and how) and use those to influence your decision on which language to choose as well.
With regards to storing files on your server, any logical system should suffice. Perhaps:
/username/year/month/day/myFile.txt
You'll want to do something to ensure filenames don't clash as well. And obviously you'd want a database storing the information linking files to users.
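A sketch of generating such a path with a clash-proof name, assuming Node.js; a random UUID prefix means two uploads of myFile.txt by the same user on the same day can't collide.

    import { randomUUID } from "node:crypto";

    // Builds /username/year/month/day/<uuid>-originalName
    function storagePath(username: string, filename: string): string {
      const now = new Date();
      const y = now.getFullYear();
      const m = String(now.getMonth() + 1).padStart(2, "0");
      const d = String(now.getDate()).padStart(2, "0");
      return `/${username}/${y}/${m}/${d}/${randomUUID()}-${filename}`;
    }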
