With classic REST APIs it is good practice to add a version to the API URL. This version can, for instance, be embedded in the path (api.myservice.com/v1/dataset) or passed as a parameter (api.myservice.com/dataset?v=1). When a new version of the API is deployed, it can live side by side with the old version for as long as needed. Old versions of the API can be marked as deprecated and eventually removed.
This gives the frontend a grace period to adapt to the new version of the API, so there is no downtime between the backend being updated, the frontend developer adapting to the change, and the frontend user updating.
When we use Firestore or any similar realtime database, the frontend can have direct access to the database. The structure of the database can change: columns or tables can be renamed, moved or deleted, and there is no API that abstracts this underlying structure away from the frontend. So, what is the best way to add some kind of versioning to frontend-backend communication when using realtime databases?
Possible solutions:
Use a REST API anyway as an extra layer with a version included. Disadvantage: with this approach you lose the advantages of a realtime database, such as realtime updates and user management.
Move the abstraction layer to the frontend and expose a minimum required version. If the frontend does not meet this version, the frontend is forced to update. Disadvantage: the frontend is trusted to do the right thing, instead of enforcing it.
Add the version to the project name or the table names. This results in a lot of extra redundancy, where data has to be kept in sync constantly. This may lead to extra costs and is prone to errors.
Any other options?
None of these options seems like a good idea to me yet. What is the best solution if a frontend has direct access to the data? I'm aware that this question can quickly be flagged as 'too broad'. If it is, please advise me on how to focus my question.
The typical approach I take is to put a version number for the data model in the database. Whenever a schema change to the database is required, I check whether it can be kept backwards compatible. If not, I increment the version number.
Either way, the schema is encoded in the security rules of my database. This means that there's no way for a client to write invalid data, as it will be rejected by the security rules.
The clients read the version number and show a "Please upgrade" message when the version number is higher than the one they were built for.
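As a rough illustration of the client side of this, here is a minimal sketch using the Firebase v9 Web SDK. It assumes the model version lives in a meta/schema document, and the showUpgradeBanner hook is hypothetical:

```typescript
import { initializeApp } from "firebase/app";
import { doc, getFirestore, onSnapshot } from "firebase/firestore";

// Highest data-model version this client build understands.
const SUPPORTED_SCHEMA_VERSION = 3;

const app = initializeApp({ /* your Firebase config */ });
const db = getFirestore(app);

// Watch a metadata document (path is an assumption) holding the current
// model version, so already-running clients react to a bump immediately.
onSnapshot(doc(db, "meta", "schema"), (snapshot) => {
  const current = snapshot.get("version") as number;
  if (current > SUPPORTED_SCHEMA_VERSION) {
    showUpgradeBanner(); // hypothetical UI hook for the "Please upgrade" prompt
  }
});

function showUpgradeBanner(): void {
  // Replace with your real UI; a console warning keeps the sketch runnable.
  console.warn("A newer data model is in use. Please upgrade the app.");
}
```

Using a snapshot listener rather than a one-time read means clients that are already open get the prompt as soon as the version is bumped, not only on their next launch.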
There is a warning in the Firebase best practices documentation against using Firebase with multi-tenant applications: https://firebase.google.com/docs/projects/learn-more#multi-tenancy
This is what I am most concerned about: "Multi-tenancy can lead to serious configuration and data privacy problems, including unintended issues with analytics aggregation, shared authentication, overly-complex database structures, and difficulties with security rules." Identity Platform looks like it should cover everything except analytics aggregation and database structures, but I can control analytics logging, and my database structure is simple enough, being divided cleanly by tenant. My application is one common application, but has tenanted client data and users (managed via Google Identity Platform).
There is also plenty of official Google documentation supporting the use of Firebase for multi-tenancy: https://cloud.google.com/identity-platform/docs/multi-tenancy-authentication. There are also dozens of examples out there showing how to set up multi-tenancy with Firebase and Google Identity Platform.
Do you know why they would have these conflicting recommendations and examples? Does use of Google Identity Platform fix the core security deficits mentioned in the warning? It has me strongly considering abandoning Firebase, which would be a shame given the features it gives me.
The recommendation is not bound to Firebase, or GCP, or Google; it's generic. If you put all your data in the same bag, with only logical isolation, the separation is only logical, not as strong as separate projects.
Thus, it's easy to make a mistake and read, update, delete, or otherwise make a mess of all the tenants' data at once. In case of an attack, a leak, or a major bug, you can reduce the blast radius by having several small tenants.
It's a tradeoff between more management to perform (because you have a lot of tenants) and a higher risk (in a multi-tenant project, a crash is dramatic). It also depends on your application type and context. It's a recommendation, not an obligation!
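On the shared-authentication point specifically, Identity Platform's tenants give you an enforced boundary rather than a purely logical one. A minimal sketch with the Firebase v9 Web SDK, where the tenant ID and credentials are placeholders:

```typescript
import { initializeApp } from "firebase/app";
import { getAuth, signInWithEmailAndPassword } from "firebase/auth";

const app = initializeApp({ /* your Firebase config */ });
const auth = getAuth(app);

async function signInToTenant(tenantId: string, email: string, password: string) {
  // Scope this Auth instance to one Identity Platform tenant; the sign-in
  // below only searches that tenant's user pool.
  auth.tenantId = tenantId;
  return signInWithEmailAndPassword(auth, email, password);
}

// "acme-a1b2c" is a placeholder; real tenant IDs come from the Identity
// Platform page in the Cloud console or from the Admin SDK.
signInToTenant("acme-a1b2c", "user@example.com", "correct-horse").catch(console.error);
```

Note that this only addresses the shared-authentication concern from the warning; the analytics-aggregation and security-rules concerns still depend on how cleanly your data is divided by tenant.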
I have the following applications:
a Magento 2 e-commerce site with a RESTful API, in Docker
some Node.js microservice RESTful APIs, in Docker
My question: if I want to develop an e-commerce + user portal frontend site, which framework should I pick, Next.js or Gatsby?
given that:
I have over 30,000 users
I need to serve more than 10 locales, and the number may keep increasing
I need to serve more than 10 countries, and the number may keep increasing, and each of those may have many different localisation settings
I have over 10 stores representing 10 countries; each of them has more than 20 products, and the number may keep increasing
I do not agree with Zain Ui Hassan's answer.
With Gatsby, the amount of asynchronous data your site will have, the number of pages, and how dynamic the data is don't matter. In the end, it is a React site with all the content already fetched and served, so it's blazing fast. Moreover, you have a bunch of official plugins that cover your needs.
You will be able to handle an AWS S3 deploy.
Multi-language support, with redirections and dynamic routes included.
CMS-fetched data, with support for multiple CMSs (Contentful, DatoCMS, Strapi, Netlify CMS, markdown files, JSON files, a custom database, etc.).
Lambda support.
You don't need a Node server to deploy or view a Gatsby website, since it renders static HTML; all pages are created at build time, so you don't need any extra configuration, just a server. Next.js needs server-side customization and rendering.
In addition, it's SEO-friendly, and you can easily customize your components to render the proper country-oriented data (even after the page has been created).
In the end, it's completely up to you, but in my opinion you will need fewer configurations and have less trouble using Gatsby.
In terms of community, both have great support so it's a tie.
Personally, I think the only area where Next.js would be the better option at the moment is scalability, since Gatsby, especially in large-scale projects, will increase the deploy time (up to 10 minutes, which is not ideal). But I know they are working on improving this by implementing incremental builds; I reduced my deploy time from 8 minutes to 2.
but I do want to know more: what if a page path depends on the user, e.g. /myinbox/letter-from-tom-to-stanley? Each user may have different messages in their inbox, and each inbox may have a different path depending on the user.
This depends on your code logic rather than the framework used. Of course, you can achieve this with both Gatsby and Next.js. I'm doing similar stuff with Gatsby and I have no issues; you will sometimes need backend logic (database stuff), but it's completely doable.
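To make that concrete, here is a minimal sketch of Gatsby's client-only route pattern for the inbox example. The /myinbox path and component names are hypothetical, and the data fetching is stubbed out:

```typescript
// gatsby-node.ts — hand everything under /myinbox/ to the browser router,
// so per-user paths like /myinbox/letter-from-tom-to-stanley resolve client-side.
import type { GatsbyNode } from "gatsby";

export const onCreatePage: GatsbyNode["onCreatePage"] = ({ page, actions }) => {
  if (page.path.startsWith("/myinbox")) {
    page.matchPath = "/myinbox/*";
    actions.createPage(page);
  }
};
```

```typescript
// src/pages/myinbox.tsx — render the letter that matches the URL segment.
import * as React from "react";
import { Router } from "@reach/router"; // ships with Gatsby

const Letter = (props: { path?: string; letterId?: string }) => (
  // A real app would fetch the current user's message for `letterId` here.
  <p>Showing {props.letterId}</p>
);

export default function MyInbox() {
  return (
    <Router basepath="/myinbox">
      <Letter path=":letterId" />
    </Router>
  );
}
```

Everything under /myinbox/ is then matched in the browser, so each user can see their own letters without a page existing at build time.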
Answering your question: it's a personal choice and you can meet your requirements with both (as the other question shows). I would choose Gatsby for its SEO orientation (conversion), its ease of maintenance when well structured (data entities in a CMS, etc.), its plugin support, and the minimal server configuration, since you are only uploading a compiled /public folder.
Useful articles:
https://dev.to/jameesy/gatsby-vs-next-js-what-why-and-when-4al5#:~:text=JS%20is%20mainly%20a%20tool%20for%20server%2Dside%20rendered%20pages.&text=Gatsby%20can%20function%20without%20any,HTML%20page%20from%20the%20server
https://www.gatsbyjs.org/features/jamstack/gatsby-vs-nextjs
https://medium.com/frontend-digest/which-to-choose-in-2020-nextjs-vs-gatsby-1aa7ca279d8a
Gatsby is used for static websites where you don't have a lot of dynamic data. If you are building a complex website, go with Next.js.
Next.js has best-in-class "Developer Experience" and many built-in features; a sample of them (a sketch of the first two items follows after the list):
An intuitive page-based routing system (with support for dynamic routes)
Pre-rendering, both static generation (SSG) and server-side rendering (SSR) are supported on a per-page basis
Automatic code splitting for faster page loads
Client-side routing with optimized prefetching
Built-in CSS and Sass support, and support for any CSS-in-JS library
Development environment which supports Hot Module Replacement
API routes to build API endpoints with Serverless Functions
Fully extendable
https://nextjs.org/
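To make the first two items concrete, here is a minimal sketch of a per-user page combining a dynamic route with per-request server-side rendering. The /myinbox route, the fetchLetterForUser helper, and the cookie-based lookup are all hypothetical:

```typescript
// pages/myinbox/[letterId].tsx — a dynamic route rendered per request,
// so the content can depend on the signed-in user.
import type { GetServerSideProps } from "next";

type Props = { letterId: string; body: string };

export const getServerSideProps: GetServerSideProps<Props> = async (ctx) => {
  const letterId = ctx.params?.letterId as string;
  // fetchLetterForUser is a hypothetical data-access helper; a real app
  // would resolve the user from the session cookie and query a database.
  const body = await fetchLetterForUser(letterId, ctx.req.headers.cookie);
  return { props: { letterId, body } };
};

export default function Letter({ letterId, body }: Props) {
  return (
    <article>
      <h1>{letterId}</h1>
      <p>{body}</p>
    </article>
  );
}

// Stub so the sketch is self-contained.
async function fetchLetterForUser(id: string, cookie?: string): Promise<string> {
  return `Contents of ${id}`;
}
```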
I'm a junior full-stack developer and I'm working on a big (one-man) SPA project, as a challenge and a demo of what I have learned, and I have 3 questions:
Is ASP.NET Identity generally used in companies for medium/big projects, or do they usually go with a custom implementation?
I wonder if it's worth using ASP.NET Identity for user and role management, or whether it's better to create custom logic for users/roles to, I guess, learn more and have more control.
If I continue developing with Identity, will it be bad if I use it only for user and role management? I saw that it has an authentication API too, but I use OAuth2; setting up tokens was fast and it's working. So should I try to use Identity as much as I can for authentication too, to justify the use of this framework over a custom implementation?
You can answer only the first question, because the other two are too subjective. Thanks!
As ASP.NET Identity is very customizable, you can take the best of both worlds.
In my company we use a custom implementation of IUserStore, giving us the flexibility to persist user info the way we want. We don't use Entity Framework, for example, which is the default data access layer used by ASP.NET Identity.
In our case the tables are different and better match the actual user data for our application (read: business objects).
The password hashing/verification process is different too, etc.
You just need to pass an instance of your custom IUserStore to ApplicationUserManager and you are good to go.
My personal opinion is: go with ASP.NET Identity and replace just the parts you need.
EDIT:
You can also implement any of these:
IUserStore<,>
IUserLoginStore<,>
IUserClaimStore<,>
IUserRoleStore<,>
IUserPasswordStore<,>
IUserEmailStore<,>
IUserLockoutStore<,>
IUserTwoFactorStore<,>
IQueryableUserStore<,>
We use it for authentication too. Keep in mind that this is tested and will be updated. It is also well documented, and any new devs jumping on the project have a greater chance of knowing what is happening. If you go with a completely custom solution, you'll have to maintain it yourself and try to keep it updated with the latest trends/stuff.
Hope this helps you make a better decision.
It is unclear to me how the cloudControl MySQLd add-on works.
My understanding of MySQLd is that it is a MySQL server that can/will work with unlimited apps.
But since all add-ons are app-based, this could also mean that I cannot use the same MySQLd server with multiple apps.
Could anyone please help me understand if one MySQLd instance can be used with multiple apps hosted on cloudControl?
There are two concepts on the cloudControl PaaS. Applications and deployments. An application is basically just grouping developers and deployments together. Each deployment is a distinct running version of the app from a branch matching the deployment name. More details on this can be found in the Apps, Users and Deployments documentation.
All add-ons are always per deployment. We do this because this way we can provide all credentials as part of the runtime environment, which means you don't have to keep credentials in version-controlled files. This is a huge benefit when merging between branches, because you don't risk accidentally talking to e.g. the live database from a dev deployment. Also, add-on credentials can change at any time at the add-on provider's discretion.
For this reason separation between deployments makes a lot of sense. Usually your dev deployments also don't need the same database power as the production deployment for example. So you can easily use a smaller plan or even a shared database (e.g. MySQLs) for development. You can read more about how to use this feature inside your code in the Add-on documentation.
Also, as explained earlier, add-on credentials are always provided as part of the runtime environment. Credentials can change at any time at the add-on provider's discretion. These changes are automatically provided in the environment and the app processes are restarted. If you had hard-coded the credentials, as would be required for the second app, this would mean the app would probably experience downtime.
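To illustrate what "credentials as part of the runtime environment" looks like in practice, here is a minimal Node/TypeScript sketch. The CRED_FILE variable and the JSON key names are assumptions based on the pattern described above; check the Add-on documentation for the exact names your deployment receives:

```typescript
import { readFileSync } from "node:fs";
import mysql from "mysql2/promise"; // any MySQL client works; this one is assumed

// The platform injects add-on credentials into the runtime environment.
// CRED_FILE and the key names below are assumptions for this sketch.
const creds = JSON.parse(readFileSync(process.env.CRED_FILE!, "utf8"));
const db = creds.MYSQLS; // hypothetical key for the shared MySQL add-on

const pool = mysql.createPool({
  host: db.MYSQLS_HOSTNAME,
  port: Number(db.MYSQLS_PORT),
  user: db.MYSQLS_USERNAME,
  password: db.MYSQLS_PASSWORD,
  database: db.MYSQLS_DATABASE,
});

// Nothing is hard-coded, so a credential rotation followed by a process
// restart picks up the new values without any code change or redeploy.
```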
Last but not least, it's usually very bad practice to connect to the same database from two different code bases in different repositories, which would be the reason to have a second app. This causes all kinds of potential conflicts and dependencies that make code changes and database migrations extremely hard to maintain over time. The recommended way would be to have the data owned by one code base only and provide an API to access that data from the second code base.
All this being said, it is technically possible to connect multiple deployments or even apps to the same add-on (database or anything else) but highly advised against.
If you have a good reason to connect two apps/deployments to the same database I would suggest you manually launch an RDS instance at Amazon (MySQLd is based on RDS) and provide credentials for that through the custom config add-on to both of your apps/deployments.
I hope this answers your question and also explains the reasons.
I have a customer who is being dogged pretty hard by SOX auditors regarding the deployment practices for our ASP.NET applications. Care is taken to use appropriate file- and folder-level security and authorization. Only the few with deployment privileges can copy an update to the production server (typically done using secure FTP).
However, the file/folder-level security and the requirement of secure FTP aren't enough for the bean counters. They want system logs of who deployed what and when, what version replaced what version (and why), and generally lots of other minutiae designed to keep the business from being Office Spaced (the bean counters apparently want the rounded cents all to themselves).
What are your suggestions for making the auditors happy? We don't mind throwing some dollars at this (in fact, I think we would probably throw big dollars at a good enough solution).
You probably want to look at an automated deployment solution, and you are going to need a formal change control process. We use AnthillPro; it can track what version was deployed and when.
To satisfy SOX we had a weekly meeting about what was getting deployed and when. Each deployment had to be approved by the compliance manager, and a form had to be filled out explaining what, why, and how something was being changed. Once the form was filled out, a third person had to be involved to make the change (not the person requesting or approving it; neither of them can have access to the production environment, because of the separation-of-duties rule you have to follow). The change was based solely on what was in the "change document", with no outside communication from the person making the request. Once deployed, everyone had to sign off that it was done, and when.
It shouldn't be too hard to meet the requirements. It might require some changes to your development processes, but it's definitely possible.
What you need is:
A task tracking system, showing descriptions of work, and approvals
The ability to link documents, as well as packages, to this system.
A test system to try your deployments on.
Finally, all deployments must be done via installation packages or other scripted means.
Any manual changes must be documented and approved too.
Also turn on auditing, run regular security tests, and document almost everything.
All of this is possible with a number of systems; the biggest change will be to your internal processes. A rough sketch of the audit-logging piece follows below.
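As a minimal illustration of capturing "who deployed what when", here is a sketch of a deploy wrapper that appends an audit record before handing off to a scripted deployment. The file path, field names, and deploy.sh script are assumptions; a real SOX setup would write to tamper-evident storage owned by the change-control system:

```typescript
import { appendFileSync } from "node:fs";
import { execSync } from "node:child_process";
import os from "node:os";

// Append-only audit record for each deployment: who, what, when, why.
interface DeployRecord {
  who: string;
  version: string;      // e.g. a git tag or package version
  replaced: string;     // the version previously in production
  changeTicket: string; // the approved change-request ID
  when: string;
}

function recordDeployment(version: string, replaced: string, changeTicket: string): void {
  const record: DeployRecord = {
    who: os.userInfo().username,
    version,
    replaced,
    changeTicket,
    when: new Date().toISOString(),
  };
  // One JSON object per line; path is illustrative only.
  appendFileSync("/var/log/deployments.jsonl", JSON.stringify(record) + "\n");
}

// Usage: record first, then hand off to the scripted deployment step.
recordDeployment("v2.4.1", "v2.4.0", "CHG-1042");
execSync("./deploy.sh v2.4.1", { stdio: "inherit" });
```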
You might want to take a look at the auditing features provided by NTFS.