Firebase payment gateways?

I'm currently evaluating whether Firebase will be suitable for an app I am making. The only potential sticking point I have found is taking payments - what are the options currently available?

Firebase is a real-time data store, focused on lightning-fast, scalable solutions for sharing data between hundreds to millions of clients simultaneously. It does not offer any payment processing solutions internally.
A third party service like Stripe will integrate quite easily with Firebase, and tools like Zapier can help with pushing data from Stripe back into Firebase upon completion of transactions.
Generally, the process looks something like this:
User initiates transaction on your site
Client code sends them to Stripe to enter their CC info
Client code obtains a token representing the secure transaction
A server process is notified by Stripe when the transaction is validated
The server submits the payment authorization with the token
Stripe sends a transaction receipt to the server process or Zapier, which would be stored back in Firebase
An advantage of this approach is that you are not storing any credit card or other sensitive data, and therefore are not subject to PCI compliance and the stringent bank/e-commerce regulations that come with it.
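As a rough server-side sketch of steps 4-6 (TypeScript/Node, assuming the legacy Stripe token-and-charge flow, a recent stripe-node client, a STRIPE_SECRET_KEY environment variable, and a Realtime Database path chosen purely for illustration):

import Stripe from "stripe";
import * as admin from "firebase-admin";

// Placeholder configuration: the env var name and database path are assumptions.
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY as string);
admin.initializeApp();

// Charge a client-side token and store the receipt in Firebase (steps 4-6).
async function chargeAndRecord(token: string, amountCents: number, userId: string) {
  const charge = await stripe.charges.create({
    amount: amountCents,          // amount in the smallest currency unit
    currency: "usd",
    source: token,                // token obtained by the client (step 3)
    description: `Order for user ${userId}`,
  });

  // Only the receipt is stored; no card data ever touches your servers.
  await admin.database().ref(`receipts/${userId}/${charge.id}`).set({
    amount: charge.amount,
    status: charge.status,
    created: charge.created,
  });

  return charge.id;
}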


Can I use Firebase to create the following backend (is it possible)?

(Updated description)
Frontend: Android
Core requirements: I would like to write my own code and have it executed on the server. I want the whole backend to be automated (no admin creating tables in a database and inserting records). I still want to benefit from some basic BaaS functions like sending notifications to users, server maintenance, etc. to speed up the MVP development process.
Description of MVP functionality - survey app for entrepreneurs:
An entrepreneur adds a survey and information about it (questions, possible answers). It is sent to the server and saved. There are different variants of surveys (single choice, multiple choice, open questions, etc.), so a specific document has to be created automatically by the backend code. Similarly, the creation of the document for responses has to be handled by the backend, and the same goes for the document holding the final results of the survey research.
The respondent receives a notification about an available survey. The mobile app retrieves information about the survey from the server and the respondent completes the survey.
The application sends the respondent's responses to the server, and the server saves the information.
X respondents perform points 2 and 3.
When the survey is completed (the number of respondents set by the entrepreneur is reached), the server processes the data collected from all respondents and saves the results of the research (in the appropriate document).
The entrepreneur receives a notification about the completed research. The application downloads the results from the server.
Additional requirements:
Server has to be able to serve many entrepreneurs and respondents at the same time without any problems like data corruption.
No admin needed for creating tables or inserting records - Backend is 100% automated.
Certainly!
You could use your admin client side of the application to upload the questions, corresponding answers, answer response limit, and completion flag (Step 1).
You could then retrieve the data from your user side of the client app from Firebase Firestore, and have the users complete the surveys and upload the answers back to Firebase Firestore. (Steps 2 & 3).
Step 5 could be achieved with Firebase Cloud Functions: a function listens for updates to the survey's Firestore document, and when the response count reaches the limit it marks the survey as complete. Step 6 could also be handled in the same Cloud Function by sending a notification via FCM to your admin client.
I know this answer doesn't go into any code specifics, just wanted to let you know that this is most certainly possible with Firebase :)
I would certainly recommend creating an admin client app in addition to the user client app, rather than placing them in the same app!
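For what it's worth, a minimal sketch of the Cloud Function behind steps 5 and 6 (TypeScript, first-generation Firestore trigger) could look like the following. The surveys/{surveyId} document layout, the responseCount/responseLimit/completed field names, and the FCM topic are assumptions for illustration only, not a required schema:

import * as functions from "firebase-functions/v1";
import * as admin from "firebase-admin";

admin.initializeApp();

export const onSurveyResponse = functions.firestore
  .document("surveys/{surveyId}")
  .onUpdate(async (change, context) => {
    const after = change.after.data();

    // Only act when the limit is reached and the survey isn't already closed.
    if (after.completed || after.responseCount < after.responseLimit) {
      return null;
    }

    // Mark the survey as complete (Step 5).
    await change.after.ref.update({ completed: true });

    // Notify the entrepreneur via FCM (Step 6); topic naming is an assumption.
    await admin.messaging().send({
      topic: `survey-owner-${context.params.surveyId}`,
      notification: {
        title: "Survey complete",
        body: "Your survey has reached its response limit.",
      },
    });
    return null;
  });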

Microservices client acknowledgement and Event Sourcing

Scenario
I am building a courier service system using microservices. I am not sure about a few things, and here is my scenario:
Booking API - this is where the customer places an order.
Payment API - this is where we process the payment against a booking.
Notification API - this service is responsible for sending notifications after everything is completed.
The system uses an event-driven architecture. When a customer places a booking order, I commit a local transaction in the Booking API and publish an event. The Payment API and Notification API are subscribed to their respective events. Once done, the Payment and Notification APIs need to acknowledge back to the Booking API.
My questions are:
After publishing the event, my booking service can't block the call and returns to the client (front end). How will my client app check the status of the transaction, or know that the transaction is completed? Does it poll every couple of seconds? Since this is a distributed transaction, any service can go down and fail to acknowledge back. In that case, how would my client (front end) know, since it will keep on waiting? I am considering sagas for distributed transactions.
What's the best way to achieve all of this?
Event Sourcing
I want to implement event sourcing to keep a complete trail of the booking order. Do I have to implement this in my Booking API with an event store? Or is the event store shared between services, since I am supposed to capture all the events from the different services? What's the best way to implement this?
Many Thanks,
The way I visualize this is as follows (influenced by Martin Kleppmann's talk here and here).
The end user places an order. The order is written to a Kafka topic. Since Kafka has log-structured storage, the order details are persisted in minimal time. It's an atomic operation ('A' in 'ACID') - all or nothing.
Now, as soon as the user places the order, the user would like to read it back (read-your-writes). To achieve this we can write the order data to a distributed cache as well. Although a dual write is not usually a good idea, since it may cause partial failure (e.g. writing to Kafka succeeds but writing to the cache fails), we can mitigate this risk by ensuring that one of the Kafka consumers also writes the data to a database. So, even in the rare scenario of a cache failure, the user can eventually read the data back from the DB.
The status of the order in the cache, as written at the time of order creation, is "in progress".
One or more Kafka consumer groups are then used to handle the events: the payment and notification are processed, and the final status is written back to the cache and database.
A separate Kafka consumer then receives the responses from the payment and notification APIs and writes the updates to the cache, the DB and a websocket.
The websocket then updates the UI model, and the changes are then reflected in the UI through event sourcing.
Further clarifications based on comment
The basic idea here is that we build a cache, using streaming, for every service with the data it needs. For example, the order service needs feedback from the payment and notification services. Therefore, we have these services write their responses to some Kafka topic, which has consumers that write the responses back to the order service's cache.
Based on the ACID properties of Kafka (or any similar technology), the message will never be lost. Eventually we will get all or nothing; that's atomicity. If the order service fails to write the order, an error response is sent back to the client synchronously, and the user can retry after some time. If the order service succeeds, the responses from the other services must eventually flow back to its cache. If one of the services is down for some time, the response will be delayed, but it will be sent eventually when the service resumes.
The clients need not poll. The result is propagated to them through streaming over a websocket. The UI page listens to the websocket: as the consumer writes the feedback to the cache, it can also write it to the websocket, which notifies the UI. Then, if you use something like Angular or ReactJS, the appropriate section of the UI can be refreshed with the value received on the websocket. Until that happens, the user keeps seeing the status "in progress", as written to the cache at the time of order creation. Even if the user refreshes the page, the same status is retrieved from the cache. If the cache value expires under an LRU policy, the same value will be fetched from the DB and written back to the cache to serve future requests. Once the feedback from the other services is available, the new result is streamed over the websocket; on page refresh, the new status is available from the cache or the DB.
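As a deliberately simplified sketch of the consumer side described above (TypeScript with kafkajs, ioredis and ws; the topic name, cache key scheme and message shape are assumptions):

import { Kafka } from "kafkajs";
import Redis from "ioredis";
import { WebSocketServer } from "ws";

const kafka = new Kafka({ clientId: "order-status-updater", brokers: ["localhost:9092"] });
const redis = new Redis();                       // cache holding the latest order status
const wss = new WebSocketServer({ port: 8081 }); // UI clients subscribe here

async function run() {
  const consumer = kafka.consumer({ groupId: "order-status" });
  await consumer.connect();
  // Payment and notification services write their outcomes to this topic (assumed name).
  await consumer.subscribe({ topic: "order-results", fromBeginning: false });

  await consumer.run({
    eachMessage: async ({ message }) => {
      const update = JSON.parse(message.value?.toString() ?? "{}");
      // 1. Update the cache (and, in a real system, the database) ...
      await redis.set(`order:${update.orderId}:status`, update.status);
      // 2. ... then push the new status to any connected UI clients.
      for (const client of wss.clients) {
        if (client.readyState === 1 /* OPEN */) {
          client.send(JSON.stringify(update));
        }
      }
    },
  });
}

run().catch(console.error);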
You can pass an identifier back to the client once the booking is completed, and the client can use this identifier to query the status of the subsequent actions, provided you can connect them on the back end. You can also send a notification back to the client when the other events are completed. You can do long polling, or you can do notifications.
Thanks skjagini. Part of my question is how to handle the case where the other microservices don't get back in time, or never do. Let's say the Payment API has finished its work and charged the client, but didn't notify my order service in time, or only after a very long time. How does my client wait? If we time out the client, the backend may still have processed the payment after the timeout.
In CQRS, you separate the Commands and the Querying. Considering your scenario, you can implement all interactions with queues. (There are multiple implementations of CQRS with event sourcing, but in its simplest form):
Client sends a request --> Payment API receives the request --> validates the request (if validation fails, it returns an error to the user) --> on successful validation --> generates a GUID and writes the message to a queue --> passes the GUID back to the user
Payment API subscribes to the payment queue --> after processing the request --> writes to the order queue or any other queues
Order API subscribes to the order queue and processes the request.
The user has a GUID which can get him the data for all the interactions.
If you use a pub/sub system like Kafka instead of queues, all subsequent systems can read from the same topic; you don't need to write to each queue separately.
If any of the services fail to process a message, they should be able to pick up where they left off once they are restarted. If a service goes down in the middle of a transaction, your system should remain in a stable state as long as the service rolls back its respective changes.
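A minimal sketch of the "validate, enqueue, hand back a GUID" step (TypeScript with Express and kafkajs; the route, topic name and validation rules are placeholders, not a prescribed API):

import express from "express";
import { randomUUID } from "crypto";
import { Kafka } from "kafkajs";

const app = express();
app.use(express.json());

const kafka = new Kafka({ clientId: "booking-api", brokers: ["localhost:9092"] });
const producer = kafka.producer();

app.post("/payments", async (req, res) => {
  // Validate the request; on failure, return the error to the user immediately.
  if (!req.body.bookingId || !req.body.amount) {
    return res.status(400).json({ error: "bookingId and amount are required" });
  }

  // Generate a correlation id, enqueue the command, and hand the id back to the client.
  const correlationId = randomUUID();
  await producer.send({
    topic: "payment-requests",
    messages: [{ key: correlationId, value: JSON.stringify({ correlationId, ...req.body }) }],
  });

  // The client can later query a status endpoint (not shown) using this id.
  res.status(202).json({ correlationId });
});

producer.connect().then(() => app.listen(3000));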
I'm not 100% sure what you are asking, but it sounds like you should be using a messaging service. As @Saptarshi Basu mentioned, Kafka is good. I would really recommend NATS - although I'm biased because that's the one I work with.
With NATS you can create request-reply messages to interface between client and booking service. That's a 1-1 communication
If you have multiple instances of each of your services running, you can use queue groups to load balance automatically; NATS will just randomly select an instance for you.
And then you can use pub-sub feeds for communication between all of your services.
This will give you a very resilient and scalable architecture, and NATS makes it all incredibly easy
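As a rough illustration of the request-reply and queue-group pattern with the nats.js TypeScript client (a sketch only; subject names, ports and payloads are made up):

import { connect, StringCodec } from "nats";

const sc = StringCodec();

async function main() {
  const nc = await connect({ servers: "localhost:4222" });

  // Booking service: members of the same queue group share the load,
  // so NATS delivers each request to exactly one instance.
  const sub = nc.subscribe("booking.create", { queue: "booking-workers" });
  (async () => {
    for await (const msg of sub) {
      // ... create the booking, publish follow-up events to other subjects ...
      msg.respond(sc.encode(JSON.stringify({ bookingId: "b-123", status: "accepted" })));
    }
  })();

  // Client side: 1-1 request-reply with a timeout.
  const reply = await nc.request(
    "booking.create",
    sc.encode(JSON.stringify({ item: "parcel" })),
    { timeout: 2000 }
  );
  console.log("booking service replied:", sc.decode(reply.data));

  await nc.drain();
}

main().catch(console.error);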

How does it work to implement an API for Payments in separate ends of a project?

Alright, a friend and I are developing an app where I'm building the back end and he is building the front end. The project is split into two repositories, the front end and the back end, and we need to integrate a payment API.
Now, since we're using a REST API, the two ends communicate through JSON data.
My question is, when we're making the connection to the payment API, who needs to execute that request? The front-end or the back-end?
I know it's a silly question, but first timer here.
The backend will obviously process the payment. I'm not sure which payment API you're going to use, and depending on the API you go with, the implementation will vary. But the actual processing of the payment will happen in the backend for sure.
It completely depends on the API.
In some cases, a payment can be accomplished via a secure web service call, which would be issued by your friend's REST service. The front end will still need to collect data (e.g. payment amount and card number) and may also need to collect additional information to satisfy the API (e.g. IP address or browser signature, for risk management purposes).
In other cases, the payment is sent directly to the service from the browser. The role of your application would be to render an iFrame housing a page that is reached via SSO. The back end may need to call a service to retrieve an SSO token, or may have to compute an SSO token using a shared key.
You should probably refer to the payment API's documentation. Providers often have very specific guidance which you must follow carefully in order to achieve payment card (PCI-DSS) compliance. There is nothing so special about "payments" in general that would let Stack Overflow users guess anything about this particular API.
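That said, to make the division of labour concrete, here is a skeletal TypeScript sketch of the first case above: the front end only collects and forwards data, while the back end is the only place that talks to the gateway. Both halves are shown together for brevity; the /api/payments route and the chargeWithProvider helper are hypothetical placeholders for whatever gateway SDK you use:

import express from "express";

// Front end (would live in the browser bundle): collect payment details
// and hand them to your own back end, never to the gateway directly.
async function submitPayment(amount: number, cardToken: string) {
  const response = await fetch("/api/payments", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ amount, cardToken }),
  });
  return response.json();
}

// Back end (Express): the only place that talks to the payment gateway,
// using secret credentials that never reach the browser.
const app = express();
app.use(express.json());

app.post("/api/payments", async (req, res) => {
  const { amount, cardToken } = req.body;
  const result = await chargeWithProvider({ amount, cardToken });
  res.json({ status: result.status, transactionId: result.id });
});

// Hypothetical helper; replace with the real gateway SDK call.
async function chargeWithProvider(input: { amount: number; cardToken: string }) {
  return { id: "txn_placeholder", status: "approved", ...input };
}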

Delegating Tasks for Mission Critical Application

I'm working on a mission critical application.
The application fetches Stock Market data from different stock markets like NYSE, NASDAQ, etc. using third party service.
Customers can come to the application and add their Portfolio (which company's shares they have).
And then set alerts, e.g. "Notify me when AAPL goes above $xxx on NASDAQ" or "Notify me when MSFT goes below $zzz on NYSE".
I have a cron job that fetches market data from the third-party service for all the tickers users have added (AAPL, GOOG, MSFT, etc.) every minute.
After I get the data, I fetch all the alerts that users have created and then send them notifications via Email, SMS, Pushover, Twitter, Facebook Message, etc. I also add each notification to the app's database so the user can see it in the app when they log in.
Now, since this is a time-sensitive application, failure to fetch data may result in a big loss, since customers are paying for time-critical data.
Currently, I'm pushing all the notification sending to a queue; a worker (on my server) sends the notifications.
Are there any other better ways to delegate as much work as possible to third-party servers?
Would you recommend using an Iron.io worker so it does the job of sending the notifications as well?
And maybe also fetching the data from the market?
Thanks!
Architecturally there are a number of approaches but it sounds as if you're making the right choices. Using a queue to decouple the producer from the notification process makes sense. This enables a more proper SOA architecture where you can change/update/evolve various parts of the app independently without worrying too much about tightly coupled code.
That said, your question is specifically around offloading to third parties. There are third parties that can abstract the notification part out of your code. I'm not super familiar with them but there are many options: PubNub, Pusher, Twilio, SendGrid, Mailgun, AWS SNS, etc.
I work for Iron.io. We have many customers doing exactly what you're trying to accomplish: creating workers that become little mini-services and calling them from either push events, scheduled tasks, or on-demand. This frees you up from having to deal with the queuing, routing, scheduling, and worker/background server capacity.
We're happy to help you architect things right from the beginning; just reach out to support@iron.io.
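For what it's worth, the "enqueue and let a worker deliver" split described above might look roughly like this with a Redis-backed queue such as BullMQ (a sketch; the queue name, job payload and sendEmail/sendSms helpers are assumptions):

import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };
const notifications = new Queue("notifications", { connection });

// Producer side: the cron job that checks prices just enqueues jobs and moves on.
export async function enqueueAlert(userId: string, channel: "email" | "sms", message: string) {
  await notifications.add("alert", { userId, channel, message });
}

// Worker side: runs on a separate process/server and does the slow delivery work.
new Worker(
  "notifications",
  async (job) => {
    const { userId, channel, message } = job.data;
    if (channel === "email") {
      await sendEmail(userId, message); // hypothetical helper wrapping e.g. SendGrid
    } else {
      await sendSms(userId, message);   // hypothetical helper wrapping e.g. Twilio
    }
  },
  { connection }
);

// Placeholder implementations so the sketch is self-contained.
async function sendEmail(userId: string, message: string) { console.log("email", userId, message); }
async function sendSms(userId: string, message: string) { console.log("sms", userId, message); }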

If the website owner steals the payment gateway information then is it safe to use a payment gateway?

I am integrating a payment gateway; this is the first time I am integrating payment gateway functionality into my system, and I am using Authorize.Net as the payment gateway.
I have successfully integrated it, but I see that the user has to enter the following values to purchase an item, after which a transaction ID is returned.
//post_values.Add("x_card_num", "4111111111111111");
//post_values.Add("x_card_num", CreditCard);
//post_values.Add("x_exp_date", "0115");
////post_values.Add("x_amount", "19.99");
////post_values.Add("x_amount", );
////post_values.Add("x_description", "Sample Transaction");
//post_values.Add("x_amount", txtAmout.Text);
//post_values.Add("x_description", txtDesc.Text);
////post_values.Add("x_first_name", "John");
////post_values.Add("x_last_name", "Doe");
////post_values.Add("x_address", "1234 Street");
////post_values.Add("x_state", "WA");
////post_values.Add("x_zip", "98004");
//post_values.Add("x_first_name", txtFName.Text);
//post_values.Add("x_last_name", txtFName.Text);
//post_values.Add("x_address", txtAddr.Text);
//post_values.Add("x_state", txtState.Text);
//post_values.Add("x_zip", txtZip.Text);
These values complete the transaction and the purchase of the item. My confusion is this: if the website owner stores all of this information in his own database and then makes more transactions using the customer's details, then what? Is it safe and secure, or is something happening here that I could not figure out?
Here are some basic guidelines to follow:
Keep all information in your database except for the card number. Never keep the card number unless you are confident your encryption systems are safe.
Store Authorize.Net transactions, whether successful or failed.
You need to create a transaction table where you will create a new line for each transaction, regardless of whether it is the same user or the same transaction result.
Encrypt some portions of the transaction: the address is a good thing to encrypt. This will help prevent identity theft if you ever get hacked.
Make sure all user passwords are hashed.
Store the website's database connection string encrypted as well.
Communicate with the database using only stored procedures; this should prevent SQL injection, provided the stored procedures are built correctly.
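A small TypeScript sketch pulling a couple of these guidelines together (using bcryptjs; all field names are illustrative, not a prescribed schema):

import bcrypt from "bcryptjs";

// Hash passwords before they ever reach the database.
async function hashPassword(plain: string): Promise<string> {
  return bcrypt.hash(plain, 10); // 10 salt rounds
}

// Build the transaction record you actually store: keep the gateway's transaction id
// and, at most, the last 4 digits of the card for display on receipts.
function buildTransactionRecord(cardNumber: string, transactionId: string, amount: number) {
  return {
    transactionId,                   // returned by the gateway (e.g. Authorize.Net)
    amount,
    cardLast4: cardNumber.slice(-4), // never the full card number
    createdAt: new Date().toISOString(),
  };
}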
That is how it is: the website owner can store all of this information in his database if the payment form lives on the website itself. That's why I, and I think most others, only make transactions on trusted sites, or on sites that redirect to a trusted gateway like PayPal or Authorize.Net for the financial part.
Thanks for the clarification. I'm currently working on a project that is using PayPal in this same manner. We store only the authorization code and transaction ID in our database.
In my opinion, the 30 seconds or so that the user will save by having their information stored isn't worth the risk associated with storing their information. If you're doing recurring transactions, the vendor will store the information securely for you (at least PayPal does) so there's no real reason to store the credit card information in your system.
[EDIT] As Imran pointed out, storing the last 4 digits of the number would be fine for display on a report.
There is nothing you can do to prevent the database owner from misusing the information if they're storing it, aside from contacting your credit card company and reporting fraudulent charges. The payment gateway has no idea who is inputting the payment details, other than ensuring the transaction is coming from one of its authorized customers (i.e., the customer authorized to use the payment gateway).
