Can I export my Urban Airship push device tokens? - push-notification

I'm evaluating Urban Airship as a push solution and I was wondering if it's possible to export my device tokens should I decide to stop using their service?
I've noticed they have an API endpoint to download device data (http://docs.urbanairship.com/reference/api/v3/device_information.html#device-token-list-api), but I was wondering if anyone has actually gone through the process of switching their push solution from UA to an internal one (i.e. running my own push server and pinging old users).
Thank you!

I'm not sure if there is a single API call for it, but you could go to Audience -> Device Tokens, or write a script to fetch all of them.
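If you do go the script route, here is a minimal sketch against the device token list endpoint linked in the question, assuming the paged JSON shape that API documents (a device_tokens array plus an optional next_page URL); the credentials and field names should be verified against the current docs.

    // Hedged sketch: page through Urban Airship's device token list API and
    // collect every token. Assumes Node 18+ (global fetch) and that each page
    // contains "device_tokens" and, until the last page, a "next_page" URL.
    const APP_KEY = process.env.UA_APP_KEY!;
    const MASTER_SECRET = process.env.UA_MASTER_SECRET!;
    const auth = Buffer.from(`${APP_KEY}:${MASTER_SECRET}`).toString("base64");

    async function exportDeviceTokens(): Promise<string[]> {
      const tokens: string[] = [];
      let url: string | undefined = "https://go.urbanairship.com/api/device_tokens/";
      while (url) {
        const res = await fetch(url, { headers: { Authorization: `Basic ${auth}` } });
        if (!res.ok) throw new Error(`UA API returned ${res.status}`);
        const page: any = await res.json();
        for (const dt of page.device_tokens ?? []) tokens.push(dt.device_token);
        url = page.next_page; // absent on the last page
      }
      return tokens;
    }

    exportDeviceTokens().then((all) => console.log(`exported ${all.length} tokens`));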
At the company I work for, we decided on a different approach.
All communication with Urban Airship goes through our own backend, where we also store the device tokens sent from the devices. That way we can switch to another way of sending push notifications without modifying our apps. It is of course a bit more time-consuming to do the initial development. On the other hand, if you go for the solution you are currently considering, the switch to your own implementation (or another push provider) will probably require several migrations, or at least maintaining two different ways of sending push notifications for a considerable time.
BTW: we have been using UA for almost 3 years and have been very happy with their service.

Related

Best strategy to develop the back end of an app with a large userbase, taking into account limitations of bandwidth, concurrent connections, etc.?

I am developing an Android app which basically does this: on the landing (home) page it shows a couple of words. These words need to be updated on a daily basis. Secondly, there is an 'experiences' tab in which a list of user experiences (around 500) shows up with their profile pics, descriptions, etc.
This basic app is expected to get around 1 million users daily, who will open the app at least once a day to see those couple of words. Many may occasionally open the experiences section.
Thirdly, the app needs to have a push notification feature.
I am planning to purchase managed WordPress hosting, set up a website, and add a post each day with those couple of words, then use the JSON API to extract those words and display them on the app's home page. Similarly for the experiences: I will add each one as a WordPress post and extract them from the WordPress database. The reason I am choosing WordPress is that it has ready-made interfaces for data entry, which will save me time and effort.
But I am stuck on this: will the WordPress DB be able to handle such a large volume of queries? With such a large userbase and spiky traffic, I suspect I might exceed the maximum number of concurrent connections.
What's the best strategy in my case? Should I use WP, or Firebase, or some other service? I also need the scheme to be cost-effective.
My app is basically very similar to this one:
https://play.google.com/store/apps/details?id=com.ekaum.ekaum
For push notifications, I am planning to use third party services.
Kindly suggest the best strategy I should go with for designing the back end of this app.
Thanks in advance to everyone willing to help me with this.
I have never used WordPress, so I don't know if or how it could handle that load.
You can still use WP for data entry, and write a scheduled function that uses WP's JSON API to copy that data into Firebase.
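A rough sketch of such a scheduled function, assuming the Firebase Functions Node runtime, the WordPress REST API's standard post fields, and an illustrative "daily" database path:

    // Hedged sketch: once a day, copy the newest WordPress post into RTDB.
    // The site URL and the "daily" path are assumptions for illustration.
    import * as functions from "firebase-functions";
    import * as admin from "firebase-admin";

    admin.initializeApp();

    export const syncDailyWords = functions.pubsub
      .schedule("every 24 hours")
      .onRun(async () => {
        const res = await fetch("https://your-site.example/wp-json/wp/v2/posts?per_page=1");
        const posts: any[] = await res.json() as any[];
        const latest = posts[0];
        await admin.database().ref("daily").set({
          title: latest.title.rendered,
          content: latest.content.rendered,
          date: latest.date,
        });
      });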
RTDB-vs-Firestore scalability states that RTDB can handle 200 thousand concurrent connections and Firestore 1 million concurrent connections.
However, if I understand correctly, your app doesn't need connections to stay active (i.e. to receive real-time updates). You can fetch your data once, then close the connection.
For RTDB, Enabling Offline Capabilities on Android states that
On Android, Firebase automatically manages connection state to reduce bandwidth and battery usage. When a client has no active listeners, no pending write or onDisconnect operations, and is not explicitly disconnected by the goOffline method, Firebase closes the connection after 60 seconds of inactivity.
So the connection should close by itself after 1 minute if you remove your listeners, or you can force it closed earlier using goOffline.
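As an illustration of that read-once-then-disconnect pattern, here is a sketch using the web SDK (the Android SDK exposes the same idea via FirebaseDatabase.goOffline()); the database URL and the "daily" path are placeholders:

    // Fetch the data once with get() (no listener left behind), then close
    // the connection immediately instead of waiting for the 60s timeout.
    import { initializeApp } from "firebase/app";
    import { getDatabase, ref, get, goOffline } from "firebase/database";

    const app = initializeApp({ databaseURL: "https://your-db.firebaseio.com" });
    const db = getDatabase(app);

    async function fetchDailyWords() {
      const snapshot = await get(ref(db, "daily")); // one-shot read
      goOffline(db); // force the connection closed right away
      return snapshot.val();
    }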
For Firestore, I don't know if it happens automatically, but you can do it manually.
In Firebase Pricing you can see that 100K Firestore document reads cost $0.06, so 1M reads (for the two words) should cost $0.60, plus some network traffic. In RTDB, the cost is based on the amount of data transferred, so it requires some calculation, but it shouldn't be much. I am not familiar with the finer details of the pricing, so you should do some more research.
In the app you mentioned, the experiences don't seem to change very often. You might want to build your own caching manually and add the required versioning info to the daily data.
Edit:
It would possibly be more efficient and less costly if you used Firebase Hosting, instead of RTDB/Firestore directly. See Serve dynamic content and host microservices with Cloud Functions and Manage cache behavior.
In short, you create an HTTP function that reads your database and returns the data you need. You configure Hosting to call that function, and configure the cache so that subsequent requests are served the cached result via Hosting (without extra function invocations).
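A small sketch of that pattern, with the function name, database path, and cache lifetimes as assumptions:

    // HTTP function whose response Firebase Hosting's CDN may cache, so most
    // requests never reach the function at all. Assumes default credentials.
    import * as functions from "firebase-functions";
    import * as admin from "firebase-admin";

    admin.initializeApp();

    export const daily = functions.https.onRequest(async (req, res) => {
      const snapshot = await admin.database().ref("daily").once("value");
      // s-maxage governs the shared CDN cache; max-age the browser cache.
      res.set("Cache-Control", "public, max-age=300, s-maxage=600");
      res.json(snapshot.val());
    });

    // firebase.json would route the path to the function, roughly:
    //   { "hosting": { "rewrites": [ { "source": "/daily", "function": "daily" } ] } }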

Firebase for multiplayer games - verify data that is sent to the database

I'm starting to learn about Google's Firebase; it seems really cool for real-time applications. The auto-synced database seems very easy to use, and I feel like diving into it. I plan to start learning by building a simple multiplayer checkers game, but I still have an important question about it.
Firebase auto-syncs between users and devices using its 'magic' database, which stores data and sends it out to 'subscribers' of that DB. Now what if I want some server-side processing of this data in between? For example, when a player makes a move, I want something that is not on the client side to verify that it is a valid move. What would be the architecture to accomplish that?
Having a trusted process that sits between the users is a common scenario when using Firebase. Have a look at our classic blog post Where does Firebase fit in your app?; your case fits closest to pattern 2 there.
Typically you'll want to use firebase-queue for this. Your users write their "requests" (moves, in your case) into the queue; the server processes those and updates the actual board.
Another great thing about this approach is that it's easy to secure: the users can only write to the queue, while the server is the only one that can read the queue and update the board. That is a lot simpler to capture in security rules than many other approaches.
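A minimal sketch of that setup with firebase-queue, where the move format, the isValidMove/applyMove stubs, and the database paths are all assumptions:

    // Trusted worker: the only process allowed (via security rules) to read
    // the queue and write /board. Clients push moves under /moveQueue/tasks.
    // Assumes admin credentials and databaseURL are configured.
    import * as admin from "firebase-admin";
    const Queue = require("firebase-queue"); // no TypeScript typings

    admin.initializeApp();
    const db = admin.database();

    function isValidMove(board: any, move: any): boolean {
      return true; // real checkers rules go here
    }

    function applyMove(board: any, move: any): any {
      return board; // placeholder: return the updated board
    }

    new Queue(db.ref("moveQueue"), async (data: any, progress: any, resolve: any, reject: any) => {
      const snap = await db.ref("board").once("value");
      const board = snap.val();
      if (!isValidMove(board, data.move)) return reject("illegal move");
      await db.ref("board").set(applyMove(board, data.move));
      resolve();
    });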

Use GAE background thread to trigger SSE to multiple web clients

All,
I have completed the basic GAE "Guestbook" example which uses Google Cloud Endpoints and Google Cloud Messaging. I can successfully add a note to the guestbook and have it appear on all registered devices.
I've also used the super simple Server-Sent Events (SSE) mechanism to have a web page initiate an event source and then update itself as events are received. But separate web pages appear to create their own distinct event sources (even when using the same URI for the event source) and thus get their own events at their own times.
The objective here is to create a bit of collaboration, such that user actions can come from an Android device or a web page, and the effects of the received action are then pushed to all connected users/devices/web pages.
I have assumed I will need a background module, and that both Endpoints and 'normal' web pages/queries would channel the received user action to that background module. I believe I can get that far. Next, I need the background module to trigger a push notification to all interested parties.
I believe I can trigger a Google Cloud Messaging event to registered Android devices from that background module.
But it isn't clear to me how a background module can be the source of an SSE, or how the background module can best communicate with a foreground module that is already the source of an SSE.
I've looked at the Google Task Queue API, but I have a feeling I'm making something quite easy much more difficult than it needs to be. If you were not going to 'poll' for changes from a web page, and you wanted to receive notifications from an SSE source when changes were made by other users (possibly using Android devices rather than a typical web page), and the deployed application is running on Google App Engine, what would you recommend?
Many thanks,
Randy
You are on the right track. I'm not really sure why you need the background module, but from what I understood you need to:
Your front-end module receives an update
You retrieve a list of all devices that should receive that update
You use the Task Queue service to send the update via GCM to every single device
Why use queues? Because front-end instances have a 1-minute time limit per request, and you'll need to queue work in order to go beyond that limit to serve your (potentially) thousands of users.
Now, if you already have a backend instance (which does not have the 1-minute limit), you could just iterate over the list and send all the messages in one request. I believe you have a 24-hour request limit there, so you should be OK. But in this scenario you don't need the front-end module; you can just hit this server directly.
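As a rough sketch of the fan-out step, using GCM's legacy HTTP endpoint (the batching helper and payload shape are illustrative):

    // Send one multicast request per batch of up to 1000 registration IDs,
    // which is GCM's documented cap per request. In a front-end module each
    // batch would be enqueued as a task rather than looped inline.
    const GCM_URL = "https://gcm-http.googleapis.com/gcm/send";
    const API_KEY = process.env.GCM_API_KEY!;

    async function sendBatch(ids: string[], payload: object): Promise<void> {
      await fetch(GCM_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json", Authorization: `key=${API_KEY}` },
        body: JSON.stringify({ registration_ids: ids, data: payload }),
      });
    }

    async function fanOut(allIds: string[], payload: object): Promise<void> {
      for (let i = 0; i < allIds.length; i += 1000) {
        await sendBatch(allIds.slice(i, i + 1000), payload);
      }
    }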

Delegating Tasks for Mission Critical Application

I'm working on a mission-critical application.
The application fetches stock market data from different stock markets like NYSE, NASDAQ, etc., using a third-party service.
Customers can come to the application and add their portfolio (which companies' shares they hold).
They can then set alerts, e.g. notify me when AAPL goes above $xxx on NASDAQ, or when MSFT goes below $zzz on NYSE.
I have a cron job that fetches market data from the third-party service for all the tickers users have added (AAPL, GOOG, MSFT, etc.) every minute.
After I get the data, I fetch all the alerts that users have created and send them notifications via email, SMS, Pushover, Twitter, Facebook message, etc. I also add each notification to the app's database so the user can see it in the app when they log in.
Now, since this is a time-sensitive application, failure to fetch data may result in big losses, since customers are paying for time-critical data.
Currently, I'm pushing all the notification sending to a queue; a worker (on my server) sends the notifications.
Are there any other, better ways to delegate as much work as possible to third-party servers?
Would you recommend using Iron.io workers so they handle sending the notifications as well, and maybe also fetching the data from the market?
Thanks!
Architecturally there are a number of approaches, but it sounds as if you're making the right choices. Using a queue to decouple the producer from the notification process makes sense. This enables a proper SOA architecture where you can change/update/evolve the various parts of the app independently, without worrying too much about tightly coupled code.
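To illustrate the decoupling (BullMQ is used here purely as an example queue; Iron.io, SQS, and friends fit the same shape, and the alert format is an assumption):

    // Producer: after each 1-minute data fetch, enqueue one job per fired
    // alert. Consumer: independent workers deliver the notifications.
    import { Queue, Worker } from "bullmq"; // backed by Redis

    const notifications = new Queue("notifications");

    interface Alert { user: string; ticker: string; above: number }

    async function checkAlerts(prices: Map<string, number>, alerts: Alert[]) {
      for (const alert of alerts) {
        const price = prices.get(alert.ticker);
        if (price !== undefined && price > alert.above) {
          await notifications.add("alert", { user: alert.user, ticker: alert.ticker, price });
        }
      }
    }

    // Scale workers out (or move them to a hosted service) independently.
    new Worker("notifications", async (job) => {
      // send email / SMS / push here
      console.log(`notify ${job.data.user}: ${job.data.ticker} at ${job.data.price}`);
    });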
That said, your question is specifically around offloading to third parties. There are third parties that can abstract the notification part out of your code. I'm not super familiar with them but there are many options: PubNub, Pusher, Twilio, SendGrid, Mailgun, AWS SNS, etc.
I work for Iron.io. We have many customers doing exactly what you're trying to accomplish: creating workers that become little mini-services and calling them from either push events, scheduled tasks, or on-demand. This frees you up from having to deal with the queuing, routing, scheduling, and worker/background server capacity.
We're happy to help you architect things right from the beginning, just reach out to support#iron.io.

Architecture For A Real-Time Data Feed And Website

I have been given access to a real-time data feed which provides location information, and I would like to build a website around it, but I am a little unsure what architecture to use to meet my needs.
Unfortunately the feed I have access to allows only a single connection per IP address, so building a website that talks directly to the feed is out, as each user would generate a new request, which would be rejected. It would also be desirable to perform some pre-processing on the data, so I guess I will need some kind of back end which retrieves the data, processes it, and then makes it available to the website.
From a front-end connection perspective, web services sound like they may work, but would they also create multiple connections to the feed, one per user? I would also like the back-end connection to be persistent, so that data is retrieved and processed even when the site is not being visited; I believe IIS recycles web services and websites when they are idle?
I would like to keep the design fairly flexible - in future I will be adding some mobile clients, so the API needs to support remote connections.
The simple solution would have been to log all the processed data to a database, which could then be picked up by the website, but this loses the real-time aspect of the data. Ideally I would like to push the data to the website every time it changes or new data is received.
What is the best way of achieving this, and what technologies are out there that may assist? A Comet architecture sounds close to what I need, but that would require building a back end that can handle multiple web-based queries at once, which seems like quite a task.
Ideally I would be looking for a C#/ASP.NET-based solution with JavaScript on the client side, although I guess this question is more about architecture and concepts than specific technologies.
Thanks in advance for all advice!
Realtime Data Consumer
The simplest solution would seem to be having one component dedicated to reading the real-time feed. It could then publish the received data onto a queue (or multiple queues) for consumption by other components within your architecture.
This component (A) would be a standalone process, maybe a service.
Queue consumers
The queue(s) can be read by:
a component (B) dedicated to persisting data for future retrieval or querying. If the amount of data is large you could add more components that read from the persistence queue.
a component (C) that publishes the data directly to any connected subscribers. It could also do some processing, but if you are looking at doing large amounts of processing you may need multiple components that perform this task.
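A tiny sketch of component A, assuming a line-delimited TCP feed and leaving the actual queue client as a stub:

    // Component A: the one process that owns the single allowed feed
    // connection, republishing every record onto a queue for B and C.
    import * as net from "net";
    import * as readline from "readline";

    function publishToQueue(message: string): void {
      // hand off to MSMQ / NServiceBus / Redis / etc.
    }

    function connect(): void {
      const socket = net.connect(9000, "feed.example.com");
      const lines = readline.createInterface({ input: socket });
      lines.on("line", (line) => publishToQueue(line)); // one message per record
      socket.on("close", () => setTimeout(connect, 5000)); // reconnect with backoff
    }

    connect();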
Realtime web technology components (D)
If you are using a .NET stack then it seems like SignalR is getting the most traction. You could also look at XSockets (there are more options in my realtime web tech guide; just search for '.NET').
You'll want to use SignalR to manage subscriptions and then publish messages to the registered clients (PubSub; this SO post seems relevant, maybe you can ask for a bit more info there).
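For the browser side of that PubSub piece, a sketch using the modern @microsoft/signalr client (which postdates this answer; the hub URL and method names are assumptions):

    // Subscribe to broadcasts from the hub (component D); the server pushes
    // each processed feed item to every registered client.
    import { HubConnectionBuilder } from "@microsoft/signalr";

    async function main() {
      const connection = new HubConnectionBuilder()
        .withUrl("https://your-app.example/feedHub")
        .withAutomaticReconnect()
        .build();

      connection.on("locationUpdate", (item: { id: string; lat: number; lng: number }) => {
        console.log("feed item", item);
      });

      await connection.start();
      await connection.invoke("Subscribe", "locations"); // hypothetical hub method
    }

    main();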
You could also look at offloading the PubSub component to a hosted service such as Pusher, who I work for. This would handle managing subscriptions, and component C would just need to publish data to an appropriate channel. There are other options, all listed in the realtime web tech guide.
All these components come with a JavaScript library.
Summary
Components:
A - .NET service - that publishes info to queue(s)
Queues - MSMQ, NServiceBus etc.
B - Could also be a simple .NET service that reads a queue.
C - this really depends on D since some realtime web technologies will be able to directly integrate. But it could also just be a simple .NET service that reads a queue.
D - Realtime web technology that offers a simple way of routing information to subscribers (PubSub).
If you provide any more info I'll update my answer.
A good solution to this would be something like http://rubyeventmachine.com/ or http://nodejs.org/. They're not ASP.NET, but they can easily solve the problem of distributing real-time data to many users. Since user connections, subscriptions, and broadcasting to channels are built into each, coding the rest becomes super simple. Your clients would just connect over standard TCP.
If you needed clients to poll for updates, then you would need a queue system to store the info for the next request. That could be a simple array, or a more complicated queue system, depending on your requirements and number of users.
There may be solutions for .NET that I am not aware of that do the same thing, but those are the two I know of.
