I want to implement a Pusher client in Qt, and I was wondering what the difference is between the client API and the server API.
As their site explains, clients are consumers and servers are producers (servers also handle authentication).
Is it correct that a client cannot publish events, and that we always need a server to handle event distribution and authentication?
Clients can publish events, but only on authenticated channels: http://pusher.com/docs/client_events
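For illustration, here is a minimal sketch of triggering a client event with the pusher-js library (the question asks about Qt, but the flow is the same in any client library); the app key, cluster, auth endpoint, channel and event names below are placeholders:

```typescript
import Pusher from 'pusher-js';

// Placeholders: substitute your own key, cluster and auth endpoint.
const pusher = new Pusher('YOUR_APP_KEY', {
  cluster: 'eu',
  authEndpoint: '/pusher/auth', // your server authenticates the subscription
});

// Client events are only allowed on private/presence channels,
// i.e. channels the server has authenticated the user for.
const channel = pusher.subscribe('private-chat');

channel.bind('pusher:subscription_succeeded', () => {
  // Client event names must start with "client-".
  channel.trigger('client-typing', { user: 'alice' });
});
```

The event still fans out through Pusher's servers; "client event" only means the trigger originates from a client rather than from your own backend.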
Server libraries tend to have the functionality you mention, although it would be possible to put everything in a single library. Dividing functionality this way fits well with enforcing good security practice and, generally, with how apps want to use Pusher.
The iOS library (libPusher) provides more than the standard client functionality, e.g. you can make calls to Pusher's web API.
Imagine the following situation. I have an API, and a developer builds an application that retrieves new content from it on a daily basis. She stores this content and serves it to all the instances of an app she developed, so those apps do not have to call the API directly.
Is there a way to prevent this and force the apps (and therefore the end users) to call the API directly, rather than only her server-side application?
I found many questions about how to cache API data, but not how to prevent it. I am fairly new to this, so maybe I am overlooking something, or maybe it is not possible to prevent at all.
Thank you in advance!
Assuming you are using Apigee for API management, you have some options. First, consider what is available to you contractually, if this is that sort of business relationship and you can impose certain API behavior on a business partner through a contract.
Separate from the legal side of things, remember that you control your API and the credentials you issue to your API clients. You cannot, practically, control what a client developer does with those credentials: she could promise to embed them in the mobile apps' API client, then change her mind, use them centrally, and design her mobile client to call into her central cache. If you really insist that only mobile app clients, and not a hub/cache server, should be calling your API, you could apply constraint policies on your API (within the Apigee proxy, such as Access Control). For instance, you could blacklist your partner's hub/cache server IP address, although that is weak security at best. Or you could apply a constraint that only clients with certain identifying User-Agent strings (mobile OS, client) are allowed to connect to your API. Or use GeoIP filtering to allow only clients from certain regions, if that applies to your use case.
Finally, depending on the data model, you might be able to rate-limit in a way that makes a bulk cache impractical: if your edge-client use case is to fetch a single record, but a cache would have to hold thousands of records, then you could impose a per-client rate limit (Quota policy) that is no bother to individual mobile clients but makes the work of a hub/cache server untenable.
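This is not Apigee's actual policy syntax, but to make the rate-limiting idea concrete, here is a toy per-client quota check (Express with in-memory counters; the header name and limits are made up for illustration):

```typescript
import express from 'express';

const app = express();

// Toy in-memory quota: allow each API key at most 100 requests per hour.
// A real deployment would use Apigee's Quota policy or a shared store.
const WINDOW_MS = 60 * 60 * 1000;
const LIMIT = 100;
const counters = new Map<string, { count: number; resetAt: number }>();

app.use((req, res, next) => {
  const key = req.get('x-api-key') ?? 'anonymous'; // hypothetical header
  const now = Date.now();
  const entry = counters.get(key);
  if (!entry || now > entry.resetAt) {
    counters.set(key, { count: 1, resetAt: now + WINDOW_MS });
    return next();
  }
  if (entry.count >= LIMIT) {
    return res.status(429).send('Quota exceeded');
  }
  entry.count += 1;
  next();
});

app.get('/records/:id', (req, res) => {
  res.json({ id: req.params.id }); // placeholder for the real lookup
});

app.listen(3000);
```

A limit sized for single-record lookups is invisible to a genuine mobile client but makes bulk-harvesting the whole dataset into a central cache impractical.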
We are currently designing a mobile app, and the client has requested the ability to work offline and sync data when online again. I'm looking at using Realm, as it seems to make handling an offline state really easy. However, I'm a bit confused about where any server-side logic would live. Am I right that Realm isn't really designed to have server-side logic? You are just persisting data to the cloud when the cloud is available, but you aren't actually in charge of building an API with any logic behind it?
Edit.
Reading further, maybe Azure offline data sync is a better option because you can write the server-side code? Am I correct that if I want server-side code I can't use Realm?
Thanks, Michael
Realm Mobile Platform is designed for offline data access since it uses the full power of Realm Mobile Database as the client data store. However, that doesn't mean you are limited to only client-side interactions. We offer a Node.js SDK in the Professional and Enterprise editions where you can work with the same copy of Realm data from the mobile clients in a Linux environment.
The Node.js SDK offers the ability to open any Realm, and to query or perform write transactions on the server, which will push data out to the client(s). In addition, it has event-handling capabilities, where you can register callbacks to perform logic in response to data changes performed on client devices.
To make it easier to get started with the event-handling functionality, we launched Realm Functions, which allows you to create JavaScript functions through the Realm Object Server dashboard that run in response to data changes. Under the hood, this uses the Node.js SDK.
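As a rough sketch only: the Realm Object Server Node.js SDK of this era exposed a global change listener roughly along the lines below. Method names and signatures varied between releases, and the server URL, Realm path and object names here are invented, so treat this as an assumption-laden outline rather than the exact API:

```typescript
// Assumption-laden sketch of the legacy Realm Object Server Node.js SDK.
// Verify every call against the docs for your SDK version before relying on it.
const Realm = require('realm');

async function main() {
  // Hypothetical admin login; the credentials API changed between releases.
  const adminUser = await Realm.Sync.User.login(
    'http://my-ros-host:9080', 'admin', 'password');

  // Register a listener for changes in any synced Realm whose path matches the regex.
  Realm.Sync.addListener(
    'realm://my-ros-host:9080', adminUser, '^/([^/]+)/orders$', 'change',
    (changeEvent: any) => {
      const realm = changeEvent.realm;
      const orderChanges = changeEvent.changes.Order ?? {};
      for (const index of orderChanges.insertions ?? []) {
        const order = realm.objects('Order')[index];
        // Server-side business logic goes here; writes made inside
        // realm.write(...) sync back down to the mobile clients.
        console.log('New order synced from a client:', order);
      }
    });
}

main().catch(console.error);
```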
The sum total of all of this is that you should be able to build any server-side business logic that you need. By using Realm's sync as the transport layer, your mobile development can focus on the application logic versus networking and data transformation. Likewise, your server has an exact copy of the data to perform logic as well. Both sides operate independently, so offline changes will always sync back up!
I'm evaluating various hosting options for ASP.NET Core application.
In the new programming model of ASP.NET, you process a request with a set of middlewares (which are a mixture of the older IHttpModule and IHttpHandler concepts).
You can have a middleware responsible for authentication, for handling static files, or for compressing the response before sending it (just to name a few).
Here comes the confusion.
Where should the border between the server and the app be drawn, in terms of responsibility?
Which side should be responsible for compressing the response? With IIS this was handled by the server and configured in web.config. Kestrel doesn't provide this functionality AFAIK, so you need to implement custom middleware in the app to handle it for you. Which approach is more appropriate?
What about authentication? IIS provides settings for authentication (anonymous, impersonation, forms auth). On the other hand, in ASP.NET Core we can also write app middleware to handle this for us.
OK, SSL is handled by the server, because it sits lower in the protocol stack and the app operates on HTTP(S) only.
What responsibilities should the server have? What responsibilities should the app have?
The server is responsible for implementing the base HTTP protocol, managing connections, etc. It may also choose to offer other features (e.g. Windows auth), but we recommend against that unless it can provide a distinct advantage over a middleware implementation. For example, Windows auth could be implemented in middleware, but it would be much more difficult due to some of the connection management constraints. Compression, on the other hand, could be implemented in middleware just as easily as in the server.
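To make the "compression can live in middleware" point concrete, here is the same idea shown with Node/Express rather than ASP.NET Core, purely as a language-agnostic illustration (in ASP.NET Core the equivalent is the response compression middleware enabled via app.UseResponseCompression()):

```typescript
import express from 'express';
import compression from 'compression';

const app = express();

// Compression handled inside the app pipeline rather than by the server
// in front of it: responses are gzip/deflate encoded whenever the client
// advertises support via the Accept-Encoding header.
app.use(compression());

app.get('/', (_req, res) => {
  res.send('x'.repeat(10_000)); // large enough for compression to kick in
});

app.listen(3000);
```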
As stated on Wikipedia:
"The primary function of a web server is to store, process and deliver web pages to clients."
The thing is that all the famous HTTP servers (nginx, Apache, IIS, ...) come with a lot of modules that can handle lots of different tasks, including the ones you mentioned in your question (authentication, compression, ...).
It's quite likely that the more modules you add, the slower your HTTP server will be. IIS, for instance, is by no means known as the fastest HTTP server around, but if you remove all the modules and use it just for serving resources, it becomes really fast, because that is what it was built for back in the day!
The same question of responsibility arises with every kind of software application.
Think about databases, whose main role is to store data. RDBMSs like Oracle or SQL Server are pretty good at it. But every time they release a new version, they also release new functionality that has nothing to do with storing data. And people use it! ;-)
How many times have people used their DB as a search engine? I have seen people sending mails with SQL Server! But the worst was some guys trying to call web services from within stored procedures ;-)
It's always tempting to have one tool that does everything, but you need to keep in mind that it has not been built for every purpose. I'd rather use a bunch of lightweight tools that each have a single responsibility and handle it correctly.
Now, back to your question: I think it's a good approach to make use of middlewares. That way you have control over the entire pipeline, and you know exactly what your request has been through. Middlewares are also testable! Getting rid of all the unnecessary modules will definitely lead you to a more lightweight HTTP server.
The righteous "it depends" answer is also acceptable. If you run some tests and realize that the gzip compression module is 10x faster than the middleware, go with the module! Don't be dogmatic either!
I'm designing a database monitoring application. Basically, the database will be hosted in the cloud, and record-level access to it will be provided via custom-written clients for Windows, iOS, Android, etc. The basic scenario can be implemented via web services (ASP.NET Web API). For example, the client will make a GET request to the web service to fetch an entry. However, one of the requirements is that the client should automatically refresh its UI in case another user (using a different instance of the client) updates the same record, AND the auto-refresh needs to happen within a second of the record being updated, so that the info is always up to date.
Polling could be an option, but the active clients could number in the hundreds of thousands, so I'm looking for a more robust and lightweight (on the server) solution. I'm versed in .NET and C++/Windows, and I could roll out a complete solution in C++/Windows using IO completion ports, but that feels like overkill and would require too much development time. I looked into ASP.NET Web API, but its limitation is that it cannot push notifications out to clients. Are there any frameworks/technologies in the Windows ecosystem that can address this scenario and scale easily as well? Any good options outside the Windows ecosystem, e.g. Node.js?
You did not specify which database you can use, so if you are able to use MS SQL Server, you may want to look up the SQL Dependency feature. If configured and used correctly, you will be notified of any changes in the database.
Pair this with SignalR, or any real-time front-end framework of your choice, and you'll have real-time updates as you described.
One catch, though, is that SQL Dependency only tells you that something changed. Whatever it was, you are responsible for tracking down which record it is. That adds an extra layer of difficulty, but it is much better than polling.
You may want to search through the sqldependency tag here on SO to go from here to where you want your app to be.
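For the client side of that pairing, here is a minimal sketch with the @microsoft/signalr TypeScript client; the hub URL and the "recordUpdated" event name are assumptions that your server-side hub would have to match:

```typescript
import * as signalR from '@microsoft/signalr';

// Hypothetical hub URL; the server-side hub would broadcast "recordUpdated"
// whenever SQL Dependency reports a change in the watched table.
const connection = new signalR.HubConnectionBuilder()
  .withUrl('https://example.com/hubs/records')
  .withAutomaticReconnect()
  .build();

connection.on('recordUpdated', (recordId: number) => {
  // Re-fetch only the changed record and refresh the UI.
  console.log(`Record ${recordId} changed on the server, refreshing...`);
});

connection.start().then(() => console.log('Listening for record changes'));
```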
My first thought was to have a web service call that "stays alive", or to use the HTML5 protocol WebSockets. You can maintain lots of connections, but hundreds of thousands seems too large. In that case the web service needs a way to contact the clients over stateless connections, so you could build a web service into the client that the server-side web service can call back into. This may be a problem because of firewalls.
If firewalls are not an issue, then you may not need a web service in the client; you can instead implement a server socket on the client.
For mobile clients, if implementing a server socket is not a possibility, then use push notifications. Perhaps look at https://stackoverflow.com/a/6676586/4350148 for a similar issue.
Finally, you may want to consider a content delivery network.
One last point is that hopefully you don't need to contact all 100,000 users within one second. I am assuming that with so many users you have quite a few servers.
Take a look at "Maximum concurrent Socket.IO connections" regarding the maximum number of open WebSocket connections.
Also consider whether your estimate of on the order of 100,000 simultaneous users is accurate.
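If you go the WebSocket route, a minimal Socket.IO sketch of pushing record changes to watching clients might look like this (the room naming, event names and the notify function are all invented for illustration):

```typescript
import { Server } from 'socket.io';

// Minimal sketch: a Socket.IO server that pushes a record-change event to
// every connected client that is watching that record.
const io = new Server(3000, { cors: { origin: '*' } });

io.on('connection', (socket) => {
  // Each client joins a room for the record(s) it is currently displaying.
  socket.on('watch', (recordId: string) => {
    socket.join(`record:${recordId}`);
  });
});

// Called by your persistence layer whenever a record changes.
export function notifyRecordChanged(recordId: string, payload: unknown): void {
  io.to(`record:${recordId}`).emit('recordUpdated', payload);
}
```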
There is an application that should read tweets for every registered user, process them, and store the data for future use.
It can reach Twitter in two ways: either via the REST API (polling Twitter every x minutes) or via the Streaming API, which has tweets delivered as they happen.
Besides the completely different implementations on the server side, I wonder what the other server-side impacts are.
Say the application has thousands of users. Is it better to build some kind of queue and poll Twitter for each user (the simplest scenario), or is it better to use the Streaming API and keep an HTTP connection open for each user? I'm a bit worried about the latter, as it would require keeping thousands of connections open all the time. Are there any drawbacks to that I'm not aware of? If I'd like to deploy my app on Heroku or on an EC2 instance, would that be OK, or are there limits?
How is this done in other apps that constantly need to fetch data for each user?