I'm making a Plone-based social site. I want to add a news feed (like the Facebook feed, or FriendFeed) to my site. This is the point where people around me shout: "non-blocking IO", "Tornado"...
Basically, it's an asynchronous response that I want: when the client calls "update", the server should not respond immediately, but only when there is actually an update.
My friend suggests creating a Tornado server. The client would send its update request to the Tornado server, the Plone server would send updated content to the Tornado server (using a subscription), and the Tornado server would then respond to the client.
I personally think that this solution is not only overkill but also makes various things (say, authentication and authorization) difficult.
Can we achieve the same functionality in Plone with reasonable performance? Do WSGI, gevent, etc. help?
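To make my friend's proposal concrete, here is roughly what I understand the Tornado side to be. This is a minimal sketch with hypothetical handler and URL names (and no authentication, which is exactly the part I worry about):

```python
# Minimal sketch of the proposed Tornado long-polling server.
# Handler/URL names are hypothetical; Plone would POST to /notify
# (form-encoded, a "content" field) whenever content changes.
import tornado.ioloop
import tornado.locks
import tornado.web

condition = tornado.locks.Condition()
latest_update = {"payload": None}

class UpdateHandler(tornado.web.RequestHandler):
    async def get(self):
        # Hold the request open until Plone notifies us of new content.
        await condition.wait()
        self.write({"update": latest_update["payload"]})

class NotifyHandler(tornado.web.RequestHandler):
    def post(self):
        # Called by Plone (via its subscription) when content changes.
        latest_update["payload"] = self.get_body_argument("content")
        condition.notify_all()  # wake every waiting client

app = tornado.web.Application([
    (r"/update", UpdateHandler),
    (r"/notify", NotifyHandler),
])

if __name__ == "__main__":
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
```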
I was able to create a website with a domain name behind an IPFS gateway like Cloudflare's.
Can I listen for HTTP POST requests?
An IPFS website like torrent-paradise.ml seems to send HTTP POST requests (e.g. /api/search?q=test).
IIUC, the website you mentioned uses an old-school Node.js app for the /api/search endpoint.
The search feature is not provided by the IPFS daemon.
By default, the IPFS gateway allows only HTTP GET.
One can enable the experimental Writable Gateway feature, which then accepts HTTP POST (https://discuss.ipfs.io/t/writeable-http-gateways/210?u=lidel), but it only allows you to import data into IPFS. There is no search feature.
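For illustration, importing data through a writable gateway looks roughly like the sketch below; the local port and the response header names are assumptions and may differ between versions:

```python
# Rough sketch: importing data via the experimental Writable Gateway.
# Assumes a local daemon with the writable gateway enabled
# (Gateway.Writable = true); port and header names may vary by version.
import requests

resp = requests.post("http://127.0.0.1:8080/ipfs/", data=b"hello ipfs")
print(resp.status_code)               # expect 201 Created
print(resp.headers.get("Ipfs-Hash"))  # hash of the imported content
print(resp.headers.get("Location"))   # gateway path to the content
```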
That being said, I believe you should not care about the HTTP method used, but instead ask "how to build a dynamic app on IPFS" or "how to do search using immutable data on IPFS".
Some pointers/ideas:
Build a DAG-based index and put it on IPFS, then have your app traverse the graph while looking for the answer, fetching only a subset of the index, and only when needed (see the sketch after this list)
Leverage libp2p's pubsub for real-time features (e.g. by running js-ipfs on the page)
Look into CRDTs for decentralized, conflict-free data types
While you can do it all by hand and tailor your solution to a specific problem, reusable primitives for the last two are provided by existing projects built on top of IPFS, like OrbitDB.
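A minimal sketch of the first idea, the DAG-based index: nodes are small documents stored on IPFS, and the app downloads only the nodes it actually visits. fetch_node() is a hypothetical stand-in for whatever IPFS client you use (a gateway, js-ipfs, etc.):

```python
# Sketch of a DAG-based index: nodes are small JSON documents on IPFS;
# the app fetches only the nodes it needs while searching.

def fetch_node(cid):
    """Hypothetical stand-in for an IPFS DAG/block fetch by CID."""
    raise NotImplementedError("wire this to your IPFS client of choice")

def search(root_cid, term):
    """Walk the index from the root, descending only into matching branches."""
    node = fetch_node(root_cid)
    if node.get("leaf"):
        # Leaf nodes hold the actual entries.
        return [e for e in node["entries"] if term in e["keywords"]]
    results = []
    for child in node["children"]:
        # Each child advertises the key range it covers, so whole
        # subtrees can be skipped without ever downloading them.
        if child["min"] <= term <= child["max"]:
            results.extend(search(child["cid"], term))
    return results
```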
As far as I understand, with both web feeds, RSS and Atom, the client requests content from the server, and it does so at periodic intervals. It doesn't matter whether there is new content or not; the client checks for updates regardless.
Wouldn't it be more efficient the other way round? Let the server announce new updates. In this scenario, it would have to keep track of the clients and of which update each one has received. It would also have to send a message to each of them. But still, it looks more efficient if client and server do not communicate when there is no news.
Is there a reason why web feeds are the way they are?
This model is not inherent to feeds (RSS or Atom), but to HTTP itself, where a client queries a server to get data. At this point, that is the only way in a pure client -> server model to determine whether there is any new or updated data available.
Now, in the context of servers querying other servers, PubSubHubbub solves this with webhooks. Basically, when polling any given resource, a server can also "subscribe" by providing a webhook, which will be called upon a change or update in the feed. This way the subscriber does not have to poll the feed over and over again.
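To make the webhook mechanism concrete, here is a minimal sketch of the subscriber side of PubSubHubbub (now standardized as WebSub), using only the Python standard library; the port and the print statements are illustrative:

```python
# Subscriber side of PubSubHubbub/WebSub. The hub first verifies the
# callback with a GET carrying hub.challenge (which must be echoed back);
# afterwards it POSTs the updated feed content to the same URL.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class CallbackHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Subscription verification: echo hub.challenge back to the hub.
        params = parse_qs(urlparse(self.path).query)
        challenge = params.get("hub.challenge", [""])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(challenge.encode())

    def do_POST(self):
        # Content delivery: the hub pushes the new/updated feed entries.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print("feed update received:", body[:200])
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), CallbackHandler).serve_forever()
```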
I have multiple AJAX requests going out of my browser.
My UI is composed of multiple views, and the AJAX requests are trying to populate those views simultaneously. In some cases I require more than 10 simultaneous requests to be sent from the client and processed concurrently at the server.
But due to browser limitations on the maximum number of concurrent requests to a single domain, and because of HTTP's constraint that "A server MUST send its responses to [pipelined] requests in the same order that the requests were received", I am not getting as much concurrency in request processing as I would like.
From my application's standpoint, I don't need responses to come in the order in which I sent the requests. I am OK if view8 gets populated before view1, for example.
Async processing using Servlet 3.0 constructs seems to address only one side of the problem (the server side) and hence cannot be fully exploited to maximize application concurrency.
My question is:
Am I missing some proper constructs ("proper" in contrast to workarounds like "host your images from a different subdomain") that could yield more concurrency?
This seems like something many web UIs would need! If not, then I am designing my UI the wrong way. In either case, I would appreciate your input.
Edit 1: To my advantage, I don't have to support a huge number of concurrent clients. The maximum number of concurrent clients accessing the app would be < 100. Given that, I am basically trying to enhance the experience of these clients while I have processing power aplenty on my server side.
Edit 2: Our application/API is not for "public" consumption. For example, it is like my company's webmail app: it is hosted on the Internet, but it is not meant for everyone's consumption, only for the relevant few.
The reason I am giving that info is to differentiate my app from SO/Twitter, which seem to differentiate their (REST) API users from their normal website users. In our case, we think we should not differentiate that way and want to provide a single set of REST endpoints for both.
The reason behind the limitation in the spec (RFC 2616) seems to be: "These guidelines are intended to improve HTTP response and avoid congestion." However, intranet web apps have more luxuries and should not have to be so constrained!?
The server is exposing a REST API, and hence the UI makes specific GETs for the various resource categories (e.g. blogs, videos, news, articles). Since each resource category has its exclusive view, it all fits in nicely. It feels wrong to collate requests to get blogs and videos together in one request. Isn't it?
Well, IMHO being pragmatic is more important. Sure, it makes sense for a service to expose a RESTful API, but it's not always necessary to expose the entire API to the browser. Your API can be separate from your server-side web app. You can always make those multiple API requests on the server side, collate the results, and send them back to the client. For example, look at the SO home page. StackOverflow does expose a RESTful API, but when loading the home page the browser doesn't send multiple requests just to populate the tags, the thread listing, etc.
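The question is framed around Servlet 3.0, but as a language-neutral sketch the collating idea looks roughly like this in Python; the /dashboard route, the API base URL, and the category list are all hypothetical:

```python
# One browser-facing endpoint fans out to the granular REST API on the
# server side and returns a single combined payload, so the browser
# pays for only one round trip.
import concurrent.futures
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

API = "http://localhost:8080/api"  # the granular REST API (assumed)
CATEGORIES = ["blogs", "videos", "news", "articles"]

def fetch(category):
    return category, requests.get(f"{API}/{category}").json()

class DashboardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fan out concurrently; collate the results into one response.
        with concurrent.futures.ThreadPoolExecutor() as pool:
            combined = dict(pool.map(fetch, CATEGORIES))
        body = json.dumps(combined).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 9000), DashboardHandler).serve_forever()
```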
Thanks Sanjay for the suggestion. But we wanted to have a single API for both REST clients and browser clients. Interestingly, the root URI "stackoverflow.com" is not mentioned in SO's REST API, but the browser client uses it. I suppose if they had exposed the root URI, their response would be difficult to process (as it would be a mixture of data). Their REST API is granular (as it is in my application), but their JavaScript code uses some other doors (APIs) to decrease the number of round trips to the server! Somehow that doesn't feel right (I am a novice in this field, though). Feel free to correct me.
SO doesn't use any "other doors". It's just that they simply don't send 10 concurrent requests to populate something on the page. They make XHR requests when you vote, mark a thread as a favorite, comment, etc. For loading the page itself, there are no multiple requests. If you want to hit your RESTful API directly from the browser, you'll have to honor the limitations. Either that, or go the desktop route, which allows you virtually unlimited connections to your server, but I guess you don't want to go that way...
It is quite easy to update the interface by sending a jQuery AJAX request and updating the page with the new content. But I need something more specific.
I want to send the response to clients without their having requested it, and update their content whenever something new appears on the server, with no need to send an AJAX request every time. When the server has new data, it sends a response to every client.
Is there any way to do this using HTTP or some specific functionality inside the browser?
Websockets, Comet, HTTP long polling.
This is called server push (you can also find it under the name Comet). Search using these keywords and you will find a bunch of examples, tools, and so on. No special protocol is required for it.
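As a language-neutral illustration of the long-polling variant (in the browser this loop would be an XHR/fetch that re-issues itself after each response), here is a sketch of the client side; the URL is hypothetical:

```python
# Client side of long polling: issue a request, let the server hold it
# open until there is news, handle the response, then reconnect at once.
import json
import urllib.error
import urllib.request

def long_poll(url="http://localhost:8888/update"):
    while True:
        try:
            with urllib.request.urlopen(url, timeout=60) as resp:
                update = json.loads(resp.read())
                print("got update:", update)
        except (TimeoutError, urllib.error.URLError):
            # No news within the timeout; just reconnect and keep waiting.
            pass

if __name__ == "__main__":
    long_poll()
```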
Aaah! You are trying to break the principles of the web :) You see, if the web were pure MVC (model-view-controller), the 'server' could actually send messages to the client(s) and ask them to update. The issue is that the server could be load-balanced and the same request could be sent to different servers. Now, if you were to send a message back to the client, you'd have to know who is connected to which server. Let's say the site is quite popular and about 100,000 people connect to it every day. You'd actually have to store the IP of each of them to know where on the Internet they are located, to be able to "push" them a message.
Caveats:
What if they are no longer browsing your website? Currently there is no way to log out automatically when you close your browser; the server needs to check after a fixed timeout whether you have logged out (or you send a new nonce with every response to spare the server that check).
What about a system restart/crash, etc.? You'd lose all the IPs you were keeping track of, and you'd be back to square one: you have people connected to you, but until you receive new requests, you can't really "send" them data when they may be expecting it under your model.
Let's take the example of Facebook's news feed, or the "Most recent" link near the top right: sometimes, while you are browsing your wall, you see the number next to "Most recent" go up, or a new feed item come to the top of your wall. It's the client sending periodic requests to the server to find out what was updated, rather than the other way round.
You see, it keeps things simple and RESTful. You may feel it's inefficient for the client to "poll" the server to pull the data, and you'd prefer push, but the design of the server gets simplified :)
I suggest AJAX polling is the best way to go: you are distributing computation to the clients and keeping things simple (KISS principle :)
Of course you can get around it; the question is, is it worth it?
Hope this helps :)
RFC 6202 might be a good read.
Scenario:
localhost receives the current HttpRequest with 3 hidden inputs and a posted file. I must then forward this form data to an external image host and get the response.
See System.Net.WebClient and related classes. You can use them to create a request to the remote server and handle the response. Also get Fiddler to help you replicate what the browser sends.
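The answer above is .NET-specific; as a language-neutral sketch of the same forwarding idea, here it is with Python's requests library. The field names and target URL are hypothetical; copy the real ones from what Fiddler shows the browser sending:

```python
# Forward the received form fields and uploaded file to the external
# image host, then hand its response back to our own caller.
import requests

def forward_upload(hidden_fields, file_name, file_bytes):
    resp = requests.post(
        "https://images.example.com/upload",       # external host (assumed)
        data=hidden_fields,                        # the 3 hidden inputs
        files={"image": (file_name, file_bytes)},  # the posted file
    )
    return resp.text

# Example (hypothetical field names):
# forward_upload({"key": "...", "album": "...", "type": "file"},
#                "photo.jpg", open("photo.jpg", "rb").read())
```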
I hate doing this. It wastes my server's bandwidth and ties up IIS threads, as well as using my server's CPU. It sucks, and it's worth avoiding at all costs. Many services (one that comes to mind is fliqz) provide a mechanism whereby files are uploaded directly from the client to their server (bypassing yours), after which they make a request to your server, passing it various info on the query string.