What is a Webhook and why should I care?

The best I could find was this wiki entry, and I thought "surely there must be more to it than this".
Am I missing something?

From the doc:
What is WebHook?
The concept of a WebHook is simple. A WebHook is an HTTP callback: an
HTTP POST that occurs when something happens; a simple event-notification via HTTP POST.
A web application implementing WebHooks will POST a message to a URL
when certain things happen. When a web application enables users to
register their own URLs, the users can then extend, customize, and
integrate that application with their own custom extensions or even
with other applications around the web. For the user, WebHooks are a
way to receive valuable information when it happens, rather than
continually polling for that data and receiving nothing valuable most
of the time. WebHooks have enormous potential and are limited only by
your imagination! (No, it can't wash the dishes. Yet.)
Why should I care?
As integrated as we perceive the web, most web applications today
operate in silos. With the rise of APIs we've seen mashups and some
degree of integration between applications. However, we have not seen
the vision of the programmable web: a web where you as the user can
"pipe" data between apps much like the Unix command line. Some say RSS
is the answer. They are wrong. The heart is in the right place, but
the implementation is wrong. RSS is still useful, but it is not going
to bring us the true programmable web.
We just need a simple way to get data out in real-time to let the user easily do whatever they want with it. That means no polling, no content constraints, and no XML
parsing. That means no RSS. Using HTTP is simpler and easier to use.
PHP is a very popular and accessible programming environment, so it's
likely to be used often for writing hooklets... getting data from a
web POST in PHP is as simple as $_POST['something']. And making the
request to the user script is as simple as making an HTTP request,
something already built-in to most programming environments. In fact,
web hooks are easier to implement than an API.
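To make the "it's just an HTTP POST" point concrete, here is a minimal sketch of both sides of a webhook in TypeScript/Node (Express for the receiver, the built-in fetch of Node 18+ for the sender); the /hooks/order-shipped path and the payload fields are made up for illustration, they are not part of any particular product.

```typescript
import express from "express";

// Consumer side: a "hooklet" that receives the provider's POST.
const app = express();
app.use(express.json()); // parse JSON bodies (the analogue of PHP's $_POST for form data)

app.post("/hooks/order-shipped", (req, res) => {
  // Whatever the provider sends arrives here the moment the event happens.
  console.log("order shipped:", req.body);
  res.sendStatus(200); // acknowledge quickly; do heavy work asynchronously
});

app.listen(3000);

// Provider side: firing the webhook is just an HTTP POST to the registered URL.
async function fireWebhook(subscriberUrl: string, event: unknown): Promise<void> {
  await fetch(subscriberUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}

// Example: notify a subscriber that something happened.
fireWebhook("http://localhost:3000/hooks/order-shipped", {
  orderId: 42,
  shippedAt: new Date().toISOString(),
}).catch(console.error);
```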

Related

HTTP POST from GOOGLE ASSISTANT to PRIVATE SERVER and convert response in voice

I want to use Google Assistant from my phone to send HTTP POST commands to my server. I have a simple webnms app running on it; the server supports a REST API, and now I want to use Google Assistant to send GET or POST commands to that server and return the output.
Is this possible? I am not a full-time developer.
Yes, as @Prisoner says, it is possible. It is not what you asked, but have you seen these ways that Google provides to get skills published without requiring a lot of developer savvy?
https://developers.google.com/actions/content-actions/
https://developers.google.com/actions/templates/first-app
I don't speak for them, but IMO Google's target audience for Action building, apart from the above, is developers who have at least some familiarity with the JavaScript language and its runtime, Node.
There is also this - which I haven't tried by the way.
https://www.techadvisor.co.uk/how-to/digital-home/easy-actions-google-assistant-3665372/
In case it is not obvious, Google Actions are essentially websites that interact with Google's Assistant running on a Home device or a smartphone, say. Think of the Assistant as a browser initiating requests and your Action as serving them. If you can (build and?) deploy a server that handles POSTs over HTTPS on a publicly addressable URL, and if you can understand the JSON payload that the Assistant sends and respond with appropriate JSON to carry out your application's logic, then you are good to go.
Where you don't have a public IP address - e.g. in testing - you can use a tool like ngrok ( https://ngrok.com/ ) to reverse proxy requests emanating from the Assistant to your server.
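As a rough sketch of the "server that handles POSTs over HTTPS" part, assuming Node/Express: the JSON shapes below are placeholders, not the real Assistant payload, and the ngrok command in the comment is only for the local-testing setup described above.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// The Assistant (or any webhook caller) POSTs JSON here over HTTPS.
app.post("/assistant", (req, res) => {
  // Inspect whatever the caller sent; the real Assistant payload is richer than this.
  console.log("incoming request:", JSON.stringify(req.body));

  // Respond with JSON that drives your application.
  res.json({ reply: "Command received by the private server." });
});

app.listen(8080, () => {
  // For local testing without a public IP, expose this port with:
  //   ngrok http 8080
  // and register the HTTPS URL ngrok prints as your fulfillment endpoint.
  console.log("listening on http://localhost:8080");
});
```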
I have slides for a presentation I did targeting fledgling developers who had never built an Action here
https://docs.google.com/presentation/d/1lGxmoMDZLFSievf5phoQVmlp85ofWZ2LDjNnH6wx7UY/edit?usp=sharing
and the code that goes with it here
https://github.com/unclewill/parrot
On the upside the code is about as simple as it gets. On the downside it does almost nothing. In particular, it doesn't try to understand language. As @Prisoner says, you'll likely need a tool like Dialogflow for that.
Yes, it is possible.
Your server will need to implement the Actions on Google API. This is a REST API which will accept JSON containing what the user is intending to do and specific information about what they have said. Your server will need to send back JSON indicating the reply, along with additional information about how to continue the conversation.
You will likely also want to use a tool such as Dialogflow to handle building the conversational script and converting a user's phrases into something that makes sense to you. You'll also need to use the Actions on Google console to manage your Action and provide additional details about how users contact your Action. All of this is explained in the Actions on Google documentation.
Simple Actions are fairly easy to develop, and can certainly be done by a developer as a hobby. Good Actions, however, take a lot more thought and planning. Google offers you the tools - it is up to you to take best advantage of them.
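To illustrate the JSON-in/JSON-out exchange described above, here is a sketch of a fulfillment handler in TypeScript/Express that reads the intent name from a Dialogflow-style webhook request and returns a spoken reply. The field names are from the Dialogflow v2 webhook format as I remember it, and the "get_device_status" intent is invented; check the current Actions on Google / Dialogflow documentation for the exact payload.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Dialogflow-style fulfillment: JSON in, JSON out.
app.post("/fulfillment", (req, res) => {
  // In Dialogflow v2 webhook requests, the matched intent lives under queryResult.
  const intent = req.body?.queryResult?.intent?.displayName ?? "unknown";

  if (intent === "get_device_status") {
    // Here you would call your own REST API (e.g. the webnms server) and
    // turn its response into a sentence for the Assistant to speak.
    res.json({ fulfillmentText: "Your server reports that all devices are up." });
  } else {
    res.json({ fulfillmentText: `Sorry, I don't know how to handle ${intent} yet.` });
  }
});

app.listen(8080);
```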
I've found the solution.
In the "Action" console https://console.actions.google.com/project/sandbox-csuite/scenes/Start
Go to menu "Webhook", click "Change fulfillment method", and then select "HTTPS endpoint"

Why do we still use HTTP instead of WebSockets for building Web Applications?

Recently I dived into the topic of WebSockets and built a small application that utilizes them.
Now I wonder why HTTP-based APIs are still being used, or rather, why they are still being proposed.
As far as I can see, there is nothing possible via HTTP that I couldn't also do with WS, while the other way round I gain a lot of improvements.
What would be a real-world example of an application that benefits more from an HTTP-powered backend than from a WS-based one?
@Julian Reschke made good points. The web is document-based; if you want your application to play in the WWW, it has to comply with the game rules.
Still, you can create WS-based SPA applications that comply with those rules.
Using the HTML5 history API, you can change the URL shown by the browser without causing navigation. That allows you to have a different URL in your address bar depending on the state of your app, enabling bookmarking and page history. The plugin "ui-router" for AngularJS plays very well here, changing the URL if you change the state programmatically, and vice versa.
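A minimal sketch of that history API technique (browser-side TypeScript; the state names, paths, and renderView helper are illustrative, not part of any framework):

```typescript
// Change the URL shown by the browser without triggering navigation,
// so each application state in a WS-based SPA stays bookmarkable.
function goToState(view: string): void {
  renderView(view);
  history.pushState({ view }, "", `/${view}`);
}

// Handle back/forward buttons: the browser fires popstate instead of reloading.
window.addEventListener("popstate", (event: PopStateEvent) => {
  const view = (event.state as { view?: string } | null)?.view ?? "home";
  renderView(view);
});

// Placeholder for whatever your SPA framework does to draw a view.
function renderView(view: string): void {
  document.title = `My app - ${view}`;
}

goToState("orders"); // address bar now shows /orders, no page load happened
```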
You can make your SPA crawlable.
But still you want to use HTTP for some other things, like getting resources or views and cache them using HTTP cache mechanisms. For example, if you have a big application you want some big views to be downloaded on demand, rather than pack everything in a big main view.
It would be a pain to implement your own caching mechanism for HTML to get views and cache them in the local storage for example. Also, by using traditional HTTP requests, those views can be cached in CDNs and other proxy caches.
Websockets are great to maintain "connected" semantics, send data with little latency and get pushed data from the server at any time. But traditional HTTP request is still better for operations that can benefit from distribution mechanisms, like caching, CDN and load balancing.
About REST APIs vs WebSocket APIs (I think your question was actually about this), it is more a matter of convenience than preference. If your API has a high call rate per connection, a WebSocket probably makes more sense. If your API gets a low call rate, there is no point in using WebSocket. Remember that a WebSocket connection, although lightweight, means that something is being held in the server (i.e. connection state), and that may be a waste of resources if the request rate does not justify it.
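A small browser-side sketch of that split, with made-up endpoints and element ids (/views/dashboard.html, wss://example.com/live, #main): views come over plain HTTP so browser caches, proxies, and CDNs can serve them, while a single WebSocket carries the low-latency pushes.

```typescript
// 1) Fetch a large view over HTTP so browser/CDN caches can serve it next time.
async function loadDashboardView(): Promise<string> {
  const res = await fetch("/views/dashboard.html", { cache: "default" });
  return res.text();
}

// 2) Keep one WebSocket for the "connected" semantics: the server pushes at any time.
const live = new WebSocket("wss://example.com/live");
live.onmessage = (event: MessageEvent) => {
  const update = JSON.parse(event.data);
  console.log("pushed update:", update);
};

loadDashboardView().then((html) => {
  document.getElementById("main")!.innerHTML = html;
});
```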
Bookmarking? Page history? Caching? Visibility to search engines?
HTTP and WebSockets are two Web tools originated to accomplish different tasks.
With HTTP you typically implement the request/response paradigm.
With WebSockets you typically implement an asynchronous real-time messaging paradigm.
There are several applications where you need both the paradigms.
You can also try to use WebSockets for request/response, and use HTTP for the asynchronous real-time messaging paradigm. While the former makes little sense, the latter is a widespread technique, necessary in all cases where WebSockets do not work (due to network intermediaries, lack of client support, etc.). If you are interested in this topic, check out this other answer of mine, which tries to clarify the terminology related to these techniques: Is Comet obsolete now with Server-Sent Events and WebSocket?
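Server-Sent Events are one of those HTTP-based push techniques for when WebSockets are blocked. A minimal sketch, assuming Express on the server and an invented /events path; the browser side is shown as a comment since it runs in a different environment:

```typescript
import express from "express";

const app = express();

// Server: an ordinary HTTP response that stays open and streams events.
app.get("/events", (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");

  const timer = setInterval(() => {
    res.write(`data: ${JSON.stringify({ at: Date.now() })}\n\n`);
  }, 1000);

  req.on("close", () => clearInterval(timer));
});

app.listen(3000);

// Browser side: EventSource reconnects automatically and works over plain HTTP.
// const source = new EventSource("/events");
// source.onmessage = (e) => console.log("pushed:", e.data);
```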

Will Google block my access if I use their features without token?

I'm using this link https://www.google.com/reader/api/0/stream/contents/feed/FEEDHERE?output=json&n=20
to fetch feeds using Google's algorithm. As you can see, I'm not adding any other parameters, just fetching the returned data in JSON format. My app will hopefully be heavily used, and if I send a lot of requests to this link, will Google block my access or something?
Is there anything I can include, like a userip parameter or a URL for my app (so that if they have a problem they can just contact me), or something else?
The most basic answer to your question is that Google will change its Terms of Service whenever it likes, and you've got no say in the matter. So if it's allowed today, it might not be allowed tomorrow, at Google's whim.
On this issue, though, you seem fairly safe. From the Terms of Service (this is the general document, since Reader doesn't seem to have a specific one):
Don’t misuse our Services. For example, don’t interfere with our Services or try to access them using a method other than the interface and the instructions that we provide.
Google provides RSS and Atom. They provide these feeds, so I assume they expect that they'll be used. They don't say that it's a misuse to point someone else at those feeds, so it looks OK for now, but they could add such a clause at any time.
All online services are subject to the terms and conditions of the providers of those services. So, as others have said, they may be OK with your use today, but they can change their mind at any time down the line. I doubt including a URL or email or contact info will help anything, because when these services change, they don't notify every user of the service; they just announce the change publicly. Usually they give several months' notice in order to give users a chance to adapt their applications, but this is not standardized or enforced, so there is no guarantee. One example would be the fairly recent discontinuation of the Google Finance API (for which no replacement has been announced).
The safest approach would be to design your app such that the feature that uses Google's functionality is decoupled as much as possible from the rest of your app, so that, when or if the availability of the service changes (i.e. it's no longer available at all), you can adapt your app to use some other source for the feeds with minimal impact to the rest of the app. Design for change and plan for the worst.
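One way to sketch that decoupling in TypeScript: hide the Google-backed call behind a small interface so the rest of the app never sees the URL, and a future provider swap only touches one class. The interface, class names, and the response field names (items, alternate[0].href) are assumptions for illustration, not a documented contract.

```typescript
// The rest of the app depends only on this interface, never on Google's URL.
interface FeedItem {
  title: string;
  link: string;
}

interface FeedSource {
  fetchItems(feedUrl: string, count: number): Promise<FeedItem[]>;
}

// Today's implementation: the Google endpoint from the question.
class GoogleFeedSource implements FeedSource {
  async fetchItems(feedUrl: string, count: number): Promise<FeedItem[]> {
    const url =
      "https://www.google.com/reader/api/0/stream/contents/feed/" +
      encodeURIComponent(feedUrl) +
      `?output=json&n=${count}`;
    const res = await fetch(url);
    if (!res.ok) throw new Error(`feed service returned ${res.status}`);
    const data = await res.json();
    // Map the provider-specific JSON to the neutral shape the app uses.
    return (data.items ?? []).map((i: any) => ({
      title: i.title,
      link: i.alternate?.[0]?.href,
    }));
  }
}

// If Google pulls the service, only this wiring changes, not the callers.
const source: FeedSource = new GoogleFeedSource();
```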

Architecture For A Real-Time Data Feed And Website

I have been given access to a real time data feed which provides location information, and I would like to build a website around this, but I am a little unsure on what architecture to use to achieve my needs.
Unfortunately the feed I have access to will only allow a single connection per IP address, therefore building a website that talks directly to the feed is out - as each user would generate a new request, which would be rejected. It would also be desirable to perform some pre-processing on the data, so I guess I will need some kind of back end which retrieves the data, processes it, then makes it available to a website.
From a front-end connection perspective, web services sound like they may work, but would this also create multiple connections to the feed for each user? I would also like the back-end connection to be persistent, so that data is retrieved and processed even when the site is not being visited; I believe IIS will recycle web services and websites when they are idle?
I would like to keep the design fairly flexible - in future I will be adding some mobile clients, so the API needs to support remote connections.
The simple solution would have been to log all the processed data to a database, which could then be picked up by the website, but this loses the real-time aspect of the data. Ideally I would be looking to push the data to the website every time the data changes or new data is received.
What is the best way of achieving this, and what technologies are there out there that may assist here? Comet architecture sounds close to what I need, but that would require building a back end that can handle multiple web based queries at once, which seems like quite a task.
Ideally I would be looking for a C# / ASP.NET based solution with Javascript client side, although I guess this question is more based on architecture and concepts than technological implementations of these.
Thanks in advance for all advice!
Realtime Data Consumer
The simplest solution would seem to be having one component that is dedicated to reading the realtime feed. It could then publish the received data on to a queue (or multiple queues) for consumption by other components within your architecture.
This component (A) would be a standalone process, maybe a service.
Queue consumers
The queue(s) can be read by:
a component (B) dedicated to persisting data for future retrieval or querying. If the amount of data is large you could add more components that read from the persistence queue.
a component (C) that publishes the data directly to any connected subscribers. It could also do some processing, but if you are looking at doing large amounts of processing you may need multiple components that perform this task.
Realtime web technology components (D)
If you are using a .NET stack then it seems like SignalR is getting the most traction. You could also look at XSockets (there are more options in my realtime web tech guide; just search for '.NET').
You'll want to use SignalR to manage subscriptions and then to publish messages to registered clients (PubSub - this SO post seems relevant; maybe you can ask for a bit more info).
You could also look at offloading the PubSub component to a hosted service such as Pusher, who I work for. This will handle managing subscriptions and component C would just need to publish data to an appropriate channel. There are other options all listed in the realtime web tech guide.
All these components come with a JavaScript library.
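On the browser side, the component D client is typically only a few lines. A hedged sketch using the @microsoft/signalr JavaScript package; the hub path and event name are made up and would be whatever your ASP.NET backend maps.

```typescript
import * as signalR from "@microsoft/signalr";

// Connect to the realtime component (D) exposed by the backend.
const connection = new signalR.HubConnectionBuilder()
  .withUrl("https://example.com/feedHub") // hub route is whatever component D maps
  .withAutomaticReconnect()
  .build();

// Subscribe: component C pushes processed feed data through the hub.
connection.on("locationUpdate", (update: { id: string; lat: number; lng: number }) => {
  console.log("new position", update);
});

connection.start().catch((err) => console.error("connection failed", err));
```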
Summary
Components:
A - .NET service - that publishes info to queue(s)
Queues - MSMQ, NServiceBus etc.
B - Could also be a simple .NET service that reads a queue.
C - this really depends on D since some realtime web technologies will be able to directly integrate. But it could also just be a simple .NET service that reads a queue.
D - Realtime web technology that offers a simple way of routing information to subscribers (PubSub).
If you provide any more info I'll update my answer.
A good solution to this would be something like http://rubyeventmachine.com/ or http://nodejs.org/. It's not ASP.NET, but it can easily solve the issue of distributing real-time data to other users. Since user connections, subscriptions and broadcasting to channels are built in to each, that will make coding the rest super simple. Your clients would just connect over standard TCP.
If you needed clients to poll for updates then you would need a queue system to store info for the next request. That could be a simple array, or a more complicated queue system, depending on your requirements and number of users.
There may be solutions for .NET that I am not aware of that do the same thing, but those are the two I know of.
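If you go the Node route suggested above, the broadcasting part is small. Here's a sketch using the ws package; the port, message shape, and the fake feed timer are placeholders for the single upstream connection being fanned out to every connected client.

```typescript
import { WebSocketServer, WebSocket } from "ws";

// One process holds the single allowed connection to the upstream feed,
// pre-processes each message, and fans it out to all connected browsers.
const wss = new WebSocketServer({ port: 8081 });

function broadcast(update: unknown): void {
  const payload = JSON.stringify(update);
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(payload);
    }
  }
}

// Stand-in for the real feed: push a fake location every second.
setInterval(() => {
  broadcast({ vehicleId: "bus-12", lat: 51.5 + Math.random() / 100, lng: -0.12 });
}, 1000);
```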

Reliable HTTP/HTTPS Messaging/Notification Product

I am looking for a product to solve a particular task and I'm having a hard time sorting out which product or type of product can accomplish the task. I am looking for something to handle POSTing notifications via HTTPS.
One of my use cases is a Mechanical Turk type scenario. A client will request that a task be started via an API call. A human will get this request, do the task, and tell the system the task is complete, and the system will then send an HTTPS POST to a subscriber. So it's a long-running async request.
I am looking for something that will take care of making this POST for me. I would like something that is reliable and durable. Of course I can write the POST myself, but all of the other niceties that come with a queuing application would be nice to have (and I don't wish to implement all that myself).
I have been looking at a number of queuing, MOM, and ESB products. From what I can tell, the queuing products don't seem to push notifications over HTTPS, and the MOM and ESB products are a little too heavy-handed, I think. For example, I think BizTalk will do what I need, but that has a lot of overhead. The one solution I did find was Amazon's Simple Notification Service, but that appears to only send from the amazon.com domain, and I want the messages to be sent from my domain.
Can anyone help identify a product that will help? Maybe I'm just overlooking something, not sure what I am looking for, or have to choose a different way of implementing this.
The answer I was looking for was webhooks. Here's a great presentation on what they are and how they can be utilized.
If you want a serverless way to do webhooks, check out IronWorker's webhook support; here's an article on it: http://blog.iron.io/2012/04/one-webhook-to-rule-them-all-one-url.html
nginx has a messaging module that may be just what you need.
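If you do end up hand-rolling the delivery instead of using one of the products above, the "reliable and durable" part mostly comes down to retrying with backoff until the subscriber acknowledges, then parking undeliverable messages somewhere. A minimal TypeScript sketch of that pattern; the URL, attempt count, and delays are arbitrary placeholders.

```typescript
// POST a notification and retry with exponential backoff until the
// subscriber answers with a 2xx, giving up after maxAttempts.
async function deliverWebhook(
  url: string,
  payload: unknown,
  maxAttempts = 5
): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
      });
      if (res.ok) return true; // subscriber acknowledged
    } catch {
      // network error: fall through to retry
    }
    // Exponential backoff: 1s, 2s, 4s, ...
    await new Promise((r) => setTimeout(r, 1000 * 2 ** (attempt - 1)));
  }
  return false; // caller should park the message in a dead-letter store
}

deliverWebhook("https://subscriber.example.com/task-complete", {
  taskId: "abc123",
  status: "done",
}).then((ok) => console.log(ok ? "delivered" : "gave up"));
```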
