Does JADE work below or over HTTP?

I am working with the Java Agent Development Framework (JADE), which is used for creating mobile agents. I was wondering whether the code I write in JADE will work over HTTP or below HTTP. Since I don't know the inner workings and execution model of JADE, I couldn't get the answer directly... Thanks in advance :-)

JADE (or more generally the FIPA standard) introduces the concept of a platform consisting of one or more containers on which agents live. Each container runs in a separate JVM. JADE distinguishes between two types of communication, depending on where the talking agents live:
intra-platform communication, when messages are exchanged between agents living on different containers of the same platform
inter-platform communication, when messages are exchanged between agents living on different platforms
Depending on where the talking agents live, a different protocol will be used.
For intra-platform communication one of the following transport protocols will be used:
RMI (default), going directly over TCP/IP
a proprietary protocol based on TCP sockets (used in J2ME environments by the JADE-LEAP platform)
For inter-platform communication one of the following transport protocols will be used:
IIOP (Sun or ORBacus implementation)
HTTP and HTTPS
JMS
Jabber XMPP
Since the question is specific to the JADE platform, I strongly encourage you to use the JADE mailing list: http://jade.tilab.com/newuser.php
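To give a feel for how this looks in code, here is a minimal sketch using JADE's standard Agent and ACLMessage API (the receiver name "peer@RemotePlatform" is made up for the example). The point is that agent code never names a transport: the platform picks RMI inside a platform or an MTP such as HTTP between platforms, based on where the receiver lives.

```java
import jade.core.AID;
import jade.core.Agent;
import jade.lang.acl.ACLMessage;

// Minimal sketch: the agent code never selects a transport protocol.
// The platform chooses RMI (intra-platform) or an MTP such as HTTP
// (inter-platform) based on the receiver's address.
public class GreetingAgent extends Agent {
    @Override
    protected void setup() {
        ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
        // "peer@RemotePlatform" is a hypothetical agent on another platform.
        msg.addReceiver(new AID("peer@RemotePlatform", AID.ISGUID));
        msg.setContent("hello");
        send(msg); // transport selection happens inside the platform, not here
    }
}
```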

Related

Using RabbitMQ over HTTP

I have to connect an old but critical piece of software to RabbitMQ. The software doesn't support AMQP, but it can make HTTP requests.
Does RabbitMQ support plain HTTP? Or should I use a "proxy" or "app" that actively transforms the HTTP requests to AMQP 1.0 and pushes them to the RabbitMQ server?
https://www.rabbitmq.com/management.html
The management plugin supports a simple HTTP API to send and receive messages. This is primarily intended for diagnostic purposes but can be used for low volume messaging without reliable delivery.
As mentioned, it's designed for very low loads, but it may be usable. If you need higher loads, then by all means cast around for a library that does the job and create a proxy. Most languages will have something. I've personally created a lightweight API using Lumen and https://github.com/bschmitt/laravel-amqp to tie a few disparate services together in the past, and it seems to work very well.
It is possible, but not really recommended, depending on load. You really have three options, two of which are WebSocket-based and one that seems like what you're looking for. I'd suggest starting with the RabbitMQ docs.
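To make the HTTP option concrete, here is a rough sketch in Java that publishes one message through the management plugin's HTTP API. It assumes the plugin is enabled on localhost:15672 with the default guest/guest credentials; the queue name "legacy-in" is invented for the example.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Publish a message via the management plugin's HTTP API (default exchange, vhost "/").
// Host, credentials and queue name are assumptions for this sketch.
public class RabbitHttpPublish {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes());
        String body = "{\"properties\":{},\"routing_key\":\"legacy-in\","
                    + "\"payload\":\"hello\",\"payload_encoding\":\"string\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:15672/api/exchanges/%2f/amq.default/publish"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // expect {"routed":true} if a queue is bound
    }
}
```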

How is the IP Application layer implemented?

I am struggling to describe simply what the IP Application layer actually “is”. Some of my terminology may be “not quite right”, so I am happy to be corrected on my terminology as well. Is the application layer a “thing”? Or is it more “conceptual” than that? Is the Application layer on a particular node solely that collection of protocols (such as FTP, POP3, HTTP) that an application needs to adhere to if it is to successfully communicate with another remote application?
Conversely, if I write a program that does not strictly conform to an application protocol (say FTP) then there is nothing in the local “application layer” that will “stop” my attempt to communicate to the remote end using an invalid command? Whereas, if the remote end does conform to the application protocol, then it will reject or ignore my invalid communication attempt?
Or again, I could define my own protocol (not necessary as I guess all scenarios are already covered), and as long as my protocol was “robust and consistent” I could use it to communicate between two remote applications and everything would work? Or, is the Application layer a “thing” separate from the protocols?
The application layer is everything that's inside the application - that is, not within the IP stack (which covers network and transport layers).
If you want your application to communicate with an FTP server it needs to be able to speak FTP in order to work. If you write your own client-server protocol you can do whatever you want.
The network layers are there to make your life easier and to make applications compatible with each other. You can use or not use them as you see fit.
An application needs to communicate with another application on a remote system. Any communication can be thought of as the exchange of messages and data between two partners. There are rules (protocols) that define, for an application: what commands are available, the sequence in which they can be exchanged between partners, and any limitations on the data exchanged. For example, communication with an HTTP server uses the commands GET, POST and PUT (among others), whereas communication with an SMTP (mail) server uses MAIL, RCPT and DATA, and similarly for other protocols. There are different protocols depending on the nature of the communication, e.g. text message transfer, file transfer or video streaming. Any commands sent by either partner that do not obey the protocol rules are either not delivered to, ignored, or rejected by the remote application.
Nothing prohibits application partners from using their own communication protocol; for example, Skype developed and uses its own protocol. However, there are standard protocols that handle most conceivable communication needs, so developing a new protocol may be unnecessary. If an application uses a "private" protocol, its communication will be limited to remote applications that support the same private protocol.
In addition to protocols, in a practical, real-world system there are programming libraries (programming interfaces) that enable the end-user application to request and participate in this communication. It is the end-user application's responsibility to interact (send and receive data and parameters) with the local programming interface, and the comms system's responsibility to process those requests and deliver the data. In simplistic terms, one data transmission starts at one application and ends at another. However, IP provides other applications, such as DNS, that are used to set up connections, but the operation of these is not visible at the end-user application level.
In summary, the collection of protocols and the programming interfaces available and used by an end-user application is the application layer on a system.
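As a tiny illustration of the "define your own protocol" point, here is a sketch of a made-up one-command protocol over TCP sockets (the PING/PONG commands and port 5555 are invented for the example). Nothing in the network stack enforces these rules; they work only because both ends agree on them.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// A made-up one-command protocol: the client sends "PING", the server answers "PONG"
// and rejects anything else. The rules exist only because both programs agree on them.
public class TinyProtocolServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5555)) {      // port chosen for the example
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String command = in.readLine();
                    if ("PING".equals(command)) {
                        out.println("PONG");          // valid command: reply per the protocol
                    } else {
                        out.println("ERROR unknown"); // invalid command: rejected by the application
                    }
                }
            }
        }
    }
}
```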

Enterprise Integration Patterns and HTTP (SOAP/REST)

Hi, I went through Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf.
http://www.eaipatterns.com/toc.html
I also went through Camel and Mule's compliance with these integration patterns -
http://www.mulesoft.org/documentation/display/current/Understanding+Enterprise+Integration+Patterns+Using+Mule
http://camel.apache.org/enterprise-integration-patterns.html
I see that both Mule and Camel allow applications to be deployed and accessed via web services like SOAP or REST, SOAP being the more RPC-style of the two. They provide extensive integration support using open-source utilities like CXF and Jersey. In fact, Mule also supports RMI endpoints, which gives remote method invocation capability as well, itself a well-accepted form of integration.
I understand that ESBs are built around a message bus with additional support for other protocols; however, ESBs merely comply with EIP, and EIP is not just about ESBs.
My question is: why are SOAP/REST and their transport protocols not considered "Integration Styles", and why is Enterprise Integration so "message oriented"?
I am a novice compared to the great minds who designed these patterns, but I am trying to understand the lopsided, message-centric nature of the integration patterns. I admit this isn't quite the Q&A format of Stack Overflow, but I'd request the mods to keep it alive for a while so that people can share their opinions.
As for SOAP, I would put it under the Integration Style "Remote Procedure Invocation", since that is pretty much what SOAP implements in reality (I won't consider the SOAP-over-JMS hybrids here, which have the potential to mix RPC with messaging).
REST is, interface-wise, very different from SOAP in that it is resource-driven instead of service-driven. I would nonetheless group it under the "RPC" style, since it is just another form of synchronous RPC call.
I would, however, not put too much effort in theory of what EIP integration style a specific message pattern implements.
Look at a specific scenario at hands instead and use the EIP to model your specific integration.
I've seen integrations of file transfers that in reality implemented RPC patterns, and SOAP services that in reality implemented messaging (although I don't really recommend doing this).
A concrete example: consider the use of a dedicated file upload service, which happens to be built using SOAP technology, that uploads a CSV file to a file area on a server, from where it is picked up by some other system. I would call this file-based integration at a high level.
Another example is that Messaging systems sometimes are implemented using a shared database. Still the integration style using them is messaging, not "Shared Database".
Think about how your integration should work on a high level, then apply the various protocols to do the grunt work.
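To ground that advice, here is a rough Camel sketch of a File Transfer style feeding a Messaging style: files dropped in a directory become messages on a queue. The directory, the queue name, and the activemq component (which needs the corresponding Camel dependency on the classpath) are assumptions for the example, not a prescription.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

// Sketch of a File Transfer -> Messaging integration in Camel's Java DSL.
// Directory and queue names are illustrative only.
public class FileToMessagingRoute {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("file:data/inbox?noop=true")        // poll uploaded files
                    .to("activemq:queue:incoming.files"); // hand each file to the message bus
            }
        });
        context.start();
        Thread.sleep(10_000); // let the route run briefly for the demo
        context.stop();
    }
}
```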

networking technologies other than sockets

I wonder whether there are any technologies other than sockets for establishing an internet connection between applications. Are there any others? I have been searching, and so far I haven't found anything else described.
There are many abstractions on top of sockets, if you don't want to deal directly with a socket API. UDP, TCP/IP, various RPC protocols, HTTP (which is on top of TCP/IP), etc. Many programming languages have easy methods of doing, say, an HTTP request and getting the resulting document. You can use that to allow applications to talk to each other over the internet without using a socket API.
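For instance, a few lines with Java's built-in HTTP client fetch a document without ever touching the socket API directly (the URL is just an example); the sockets are created and managed inside the library.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Fetch a document over HTTP without using the socket API directly;
// the underlying sockets are handled by the library.
public class SimpleHttpFetch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/")) // example URL
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```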
What are you trying to accomplish?
If you want to skip sockets you basically have to implement your own means of talking to the network card hardware and telling it to communicate with other devices. A socket is just the abstraction chosen for *nix and Windows machines.

Why do we use HTTP and not remote invocations?

Hey,
first of all, this is a conceptual question and I do not know if Stack Overflow is the appropriate place, so my apologies if I am wrong.
Nowadays the web is not only used for passing raw information. Many complex web applications are in use. These web applications seem so complex that it appears irrational to use the HTTP protocol, which is based on such simple data exchange and is stateless on top of that.
Would it not be more compelling to use remote invocations for these web applications? The big advantage, to my mind, is a unified GUI by using HTML. But there are applications which have no need for a graphical interface, and then it comes to a point where the HTTP protocol is really cumbersome.
Short answer: HTTP is allowed through firewalls where other protocols would be blocked.
A short partial answer: first, for historical reasons. HTTP has been used since the dawn of the web as the protocol for requesting documents, and has since been put to many different purposes. One reason to keep using it is that it is generally served on port 80, which you can be fairly sure won't be blocked by firewalls between your client and the server. The statelessness of the protocol may not always be what you want, but it at least has the advantage of protecting the server side from very trivial overloading problems.
OS independence
firewall passing
the web server is already a well understood and mostly "solved" problem in terms of load balancing, server fall over, etc.
don't have to reinvent the wheel
Other protocols are being used more and more now, including remote invocations and (the one I am particularly familiar with) WCF (which allows binary TCP/IP data transfer).
This allows data to travel faster for applications that require more bandwidth. For example, an n-tier application may use WCF binary transfer between the application and presentation tiers. Public web services also allow multiple protocols, including binary.
For data transfer protocols, firewalls should be configured (i.e. expose a port specifically for your application), not worked around; I would not recommend choosing a protocol just because firewalls do not block it.
The protocol used really depends on who will consume it and what control you have over that consumption. For example, external third parties may need a plain-text version with a commonly agreed data interface. On the other hand, two tiers in a single web application may be able to utilise binary data transfer for performance and security.
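For contrast, here is a minimal sketch of what a remote invocation looks like with Java RMI; the interface, the "quote" binding name and the registry port 1099 are illustrative. Note that the registry and the exported object each use ports that any firewall between client and server would have to allow, which is exactly where HTTP on port 80 has the edge.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// To the caller, a remote invocation looks like a plain method call; the RMI runtime
// handles serialization and the wire protocol. The cost is extra ports (registry plus
// the exported object) that firewalls between client and server must allow.
interface QuoteService extends Remote {
    String dailyQuote() throws RemoteException;
}

public class QuoteServer implements QuoteService {
    public String dailyQuote() {
        return "hello from RMI";
    }

    public static void main(String[] args) throws Exception {
        QuoteService stub =
                (QuoteService) UnicastRemoteObject.exportObject(new QuoteServer(), 0);
        Registry registry = LocateRegistry.createRegistry(1099); // registry port, illustrative
        registry.rebind("quote", stub);                          // binding name, illustrative
        System.out.println("QuoteService bound in the RMI registry");
    }
}
```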
