I am new to servlet programming, and I want to know: is it possible to pass objects between two servlets residing on different application servers, say two Tomcat servers?
What I want to do is:
[browser] --> [app server 1 performs some operation on data] --> [server 2 does some operation on data]
I am sure it is possible, but can anyone tell me how?
Short of server clustering (which you don't want to get into at this point, trust me), the only way to do this is to send a redirect from the first server to the other, encoding the data you want to send onto the URL.
You can't pass the actual object, since the servlets are on different servers, so passing data is the best you'll be able to manage.
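A minimal sketch of that redirect approach; the host name, servlet path, and parameter name here are assumptions for illustration:

```java
// Servlet on server 1: do some work, then redirect to server 2 with the
// result encoded in the URL's query string. Names are illustrative only.
import java.io.IOException;
import java.net.URLEncoder;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FirstServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String data = req.getParameter("data");
        String processed = data == null ? "" : data.toUpperCase(); // "some operation"
        // Encode the result into the redirect URL; server 2 reads it as a parameter.
        resp.sendRedirect("http://server2.example.com/app/SecondServlet?data="
                + URLEncoder.encode(processed, "UTF-8"));
    }
}
```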
If you do fancy playing with Tomcat clustering, it gives you the facility of storing objects in the HTTP session which are replicated across all servers in the cluster. I'd definitely categorise this as "advanced usage", though, and not something to get into if you're new to this stuff.
If they are on two different servers, you might want to 'duplicate' the original HttpServletRequest that was made to the first server/servlet. You can do that by opening a URLConnection to the other server/servlet and copying the data from the first request into its OutputStream.
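A rough sketch of that forwarding idea, assuming server 2's URL and a POST body; this is illustrative, not a complete proxy:

```java
// Copy an incoming request body to a second servlet by POSTing it
// over a URLConnection. The target URL is an assumption.
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ForwardingHelper {
    public static void forward(InputStream requestBody) throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://server2.example.com/app/SecondServlet").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = requestBody.read(buf)) != -1) {
                out.write(buf, 0, n); // copy the original request's data across
            }
        }
        conn.getResponseCode(); // actually send the request and read the status
    }
}
```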
I need to have two orchestrations that take the same input schema message from an HTTP Receive Port.
The orchestrations do different things.
I do not understand how I can call one orchestration or the other.
I have a solution in mind, but I don't think it is right.
I could create two different receive locations: one orchestration -> one receive location.
It looks like the correct solution, but creating a receive location means creating a virtual folder in my HTTP site on IIS that contains BTSHTTPReceive.dll.
So my doubt is: if I have 20 orchestrations with the same input, should I create 20 virtual folders that contain the DLL?
That looks like a horrible solution.
What is the correct way to solve my problem?
Is this a one-way or a two-way receive port/location?
In the case of a one-way receive location, just promote properties and use basic content-based routing (CBR) with publish/subscribe on those properties; for example, each orchestration's activating receive can filter on a promoted property value that distinguishes the message.
In the case of a two-way receive location: which response will you give back to your application?
Think of your orchestration as your web service. You need to take in the request and generate one response. How you deal with that request by forwarding it to N number of other orchestrations/applications is up to you, but publish/subscribe is built for this behavior.
This could not possibly be a more basic HTTP question, but I am very new to web development and I do not even know the right question to ask (as evidenced by the fact that googling has not helped).
What I have: an AWS server with an Elastic Beanstalk environment set up. I have successfully compiled, uploaded, and run a simple "Hello World" program to the environment using Eclipse.
What I want to do: pass the server a number via HTTP request and have the server give me back an HTTP response containing the square of that number. On the back end, I want a simple Java class to do the squaring. (Of course, the goal is to be able to pass more complicated data to the server and have more sophisticated Java code on the back end for processing.)
What I think I need to do: create a Java servlet to listen for and process the request. I think (hope) the documentation is good enough that I can figure out the HttpServlet API, but I can't answer a more basic question: how do you pass the server an HTTP request containing some elementary data, like a number?
Thanks in advance!
You need to either GET or POST (or PUT) your data. GET provides the data in the URL of the request, and it will be displayed in the browser's address bar. POST data is provided in a separate request body.
http://www.w3schools.com/tags/ref_httpmethods.asp
A simple GET would look like this:
http://example.com/server?number=4
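On the server side, a minimal servlet sketch that would answer that GET, assuming the mapping /server and the parameter name "number":

```java
// Reads the "number" parameter and writes its square back in the response body.
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SquareServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        long n = Long.parseLong(req.getParameter("number")); // e.g. 4
        resp.setContentType("text/plain");
        resp.getWriter().println(n * n); // the square goes back as plain text
    }
}
```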
You can make a POST using a browser extension such as PostMan:
https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?hl=en
Or you can do it from the command line using curl:
curl -X POST http://example.com/server -d 'data'
Once the data is more complicated than a few variables, you probably want to use POST rather than GET. Also, you can start to think about what your requests are doing. GETs should only retrieve data from the server. If you modify or create data, then POST (or PUT) requests are the methods to use.
As your server becomes more complex, you probably want to start reading about REST.
http://en.wikipedia.org/wiki/Representational_state_transfer
I am wondering if my current approach makes sense or if there is a better way to do it.
I have multiple situations where I want to create new objects and let the server assign an ID to those objects. Sending a POST request appears to be the most appropriate way to do that.
However, since POST is not idempotent, the request may get lost, and sending it again may create a second object. Lost requests might also be quite common, since the API is often accessed through mobile networks.
As a result I decided to split the whole thing into a two-step process:
First, send a POST request to create a new object, which returns the URI of the new object in the Location header.
Second, perform an idempotent PUT request to the supplied Location to populate the new object with data. If a new object is not populated within 24 hours, the server may delete it through some kind of batch job.
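In code, the flow would look roughly like this; a client-side sketch where the endpoint, host, and JSON body are all assumptions:

```java
// Two-step create: POST reserves an ID, PUT (idempotent, retryable) fills it in.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TwoStepCreate {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Step 1: POST creates an empty object; the server assigns the ID.
        HttpRequest create = HttpRequest.newBuilder(URI.create("http://api.example.com/objects"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<Void> created = client.send(create, HttpResponse.BodyHandlers.discarding());
        String location = created.headers().firstValue("Location").orElseThrow();

        // Step 2: PUT is idempotent, so it can be retried safely on timeouts.
        HttpRequest populate = HttpRequest.newBuilder(URI.create(location))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"name\":\"example\"}"))
                .build();
        client.send(populate, HttpResponse.BodyHandlers.discarding());
    }
}
```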
Does that sound reasonable or is there a better approach?
The only advantage of POST-creation over PUT-creation is that the server generates the IDs.
I don't think that is worth the loss of idempotency (and the resulting need to remove duplicates or empty objects).
Instead, I would use a PUT with a UUID in the URL. With a decent UUID generator, you can be nearly sure that the ID you generate client-side will be unique server-side.
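A minimal client-side sketch of that idea; the endpoint and body are made up:

```java
// PUT-based creation with a client-generated UUID: the request is idempotent,
// so retrying after a timeout recreates the same resource, never a duplicate.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.UUID;

public class PutCreate {
    public static void main(String[] args) throws Exception {
        String id = UUID.randomUUID().toString(); // generated client-side
        HttpRequest put = HttpRequest.newBuilder(
                        URI.create("http://api.example.com/objects/" + id))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"name\":\"example\"}"))
                .build();
        HttpClient.newHttpClient().send(put, HttpResponse.BodyHandlers.discarding());
    }
}
```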
Well, it all depends; to start with, you should talk more about URIs, resources, and representations, and not be concerned about objects.
The POST method is designed for non-idempotent requests, or requests with side effects, but it can be used for idempotent requests.
On POST of form data to /some_collection/:
1. Normalize the natural key of your data (e.g. lowercase the Title field for a blog post).
2. Calculate a suitable hash value (the simplest case is the normalized field value itself).
3. Look up the resource by hash value.
4. If none is found: generate a server identity, create the resource, and respond => 201 Created, Location: /some_collection/<new_id>.
5. If found, but no updates should be carried out due to app logic: respond => 302 Found or 303 See Other (the client will need to GET that resource, which might include fields required for updates, like version numbers).
6. If found, and updates may occur: respond => 307 Temporary Redirect, Location: /some_collection/<id> (like a 302, but the client should reuse the original HTTP method, and might do so automatically).
A suitable hash function might be as simple as some concatenated fields; for large fields or values, a truncated MD5 hash could be used. See [hash function] for more details.
I've assumed you:
- need a different identity value than a hash value
- have data fields used for identity that can't be changed
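A rough servlet sketch of the steps above; the in-memory store, the "title" field, and the ID generator are stand-ins for a real database and sequence:

```java
// POST handler: normalize the natural key, hash it, look up by hash,
// then either create (201) or redirect to the existing resource (303).
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CollectionServlet extends HttpServlet {
    // Hypothetical store: normalized-key hash -> resource ID.
    private final Map<String, String> byHash = new ConcurrentHashMap<>();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // 1. Normalize the natural key (here: lowercase the title field).
        String key = req.getParameter("title").trim().toLowerCase();
        // 2. Calculate a suitable hash value.
        String hash = md5Hex(key);
        // 3. Look up the resource by hash.
        String existingId = byHash.get(hash);
        if (existingId == null) {
            String newId = generateServerId();          // 4. none found: create
            byHash.put(hash, newId);
            resp.setStatus(HttpServletResponse.SC_CREATED);
            resp.setHeader("Location", "/some_collection/" + newId);
        } else {
            resp.setStatus(HttpServletResponse.SC_SEE_OTHER); // 5. duplicate
            resp.setHeader("Location", "/some_collection/" + existingId);
        }
    }

    private static String md5Hex(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    private String generateServerId() {
        return java.util.UUID.randomUUID().toString(); // stand-in for a DB sequence
    }
}
```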
Your method of generating IDs at the server, in the application, in a dedicated request-response, is a very good one! Uniqueness is very important, but clients, like suitors, are going to keep repeating the request until they succeed, or until they get a failure they're willing to accept (unlikely). So you need to get uniqueness from somewhere, and you only have two options: either the client, with a GUID as Aurélien suggests, or the server, as you suggest. I happen to like the server option. Seed columns in relational DBs are a readily available source of uniqueness with zero risk of collisions. Around 2000, I read an article advocating this solution, called something like "Simple Reliable Messaging with HTTP", so this is an established approach to a real problem.
Reading REST stuff, you could be forgiven for thinking a bunch of teenagers had just inherited Elvis's mansion. They're excitedly discussing how to rearrange the furniture, and they're hysterical at the idea that they might need to bring something from home. The use of POST is recommended because it's there, without ever broaching the problems with non-idempotent requests.
In practice, you will likely want to make sure all unsafe requests to your api are idempotent, with the necessary exception of identity generation requests, which as you point out don't matter. Generating identities is cheap and unused ones are easily discarded. As a nod to REST, remember to get your new identity with a POST, so it's not cached and repeated all over the place.
Regarding the sterile debate about what idempotent means, I say it needs to cover everything. Successive requests should generate no additional effects and should receive the same response as the first processed request. To implement this, you will want to store all server responses so they can be replayed, and your IDs will identify actions, not just resources. You'll be kicked out of Elvis's mansion, but you'll have a bombproof API.
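A sketch of that "store responses and replay them" idea, keyed by an action ID supplied by the client; all names here are illustrative:

```java
// Runs each action at most once per action ID; a repeated request gets
// the recorded response back instead of producing additional effects.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class ActionLog {
    public static class StoredResponse {
        public final int status;
        public final String body;
        public StoredResponse(int status, String body) {
            this.status = status;
            this.body = body;
        }
    }

    private final Map<String, StoredResponse> responses = new ConcurrentHashMap<>();

    // computeIfAbsent is atomic per key, so concurrent retries of the
    // same action ID still execute the action exactly once.
    public StoredResponse execute(String actionId, Supplier<StoredResponse> action) {
        return responses.computeIfAbsent(actionId, id -> action.get());
    }
}
```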
But now you have two requests that can be lost? And the POST can still be repeated, creating another resource instance. Don't over-think this. Just have the batch process look for dupes. Possibly keep some "access" count statistics on your resources to see which of the dupe candidates was the result of an abandoned POST.
Another approach: screen incoming POSTs against some log to see whether a request is a repeat. That should be easy to detect: if the body content of a request is the same as that of a request just x time ago, consider it a repeat. You could also check extra attributes like the originating IP, the same authentication, and so on.
No matter what HTTP method you use, it is theoretically impossible to make an idempotent request without generating the unique identifier client-side, whether temporarily (as part of some request-checking system) or as the permanent server ID. An HTTP request being lost will not create a duplicate, though there is a concern that the request could succeed in reaching the server while the response does not make it back to the client.
If the end client can easily delete duplicates and they don't cause inherent data conflicts, it is probably not a big enough deal to develop an ad-hoc duplication-prevention system. Use POST for the request, and send the client back a 201 status in the HTTP header and the server-generated unique ID in the body of the response. If you have data showing that duplications are frequent, or that any duplicate causes significant problems, I would use PUT and create the unique ID client-side. Use the client-created ID as the database ID; there is no advantage to creating an additional unique ID on the server.
I think you could also collapse the creation and update requests into a single request (upsert). To create a new resource, the client POSTs to a "factory" resource, located for example at /factory-url-name, and the server returns the URI of the new resource.
Why don't you use a request ID at your originating point? The originating point should do two things. First, send a GET request for request_id=2 to see whether its request has been applied (e.g. a response saying the person was created as part of request_id=2).
This ensures your originating system knows which request was executed last, since the request ID is stored in the DB.
Second, if your originating point finds that the last request was still 1 and not yet 2, it can retry before moving on to 3, to cover the case where just the GET response was lost but request 2 was in fact created in the DB.
You can introduce a number of retries for your GET request, a time to wait before firing it again, and that kind of system.
How can one develop a cluster-aware servlet, and what are the design criteria for it?
This isn't a problem to be solved at the code level, but rather at the web server level, so the servlet code doesn't need to be aware of being clustered.
The code does not need to be aware of being clustered but the developer needs to be aware that the code may be clustered and the session replicated. Let me explain.
When you mark a webapp as <distributable/> in web.xml, you are telling the container that this web application can be clustered.
If the webapp is deployed on a cluster, each machine in the cluster will run a VM with this webapp inside it. As far as the client is concerned, it sees one webapp, though each request from the client can be serviced by a different VM in the cluster.
So if the webapp stores any state, that state must be made available to all the VM instances in the cluster running the webapp.
How can this be done?
By marking the things that you put into the HttpSession object as Serializable, you are signaling to the container that it should replicate that state to the other VMs (if you have set up session replication). This is accomplished in a couple of ways in WebLogic; every time you call setAttribute() on the session, it triggers a session-replication event.
In WebLogic there are two ways of replicating: in-memory replication and database replication. I would like to hear how this is done in other app servers.
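A minimal sketch of a session attribute that is safe to replicate; the class implements Serializable so the container can ship it to other VMs, and all the names are illustrative:

```java
import java.io.IOException;
import java.io.Serializable;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CartServlet extends HttpServlet {
    // Serializable, so session replication can copy it across the cluster.
    public static class Cart implements Serializable {
        private static final long serialVersionUID = 1L;
        private int itemCount;
        public void addItem() { itemCount++; }
        public int getItemCount() { return itemCount; }
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        Cart cart = (Cart) req.getSession().getAttribute("cart");
        if (cart == null) {
            cart = new Cart();
        }
        cart.addItem();
        // setAttribute() is the signal to the container to replicate the state.
        req.getSession().setAttribute("cart", cart);
        resp.getWriter().println("Items: " + cart.getItemCount());
    }
}
```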
As @BalusC said, this is primarily a server-configuration task, and how to do it depends very much on which server you're using (which you don't mention), but here's how to do it with Tomcat 6, for example.
There is one thing to keep in mind on the code side, though: you have to be careful what objects you put into the HTTP session (using HttpSession.setAttribute()). For session replication to work, these objects have to be serializable so they can be transported across the network to the other servers in the cluster. If they are not serializable, then the server may either drop them or throw an exception.
It's not uncommon for developers to use the HTTP session as a place to put large, complex business objects (to allow them to be accessed from JSPs, for example), and these are very unlikely to be serializable. Other examples are form-binding objects which, while being simple form-data holders, are often not serializable.
We have two client apps (a web app and an agent app) accessing methods on the same service, but with slightly different requirements. My team wants to control behaviour on the service side by passing an ApplicationType parameter to every method - essentially an enum containing the name of the calling client application - which is then used as a key for a database lookup to configure the service with client-specific options.
Something about this makes me uneasy as I don't think the service should really have to be aware of which client is calling it. I'm being told that it's easier to do it this way than pass a load of options dynamically through the method call.
Is there anything wrong with the client application telling the service who they are? Or is there really no difference between passing a config key versus a set of parameterized options?
One immediate problem I can see is that if we ever opened the service to another client run by a third party, we'd have to maintain their configuration settings locally for them. At the moment we own both client apps so it's not so much of a problem.
How would you do it?
In a layered solution, you should always consider your layers as onion-like layers, and dependencies should always go inwards, never outwards.
So your GUI/app layer should depend on the business logic layer, the business logic layer should depend on the data access layer, and so on.
Unless you categorize the clients (web, win, wpf, cli), or generalize it with client profiles (which client applications can configure), I would never pass in the name of the calling application, as this would make the business logic layer aware of and dependent upon the outside layer.
What kind of differences are we talking about that would depend on the type of application? If you elaborate a bit on the differences here, perhaps someone can come up with some helpful advice on other ways to solve this.
But I would definitely look for other ways before going down your described path.
Can't you create two different services, one for each application? The two services would share a lot of code, or call a single internal service with different parameterization depending on which outer service was called.
From a design perspective, this is no different from having users with different profiles. From a security perspective, I hope your applications are doing something to identify themselves, lest users of one application figure out a way to invoke the other application's logic as a hack. (Imagine an HR application being used by the mafia and a bank at the same time; one customer would be interested in hacking the other customer's application on a shared application host.)
In .NET the design doesn't feel this way, because the credentials live on the thread (i.e. when you set the IPrincipal, that info rides on the thread - it is communicated along with each method call, but not as a parameter).
Maybe what you are looking for, in terms of a more elegant design, is an ApplicationIdentity attribute. You'd have to write a custom one; I don't know of one in the framework right now.
This is a hard topic to discuss without a solid example.
You are right to feel that way. Sending in the client type to change behaviour is not correct. It's not a bad idea for logging... but that's about it.
Here is what I would do:
Review each method to see what needs to be different and why.
Create different methods for different usages. The method name should be self-explanatory. If you ever need to break compatibility, you have more control (assuming you're not using a versioning system, which would be overkill for an in-house-only service).
In some cases request parameters (flags/enum values) are more appropriate.
In some cases, knowing the operating environment is more appropriate (especially for data security). The operating environment is almost always sent during a login request - something like "attended"/"secure" (agent client) vs "unattended"/"not secure" (web client). Then you exchange a session key (an HTTP cookie or an application-level session ID). Sessions obviously don't work if you need to be 100% stateless - especially if you want to scale out without session replication; if you have that requirement, send a structure in every request.
Think of requests like functions in your code. You wouldn't add a magic parameter that changes the behaviour of the function. You would create multiple functions that each behave differently, and whoever uses the function decides which one to call.
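A tiny sketch of that idea, with all names made up: expose intent-revealing operations instead of one method switched on a client-type enum.

```java
public interface QuoteService {
    class QuoteRequest { public String productId; }
    class Quote { public String productId; public double price; }

    // Instead of: Quote getQuote(ApplicationType callerType, QuoteRequest request)
    Quote getAttendedQuote(QuoteRequest request);   // agent app: operator present
    Quote getUnattendedQuote(QuoteRequest request); // web app: self-service rules
}
```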
So why is client type so wrong? Client type has no specific meaning on its own; it has many meanings, and they may change over time. It's simply informational, which is why it is a handy thing to log. An operating environment, by contrast, does have a specific meaning.
Here is a scenario to consider: what if a new client type is developed that is slightly different, in a way that would break compatibility with the original request? Now you have two requests: two clients use Request A and one client uses Request B. If you pass a client type into each request, the server is expected to work for every possible client type - much harder to test and maintain!