I understand that a servlet is an actual Java class that extends the capabilities of a web server, but are there any equivalents where the same key characteristics of a servlet are accomplished in a different language?
Particularly something that has:
the servlet life cycle of init(), service() and destroy() (see the sketch after this list)
limitations on disk access (persistence)
performing page layout function
can be dynamically reloaded
can call other servlets
can be invoked via a URL
similar security
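For reference, here is a minimal sketch of the Java baseline I am describing, so it is clear what I am hoping to find in other languages (the container drives the life cycle):

```java
// Minimal sketch of the Java baseline: the container calls init() once,
// service()/doGet() per request, and destroy() before unloading.
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    @Override
    public void init() throws ServletException {
        // one-time setup: load config, open pooled resources, etc.
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/html");
        resp.getWriter().println("<p>Hello from a servlet</p>");
    }

    @Override
    public void destroy() {
        // release resources before the servlet is unloaded or reloaded
    }
}
```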
I have found servlets used with AJAX, Spring and the like, but I mean something that is not in the Java family at all. To be more specific, I found the following, but they don't seem to be used very much, both run on top of Java, and Armed Bear Common Lisp still seems to employ a Java virtual machine.
node-servlet
armed bear common lisp
There are quite a number of servlet alternatives, since a servlet is just a Java-based implementation of the HTTP request and response mechanism.
Almost every language has a mechanism for dealing with HTTP methods; they are just not called 'servlets'. For example, Node.js has great tools for this. Have a look at Express.js, which is arguably the most used Node.js framework.
Do a Google search like "x rest server", using whatever language you want instead of 'x', and you will most likely get what you asked for. Let's try it for a couple of languages:
ruby rest server: Sinatra, Grape
swift rest server: Kitura, Perfect, Vapor
c# rest server: Web API, ServiceStack
javascript rest server: Node.js, Express.js, hapi.js
python rest server: Django, Flask
In an application, .NET XCC is being used to communicate with a MarkLogic modules database to execute modules, functions, ad hoc queries, etc.
I want to replace these XCC calls with REST calls so that we can run the application on MarkLogic 9, as .NET XCC has been deprecated in MarkLogic 9.
I have tried the built-in REST API in MarkLogic. It only allows executing modules that already exist in the modules database.
Is there any online resource available, or anything else that could help us?
Any help would be appreciated.
Thanks,
ArvindKr
There is /v1/invoke to invoke modules in the modules database attached to the REST app-server you are addressing, but also /v1/eval, which allows running ad hoc queries.
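For example, here is a minimal sketch of calling /v1/eval. Java's built-in HttpClient is used purely for illustration; the same HTTP call can be made from .NET's HttpClient. The host, port, credentials, and basic authentication are assumptions (MarkLogic app servers default to digest authentication), so adjust for your environment:

```java
// Minimal sketch: POST an ad hoc XQuery to /v1/eval.
// Assumes a REST app server on localhost:8000 configured for basic auth;
// adjust host, port, credentials, and auth scheme as needed.
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class EvalSketch {
    public static void main(String[] args) throws Exception {
        String xquery = "xquery version \"1.0-ml\"; xdmp:database-name(xdmp:database())";
        String body = "xquery=" + URLEncoder.encode(xquery, StandardCharsets.UTF_8);
        String auth = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8000/v1/eval"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .header("Authorization", "Basic " + auth)
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The response body is multipart/mixed, one part per result item.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```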
HTH!
If you're going to replace XCC.NET with RESTful calls, try out XQRS, which allows you to build services in XQuery in a manner similar to JAX-RS for Java.
I only consider the following for cases such as yours, where compatibility with legacy code is useful or required and where other options are exhausted. This is not an elegant approach, but it may be useful in special cases.
The XDBC protocol (which is what XCC uses) is supported natively on exactly the same app servers and ports on which the REST API is exposed. You can see this on port 8000 in a default install. The server literally cannot tell a 'REST application' and an 'XCC application' apart except by the URI requested (and in some cases additional headers like cookies). REST and XDBC are both HTTP based, and at the HTTP layer they are similar enough that they can share the same ports and configurations.
XDBC is 'passed through' the REST processing via the XML rewriter. XDBC uses /eval and /invoke while REST uses /v1/eval and /v1/invoke. If you look at the default rewriter.xml for port 8000 you can see how the routing is made. While the XDBC protocol is not formally published, it is not difficult to 'reverse engineer' by looking at the XCC code (public Java source) and the rewriter. For example, it is not difficult to construct the URL and payload data for a basic eval or invoke call. You should be able to replicate existing XCC.NET client behaviour exactly by using the /eval and /invoke endpoints (look for the xdbc attribute set in the rewriter.xml; it causes the request handling to use pure XDBC protocol and behaviour).
Another alternative, if you cannot solve the external variables problem, is to write new 'REST-friendly' APIs that then xdmp:invoke() the legacy APIs, passing in the appropriate namespaces. One option is to put the legacy code in an entirely separate modules DB and then replicate the module URIs exactly with the new code. If you don't need to maintain co-existing versions, you can modify the old code to remove the namespaces from the parameters or assign local variable aliases.
I'm reading the OWIN 1.0 spec at http://owin.org/spec/owin-1.0.0.html and just can't wrap my head around how it works. I've downloaded the Katana source, but that's huge and didn't help any. I'm familiar with the somewhat standard way of having a project/assembly with interfaces only, which allows two projects to be integrated without direct references. But I can't understand how the web server will call into the web app with only Func<> and Action<> definitions.
OWIN boils down to two things:
1) an "environment" dictionary
2) a method that processes requests and sends responses.
For #1, this is just a property bag that gives you access to the request headers, request stream, response headers, response stream, and server data. Think of this as your HttpContext for ASP.NET or HttpListenerContext for System.Net.HttpListener. In fact, in more recent builds of Katana (https://katanaproject.codeplex.com/, an open source implementation from the ASP.NET team), there have been improvements (with more to come) to simplify this down to an easier to use object model, including an OwinRequest, OwinResponse, and IOwinContext.
For #2, this is often called the "AppFunc" and the signature is:
using AppFunc = Func<IDictionary<string, object>, Task>;
This signature is used for "middleware" that sits in a pipeline of request handlers, or it can be the end application, which might generate HTML, be a Web API, etc.
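If it helps to see the shape of that contract outside C#, here is a rough analogue sketched in Java. This is not OWIN itself (OWIN is a .NET specification); the string keys are real OWIN environment keys, but the Java types and names are just stand-ins for illustration:

```java
// Rough Java analogue of: using AppFunc = Func<IDictionary<string, object>, Task>;
// The "environment" is a plain map; the application is a function from that map
// to a future that completes once the response has been written.
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

public class AppFuncSketch {

    @FunctionalInterface
    interface AppFunc {
        CompletableFuture<Void> invoke(Map<String, Object> environment);
    }

    // A trivial "application": read request data from the environment,
    // write headers and a body back through it.
    static final AppFunc app = env -> {
        String path = (String) env.get("owin.RequestPath");

        @SuppressWarnings("unchecked")
        Map<String, List<String>> responseHeaders =
                (Map<String, List<String>>) env.get("owin.ResponseHeaders");
        responseHeaders.put("Content-Type", List.of("text/plain"));

        OutputStream responseBody = (OutputStream) env.get("owin.ResponseBody");
        try {
            responseBody.write(("Hello from " + path).getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            return CompletableFuture.failedFuture(e);
        }
        return CompletableFuture.completedFuture(null);
    };
}
```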
In Katana, there is a class you can inherit from that simplifies this signature to consume the IOwinContext I mentioned previously. Take a look at OwinMiddleware.
You can also read this article which gives an overview of the Katana/OWIN effort: http://www.asp.net/aspnet/overview/owin-and-katana/an-overview-of-project-katana
OWIN just defines how the web server and web application will talk to each other. Your application must implement one side of this contract; the other side, which connects to the web server, is provided by installing a NuGet package specific to the web server. There is one for IIS, one for self-hosting (a stand-alone application), etc.
I would like to pass an ArrayList of objects to a servlet from a Java program.
Can someone please tell me how this can be done?
Look at this link; they describe the process in detail:
http://www2.sys-con.com/ITSG/virtualcd/java/archives/0309/darby/index.html
Please note that if you are going to serialize objects back and forth, the compiled version must be in sync on both the client and the server or you will get errors. I would recommend converting your objects to either XML or JSON and then reading them from that on the server side. That way, if your client and server code get out of sync, it will still work.
For the client I would recommend Apache's HttpClient (or whatever they have renamed it to)
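For example, here is a minimal sketch of the JSON approach. It assumes Gson and Apache HttpClient 4.x are on the classpath; the servlet URL and the element type are made up:

```java
// Client side: serialize the list to JSON and POST it to the servlet.
import java.util.List;

import com.google.gson.Gson;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class SendListClient {
    public static void main(String[] args) throws Exception {
        List<String> items = List.of("one", "two", "three"); // your objects here
        String json = new Gson().toJson(items);

        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpPost post = new HttpPost("http://localhost:8080/myapp/myServlet");
            post.setEntity(new StringEntity(json, ContentType.APPLICATION_JSON));
            client.execute(post).close();
        }
    }
}

// Servlet side (inside doPost), using Gson's TypeToken to get the list back:
//   List<String> items = new Gson().fromJson(request.getReader(),
//           new TypeToken<List<String>>() {}.getType());
```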
Have you considered using a web service framework for this instead of coding a naked servlet? The whole business might be about 10 lines of code using, for example, an Apache CXF JAX-RS service and client. If the objects are complex, you might want to use a full SOAP service.
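For a sense of how small that can be, here is a rough JAX-RS sketch; the resource path and element type are invented, and a JSON provider (for example Jackson) is assumed to be configured:

```java
// Rough JAX-RS sketch: the runtime (e.g. Apache CXF with a JSON provider)
// deserializes the posted JSON array into the list parameter.
import java.util.List;

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;

@Path("/items")
public class ItemResource {

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public void receive(List<String> items) {
        // process the received objects here
    }
}
```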
I used to use soap webservices for transferring chart data to my flex app, but recently switched over to using BlazeDS because of performance, convenient typing, etc.
I'm considering switching over to using JSON (as I do in other parts of the app) for these reasons:
Proliferation of DTOs for communicating with flex.* (With JSON, I just use JsonConfig to exclude properties as desired.)
Difficult to debug (whereas JSON is good ol' plaintext).
Problems with load balancing without sticky sessions.
Anyone else run into these problems with BlazeDS? Is BlazeDS worth the hassle?
* I could use the Externalizable interface instead of distinct DTOs, but it's also a pain.
I wouldn't give up on using remoting. Performance of remoting will be much better than JSON. Remember that ActionScript doesn't have a built-in method to decode JSON, so you'd need to use an AS library, which will be slower than anything built into the player. You'd be better off using XML than JSON.
You should be able to exclude specific properties as desired by marking them as transient. ActionScript has [Transient] metadata and the idea came from Java. The C# library we use for remoting has Transient support. I'm sure BlazeDS does too.
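On the Java side that would look something like the sketch below; whether BlazeDS actually honors the modifier is the assumption made above, and on the Flex side the [Transient] metadata tag plays the same role:

```java
// Hypothetical DTO: if BlazeDS skips transient fields as assumed above,
// debugLabel would be left out of the AMF payload sent to Flex.
public class ChartPoint {
    public double x;
    public double y;
    public transient String debugLabel;
}
```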
Debugging is easy with the right tools. You should get Charles. It provides very nice views of AMF request and response messages (assuming you're using HTTP and not RTMP; I don't know about RTMP debugging).
http://www.charlesproxy.com/
You also seem to be choosing between BlazeDS and anything-not-remoting. You have more options. BlazeDS is just one remoting implementation that Adobe made available. They also have a commercial one. There are also many open-source remoting projects available. We use a wonderful one for C# called Fluorine. Open-source Java options are Red5 and OpenAMF, but I think there are others as well.
http://red5.org/
http://openamf.com/
There's also a distinction between RTMP and HTTP remoting. You can get data into Flex through either of these protocols and each has its advantages and disadvantages. I personally prefer HTTP remoting unless you absolutely need the functionality RTMP provides (push, streaming). HTTP will be easier to debug and should not have problems with a load balancer; it's just HTTP calls where the content happens to be binary.
I've been a longtime ASP.NET developer in the web forms model, and am using a new project as an opportunity to get my feet wet with ASP.NET MVC.
The application will need an API so that a group of other apps can communicate with it. Prior to this, I've always built APIs using just a standard web service.
As a side note, I'm a little hesitant to plunge headfirst into the REST style of creating APIs, for this particular instance at least. This application will likely need a concept of API versioning, and I think that the REST approach, where the API is essentially scattered across all the controllers of the site, is a little cumbersome in that regard. (But I'm not completely opposed to it if there is a good answer to the potential versioning requirement.)
So, what say ye, Stack Overflow denizens?
I'd agree with Kilhoffer. Try using a "Facade" wrapper class that implements an "IFacade" interface. In your Facade class, put the code that consumes your web service. That way your controllers simply make calls to the Facade. The plus side of this is that you can swap in a "DummyFacade" that implements the same IFacade interface but doesn't actually talk to the web service and just returns static content. That lets you do some unit testing without hitting the service. It's basically the same idea as the Repository pattern.
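The shape of that idea, sketched here in Java since the pattern is identical in C#; all names are invented for illustration:

```java
// Invented names; the same pattern applies in C# with an IFacade interface.
interface IFacade {
    String getContent(String key);
}

// Real implementation: wraps the call to the web service.
class ServiceFacade implements IFacade {
    @Override
    public String getContent(String key) {
        // call the web service here and return its response
        return callWebService(key);
    }

    private String callWebService(String key) {
        throw new UnsupportedOperationException("web service call goes here");
    }
}

// Test double: returns canned data so unit tests never hit the service.
class DummyFacade implements IFacade {
    @Override
    public String getContent(String key) {
        return "static test content for " + key;
    }
}

// The controller depends only on IFacade, so either implementation can be swapped in.
class HomeController {
    private final IFacade facade;

    HomeController(IFacade facade) {
        this.facade = facade;
    }

    String index() {
        return facade.getContent("home");
    }
}
```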
I would still recommend a service layer that can serve both client-side and server-side consumers, possibly even returning data in a variety of formats depending on the consuming caller.