I am confused about two related points:
Is EJB itself middleware, or is there any middleware used in the deployment of EJB?
The same goes for RMI: is RMI itself middleware, or is middleware used in RMI?
Define middleware: if it is "in the middle", what is it between? I agree with the basic idea of this Wikipedia definition:
Middleware is a computer software that provides services to software
applications beyond those available from the operating system. It can
be described as "software glue".[1] Middleware makes it easier for
software developers to perform communication and input/output, so they
can focus on the specific purpose of their application.
So the key idea is you write software and exploit something more sophisticated than the plain operating system. I would not say that middleware is only doing communication and input/output, as I'll explain later.
Now consider EJBs. There are two things here. The EJB itself is application software that you write as part of your application development, so it is not middleware. But you write it to a specification defined by Java EE and deploy it to an EJB container provided by an application server. The EJB container and application server provide something more sophisticated than the operating system, so the container and the server are middleware.
The EJB container's facilities include communications (e.g. RMI access and JDBC database access) but also things such as security and transactions.
EJBs are a component of Java EE, which is middleware.
RMI is another one, and also another component of Java EE.
You can see that these terms aren't too precise.
I agree with EJP.
Middleware, as the name suggests, is software that provides services to distributed applications; it sits between the underlying system (the server/OS) and the user applications.
EJB is a server-side component architecture and part of Java EE, and it is built on RMI. So both of them are components of middleware.
For me, I liken middleware to the UNIX philosophy per McIlroy: "Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface."
The middleware is all about handling the "text streams".
Each program does one thing and does it well. It works on its own, but it is also written to work together with others. If a program is to work on its own, then in my view it is asynchronous, and you need middleware if it is to work with others.
I think the RPC (RMI) stuff is too tightly coupled and synchronous, so it fails my definition of middleware. I think EJB is trying to do far more than handle "text streams".
There's obviously more to this topic. Try the Middleware article, but it all gets too complicated for me, probably because people are trying to define middleware as the stuff that allows programs from different vendors to talk to each other. Then you get into rivalry and competition, "standards" and ISO stuff.
This question is purely about semantic convention. I came onto a project where the architect named the API layer (a .NET Core API) solution "middleware."
I have always referred to these projects as the API, e.g. MyMagicCompanyAPI.
To me, middleware is usually the part of the code that intercepts HTTP requests and does something before the request info is passed down the pipeline, e.g. .NET Core Middleware or the Angular Interceptor.
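For context, here is a rough sketch of the kind of pipeline middleware I mean (the class name and log messages are just illustrative):

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Illustrative pipeline middleware: it intercepts each HTTP request,
// does something, and then passes the request further down the pipeline.
public class RequestLoggingMiddleware
{
    private readonly RequestDelegate _next;

    public RequestLoggingMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        Console.WriteLine($"Incoming: {context.Request.Method} {context.Request.Path}");

        await _next(context); // hand off to the next component in the pipeline

        Console.WriteLine($"Outgoing status: {context.Response.StatusCode}");
    }
}

// Registered in the pipeline, e.g. in Startup.Configure:
// app.UseMiddleware<RequestLoggingMiddleware>();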
On that note, is it wrong to call an API middleware? If not, is it preferable/more accurate to just call it an API rather than middleware?
Can a .NET Core API be Called “Middleware”?
Short answer: yes, depending on the context in which the term is being used.
.NET Core's and Angular's (et al.) use of the term within their architectures is not the only contextual use of the term middleware.
As you said, semantics. Or better yet, it is a matter of context.
A whole API can be middleware in a distributed system.
In distributed applications
The term is most commonly used for software that enables communication and management of data in distributed applications.
Other examples
The term middleware is used in other contexts as well. Middleware is sometimes used in a similar sense to a software driver, an abstraction layer that hides detail about hardware devices or other software from an application.
Reference: https://en.wikipedia.org/wiki/Middleware
On that note, is it wrong to call an API middleware? If not, is it preferable/more accurate to just call it an API rather than middleware?
That would be a matter of preference/opinion of the maintainer(s) of the system as a whole.
I am trying to adopt Pact. I understand the consumer side of the equation and it looks very nice. But I am confused about the producer side.
The documentation seems to advocate running the provider app and verifying the contracts against a running server.
I would prefer not to do that. First, I need to curate a database with proper information for each pact, which is painful to say the least. Second, starting up the application is going to be a hassle (did I mention it is a monolith?). Finally, there are POSTs which are going to mutate the state of the database and make test runs brittle.
What I want to do is MockMvc-style testing with the pacts. I would like to mock my services and just test the endpoint, which I think is what should be tested in this case.
How can I achieve this with Pact?
Well, if you don't test your contracts against your provider, that loses the whole point of contract testing, since your contracts aren't tested against both sides. The main point is that consumers dictate how the provider should behave; in your case you would like to bypass the provider with MockMvc, and there is no point in doing contract testing only against your consumer and not the provider. Even though your provider is a monolith, it's still better to run it and test with a contract than to run all the microservices for end-to-end testing.
Yes, you can achieve it with Pact; however, I share Cotnic's opinion that it defeats the purpose of having Pact on the provider side. The main purpose of Pact is to verify that your server, as the provider, is working according to the agreement (the pact). Therefore, in my opinion, the proper way to use a pact as a contract is to run it against a fully deployed server, and to use @State to "prepare" the server (database, startup applications, etc.).
Anyway, if you are using Spring, you can probably have a look at this sample for using Pact with MockMvc:
https://github.com/DiUS/pact-jvm/tree/master/pact-jvm-provider-spring
Pact-JVM now supports Spring MockMvc tests to verify a Spring or Spring Boot provider. See https://github.com/DiUS/pact-jvm/tree/master/pact-jvm-provider-spring
We need to expose some services (i.e. AddressValidatorService, CustomerFinderService) that currently reside in an ASP.NET application to other applications within our organization. Exposing these services via WCF seems like a natural fit, but I don't see any best-practices for how to pull these common services into a WCF wrapper in such a way that my existing ASP.NET application can continue to use them with minimal code changes and/or awareness that the service they are consuming is no longer in-process.
I'm especially looking for recommendations on how to structure the existing ASP.NET solution and whether to host our new WCF in the same solution or in some new shared WCF solution referenced by both our ASP.NET application and external callers.
Also, is it bad practice to simply promote the DTOs currently only consumed in-process via ASP.NET to full fledged data contracts or is it preferable to create duplicate DTOs that are explicitly decorated with [DataContract]? The latter seems like a maintenance nightmare.
To answer your second question:
Also, is it bad practice to simply promote the DTOs currently only consumed in-process via ASP.NET to full fledged data contracts or is it preferable to create duplicate DTOs that are explicitly decorated with [DataContract]? The latter seems like a maintenance nightmare.
It is considered bad practice to expose your business model as WCF contracts. So if your DTOs are replicas of your domain model, it would be a strict no-no, because:
1. any change in the model would directly affect the contracts, and hence all the clients using them;
2. you would be exposing your business "know-how" to the outside world.
The latter can get difficult for any evolving system, but there are various open source tools (like AutoMapper) that ease your mapping nightmares.
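A minimal sketch of that idea, with illustrative types: the domain model stays private, a separate [DataContract] DTO is exposed, and AutoMapper bridges the two.

using System.Runtime.Serialization;
using AutoMapper;

// Domain model stays internal to the application.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Discount { get; set; }   // business detail we may not want to expose
}

// Separate contract exposed over WCF; only the fields clients need.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

public static class CustomerMapping
{
    // AutoMapper configuration keeps the duplicate-DTO maintenance cost low.
    public static readonly IMapper Mapper = new MapperConfiguration(cfg =>
        cfg.CreateMap<Customer, CustomerDto>()).CreateMapper();

    public static CustomerDto ToDto(Customer customer) =>
        Mapper.Map<CustomerDto>(customer);
}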
You can convert an existing project to WCF and then continue to use it in-process via a project reference. It can then also be consumed by an external source using the WCF client. A WCF client changes the class name from ClassName to ClassNameClient when consumed over WCF, but the class will function pretty much the same.
For example:
MyClass obj = new MyClass();
obj.DoSomething(withData);
Would become:
MyClassClient obj = new MyClassClient();
obj.DoSomething(withData);
You would publish the WCF project to some endpoint, like address.example.com, then use a service reference to the endpoint to reference the code, like a project reference, in your other projects.
Note that while the externally referencing projects would not be impacted by the change or know that the data is going over the network, if you have chatty calls to the project in question, it will definitely take a performance hit. You may want to consolidate related methods into single methods to save on round-tripping.
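To illustrate the round-tripping point with a hypothetical sketch: instead of exposing several fine-grained operations, expose one coarse-grained operation that returns everything the caller needs in a single call (all names below are made up):

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Coarse-grained contract: one network round trip instead of several.
[ServiceContract]
public interface ICustomerFinderService
{
    // Replaces separate GetCustomer / GetAddress / GetRecentOrders calls.
    [OperationContract]
    CustomerSummary GetCustomerSummary(int customerId);
}

[DataContract]
public class CustomerSummary
{
    [DataMember] public string Name { get; set; }
    [DataMember] public string Address { get; set; }
    [DataMember] public List<string> RecentOrderNumbers { get; set; }
}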
If these are exposed as static page services, there's no magic wrapper -- you're going to need to move code to a standalone service implementation class and put a .svc file in front of it. (Or use WCF4 fileless activation, or a service factory, but that's getting a bit away from the core question here.)
If these are exposed as ASMX, you can actually put an ASMX facade in front of a WCF service class and get basic HTTP/XML/ASMX responses as you would from your legacy ASMX web services. You can expose that same WCF service class through standard WCF configuration for non-legacy consumers.
Finally, you can expose any WCF service over basicHttpBinding with serviceMetadata httpGetEnabled="true", and you'll get a service endpoint usable by legacy consumers of an ASMX service.
http://msdn.microsoft.com/en-us/library/ms751433.aspx
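For reference, a rough self-hosted sketch of that last option (the contract and implementation are placeholders named after the question's AddressValidatorService; in practice you would usually configure the same thing in web.config):

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IAddressValidatorService
{
    [OperationContract]
    bool Validate(string address);
}

public class AddressValidatorService : IAddressValidatorService
{
    public bool Validate(string address) => !string.IsNullOrWhiteSpace(address);
}

class Program
{
    static void Main()
    {
        var baseAddress = new Uri("http://localhost:8080/AddressValidatorService");
        using (var host = new ServiceHost(typeof(AddressValidatorService), baseAddress))
        {
            // BasicHttpBinding endpoint usable by legacy ASMX-style consumers.
            host.AddServiceEndpoint(typeof(IAddressValidatorService), new BasicHttpBinding(), "");

            // Equivalent of <serviceMetadata httpGetEnabled="true" /> in config,
            // so clients can generate a proxy from http://.../?wsdl.
            host.Description.Behaviors.Add(new ServiceMetadataBehavior { HttpGetEnabled = true });

            host.Open();
            Console.WriteLine("Service listening. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}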
Since Flash Builder does not support WCF over HTTPS, I am considering using WebORB remoting as an alternative, but I'm not sure how Flash is going to know the WebORB location if they are sitting on different servers. I looked at the destination and source fields, but did not find a field called url on RemoteObject in Flex. Has anyone done something similar?
I know this is an old question, but thought I'd answer it anyway. You can expose your WCF services to remoting clients (Flash, Flex) via WebORB. WebORB supports both self-host and IIS-hosted WCF services. Here are links to instructions for both models.
Self-hosted: http://www.themidnightcoders.com/fileadmin/docs/dotnet/v4/guide/index.html?standalone_wcf_services.htm
IIS-hosted: http://www.themidnightcoders.com/fileadmin/docs/dotnet/v4/guide/index.html?iis_hosted_wcf_services.htm
Both documents address your questions. Here is an example of one approach:
Invoking Self-Hosted Service From Flex/AIR
Flex and AIR clients can use the RemoteObject API to invoke methods on self-hosted WCF services which use the AMF endpoint. There are two approaches for invoking a self-hosted WCF service. The first approach requires less code, but creates a dependency on configuration files declaring destinations and channels (the files located in WEB-INF/flex). The second approach does not have any dependencies on the configuration files, but results in a few additional lines of code. Consider the examples of the API below:
Approach 1 (with dependency on configuration files):
var remoteObject:RemoteObject = new RemoteObject("GenericDestination");
remoteObject.endpoint = "http://localhost:8000/WCFAMFExample/amf";
remoteObject.GetQuote.addEventListener( ResultEvent.RESULT, gotResult );
remoteObject.GetQuote.addEventListener( FaultEvent.FAULT, gotError );
remoteObject.GetQuote( "name" );
The endpoint URL uniquely identifies the WCF service. Notice the /amf at the end of the URL; it is required for the AMF endpoint. With the approach demonstrated above, the destination name in the RemoteObject constructor is required; however, it is not used. As a result, for the code to work, the Flex/AIR application must be compiled with an additional compiler argument:
-services "C:\Program Files\WebORB for .NET\4.0\web-inf\flex\services-config.xml"
I hope this helps.
K
I started to use Silverlight/Flex and immediately bumped into asynchronous service calling. I'm used to solving data access problems in an OO way with one server fetch mechanism or another.
I have the following trivial code example:
public double ComputeOrderTotal(Order order)
{
    double total = 0;
    // OrderLines are lazy loaded
    foreach (Orderline line in order.Orderlines)
    {
        // Article, Customer are lazy loaded
        total = total + line.Article.Price - order.Customer.discount;
    }
    return total;
}
If I understand correctly, this code is impossible in Flex/Silverlight. The lazy loading forces you to work with callbacks. IMO the simple example above will become a BIG mess.
Can anyone give me a structured way to implement the above ?
Edit:
- The problem is the same for Flex/Silverlight; pseudocode would do fine.
- It's not really ORM-related, but most ORMs use lazy loading, so I'll remove that tag.
- The problem is lazy loading in the model.
- The above example would be very doable if all data was in memory, but we assume some has to be fetched from the server.
- Closures don't help, since sometimes data is already loaded and no asynchronous fetch is needed.
Yes, I must agree that O/R mapping is usually done on the server side of your application.
In Silverlight, the asynchronous way of execution is the desired pattern to use when working with services. Why services? Because, as I said before, there is no O/R mapping tool at the moment that can be used on the client side (Silverlight). The best approach is to have your O/R-mapped data exposed by a service that can be consumed by a Silverlight application. The best way at the moment is to use ADO.NET Data Services to transport the data, and on the client side to manage the data using LINQ against the service. What is really interesting about ADO.NET Data Services (the former Astoria project) is that it is designed to be used with the Entity Framework, but the good folks also implemented support for IQueryable, so basically you can hook up any data provider that supports LINQ. To name a few, you can consider LINQ to SQL, Telerik OpenAccess, LLBLGen, etc. To push the updates back to the server, the data source is required to support the Data Services IUpdatable interface.
You can see exactly how this could be done in a series of blog posts that I have prepared here: Getting Started with ADO.NET Data Services and Telerik Open Access
I can't speak to Silverlight but Flex is a web browser client technology and does not have any database driver embedded in the Flash runtime. You can do HTTP protocol interactions to a web server instead. It is there in the middle-tier web server where you will do any ORM with respect to a database connection, such as Java JDBC. Hibernate ORM and iBATIS are two popular choices in the Java middle-tier space.
Also, because of this:
Fallacies of Distributed Computing
You do not do synchronous interactions from a Flex client to its middle-tier services. Synchronous network operations have become verboten these days and are the hallmark of a poorly designed application: for the reasons enumerated at the above link, the app can (and often will) exhibit a very bad user experience.
You instead make async calls to retrieve data, load the data into your client app's model object(s), and proceed to implement operations on the model. With Flex and BlazeDS you can also have the middle-tier push data to the client and update the client's model objects asynchronously. (Data binding is one way to respond to data being updated in an event driven manner.)
All this probably seems very far afield from the nature of inquiry in your posting - but your posting indicates you're off on an entirely incorrect footing as to how to understand client-side technologies that have asynchronous and event-driven programming baked into their fundamental architecture. These RIA client technologies are designed this way completely on purpose. So you will need to learn their mode of thinking if you want to have a good and productive experience using them.
I go into this in more detail, and with a Flex perspective, in this article:
Flex Async I/O vs Java and C# Explicit Threading
In my direct experience with Flex, I think this discussion is getting too complicated.
Your conceptual OO view is no different between sync and async. The only difference is that you use event handlers to deal with the host conversation in the DAL, rather than something returned from a method call. And that often happens entirely on the host side and has nothing to do with Flex or Silverlight. (If you are using AIR for a workstation app, then it might be in client code, but the same applies; likewise if you are using prolonged AJAX. Silverlight, of course, has no AIR equivalent.)
I've been able to design everything I need without any other changes required to accommodate async.
Flex has a single-threaded model. If you make a synchronous call to the web server, you'd block the entire GUI of the application. The user would have a frozen application until the call completes (or times out on a network error condition, etc.).
Of course real RIA programs aren't written that way. Their GUI remains accessible and responsive to the user via async calls. That also makes it possible to have real progress indicators that offer cancel buttons and the like if the nature of the interaction warrants it.
Old, bad user experience web 1.0 applications exhibited the synchronous behaviour in their interactions with the web tier.
As my linked article points out, the async single-threaded model coupled with ActionScript3 closures is a good thing because it's a much simpler programming model than the alternative - writing multi-thread apps. Multi-threading was the approach of writing client-server Java Swing or C# .NET WinForm applications in order to achieve a similarly responsive, fluid-at-all-times user experience in the GUI.
Here's another article that delves into this whole subject matter of asynchronous, messaging/event-driven distributed app architecture:
Building Effective Enterprise Distributed Software Systems
data-driven communication vs behavior-driven communication
Silverlight is a client technology, and the object-relational mapping happens completely on the server. So you have to forget about the ORM in Silverlight.
Following your example, what you have to do is create a web service (SOAP, REST, ...) that can give your Silverlight client the complete Order object.
Once you have the object, you can work with it in a normal, synchronous way, with no further communication with the server.
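For example, with a generated Silverlight service proxy (OrderServiceClient, GetCompleteOrder, orderId and totalTextBlock below are hypothetical names), the fetch is asynchronous, but once the fully populated Order arrives the computation stays exactly as in your example:

// Hypothetical proxy generated by "Add Service Reference"; Silverlight proxies
// expose event-based asynchronous methods like the ones used here.
var client = new OrderServiceClient();

client.GetCompleteOrderCompleted += (sender, e) =>
{
    Order order = e.Result;  // Orderlines, Articles and Customer already populated

    // All data is now in memory, so the original synchronous code works unchanged.
    double total = 0;
    foreach (Orderline line in order.Orderlines)
    {
        total = total + line.Article.Price - order.Customer.discount;
    }

    totalTextBlock.Text = total.ToString();  // update the UI in the callback
};

client.GetCompleteOrderAsync(orderId);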
Speaking about Silverlight, you should definitely check out WCF RIA Services.
Simply put, it brings the DataContext from the server to the client, where you can query it asynchronously (there is no need to write WCF services by hand; it is all done by RIA Services).
C# 5
The async/await construct will do almost exactly what I want.
Watch the presentation by Anders Hejlsberg.
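A rough sketch of how the example from the question could look with async/await (the orderService, customerService and articleService helpers are assumed here, not part of any particular framework):

using System.Threading.Tasks;

// With async/await, each awaited call yields control instead of blocking the UI
// thread, yet the method reads almost like the original synchronous version.
public async Task<double> ComputeOrderTotalAsync(int orderId)
{
    Order order = await orderService.GetOrderAsync(orderId);                       // assumed helper
    Customer customer = await customerService.GetCustomerAsync(order.CustomerId);  // assumed helper

    double total = 0;
    foreach (Orderline line in order.Orderlines)
    {
        Article article = await articleService.GetArticleAsync(line.ArticleId);    // assumed helper
        total = total + article.Price - customer.discount;
    }
    return total;
}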