What is IDL?

What is meant by IDL? I have googled it, and found out it stands for Interface Definition Language, which is used for interface definition for components. But, in practice, what is the purpose of IDL? Does Microsoft use it?

An interface definition language (IDL) is used to set up communications between clients and servers in remote procedure calls (RPC). There have been many variations of this, such as Sun/ONC RPC, DCE RPC and so on.
Basically, you use an IDL to specify the interface between client and server so that the RPC mechanism can create the code stubs required to call functions across the network.
RPC needs to create stub functions for the client and the server, using the IDL information. The IDL declaration is very similar to a function prototype in C, but the end result is slightly different, as in the following graphic:
+----------------+
|     Client     |
|  +----------+  |               +----------------+
|  |   main   |  |               |     Server     |
|  |----------|  |               |  +----------+  |
|  | stub_cli |--+---(comms)---->+--| stub_svr |  |
|  +----------+  |               |  |----------|  |
+----------------+               |  | function |  |
                                 |  +----------+  |
                                 +----------------+
In this example, instead of calling function in the same program, main calls a client stub function (with the same prototype as function) which is responsible for packaging up the information and getting it across the wire to another process, via the comms channel.
This can be on the same machine or a different one; it doesn't really matter. One of the advantages of RPC is being able to move servers around at will.
In the server, there's a 'listener' process that receives that information and passes it to the server stub, which unpacks it and passes it to the real function.
The real function then does what it needs to and returns to the server stub which can package up the return information (both return code and any [out] or [in,out] variables) and pass it back to the client stub.
The client stub then unpacks that and passes it back to main.
The actual details may differ a little but that explanation should be good enough for a conceptual overview.
The actual IDL may look like:
[
  uuid(f9f6be21-fd32-5577-8f2d-0800132bd567),
  version(0),
  endpoint("ncadg_ip_udp:[1234]", "dds:[19]")
]
interface function_iface {
  [idempotent] void function(
    [in]  int handle,
    [out] int *status
  );
}
All that information at the top (for example, uuid or endpoint) is basically networking information used for connecting client and server. The "meat" of it is inside the interface section, where the prototype is shown. This allows the IDL compiler to build the client and server stub functions for function, which are compiled and linked with your client and server code to get RPC working.
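To make the division of labour concrete, here is a minimal hand-written sketch (in C#, with the comms channel reduced to an in-process call) of what stubs for the function above conceptually do. Real generated stubs, wire formats and transports will of course look different.

using System;
using System.IO;

// Conceptual sketch only: what stub_cli and stub_svr roughly do for
// "void function([in] int handle, [out] int *status)".
static class Stubs
{
    // Client stub: marshal the [in] parameter, ship it, unmarshal the [out] result.
    public static int Function_ClientStub(int handle)
    {
        var request = new MemoryStream();
        new BinaryWriter(request).Write(handle);            // pack [in] handle

        byte[] reply = Transport(request.ToArray());        // the comms channel

        return new BinaryReader(new MemoryStream(reply)).ReadInt32();  // unpack [out] status
    }

    // Server stub: unmarshal the request, call the real function, marshal the reply.
    static byte[] Function_ServerStub(byte[] request)
    {
        int handle = new BinaryReader(new MemoryStream(request)).ReadInt32();
        Function(handle, out int status);                   // the real implementation
        var reply = new MemoryStream();
        new BinaryWriter(reply).Write(status);
        return reply.ToArray();
    }

    // Stand-in for the network hop; a real RPC runtime would use sockets here.
    static byte[] Transport(byte[] request) => Function_ServerStub(request);

    // The actual server-side function.
    static void Function(int handle, out int status) => status = handle >= 0 ? 0 : -1;
}

class Demo
{
    static void Main() => Console.WriteLine("status = " + Stubs.Function_ClientStub(42));
}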
Microsoft does use IDL (I think they have a MIDL compiler) for COM stuff. I've also used third party products with MS operating systems, both DCE and ONC RPC.

There is also Interactive Data Language which I had a job using for scientific data analysis, but perhaps from the context it's clear to you that's not what this IDL stands for.

IDL is an acronym for Interface Definition Language, of which there are several variations depending on the vendor or standards group that defined the language. The goal of an IDL is to describe the interface for some service so that clients wanting to use the service will know what methods and properties (that is, what interface) the service provides. IDL is normally used with binary interfaces, and the IDL file describes the data types used in the binary interface.
There are several different standards for binary components, typically COTS (Commercial Off The Shelf) components, and how a client communicates with the binary component can vary, though traditionally some form of Remote Procedure Call (RPC) is used. Two such standards are Microsoft's Component Object Model (COM) and the Common Object Request Broker Architecture (CORBA). There are other standards for components, such as Firefox plugins or plugins for other applications such as Visual Studio itself; however, these do not necessarily use some form of Interface Description Language, instead relying on some kind of Software Development Kit (SDK) with standardized and well-known interfaces to an API.
An IDL allows a greater degree of flexibility in creating components that offer services of various kinds and which, due to their binary nature, may be used from a variety of different programming languages and environments.
Microsoft uses a dialect of IDL with COM objects, and Microsoft IDL is not the same as CORBA IDL, though there are similarities since they share common language roots. The IDL file contains the description of the interfaces supported by a COM object. COM allows for the creation of in-process services (which may use RPC or direct DLL calls) or out-of-process services (which use RPC). The idea behind COM is that the client only needs to know the identifier for the component along with the interface in order to use it. The client asks COM for the component's class object (factory), has the factory create an instance supporting the interface the client wants to use, and then uses the COM object through that interface.
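As an illustration of that language-neutrality, here is a minimal sketch of the client side from C# via COM interop. "Scripting.FileSystemObject" is just a convenient ProgID that ships with Windows, standing in for whatever component and interface you actually built.

using System;

class ComClientSketch
{
    static void Main()
    {
        // Ask COM to locate the component by its identifier; under the covers this
        // goes through the component's class factory to create an instance.
        Type comType = Type.GetTypeFromProgID("Scripting.FileSystemObject");
        dynamic fso = Activator.CreateInstance(comType);

        // Use the object only through its interface (late-bound here via IDispatch,
        // which the type library generated from the IDL makes possible).
        Console.WriteLine(fso.FolderExists(@"C:\"));
    }
}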
Microsoft provides the MIDL compiler which processes an IDL file to generate the type library, providing information to users of the COM object about the interface, and the necessary stubs for marshaling data across the interface between client and service.
Marshaling of data basically means the stub takes the data provided by the client, packages it up and sends it to the service which performs some action and sends data back. This sending and receiving of data may be through some RPC service or through direct DLL function calls. The response from the service is translated into a form suitable for the client and then provided to the client. So basically the marshaling functionality is an adapter (see the adapter design pattern) or bridge (see the bridge design pattern) between the client and the service.
Visual Studio (my experience is with C++) contains a number of wizards that can be used to generate an example so that you can play with this. If you are interested, you can create a workspace, create an ATL project in it to generate a control, and then a simple MFC dialog project to test it out. Using ATL for your COM control hides quite a few details that you can investigate later, and the simple MFC dialog project provides an easy way to create a container. You can also use the ActiveX Control Test Container tool, available in Visual Studio, to do preliminary testing and to see how methods and properties work.
There are also a number of example projects on web sites such as codeproject.com. For instance here is one using C to expose all the ugly plumbing behind COM and here is one using C++ without ATL.

It's a language that has been used in the COM era to define interfaces in a (supposedly) language-neutral fashion.

It defines the interface to be used for communication with an exposed service in another application.
If you use SOAP you'll know about WSDL; WSDL is another form of IDL. On its own, "IDL" usually refers to Microsoft COM or CORBA IDL.

IDL is vital in two cases:
1. To create proxy/stub DLLs for EXE servers.
2. To create a type library for automation servers.
There is a very good article on the basics of IDL at link.
To study IDL, it is also worth reading the compiler's own .idl header files, which are in the include subdirectory of the VC++ package.

Related

Call remote test library constructor as keyword from Robot Framework

Is it possible to call Test Library constructor from Robot Framework?
Using Remote library interface (NRobot.Server) to connect from RF to Test Library (implemented in C#).
Currently it's exposing all public methods implemented under the Test Library except constructors.
There are multiple Test Libraries in our project where some functionality is implemented as part of the constructors.
Hence we need a way to call a constructor as a test step, to execute certain functionality whenever required.
If that's not possible then we may need to move the functionality from constructors to new public methods, but we want to avoid that if possible.
Thanks in advance...
In short - no.
When calling a remote library, you're actually just the client in an XML-RPC comm protocol; it is the server's responsibility to have the library instantiated, so it (the very same library) can process your instructions and act as needed. Thus normally the library is already instantiated when you call it from your RF code - too late to invoke its constructor.
Naturally, this can be implemented differently: the remote library server could instantiate the target library on a (special) call, and thus you would be able to provide constructor arguments, but that is a library design/code change that would be required in it.
This is in contrast to using local libraries, which are instantiated in your local interpreter on their import.
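If the remote server can't be changed, the fallback the question already mentions (moving the constructor work into a public method) keeps the constructor behaviour intact while also exposing it as a keyword. A hedged C# sketch, with all names invented for illustration:

// Hypothetical test library: the constructor still does its setup, but the same
// setup is now also a public method the remote server can expose as a keyword
// (e.g. "Initialize Environment") that a test can call again at any time.
public class MyTestLibrary
{
    public MyTestLibrary()
    {
        InitializeEnvironment();     // original constructor behaviour still runs
    }

    public void InitializeEnvironment()
    {
        // open connections, prepare test data, etc.
    }
}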

Web Services Model

I have 1 Site (MySite.com) / 1 Web Service (WebService.MySite.Com) and one Common Library (LibCommon)
The common library contains a model, e.g. UserModel = LibCommon.UserModel
The web service has a method 'Void CheckUser(LibCommon.UserModel model)'
However when I add the 'WebService' reference to 'MySite.com' the method changes so that it looks like 'Void CheckUser(WebService.MySite.Com.UserModel model)'
So I think, fair enough, I can just cast one object to the other as they are identical; however, .NET says I cannot do this?
Is there a work around for this?
Cheers,
Note this is for WCF, and not ASMX web services:
You can't directly cast the original data class to the proxied class generated by the WCF service reference wizard. However, you can reuse the original data class in the client:
Add the library reference containing the transfer objects (i.e. LibCommon) as a reference to both the Service (WebService) and the Client (Mysite.com). When adding the service reference on the client, choose the advanced tab and then select Reuse types in referenced assemblies. This will then reuse the common data transfer classes, instead of duplicating the types with proxies.
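As a rough sketch of how that ends up looking (hedged; names like IUserService are invented for illustration), the shared type lives once in LibCommon and the generated proxy keeps using it:

using System.Runtime.Serialization;
using System.ServiceModel;

namespace LibCommon
{
    // The one shared data transfer type, referenced by both the service and the site.
    [DataContract]
    public class UserModel
    {
        [DataMember] public string UserName { get; set; }
        [DataMember] public string Email { get; set; }
    }
}

namespace WebServiceHost
{
    [ServiceContract]
    public interface IUserService
    {
        // With "Reuse types in referenced assemblies" checked when adding the
        // service reference, the client proxy keeps LibCommon.UserModel here
        // instead of generating a duplicate WebService.MySite.Com.UserModel.
        [OperationContract]
        void CheckUser(LibCommon.UserModel model);
    }
}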
Note however that by eliminating the proxied data class, you are introducing direct coupling between client and server. You should do this only if you have control over both client and server with respect to versioning issues, etc. (e.g. you are able to deploy new versions of both client and server simultaneously).
As an aside, it is also possible to eliminate the shared service interface as well, by moving the server side Service contract interface into a separate, common assembly and then using a technique such as this or this.

How to solve state-stateless in a client-server application?

I've read some books on creating stateless websites and some about stateful client applications, but a lot of complexity comes along when you have to combine both. We have a Flex application that needs to persist data to a database via .NET services. Things to keep in mind are:
- Concurrency (optimistic/pessimistic)
- Performance: Flex needs to load in lots of data so lazy-loading is often necessary.
- Do you use DTOs to transfer data between server and client?
I'll tell you the history of our product. We've used SubSonic from the beginning as an O/R mapper. SubSonic objects are converted to DTOs written by us, and these DTOs are transferred to the client. Client-side, the DTOs are converted to the domain model. If, client-side, a domain model object needs to be saved, it is converted back to a DTO and sent to the server. Server-side, the DTO is converted to a SubSonic object and saved to the database.
Now, some time ago, we needed the domain model on the .NET server side... so now we have three models on the server side: the SubSonic model, the DTO model and the domain model. The DTO model is simpler and resembles the database more; the domain model has much more logic. It gets complex... We now have to synchronize the AS3 domain model code with the C# domain model code. If we could do it again (or get time to refactor) I think we wouldn't use the DTOs anymore, but transfer the domain model between client and server. The question is whether this is realistic. DTOs are simple objects, so they're easy to transfer. Domain model objects can be very complex.
Are there books on how to create an architecture for this kind of application? Books written by someone with lots of experience? Do you have experience with this?
The reality is that sharing objects between the client and the server is quite complex. Here's what you need to make it happen:
The easy/non-scalable way:
Inherit all of your objects from MarshalByRefObject. If you create Object A on the server and send it to the client, any client modifications to the object will automatically be forwarded to the server.
While this sounds like the perfect solution, it has two major problems:
The client and server are tightly coupled with .NET (bye-bye Web Services)
It can be a performance nightmare. All method/property access will be forwarded to the server. If you choose this route, your objects should really be designed for chunky calls, not chatty ones.
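A minimal sketch of the shape this approach takes (assuming classic .NET Remoting; class names are illustrative only):

using System;

// Any type deriving from MarshalByRefObject is accessed through a proxy when it
// crosses an AppDomain / remoting boundary, so member access is forwarded back
// to wherever the object really lives.
public class Order : MarshalByRefObject
{
    public decimal Total { get; set; }

    // Prefer one "chunky" call over many "chatty" property accesses,
    // since each call may be a network round trip.
    public void ApplyDiscount(decimal percent) => Total -= Total * percent / 100m;
}

class Demo
{
    static void Main()
    {
        // Created locally here for brevity; in the remoting scenario the server
        // creates it and the client only ever holds a transparent proxy to it.
        var order = new Order { Total = 100m };
        order.ApplyDiscount(10m);
        Console.WriteLine(order.Total);   // 90
    }
}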
The scalable/hard way:
Instead of using MarshalByRefObject, you would use DataContract/Serializable objects. However:
If you create Object A on the server and send it to the client, the client will receive a copy of the object (let's call it Object B). When you send Object B back to the server, the server will receive a copy of Object B (let's call it Object C).
But you really want the server to treat Object A and Object C as the same. Unfortunately, the CLR cannot do this, so you'll need an Object Merger to sit on both the client and the server.
The Object Merger would contain a dictionary of all objects within the model, and know how to identify two instances as being the same, and merge any values from the receiving end. For instance, if the client already has Object C in memory, and receives an updated copy from the server, it would copy over the values.
Unfortunately, this is also fraught with problems, because you need to ensure that object references are preserved correctly. You can't just blindly update all properties on an object, because the object may have existing references to other objects, which in turn may require their own merging. On top of all this, you would also need to track added/removed objects contained in lists or dictionaries.
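A stripped-down sketch of the merger idea (hedged: it assumes objects carry a simple integer identity and ignores references and collections, which is where the real work is):

using System;
using System.Collections.Generic;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Registry keyed by object identity: it either registers an incoming copy or
// merges its values into the instance it already tracks, so references stay stable.
public class ObjectMerger
{
    readonly Dictionary<int, Customer> known = new Dictionary<int, Customer>();

    public Customer Merge(Customer incoming)
    {
        if (known.TryGetValue(incoming.Id, out var existing))
        {
            // Same logical object: copy values onto the tracked instance.
            existing.Name = incoming.Name;
            return existing;
        }
        known[incoming.Id] = incoming;   // first time we've seen this identity
        return incoming;
    }
}

class Demo
{
    static void Main()
    {
        var merger = new ObjectMerger();
        var a = merger.Merge(new Customer { Id = 1, Name = "Alice" });
        var c = merger.Merge(new Customer { Id = 1, Name = "Alice B." }); // a copy from the wire
        Console.WriteLine(ReferenceEquals(a, c)); // True: one instance per identity
    }
}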
I'm adding n-tier support to my own framework, so I'm going through the same exercise right now (I'm taking the "scalable/hard" route). Fortunately, I have a lot of the supporting infrastructure in place to assist with identification, merging, etc. If you're starting from scratch, it would be a significant piece of work.
P.S. Add lazy-loading proxies into the mix (I'm using NHibernate), and it gets even more interesting...
Go read anything by Fowler, particularly his design patterns material (especially the assembler pattern and why you need what you are already doing):
Fowler's Patterns Of Enterprise Application Architecture

ASP.NET, IIS and COM

I'm not that familiar with COM and was hoping that someone out there, who is, could help verify what I have below is correct.
If I have two completely separate Requests (request 1 & request 2), then this creates two separate instances of my WebApplication. So far so boring.
If each instance then contacts the SAME web service, then presumably two instances of the Web Service are also instantiated.
This is where it gets interesting.
These web services create a .NET assembly which then references an in-process (registered via regsvr32) COM-dll (via Interop).
Is my diagram correct?
This COM-DLL connects to the database, performs a query, returns data to the web service which then returns the data in JSON to the client. All done AJAXy.
The other question I have is: is this okay performance-wise? I don't see why it shouldn't scale and be able to return data to the user.
Seems OK as a logical pattern. But, as always, the devil is in the detail.
This all hinges on your services' use of COM components and specifically the COM components' handling of threads. If your COM components are thread-safe and marked to use the MTA (multi-threaded apartment), you should be OK. However, many COM objects are marked as STA (and so use the single-threaded apartment).
In relation to "is this okay performance-wise?": if your COM component is an STA component (which it will be if it was created in VB 6.0), you will have to do a bit of thread untangling, otherwise all your service requests will queue up and performance will get worse under load. A rough sketch of the idea appears after the links below.
This article explains both the problem and the solution to this (for ASMX services)...
http://msdn.microsoft.com/en-us/magazine/cc163544.aspx
..and solution if you're using WCF services...
http://blogs.catalystss.com/blogs/scott_seely/archive/2007/09/27/203.aspx
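The sketch below (a hedged illustration, not the approach from the linked articles) shows the core idea: create and call the STA component on a dedicated STA thread rather than from the service's worker thread, so calls aren't funnelled through and queued behind a single shared apartment. Spinning up a thread per call is crude; the articles above describe pooled, production-grade versions.

using System;
using System.Threading;

static class StaCallHelper
{
    // Runs the given work on a thread whose COM apartment is STA; the work should
    // create the COM object and make its calls entirely on this thread.
    public static T RunInSta<T>(Func<T> work)
    {
        T result = default(T);
        Exception error = null;

        var thread = new Thread(() =>
        {
            try { result = work(); }
            catch (Exception ex) { error = ex; }
        });
        thread.SetApartmentState(ApartmentState.STA);   // single-threaded apartment
        thread.Start();
        thread.Join();

        if (error != null) throw error;
        return result;
    }
}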

DDD and Client/Server apps

I was wondering if any of you had successfully implemented DDD in a Client/Server app and would like to share some experiences.
We are currently working on a smart client in Flex and a backend in Java. On the server we have a service layer exposed to the client that offers CRUD operations amongst some other service methods. I understand that in DDD these services should be repositories and services should be used to handle use cases that do not fit inside a repository. Right now, we mimic these services on the client behind an interface and inject implementations (Webservices, RMI, etc) via an IoC container.
So some questions arise:
- Should the server expose repositories to the client, or do we need to have some sort of a facade (one that is able to handle security, for instance)?
- Should the client implement repositories (and DDD in general?), knowing that in the client most of the logic is view-related and the real business logic lives on the server? All communication with the server happens asynchronously and we have a single-threaded programming model on the client.
- How about mapping client to server objects and vice versa? We tried DTOs but reverted back to exposing the state of our objects and mapping directly to them. (I know this is considered bad practice, but it saves us an incredible amount of time.)
In general I think a new generation of applications is coming with the growth of Flex, Silverlight, JavaFX and I'm curious how DDD fits into this.
I would not expose repositories directly to the client. The first big problem as you mention is security: you can't trust the client, so you cannot expose your data access API to potentially hostile clients.
Wrap your repositories with services on the server and create a thin delegate layer in the client that handles the remote communication.
Exposing your entities is not necessarily a bad practice; it just becomes problematic when you start to factor in things like lazy loading, sending data over the wire that the client doesn't need, etc. If you write a DTO class which wraps one or more entities and delegates get/set calls, you can actually build up a DTO layer pretty quickly, especially using the code generation available in most IDEs.
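A small sketch of that wrapping style (C# here for brevity; the idea is the same in a Java backend, and all names are invented for illustration):

// Rich domain entity: logic, associations, possibly lazy-loaded state.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public void Rename(string newName) { /* domain rules live here */ Name = newName; }
}

// Flat DTO that wraps the entity and delegates get/set, so only the fields the
// client actually needs cross the wire.
public class CustomerDto
{
    private readonly Customer entity;
    public CustomerDto(Customer entity) { this.entity = entity; }

    public int Id
    {
        get { return entity.Id; }
    }

    public string Name
    {
        get { return entity.Name; }
        set { entity.Rename(value); }   // delegate back to the entity
    }
}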
The key to all of this is that a set of patterns should really only apply to a part of your application, not to the whole thing. The fact that you have rich logic in your domain model and use repositories for data access as part of DDD should not influence the client in any way. Conceptually the RIAs that I build have three layers:
1. The client uses something like MVC, MVP or MVVM to present the UI. The model layer eventually calls into...
2. ...what I might call the "integration layer". This is a contract of services and data objects that exist on both the client and server to allow the two to coordinate. Usually the UI design drives this layer so that (A) only the data that the client needs is passed to it and (B) data access can be coarse-grained, i.e. "make one method call for all the state needed for this set of UI".
3. The server uses whatever it wants to handle business logic and data access. This might be DDD or something a little more old school, like a data layer built using stored procs in the DB and a lot of "ResultSet" or "DataTable" objects.
The point really is that the client and server are both very different animals and they need to vary independently. In order to do so, you need a layer inbetween that is a fair compromise between the needs of the UI and the reality of how things might need to be on the server.
The one big advantage that Silverlight/WPF and JavaFX have over Flex plus anything else is that you can share a lot of logic between the two tiers, because you have the same VM on both sides of the app. Flex is the best UI technology hands down, but it lacks a server component where code could be shared and reused more effectively.
