Is Model-Driven Architecture feasible? (platform-independent)

I want to ask some questions about MDA:
First, I know there are tools where you write code in one language and the code is then generated in another; for example, GWT transforms Java code into JavaScript, and some mobile development tools transform HTML+JavaScript into native code for different platforms. Are these transformations considered MDA, or are they something different?
Apart from the fact that with MDA part of the code is generated automatically, are there any other reasons it increases a programmer's productivity?
Compared to traditional development, is switching to MDA worth it (immediately), and is that possible in reality?

Your knowledge of MDA seems incomplete. The main goal of MDA is modeling: at all levels (CIM, PIM, PSM) we work with models and write zero lines of code; the input and output of every MDA level is a model.
Another important activity in MDA is transformation, and there are many transformation tools for it.
I think you should first study MDA and transformations (and also stereotypes).
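To make the transformation idea concrete, here is a toy, hypothetical sketch in plain Java of a model-to-text transformation: a small platform-independent "model" of an entity is turned into platform-specific Java source. All class and method names below are invented for illustration; real MDA toolchains (QVT, Acceleo, etc.) work on metamodel-based models and are far more capable.

```java
import java.util.List;

// A toy platform-independent model: an entity with typed fields.
// (These records are invented for illustration; real MDA tools work on
// metamodel-based models, not hand-written classes like this.)
record Field(String name, String type) {}
record Entity(String name, List<Field> fields) {}

public class ToyGenerator {
    // A trivial "PIM -> code" transformation: generate a Java class from the model.
    static String toJava(Entity e) {
        StringBuilder sb = new StringBuilder("public class " + e.name() + " {\n");
        for (Field f : e.fields()) {
            sb.append("    private ").append(f.type()).append(" ").append(f.name()).append(";\n");
        }
        sb.append("}\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        Entity customer = new Entity("Customer",
                List.of(new Field("name", "String"), new Field("age", "int")));
        System.out.println(toJava(customer));   // prints a generated Java class
    }
}
```

The same Entity model could be fed to a second generator that emits, say, SQL DDL instead - which is the core MDA promise: one platform-independent model, several platform-specific outputs.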

...
> Are these transformations considered MDA, or are they something different?

In general, no. But you can use the MDA approach for making such transformations.

> Apart from the fact that with MDA part of the code is generated automatically, are there any other reasons it increases a programmer's productivity?

Yes, it gives you a clear picture of the system, free from platform-specific details.

> Compared to traditional development, is switching to MDA worth it (immediately), and is that possible in reality?

Yes, it is worth it - not immediately, but in the long run. And yes, switching to MDA is possible in reality.

Related

What is the main idea of OpenFOAM?

I just want to get the main idea/principle of OpenFOAM and how you create a simulation. Please let me know where I go wrong:
So basically you have an object that interacts with a gas or liquid and you want to simulate this, so you create a model of the object, mesh it, specify where the gas will flow in and out and which boundaries are walls, set the other relevant parameters, and then run the program (with the appropriate time step etc.)?
OpenFOAM is an open-source C++ library that implements the finite volume method (FVM), which is widely used in CFD.
What you have explained is a vague understanding of some of the applications of CFD. The things you specified might not always be the case (e.g. the fluid might not necessarily be a gas, and so on).
The main stages of a CFD problem are: creating the geometry, mesh generation, preprocessing, solving, and postprocessing.
There might be more stages added depending on the resolution and other specifics of the case.
Now, OpenFOAM is an open-source (free for all) tool written in C++ that helps solve CFD problems. If the problem is simple and routine, and you have access to a commercial solver such as ANSYS Fluent, then you can use that, since it is easier and much less work when the problem is not specific. However, if the problem is specific and there are customized criteria, OpenFOAM is a nice tool.
Because it is written in C++ it is object-oriented, and there are many different solvers already written and available to use, so you will not have to write all the schemes and everything on your own from scratch.
However, my main advice to you is to read more about CFD to get a clear understanding; there are dozens of good books available.
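To connect those stages to what you actually do on disk, a minimal OpenFOAM case directory usually looks roughly like the sketch below (the exact files depend on the chosen solver, so treat this only as orientation, not as a complete case):

```
myCase/
  0/                  initial and boundary conditions (e.g. U for velocity, p for pressure)
  constant/
    polyMesh/         the mesh (e.g. generated by blockMesh or snappyHexMesh)
    ...               physical properties (e.g. transportProperties)
  system/
    controlDict       start/end time, time step, write control
    fvSchemes         discretisation schemes
    fvSolution        linear solvers and tolerances
```

You then run a solver that matches your physics (for example icoFoam for transient, laminar, incompressible flow) from inside the case directory.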

Project idea using Eigen

I have been reading the documentation and playing with Eigen recently and would like to build something that uses it extensively so that I learn it well. I looked on their website and they mention various projects that use it - like Google Ceres. Something like that might be too large for one or two people to undertake on the side as an Eigen learning experience, so I'm looking for something simpler but not trivial that would use it extensively and is a real - useful - application.
Eigen is extensively used in computer vision, and if you are comfortable with linear algebra and matrix calculus (I assume you are, otherwise you wouldn't use Eigen), why not build a toy VSLAM (visual simultaneous localization and mapping) system? Those that are based on bundle adjustment (there's a whole chapter dedicated to that in Zisserman's book on multiple view geometry, and it is also discussed in other excellent open-source books) can be very tricky to implement efficiently and will take a lot of time, but since your goal is to learn Eigen, performance shouldn't be that much of an issue. If that seems too hard or too long for two people, and you think it demands too much energy for a side project, I recommend that you select some computer-vision algorithms, like those that compute the essential matrix between two images, or those used in 3D pose estimation. Those are the only really fun things that come to mind right now, and they will force you to discover a lot of Eigen's functionality (and gotchas!).

How much is Eclipse EMF related to the OMG MDA standard?

I am looking for a new MDA tool to try out for modelling and code generation. This is not for any work-related project yet, but for testing purposes. I have only used the Merode approach until now (using jMermaid for modelling and the accompanying code generator) but want to try out something new.
Since EMF is integrated in Eclipse I see a lot of positive reasons to try it out. But after reading some documentation and online articles, I wonder how much it adopts the OMG MDA standards and how much it doesn't.
For example, I found the following text:
If, on the other hand, you have already bought into the idea of modeling, and even the Model Driven Architecture (MDA) big picture, you should think of EMF as a technology that is moving in that direction, but more slowly than immediate widespread adoption. You can think of EMF as MDA on training wheels.
on http://www.informit.com/articles/article.aspx?p=1323360&seqNum=2
But I can find nowhere a concise list of which points of the OMG standard are implemented and which are left out or interpreted differently. Can anyone help out with that?
(And if there are other, more recommended tools, I'm always open to suggestions.)
There is very little relation. EMF is a framework to create (meta)models with very basic code-generation capabilities (basically only a direct Java translation). EMF's goal is not to be an MDA framework but to be the building block on top of which other tools may build more sophisticated solutions (e.g. check the open-source Eclipse Acceleo tool).
And MDA is just a philosophy; in itself it is not even a specific method. The MDA Guide, the OMG standard document explaining MDA, is just a set of principles for model-driven development using OMG technologies and does not go further than that (if needed, you may want to look up the difference between all the MD* acronyms).
So, you can find EMF-based tools that follow MDA principles but EMF as such does not pretend to do so.
The EMF FAQ has a question, "What is the relationship of EMF to OMG MDA?", which states:
"Essentially EMF supports the key MDA concept of using models as input to development and integration tools which produce multiple programming language (Java in the case of Eclipse EMF itself) or data interchange format (XML) representations."
EMF corresponds to a simplified implementation of OMG's MOF (http://www.omg.org/mof/), providing facilities to express custom metamodels and to generate Java components for instantiating models.
MDA is a particular model-driven philosophy, based on several kinds of models (CIM, PIM, PSM...), and aiming to provide a way to target several technical architectures (PSMs) from a single functional model (PIM).
You can use EMF for any model-driven philosophy: MBE, MDE, MDD, or MDA. It is the fundamental building block that allows you to define your own metamodels and models. Simply put, EMF provides the models, and you can use it for any model-driven approach, including MDA.
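To give a feel for what "EMF provides models" means in practice, here is a small sketch (assuming the org.eclipse.emf.ecore jar is on the classpath) that uses the Ecore API to define a metamodel programmatically and instantiate it dynamically, without any generated code. This is only an illustration; in a normal workflow you would create the .ecore metamodel with the Eclipse editors and let EMF generate the Java classes.

```java
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;
import org.eclipse.emf.ecore.util.EcoreUtil;

public class EmfSketch {
    public static void main(String[] args) {
        EcoreFactory f = EcoreFactory.eINSTANCE;

        // Define a metamodel: a package containing a "Book" class with a "title" attribute.
        EClass book = f.createEClass();
        book.setName("Book");

        EAttribute title = f.createEAttribute();
        title.setName("title");
        title.setEType(EcorePackage.Literals.ESTRING);
        book.getEStructuralFeatures().add(title);

        EPackage library = f.createEPackage();
        library.setName("library");
        library.setNsPrefix("lib");
        library.setNsURI("http://www.example.org/library");  // made-up URI for the example
        library.getEClassifiers().add(book);

        // Instantiate the metamodel dynamically (a model conforming to it).
        EObject aBook = EcoreUtil.create(book);
        aBook.eSet(title, "My First Model");
        System.out.println(aBook.eGet(title));  // prints the attribute value
    }
}
```

From such a metamodel, EMF can also generate the corresponding Java classes and XMI serialization support, which is the "basic code generation" mentioned above.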

RaptorQ FEC Implementation Obstacle

I am trying to implement the RaptorQ Forward Error Correction scheme in Java as specified here:
https://datatracker.ietf.org/doc/html/draft-ietf-rmt-bb-fec-raptorq-04#section-5.3.3
The core of the problem is actually to execute Gaussian elimination on a matrix A in a smart way so that it is fast.
The matrix A is composed of submatrices; among others, these are G_LDPC,1 and G_LDPC,2 (generator matrices for Low-Density Parity Checks).
On page 22, in section "5.3.3.3. Pre-coding relationships", it is stated that these matrices can be deduced from the code snippet on the same page.
My problem: I am not able to derive the structure of these two submatrices from the code snippet.
Does someone see how to do that, or how the structure looks like?
Thanks for any kind of help!
Max
I'm also trying to implement RaptorQ, and I ran into this exact same problem. My suggestion is this book:
Raptor Codes (Foundations and Trends in Communications and Information Theory), by Amin Shokrollahi and Michael Luby
It has a better explanation of how the constraint matrix is constructed in section 3.3.3 (I'd quote it, but I don't have a digital copy).
@Max, is there any way we can chat, or could you share your RFC 5053 implementation? I could really use someone familiar with these difficulties to talk to and to share some doubts/ideas with.
After being stuck with the problem, I decided to implement the Raptor codec according to RFC 5053 as described here:
https://www.rfc-editor.org/rfc/rfc5053
This is actually the predecessor version of RaptorQ.
The general working principle seems to be the same, but it is less optimized and therefore has worse properties, especially in terms of reception efficiency.
But on the other hand it was less complex and more intuitive to me, and therefore I was able to code a working implementation in Java.
And after all, I have to admit that I'm very astonished by the capabilities of the created codec!
With the deeper understanding gained while coding the RFC 5053 implementation, I would probably also be able to implement the RaptorQ codec now.
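For anyone else stuck at the same point: the expensive step in both RFC 5053 and RaptorQ is solving the constraint system A*C = D, which for RFC 5053 is entirely over GF(2) (RaptorQ additionally has GF(256) HDPC rows, ignored here). Below is a naive, unoptimized sketch of Gaussian elimination over GF(2) in Java, just to show the shape of the computation; the specifications describe a much smarter phased elimination that exploits the sparsity of the LDPC part.

```java
import java.util.BitSet;

/**
 * Naive Gaussian elimination over GF(2).
 * Rows are BitSets; addition is XOR, so there is no division step.
 * This is only a sketch of the idea - a real decoder uses a phased,
 * sparsity-aware elimination that is far faster on the actual matrix A.
 */
public class Gf2Eliminator {

    /** Reduces 'rows' (a rows.length x cols matrix) to reduced row echelon
     *  form in place and returns the rank. */
    static int eliminate(BitSet[] rows, int cols) {
        int rank = 0;
        for (int col = 0; col < cols && rank < rows.length; col++) {
            // Find a pivot row with a 1 in this column.
            int pivot = -1;
            for (int r = rank; r < rows.length; r++) {
                if (rows[r].get(col)) { pivot = r; break; }
            }
            if (pivot < 0) continue;            // no pivot in this column

            // Swap the pivot row into position.
            BitSet tmp = rows[rank]; rows[rank] = rows[pivot]; rows[pivot] = tmp;

            // XOR the pivot row into every other row that has a 1 in this column.
            for (int r = 0; r < rows.length; r++) {
                if (r != rank && rows[r].get(col)) {
                    rows[r].xor(rows[rank]);
                }
            }
            rank++;
        }
        return rank;
    }

    public static void main(String[] args) {
        // Tiny 3x3 example; in the real codec each row would also carry its symbol data.
        BitSet[] m = { bits(0, 1), bits(1, 2), bits(0, 2) };
        System.out.println("rank = " + eliminate(m, 3));   // prints "rank = 2"
    }

    static BitSet bits(int... ones) {
        BitSet b = new BitSet();
        for (int i : ones) b.set(i);
        return b;
    }
}
```

In a real decoder, every XOR of two rows is mirrored by an XOR of the corresponding intermediate symbols, which is where most of the runtime goes.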

Refactoring for Testability on an existing system

I've joined a team that works on a product. This product has been around for ~5 years or so, and uses ASP.NET WebForms. Its original architecture has faded over time, and things have become relatively disorganized throughout the solution. It's by no means terrible, but definitely can use some work; you all know what I mean.
I've been performing some refactorings since coming on to the project team about 6 months ago. Some of those refactorings are simple, Extract Method, Pull Method Up, etc. Some of the refactorings are more structural. The latter changes make me nervous as there isn't a comprehensive suite of unit tests to accompany every component.
The whole team is on board for the need to make structural changes through refactoring, but our Project Manager has expressed some concerns that we don't have adequate tests to make refactorings with the confidence that we aren't introducing regression bugs into the system. He would like us to write more tests first (against the existing architecture), then perform the refactorings. My argument is that the system's class structure is too tightly coupled to write adequate tests, and that using a more Test Driven approach while we perform our refactorings may be better. What I mean by this is not writing tests against the existing components, but writing tests for specific functional requirements, then refactoring existing code to meet those requirements. This will allow us to write tests that will probably have more longevity in the system, rather than writing a bunch of 'throw away' tests.
Does anyone have any experience as to what the best course of action is? I have my own thoughts, but would like to hear some input from the community.
Your PM's concerns are valid - make sure you get your system under test before making any major refactorings.
I would strongly recommend getting a copy of Michael Feathers' book Working Effectively With Legacy Code (by "legacy code" Feathers means any system that isn't adequately covered by unit tests). It is chock full of good ideas for how to break down those couplings and dependencies you speak of, in a safe manner that won't risk introducing regression bugs.
Good luck with the refactoring programme; in my experience it's an enjoyable and cathartic process from which you can learn a lot.
Can you refactor in parallel? What I mean is: rewrite the pieces you want to refactor using TDD, but leave the existing code base in place. Then phase out the existing code once your new tests meet the needs of your PM.
I would also like to throw in a suggestion to visit the Refactoring website by Martin Fowler. He literally wrote the book on this stuff.
As far as introducing unit tests into the equation goes, the best method I have found is to take a top-level component, identify all the external dependencies it has on concrete objects, and replace them with interfaces (sketched below). Once you've done that it will be a lot easier to write unit tests against your code base, and you can do it one component at a time. Even better, you won't have to throw away any unit tests.
Unit testing ASP.NET can be tricky, but there are plenty of frameworks that make it easier to do - ASP.NET MVC and WCSF, to name a couple.
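The project here is ASP.NET, but the dependency-breaking idea is language-agnostic, so here is a hypothetical sketch in Java (all names invented) of what replacing a concrete dependency with an interface buys you in a test. In C# the shape is identical, just with constructor injection and whatever stub/mock framework you prefer.

```java
// Before: OrderService created a concrete EmailSender internally, so any unit
// test also sent email. After extracting an interface and injecting it, the
// test can pass in a stub. (All names here are invented for illustration.)

interface Notifier {
    void notify(String customer, String message);
}

class OrderService {
    private final Notifier notifier;

    OrderService(Notifier notifier) {          // dependency injected via constructor
        this.notifier = notifier;
    }

    void placeOrder(String customer, double amount) {
        // ... the real ordering logic would live here ...
        notifier.notify(customer, "Order placed for " + amount);
    }
}

// A hand-rolled stub used only by tests - no framework required.
class RecordingNotifier implements Notifier {
    String lastMessage;
    public void notify(String customer, String message) { lastMessage = message; }
}

public class OrderServiceTest {
    public static void main(String[] args) {
        RecordingNotifier stub = new RecordingNotifier();
        new OrderService(stub).placeOrder("alice", 9.99);

        // Assert on what the component did, without touching a real mail server.
        if (!stub.lastMessage.contains("Order placed")) {
            throw new AssertionError("behaviour changed: " + stub.lastMessage);
        }
        System.out.println("captured: " + stub.lastMessage);
    }
}
```

Once the seams are in place like this, you can cover one component at a time while the rest of the system keeps using the real implementations.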
Just tossing out a second recommendation for Working Effectively with Legacy Code, an excellent book that really opened my eyes to the fact that almost any old / crappy / untestable code can be wrangled!
Totally agree with the answer from Ian Nelson. Additionally, I would start by getting some "high-level" tests (functional or component tests) in place to preserve the behaviour from the viewpoint of the user. This point might be the most important concern for your PM.
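One concrete way to do that is with characterization tests (Feathers covers them in the book recommended above): call the existing component, record whatever it currently returns, and pin that value in an assertion so a refactoring cannot silently change it. Here is a rough sketch in Java/JUnit 5 with made-up names, since the actual system is ASP.NET:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Stand-in for some existing, untested legacy component (name invented here).
class LegacyPriceCalculator {
    double totalWithTax(double net, String country) {
        // imagine years of accumulated special cases here...
        return "BE".equals(country) ? net * 1.21 : net * 1.15;
    }
}

/**
 * Characterization test: it does not claim the current behaviour is *right*,
 * it only pins the behaviour down so refactorings can't change it unnoticed.
 */
class PricingCharacterizationTest {

    @Test
    void totalWithTaxMatchesCurrentBehaviour() {
        // The expected value was captured by running the existing code once,
        // not by reasoning about what the answer "should" be.
        assertEquals(119.79, new LegacyPriceCalculator().totalWithTax(99.0, "BE"), 0.001);
    }
}
```

A handful of these around the components you intend to restructure gives the PM the safety net he is asking for, without committing to fine-grained tests against a class structure you are about to change.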

Resources