I have a fairly significant application written with Dart and Polymer that uses reflection in a factory method and runs fairly well in Dartium. The factory generates subclass instances from the subclass name passed to it as a parameter.
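(The asker's code is Dart, but for readers unfamiliar with the pattern, the general shape of a reflective "instantiate a subclass from its name" factory, sketched here in C# purely for illustration with made-up class names, is roughly the following.)

```csharp
using System;

public abstract class Shape { }
public class Circle : Shape { }
public class Square : Shape { }

public static class ShapeFactory
{
    // Build a subclass instance from its name, resolved at run time via reflection.
    public static Shape Create(string subclassName)
    {
        // Type.GetType searches the calling assembly, so the simple name is enough here.
        Type type = Type.GetType(subclassName)
                    ?? throw new ArgumentException($"Unknown subclass: {subclassName}");
        return (Shape)Activator.CreateInstance(type);
    }
}

public static class Demo
{
    public static void Main() =>
        Console.WriteLine(ShapeFactory.Create("Circle").GetType().Name); // Circle
}
```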
I'm fine with never generating JavaScript and forcing my users, if any, to use Dartium. I'm also fine with modifying any #MirrorsUsed annotations when the list of instantiable subclasses changes. The page at http://dovdev.com/smoke-and-mirrors/ seems to imply that performance and/or code size can be greatly improved, even in Dartium, by using Smoke.
How much does Dartium, or the Dart analyzer, do when running a Dart app? Will Smoke, or even just #MirrorsUsed annotations, do anything for an app in Dartium?
It sounds like you want to use Dartium in production, which is definitely a bad idea.
Currently dev_compiler, a fast incremental Dart-to-JS compiler, is a work in progress. It is intended to let Chrome be used as the development browser, making Dartium redundant so that it can eventually be discontinued.
In Dartium #MirrorsUsed() and Smoke don't matter.
If this is an in-house application where you would even consider using Dartium for production, the code-size cost of using mirrors probably doesn't matter much anyway.
I'm not looking for any recommendations, just an objective assessment of whether any JavaScript framework can be type-checked with Flow in its current state.
With Flow decreasing in popularity compared to TypeScript, framework declaration files tend to be written in TypeScript, and conversion is neither trivial nor automatic. Is there still a framework that works well with Flow's type inference, or for which you can write your own framework declarations on the fly? Or is Flow mostly used for framework-agnostic business logic today?
The main one is React, since its type definitions are built directly into the Flow project. The other, which I haven't looked into personally, is Vue, since it's written in Flow; I can't confirm how to get its type definitions because I haven't used it.
Even if a library doesn't ship with type definitions, that doesn't mean it doesn't support Flow. One clear example is styled-components: it's built in Flow with first-class Flow support, but the definitions don't ship out of the box; instead they are distributed via flow-typed. I'm not sure of their reasoning, but most likely it's to decouple the Flow version from the styled-components version so that consumers can upgrade each independently.
Overall, if you can't find a library definition readily, either not many people use that library with Flow, or its consumers don't bother and just treat its types as any. Since many projects in the world don't use any static type checker at all, partial static analysis may be good enough.
Answering my own question: looking in the flow-typed repo for a particular library or framework will answer this. No recent update means no support, unless you have the time and interest to make a PR yourself.
https://github.com/flow-typed/flow-typed
In my project there are lots of static methods, and all of them in turn hit the database. I am supposed to write unit tests for the project, but I am often stuck because all the methods are static and they hit the DB. Is there any way to overcome this? Sorry for being abstract in the question, but my concern is how to write unit tests for static methods that hit the DB. Moq is not useful when the methods are static, and in my project one method calls another method within the same class, so I cannot mock the inner method because both are in the same class.
The project I'm currently in is a lot worse than what you have described: it is a blueprint of an untestable system. There are a couple of options, I think, but it all depends on your situation.
Write integration tests that hit the database and test multiple components together. I know this is not ideal, but it at least gives you some confidence in the work you do. Then try to refactor your code a small step at a time (be sure to take baby steps) and write unit tests around that code. Make sure your integration tests continue to pass. You are still allowed to refactor those integration-style tests if their semantics change.
This might not be easy, as I said, and it takes time. That's why I said it depends on your situation.
Another option (I know many people do this with legacy code) is to use one of the pricey isolation frameworks, such as Isolator or MS Fakes, to fake out those untestable dependencies. Once those tests are written, you can look at refactoring the code to make it more testable, for example by hiding the static calls behind an interface, as sketched below.
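A minimal sketch of that refactoring, assuming a hypothetical legacy CustomerData static class (the names here are invented; only the Moq API calls are real): the static call is hidden behind an interface plus a thin adapter, and the code under test depends on the interface so Moq can fake it.

```csharp
using Moq;

// Legacy static class that hits the database (hypothetical).
public static class CustomerData
{
    public static string GetName(int id) { /* real DB access here */ return "from db"; }
}

// Seam: an interface the rest of the code depends on instead of the static class.
public interface ICustomerData
{
    string GetName(int id);
}

// Thin adapter that just delegates to the static method; it stays untested but trivial.
public class CustomerDataAdapter : ICustomerData
{
    public string GetName(int id) => CustomerData.GetName(id);
}

// Code under test now takes the interface, so a unit test can inject a Moq fake.
public class GreetingService
{
    private readonly ICustomerData _data;
    public GreetingService(ICustomerData data) => _data = data;
    public string Greet(int id) => $"Hello, {_data.GetName(id)}";
}

public class GreetingServiceTests
{
    public void Greet_UsesCustomerName()
    {
        var fake = new Mock<ICustomerData>();
        fake.Setup(d => d.GetName(42)).Returns("Alice");

        var result = new GreetingService(fake.Object).Greet(42);

        // Plain assertion to keep the sketch test-framework agnostic.
        if (result != "Hello, Alice") throw new System.Exception("unexpected greeting");
    }
}
```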
Somebody that I work with and respect once remarked to me that there shouldn't be any need for the use of reflection in application code and that it should only be used in frameworks. He was speaking from a J2EE background and my professional experience of that platform does generally bear that out; although I have written reflective application code using Java once or twice.
My experience of Ruby on Rails is radically different, because Ruby pretty much encourages you to write dynamic code. Much of what Rails gives you simply wouldn't be possible without reflection and metaprogramming, and many of the same techniques are equally applicable and useful in your application code.
Do you agree with the viewpoint that reflection is for frameworks only? I'd be interested to hear your opinions and experiences.
There's the old joke that any sufficiently sophisticated system written in a statically-typed language contains an incomplete, inferior implementation of Lisp.
Since your requirements tend to become more complicated as a project evolves, you often find that the common idioms in statically-typed object systems eventually hit a wall. Sometimes reaching for reflection is the best solution.
I'm happy in dynamically-typed languages like Ruby, and statically-typed languages like C#, but the implicit reflection in Ruby often makes for simpler, easier-to-read code. (Depending on the metaprogramming magic required, sometimes harder to write).
In C#, I've found problems that couldn't be solved without reflection, because of information I didn't have until runtime. One example: when manipulating some third-party code that generated proxies to Silverlight objects running in another process, I had to use reflection to invoke a specific strongly-typed generic version of a method. The marshalling required the caller to make an assumption about what the type of the object in the other process was, in order to extract the data we needed from it, and C# doesn't allow the type argument of a generic method invocation to be specified at run time except with reflection. I guess you could argue our tool was kind of a framework, but I could easily imagine an ordinary application facing a similar problem.
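For what it's worth, the trick being described (choosing the type argument of a generic method at run time) looks roughly like this; the type and method names below are invented for the sketch.

```csharp
using System;
using System.Reflection;

public class Marshaller
{
    // Stand-in for the strongly-typed generic method the third-party code exposed.
    public T Extract<T>(string proxyId)
    {
        Console.WriteLine($"Extracting a {typeof(T).Name} for {proxyId}");
        return default(T);
    }
}

public static class Demo
{
    public static void Main()
    {
        // The element type is only known at run time, e.g. discovered from the other process.
        Type runtimeType = typeof(int);

        MethodInfo open = typeof(Marshaller).GetMethod(nameof(Marshaller.Extract));
        MethodInfo closed = open.MakeGenericMethod(runtimeType);

        // Equivalent to marshaller.Extract<int>("proxy-1"), but with <int> chosen at run time.
        object result = closed.Invoke(new Marshaller(), new object[] { "proxy-1" });
        Console.WriteLine(result);
    }
}
```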
Reflection makes DRY a lot easier. It's certainly possible to write DRY code without reflection, but it's often much more verbose.
If some piece of information is encoded in my program in one way, why wouldn't I use reflection to get at it, if that's the easiest way?
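As a small illustration of the kind of DRY win meant here (a sketch of my own, not from the original post): instead of hand-writing a debug dump for every class, one reflective helper reads whatever properties the type already declares.

```csharp
using System;
using System.Linq;

public static class ObjectDumper
{
    // One helper instead of a hand-written ToString per class: the property
    // names are already encoded in the type, so read them via reflection.
    public static string Dump(object obj) =>
        string.Join(", ",
            obj.GetType()
               .GetProperties()
               .Select(p => $"{p.Name}={p.GetValue(obj)}"));
}

public class Invoice
{
    public int Number { get; set; }
    public decimal Total { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        var invoice = new Invoice { Number = 7, Total = 99.5m };
        Console.WriteLine(ObjectDumper.Dump(invoice)); // Number=7, Total=99.5
    }
}
```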
It sounds like he's talking about Java specifically. And in that case, he's just citing a special case of this: in Java, reflection is so wonky it's almost never the easiest way to do something. :-) In other languages like Ruby, as you've seen, it often is.
Reflection is definitely heavily used in frameworks, but when used correctly it can help simplify code in applications.
One example I've seen before is using a JDK Proxy of a large interface (20+ methods) to wrap (i.e. delegate to) a specific implementation. Only a couple of methods were overridden using an InvocationHandler; the rest of the methods were invoked via reflection.
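That example is Java's java.lang.reflect.Proxy. Purely to keep one language across the snippets in this thread, a rough C# analog of the same idea (intercept a couple of methods, forward everything else by reflection) can be sketched with DispatchProxy, assuming .NET Core or later; the interface and class names are made up.

```csharp
using System;
using System.Reflection;

public interface IRepository
{
    string Load(int id);
    void Save(int id, string value);
    // ...imagine another 20+ members on the real interface.
}

public class FileRepository : IRepository
{
    public string Load(int id) => $"record {id}";
    public void Save(int id, string value) => Console.WriteLine($"saved {id}");
}

// Wraps any IRepository: one method is intercepted, everything else is
// forwarded to the real implementation via reflection.
public class AuditingProxy : DispatchProxy
{
    private IRepository _target;

    public static IRepository Wrap(IRepository target)
    {
        var proxy = Create<IRepository, AuditingProxy>();
        ((AuditingProxy)(object)proxy)._target = target;
        return proxy;
    }

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        if (targetMethod.Name == nameof(IRepository.Save))
            Console.WriteLine($"AUDIT: {targetMethod.Name}({string.Join(", ", args)})");

        return targetMethod.Invoke(_target, args); // reflective delegation
    }
}

public static class Demo
{
    public static void Main()
    {
        IRepository repo = AuditingProxy.Wrap(new FileRepository());
        Console.WriteLine(repo.Load(1)); // forwarded untouched
        repo.Save(1, "x");               // intercepted, then forwarded
    }
}
```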
Reflection can be useful, but it is slower than a regular method call. See this reflection comparison.
Reflection in Java is generally not necessary. It may be the quickest way to solve a certain problem, but I would rather work out the underlying problem that causes you to think it's necessary in app code. I believe this because it frequently pushes errors from compile time to run time, which is always a Bad Thing for large enough software that testing is non-trivial.
I disagree, my application uses reflection to dynamically create providers. I might also use reflection to control logic flow, if the logic is simple and doesn't warrant a more complicated pattern.
In C#, I use reflection to grab attributes off enumeration values, which help me determine how to display an enumeration to an end user.
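That pattern usually looks like the sketch below, reading a DescriptionAttribute off the enum field via reflection (the enum and its values are invented for the example).

```csharp
using System;
using System.ComponentModel;
using System.Reflection;

public enum OrderStatus
{
    [Description("Awaiting payment")] Pending,
    [Description("Shipped to customer")] Shipped,
}

public static class EnumDisplay
{
    // Read the [Description] attribute off the enum field via reflection,
    // falling back to the enum member name if no attribute is present.
    public static string ToDisplayString(Enum value)
    {
        FieldInfo field = value.GetType().GetField(value.ToString());
        var attr = field.GetCustomAttribute<DescriptionAttribute>();
        return attr?.Description ?? value.ToString();
    }
}

public static class Demo
{
    public static void Main() =>
        Console.WriteLine(EnumDisplay.ToDisplayString(OrderStatus.Shipped)); // Shipped to customer
}
```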
I disagree, reflection is very useful in application code and I find myself using it quite often. Most recently, I had to use reflection to load an assembly (in order to investigate its public types) from just the path of the assembly.
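That kind of inspection is only a couple of calls; a minimal sketch (the file path is obviously illustrative):

```csharp
using System;
using System.Reflection;

public static class AssemblyInspector
{
    public static void Main()
    {
        // Load an assembly from a file path and list its public types.
        Assembly asm = Assembly.LoadFrom(@"C:\libs\SomeVendor.dll");

        foreach (Type t in asm.GetExportedTypes())
            Console.WriteLine(t.FullName);
    }
}
```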
Several opinions on this subject are expressed here...
What is reflection and why is it useful?
Use reflection when there is no other way! This is a matter of performance!
If you have looked into .NET performance pitfalls before, it might not surprise you how slow normal reflection is: a simple test with repeated access to an int property proved to be ~1000 times slower using reflection than direct access to the property (comparing the average of the median 80% of the measured times).
See this: .NET reflection - performance
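If you want to reproduce that kind of comparison yourself, a crude Stopwatch-based sketch (not the benchmark from the linked post; your exact ratio will vary with runtime and JIT) looks like this:

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

public class Item
{
    public int Value { get; set; } = 42;
}

public static class ReflectionBenchmark
{
    public static void Main()
    {
        var item = new Item();
        PropertyInfo prop = typeof(Item).GetProperty(nameof(Item.Value));
        const int iterations = 1_000_000;
        int sink = 0;

        var direct = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            sink += item.Value;                   // direct property access
        direct.Stop();

        var reflective = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            sink += (int)prop.GetValue(item);     // reflective access (also boxes the int)
        reflective.Stop();

        Console.WriteLine(
            $"direct: {direct.ElapsedMilliseconds} ms, reflection: {reflective.ElapsedMilliseconds} ms (sink={sink})");
    }
}
```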
MSDN has a pretty nice article about When Should You Use Reflection?
If your problem is best solved by using reflection, you should use it.
(Note that the definition of 'best' is something learnt by experience :)
The definition of framework vs. application isn't all that black & white either. Sometimes your app needs a bit of framework to do its job well.
I think the observation that there shouldn't be any need for the use of reflection in application code and that it should only be used in frameworks is more or less true.
On the spectrum of how coupled pieces of code can be, code joined by reflection is about as loosely coupled as it gets.
As such, code that does its job via reflection can quite happily fulfil its role in life knowing nothing about the code that is using it.
I am working on some code coverage for my applications. Now, I know that code coverage is an activity linked to the type of tests that you create and the language for which you wish to do the code coverage.
My question is: is there any possible way to do some generic code coverage? For instance, can we have a set of features/test cases which can be run (along with many more tests specific to the application under test) to get coverage of, say, 10% or more of the code?
Put another way, if I wish to build a framework for code coverage, what is the best way to go about making a generic one? Is it possible to have some of the functionality automated or generalized?
I'm not sure that generic coverage tools are the holy grail, for a couple of reasons:
Coverage is not a goal, it's an instrument. It tells you which parts of the code are not entirely hit by a test. It does not say anything about how good the tests are.
Generated tests cannot guess the semantics of your code. Frameworks that generate tests for you can only deduce meaning from reading your code, which in essence could be wrong, because the whole point of unit testing is to see whether the code actually behaves the way you intended it to.
Because the automated framework will generate artificial coverage, you can never tell whether a piece of code is covered by a proper unit test or only superficially exercised by a framework. I'd rather have untested code show up as uncovered, so I can fix that.
What you could do (and I've done ;-) ) is write a generic test for Java beans. Using reflection, you can test a Java bean against the Sun spec of a Java bean: assert that equals and hashCode are both implemented (or neither of them), see that a getter actually returns the value you pushed in with the setter, and check whether all properties have getters and setters.
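The reflective idea translates directly to other platforms; just to keep one language in these snippets, here is a C# sketch of the getter/setter round-trip part of such a generic check (type names and the sample-value table are my own simplification):

```csharp
using System;
using System.Reflection;

public static class PropertyRoundTripTest
{
    // Generic check: every public read/write property should return
    // the value that was just written to it.
    public static void AssertGettersReturnWhatSettersStore(object instance)
    {
        foreach (PropertyInfo p in instance.GetType().GetProperties())
        {
            if (!p.CanRead || !p.CanWrite) continue;

            object sample = SampleValueFor(p.PropertyType);
            p.SetValue(instance, sample);
            object roundTripped = p.GetValue(instance);

            if (!Equals(sample, roundTripped))
                throw new Exception($"{p.Name} lost its value: wrote {sample}, read {roundTripped}");
        }
    }

    // Very small sample-value table; a real helper would cover more types.
    private static object SampleValueFor(Type type)
    {
        if (type == typeof(string)) return "sample";
        if (type == typeof(int)) return 123;
        if (type == typeof(bool)) return true;
        return type.IsValueType ? Activator.CreateInstance(type) : null;
    }
}

public class Customer
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class Demo
{
    public static void Main() =>
        PropertyRoundTripTest.AssertGettersReturnWhatSettersStore(new Customer());
}
```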
You can do the same basic trick for anything that implements Comparable, for instance.
It's easy to do, easy to maintain, and it forces you to have clean beans. As for the rest of the unit tests, I try to focus on getting the important parts tested first and thoroughly.
Coverage can give a false sense of security. Common sense cannot be automated.
This is usually achieved by combining static code analysis (Coverity, Klocwork, or their free analogs) with dynamic analysis, i.e. running tests against an instrumented application (profiler + memory checker). Unfortunately, it is hard to automate the test algorithms themselves; most tools are a kind of "recorder", able to record traffic/keys/signals (depending on the domain) and replay them, with minimal changes/substitutions such as session ID, user, etc.
I just finished a small project where changes were required to a pre-compiled, but no longer supported, ASP.NET web site. The code was ugly, but it was ugly before it was even compiled, and I'm quite impressed that everything still seems to work fine.
It took some editing, e.g. to remove control declarations, as they get put in a generated file, and conflict with the decompiled base class, but nothing a few hours didn't cure.
Now I'm just curious as to how many others have had how much success doing this. I would actually like to write a CodeProject article on defining, if not automating, the reverse engineering process.
Due to all the compiler sugar that exists in the .NET platform, you can't decompile a binary into the original code without an extremely sophisticated decompiler. For instance, the compiler creates classes in the background to handle closures. Automating this kind of thing seems like it would be a daunting task. However, handling the expected issues just to get the code to compile might be scriptable.
Will:
Due to all the compiler sugar that exists in the .NET platform
Fortunately this particular application was incredibly simple, but I don't expect to decompile into the original code, just into code that works like the original, or maybe even provides an insight into how the original works, to allow 'splicing' in of new code.
I had to do something similar, and I was actually happier than if I'd had the original code. It might have taken me less time to do it, but the quality of the code after the compiler optimized it was probably better than the original. So yes, if it's a simple application, it is relatively simple to reverse engineer it; on the other hand, I would like to avoid having to do that in the future.
If it was written in .NET 1.1 or .NET 2.0 you'll have a lot more success than with anything compiled by the VS 2008 compilers, mainly because of the syntactic sugar that the newer language revisions brought in (lambdas, anonymous types, etc.).
As long as the code wasn't obfuscated, you should be able to use Reflector to get viable code; if you then put it into VS you should immediately find any errors in the reflected code.
Be on the lookout for variables/methods starting with <>; I see that a lot (particularly when reflecting .NET 3.5 code).
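Those <>-prefixed members are compiler-generated. For example (a sketch; the exact generated names differ between compiler versions), source like this typically decompiles into an extra <>c__DisplayClass holding the captured local:

```csharp
using System;
using System.Linq;

public static class Example
{
    public static int[] AddOffset(int[] values, int offset)
    {
        // The lambda captures 'offset', so the C# compiler emits a hidden
        // closure class (named something like <>c__DisplayClass0_0 in the IL)
        // with 'offset' as a field and the lambda body as a method.
        return values.Select(v => v + offset).ToArray();
    }
}
```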
The worst case is to export it all to VS, hit compile, see how many errors there are, and make a call from that.
But if it's a simple enough project you should be able to reverse engineer it from Reflector, or at least use Reflector to get the general gist of what the code is doing and then recode it yourself.