D3D11 in Metro doesn't support D3DReflect? (Why not?)

D3D11 in Metro doesn't support D3DReflect.
Why not?
My API uses this to dynamically get the constant buffer sizes of shaders.
Is there any other way to get a constant buffer size dynamically in D3D11 without an ID3D11ShaderReflection object? Or to get the constant variables by name?
What if I wanted to make a shader compiler tool for Metro?
What if I wanted to make an art application that let you dynamically generate complex brushes, which requires shader generation? That doesn't work either.
Do Windows (desktop), OS X, Linux, iOS or Android have these shader limitations?
No, so why on earth does Metro?

See http://social.msdn.microsoft.com/Forums/en-US/wingameswithdirectx/thread/9ae33f2c-791a-4a5f-b562-8700a4ab1926 for some discussion about it.
There is no official statement explaining why they made this restriction, but it is very much in line with WinRT's blanket restriction on dynamic code execution. So your application scenario is unfortunately not possible.
It would be feasible to hack/patch d3dcompiler_xx.dll and redirect all of its DLL imports to another DLL that uses only authorized APIs, but that's quite some work, and it is not even certain that it would be legal (even if you extracted the code from the original d3dcompiler and rebuilt a new DLL).
Another option for your scenario is to send the shader over the internet to a server that compiles it and returns the bytecode and reflection info... far from perfect.
Among the platforms you mention, iOS is probably the only one that could have the same restriction (I don't develop on that platform, so I can't confirm it).
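For context, here is roughly what the desktop-only path under discussion looks like: compile HLSL at runtime with D3DCompile, then hand the bytecode to D3DReflect to query constant buffer sizes and look up variables by name. This is a minimal sketch only; the shader source, entry point and names are made up for illustration and most error handling is omitted. On desktop Windows this works as shown, and these D3DCompile/D3DReflect calls are precisely what a Metro-style app may not ship with.

#include <cstring>
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

void DumpConstantBuffers()
{
    // Hypothetical shader source, purely for illustration.
    const char* src =
        "cbuffer PerFrame { float4x4 worldViewProj; float4 tint; }\n"
        "float4 main(float4 pos : POSITION) : SV_POSITION\n"
        "{ return mul(pos, worldViewProj) + tint; }\n";

    ID3DBlob* bytecode = nullptr;
    ID3DBlob* errors = nullptr;
    if (FAILED(D3DCompile(src, strlen(src), nullptr, nullptr, nullptr,
                          "main", "vs_5_0", 0, 0, &bytecode, &errors)))
        return; // 'errors' would hold the compiler log; handling omitted

    // The call that is unavailable to Metro-style apps:
    ID3D11ShaderReflection* reflector = nullptr;
    D3DReflect(bytecode->GetBufferPointer(), bytecode->GetBufferSize(),
               IID_ID3D11ShaderReflection,
               reinterpret_cast<void**>(&reflector));

    D3D11_SHADER_DESC shaderDesc = {};
    reflector->GetDesc(&shaderDesc);
    for (UINT i = 0; i < shaderDesc.ConstantBuffers; ++i)
    {
        ID3D11ShaderReflectionConstantBuffer* cb =
            reflector->GetConstantBufferByIndex(i);
        D3D11_SHADER_BUFFER_DESC cbDesc = {};
        cb->GetDesc(&cbDesc);
        // cbDesc.Size is the byte size you'd pass to CreateBuffer;
        // cbDesc.Variables is the number of variables in the buffer.
    }

    // Variables can also be looked up by name:
    ID3D11ShaderReflectionVariable* tint = reflector->GetVariableByName("tint");
    D3D11_SHADER_VARIABLE_DESC varDesc = {};
    if (SUCCEEDED(tint->GetDesc(&varDesc)))
    {
        // varDesc.StartOffset and varDesc.Size locate 'tint' in its buffer.
    }

    reflector->Release();
    bytecode->Release();
}

For a Metro-style app, the practical equivalent is to run this step offline (or on a server, as suggested above) and ship the resulting buffer sizes and variable offsets alongside the compiled bytecode.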

Related

QTP: writing a test on a Win32 app, ObjectSpy not finding object ID

I am experienced writing automation tests for web apps using Selenium.
However I now have to automate a Windows Desktop app which I'm new to.
I'm using QTP 11 (an old version) and I can get QTP to log in to the desktop app by typing the username/password. However, when the app loads, there are icons laid out like a Windows desktop. I tried using ObjectSpy on the Actions folder icon, but it can't find the object ID and thinks the icon is a WinObject("COMPOSITE").
I also tried QTP's Record feature, but the code it generates uses hardcoded x and y values. I don't want to use x,y values: if the Actions icon moves 3 cm left or right in the future, the test will fail.
e.g.
Window("Loan IQ").WinObject("COMPOSITE").Click 369,33
Need help finding the object ID in a Win32 app. Thanks
First of all, you should make sure that UFT is configured to test your application. In the Record and Run Settings dialog, make sure that either "Any Windows application" is selected or your app is explicitly listed.
If this doesn't improve the situation, you can try image-based testing (aka Insight).
Win32 apps can be a nightmare to automate, especially with QTP 11, which is a rather outdated version. If you want stable automation, I propose the following:
Upgrade to a newer version of UFT (14+)
This will most probably not help you identify the objects by itself, but newer versions support many additional technologies that may help you, as described in the following steps.
Use Image Based Recognition
Even if your screen resolution changes, UFT is still able to identify pictures. It does not use absolute vectors to compare bitmaps but a different technique, which I won't describe in detail (long story short, screen resolution changes are okay).
Provide support for your Widgets
Microsoft has two frameworks that can be used to provide UI Automation capabilities (initially built for accessibility, but now also used for RPA and GUI testing). UFT supports Microsoft's MSAA and UIA frameworks, so if your company is ready to implement support for your UI widgets via one of these technologies, you are on your way to a smooth test automation experience. Please note: this is usually a huge investment, so if the tool is internal and not planned for long-term usage, go with image-based recognition instead; a sketch of the UIA route follows below.
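If the widgets are your company's own Win32 controls, the UIA route means implementing provider interfaces inside the application itself. Here is a minimal C++ sketch of what that looks like for a single custom-drawn control; the name "Actions" and AutomationId "actionsIcon" are made-up placeholders, no control patterns are exposed, and error handling is omitted.

#include <windows.h>
#include <uiautomation.h>
#pragma comment(lib, "uiautomationcore.lib")

class ActionsIconProvider : public IRawElementProviderSimple
{
    LONG m_refs = 1;
    HWND m_hwnd;
public:
    explicit ActionsIconProvider(HWND hwnd) : m_hwnd(hwnd) {}

    // IUnknown
    ULONG STDMETHODCALLTYPE AddRef() override { return InterlockedIncrement(&m_refs); }
    ULONG STDMETHODCALLTYPE Release() override
    {
        ULONG n = InterlockedDecrement(&m_refs);
        if (n == 0) delete this;
        return n;
    }
    HRESULT STDMETHODCALLTYPE QueryInterface(REFIID riid, void** ppv) override
    {
        if (riid == __uuidof(IUnknown) || riid == __uuidof(IRawElementProviderSimple))
        { *ppv = static_cast<IRawElementProviderSimple*>(this); AddRef(); return S_OK; }
        *ppv = nullptr; return E_NOINTERFACE;
    }

    // IRawElementProviderSimple
    HRESULT STDMETHODCALLTYPE get_ProviderOptions(ProviderOptions* ret) override
    { *ret = ProviderOptions_ServerSideProvider; return S_OK; }
    HRESULT STDMETHODCALLTYPE GetPatternProvider(PATTERNID, IUnknown** ret) override
    { *ret = nullptr; return S_OK; } // no control patterns in this sketch
    HRESULT STDMETHODCALLTYPE GetPropertyValue(PROPERTYID prop, VARIANT* ret) override
    {
        VariantInit(ret);
        if (prop == UIA_NamePropertyId)
        { ret->vt = VT_BSTR; ret->bstrVal = SysAllocString(L"Actions"); }
        else if (prop == UIA_AutomationIdPropertyId)
        { ret->vt = VT_BSTR; ret->bstrVal = SysAllocString(L"actionsIcon"); }
        else if (prop == UIA_ControlTypePropertyId)
        { ret->vt = VT_I4; ret->lVal = UIA_ButtonControlTypeId; }
        return S_OK;
    }
    HRESULT STDMETHODCALLTYPE get_HostRawElementProvider(IRawElementProviderSimple** ret) override
    { return UiaHostProviderFromHwnd(m_hwnd, ret); }
};

// In the control's window procedure, hand the provider to UIA:
// case WM_GETOBJECT:
//     if (static_cast<long>(lParam) == UiaRootObjectId)
//         return UiaReturnRawElementProvider(hwnd, wParam, lParam, g_provider);
//     break;

Once a control exposes a Name and AutomationId this way, tools that speak UIA (UFT included) can identify it by those properties instead of falling back to screen coordinates.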

PlayN - Managing Common Code / Native Code

I am thinking about using PlayN to manage "common code" in Java and use PlayN to generate iOS, Android, and HTML native versions of the common code.
I figure I could then take the PlayN-generated native code and link it with actual platform-specific code (such as UI).
In other words,
Common Code Libs in Java -> PlayN -> Native Common Code Libs -> Link with Native App
Is the use of PlayN for the above workflow/pipeline appropriate? Any challenges?
Thank you...
Firstly, you have to specify what you mean by "native" code for the different platforms.
On Android, your Java files are specifically compiled/prepared for Dalvik, so they are already "native" after a fashion; no work needs to be done here. If you want C/C++ native code for Android via the NDK, you're out of luck: PlayN doesn't do this, and it is a hard problem (going from Java to C++).
If you take a look at the Maven modular layout of how PlayN is intended to be used, it isn't difficult to define a Factory interface in the common code and pass in a platform-specific implementation for each module. It's no big deal to support Android-specific functionality this way.
For the HTML version, you can use HTML libraries without a problem via JSNI, although I imagine that tapping browser-specific functionality is of limited value compared to what PlayN has already exposed. The one thing that is useful is text/keyboard input; for that I'd recommend the triplePlay UI library https://github.com/threerings/tripleplay as they've solved this, and it's an active project.
As for iOS, this is more complicated: the iOS module is a bit of a hack, where the compiled Java classes are run through a JVM runtime for .NET (IKVM) and the MonoTouch tools then compile the whole thing to native code for iOS. See https://github.com/samskivert/ikvm-monotouch
So for iOS you won't be able to bind the code to any form of native version, and what you have access to via this toolchain depends very much on what MonoTouch covers on iOS (quite a lot, I imagine) and on what IKVM-MonoTouch supports (the bare minimum to get PlayN working, I imagine).
I'm not familiar enough with the Flash pipeline to give you an appraisal, although I think that it's quite flexible.
The above answer is written assuming your app is actually a game. If it is not, and you intend to use the standard widget libraries of the various platforms en masse, it should be possible. Choosing a good MVP framework would help here, and the assumptions it makes about the different host environments will determine how easy the whole thing is.
I'd recommend reading and comparing https://developers.google.com/web-toolkit/articles/mvp-architecture and perhaps looking at questions like "What is your favorite GWT MVP framework?"
...although a lot of these frameworks may be GWT-specific and not really designed to be reused on other platforms.

Google Native Client - How to protect the source code?

With Google Native Client, can the source code be protected so that, unlike JavaScript, it is not visible in the client?
If so, how? Thanks!
As the name says, Google Native Client uses native code.
That means your code is compiled, just like your average executable binary on the desktop. It can be disassembled, but the source code can't be recovered.
Native Client means that you are running native code on the client. In most cases, you'll be running i386 or amd64 machine code on your clients. If you're using a compiled language, then your users cannot directly recover your source. Users can disassemble your software to recover some information about your code, but they cannot recover the original source code (unless it is assembly language). Rewriting a piece of software from a disassembled binary is difficult, but given enough time it can usually be done. It really depends on how paranoid you are about the people using your code.
Native Client's structural requirements, which enable reliable disassembly for static analysis, can make some code obfuscation techniques unusable. These are often the same techniques that malware uses to make analysis difficult, e.g., crafting an instruction stream with two valid interpretations when decoded at different offsets. Native Client does, however, permit a form of self-modifying code, since it has JIT support. Mono uses just-in-time code generation, for example, and the same interfaces could be used to create obfuscated code, as long as the JIT'ted code continues to conform to the NaCl security requirements.
Using the JIT interface would of course make your code non-portable to other CPU architectures.

Ubiquitous framework to target multiple smartphone OS?

Is there already a ubiquitous/general framework for targeting multiple smartphone OSes, i.e. something like a Qt for Android/iPhone/Symbian? Or would it be technically too hard to write such a framework?
Technically it would be pretty much impossible, or at least very difficult.
The first problem is that the platforms mentioned don't share a common language, so you wouldn't be able to directly share source code. The second is that your abstraction layer would have to be so big that it would probably kill performance.
The closest thing that I'm aware of is something like OpenGL ES (you can almost copy & paste OpenGL code across platforms).
A more realistic option is targeting the web layer with an HTML5 application.
PhoneGap, if it fits your needs: it packages a web app, with limited access to device services, as an installable app.

When should one use a project reference as opposed to a binary reference?

My company has a common code library which consists of many class library projects along with supporting test projects. Each class library project outputs a single binary, e.g. Company.Common.Serialization.dll. Since we own the compiled, tested binaries as well as the source code, there's debate as to whether our consuming applications should use binary or project references.
Some arguments in favor of project references:
Project references would allow users to debug and view all solution code without the overhead of loading additional projects/solutions.
Project references would help in keeping up with common component changes committed to the source control system, as changes would be easily identifiable within the active solution.
Some arguments in favor of binary references:
Binary references would simplify solutions and make for faster solution loading times.
Binary references would allow developers to focus on new code rather than potentially being distracted by code which is already baked and proven stable.
Binary references would force us to appropriately dogfood our stuff as we would be using the common library just as those outside of our organization would be required to do.
Since a binary reference can't be debugged (stepped into), one would be forced to replicate and fix issues by extending the existing test projects rather than testing and fixing within the context of the consuming application alone.
Binary references will ensure that concurrent development on the class library project has no impact on the consuming application, as a stable version of the binary will be referenced rather than an in-flux version. It would be the project lead's decision whether or not to incorporate a newer release of the component if necessary.
What is your policy/preference when it comes to using project or binary references?
It sounds to me as though you've covered all the major points. We've had a similar discussion at work recently and we're not quite decided yet.
However, one thing we've looked into is referencing the binary files, to gain all the advantages you note, but having the binaries produced by a common build system, with the source code in a common location accessible from all developer machines (at least when they're on the network at work), so that any debugging can in fact dive into the library code if necessary.
However, on the same note, we've also tagged a lot of the base classes with the appropriate attributes to make the debugger skip them completely, because any debugging you do in your own classes (at the level you're developing) would otherwise be vastly outsized by code from the base libraries. This way, when you hit the Step Into debugging shortcut key on a call into a library class, you resurface in the next piece of code at your current level, instead of having to wade through tons of library code.
Basically, I definitely vote up (in SO terms) your comments about keeping proven library code out of sight for the normal developer.
Also, if I load the global solution file that contains all the projects and, basically, just everything, ReSharper 4 seems to have some kind of coronary problem, and Visual Studio practically comes to a standstill.
In my opinion the greatest problem with using project references is that it does not provide consumers with a common baseline for their development. I am assuming that the libraries are changing. If that's the case, building them and ensuring that they are versioned will give you an easily reproducible environment.
Not doing this will mean that your code will mysteriously break when the referenced project changes. But only on some machines.
I tend to treat common libraries like this as third-party resources. This allows the library to have its own build processes, QA testing, etc. When QA (or whoever) "blesses" a release of the library, it's copied to a central location available to all developers. It's then up to each project to decide which version of the library to consume, by copying the binaries to a project folder and using binary references in the projects.
One thing that is important is to create debug symbol (pdb) files with each build of the library and make those available as well. The other option is to create a local symbol store on your network and have each developer add that symbol store to their VS configuration. This would allow you to debug through the library code and still have the benefits of using binary references.
As for the benefits you mention for project references, I don't agree with your second point. To me, it's important that the consuming projects explicitly know which version of the common library they are consuming and for them to take a deliberate step to upgrade that version. This is the best way to guarantee that you don't accidentally pick up changes to the library that haven't been completed or tested.
When you don't want it in your solution, or when your solution might be split up, send all library output to a common bin directory and reference it there.
I have done this in order to let developers open a tight solution that only has the domain, test and web projects. Our Windows services, Silverlight stuff and web control libraries are in separate solutions that include the projects you need when looking at those, but NAnt can build it all.
I believe your question is actually about when projects go together in the same solution; the reason being that projects in the same solution should have project references to each other, and projects in different solutions should have binary references to each other.
I tend to think solutions should contain projects that are developed closely together. Such as your API assemblies and your implementations of those APIs.
Closeness is relative, however. A designer for an application is, by definition, closely related to the app; however, you wouldn't want the designer and the application within the same solution (if they are at all complex, that is). You'd probably want to develop the designer against a branch of the program that is merged at intervals spaced further apart than the normal daily integration.
I think that if the project is not part of the solution, you shouldn't include it there... but that's just my opinion.
In short, I separate by concept.
