ColdFusion CFC implementation of C# Partial Class? - asp.net

Does ColdFusion offer a mechanism for splitting CFCs into multiple files? I am NOT talking about extension, I am talking about splitting the SAME CFC into multiple files; the same way C# allows for "partial" classes. The reason is that I am using T4 to generate a bunch of CFCs, and I want to be able to tack functionality onto a generated CFC in another file. I want to do this in a way that doesn't violate the Open-Closed Principle.
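For anyone unfamiliar with the C# feature being referenced, a partial class is a single class whose definition is split across files and merged by the compiler; the file and member names below are purely illustrative.

// Widget.Generated.cs -- e.g. emitted by T4
public partial class Widget
{
    public int Id { get; set; }
}

// Widget.Custom.cs -- hand-written additions, compiled into the same class
public partial class Widget
{
    public string Describe()
    {
        return "Widget " + Id;
    }
}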

<cfinclude> will work as far as basic functionality is concerned, however the following will not work correctly:
Metadata functions - only the functions within the inspected CFC will be shown, not the cfincluded functions
<cfajaxproxy cfc="your.cfc"> will not allow the included methods to be called through JavaScript
Calling included functions as a web service call will not be possible
(all these cases basically boil down to the metadata only seeing the functions etc. which are within the base CFC)
It might be worth googling "ColdFusion mixins" for more detail on this and other related techniques.

No, sorry. A limitation of the language, I'm afraid. A CFC is a single file.
I mean, of course, you could bastardize it somehow. You could have fragments that are wrapped in a cfcomponent tag as part of some kind of build process, but I'm pretty sure that's not what you're looking for here.

Related

Generating files to multiple paths with Swagger Codegen?

I'm creating a template for our server-side codegen implementation, but I ran into an issue for a feature request...
The developers who are going to use the generated base want the following pattern (the generator is based on the dotnetcore one):
Controllers
  v{apiVersion}
    {endpoint}ApiController : Controller, I{endpoint}Api
Interfaces
  v{apiVersion}
    I{endpoint}Api
    I{endpoint}DataProvider
DataProviders
  v{apiVersion}
    {endpoint}DataProvider : I{endpoint}DataProvider
Both interfaces are the same, describing the endpoints. The DataProvider implementation will allow us to use DI to hot-swap the actual data provider/business logic layer during runtime.
The generated ApiControllers will refer to the IDataProviders, and use the actual implementation (the currently active one, that is). For that we're going to use dotnetcore's built-in dependency injection system.
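As a rough sketch of the kind of registration we have in mind (using a hypothetical "Weather" endpoint; the names are illustrative only):

// Startup.ConfigureServices in the host project
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    // Swapping this registration changes the active data provider
    // without touching the generated ApiController.
    services.AddScoped<IWeatherDataProvider, WeatherDataProvider>();
}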
However, I can't seem to find a way to have the operations generator output to three different folders based on the template. Everything ends up jumbled in a single folder, and I have to move the files manually.
Is there a way to meet these requirements, or will I have to keep sorting the output by hand every time?

Where to put common Functions/Constants in ASP.Net MVC

I am new to ASP.Net MVC. I have a couple of controllers and models. They all use a set of static functions and constants which I call common code.
In my MVC project I have folders for Controllers, Models, Views, etc.
Where is all the common code supposed to go?
Is it OK to create a Common folder with a new class for my static functions, and the same for global constants?
If you reuse this common code often across solutions, you might want to consider compiling it into its own class library and simply referencing the assembly.
Another thing you'll want to consider is the nature of the common functions. Are they truly just helper functions (like manipulating strings and stuff like that) or do they make more sense mixed into your business layers?
The basic rule is to keep it organized and be consistent. There's no right or wrong way to structure your application... only hundreds of thousands of opinions.
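As a rough sketch of the class-library approach mentioned above (the project and member names are made up for illustration), you would compile something like this into its own assembly and reference it from each web project:

// MyCompany.Common/Helpers.cs -- hypothetical shared class library
namespace MyCompany.Common
{
    public static class AppConstants
    {
        public const int DefaultPageSize = 25;
    }

    public static class StringHelpers
    {
        // Truncates a string to maxLength, appending an ellipsis if needed.
        public static string Truncate(string value, int maxLength)
        {
            if (string.IsNullOrEmpty(value) || value.Length <= maxLength)
                return value;
            return value.Substring(0, maxLength) + "...";
        }
    }
}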
Exactly: you can create a Helper folder where you put your extension methods or other common utilities.
But for constants I suggest you create a resource file.
Remarks: put all text, warning, or info messages in resource files instead of writing them in code, for globalization needs (that's the case on my project).
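A minimal sketch of that Helper-folder idea, assuming an MVC project with a Helpers folder (the namespace and method are illustrative):

// Helpers/StringExtensions.cs -- hypothetical helper in the MVC project
namespace MyMvcApp.Helpers
{
    public static class StringExtensions
    {
        // Usage from a controller or view: "hello world".ToTitleCase()
        public static string ToTitleCase(this string value)
        {
            return System.Globalization.CultureInfo.CurrentCulture
                .TextInfo.ToTitleCase(value);
        }
    }
}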

GetGlobalResourceObject or Resources.Resource - what's better?

I have an application that is multilingual. I'm using the out-of-the-box .NET features for this. Each language has its own file in App_GlobalResources (see image below).
In the code-behind, which is better?
GetGlobalResourceObject("LocalizedText", "ErrorOccured")
Resources.LocalizedText.ErrorOccured
The second one uses less code and it's type-safe: it will produce an error at compile time rather than at run time.
(image) http://img340.imageshack.us/img340/5562/langl.gif
These are the advantages of each approach:
Advantages of GetGlobalResourceObject (and GetLocalResourceObject):
You can specify a particular culture instead of using the CurrentCulture.
You can use a late-bound expression (i.e. a string) to decide which resource to load. This is useful if you can't know ahead of time which resource you will need to load.
It works with any resource provider type. For example, it works not only with the built-in default RESX-based provider but it'll work the same against a database-based provider.
Advantages of strongly-typed RESX types:
You get compile-time errors if you access a resource that doesn't exist.
You get Intellisense while working on the project.
So, as with many "which is best" questions, the answer is: It depends! Choose the one that has advantages that will benefit your particular scenarios the most.
So use the second one, if you know up-front what the resource file and key will be.
The GetGlobalResourceObject() method is useful if you don't know what the resource file or (more likely) the key will be at compile time.
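To make the trade-off concrete, here is a small sketch assuming the LocalizedText resource file and ErrorOccured key from the question (System.Web is assumed to be referenced):

// In a page's code-behind:
protected void ShowError()
{
    // Strongly typed: compile-time checked, uses the current UI culture.
    string msg1 = Resources.LocalizedText.ErrorOccured;

    // Late bound: the key is a string, and an explicit culture can be passed.
    var german = new System.Globalization.CultureInfo("de-DE");
    string msg2 = (string)System.Web.HttpContext.GetGlobalResourceObject(
        "LocalizedText", "ErrorOccured", german);
}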

Using Web Services in the Flex Mate Framework

I am currently trying to use the "Invocation tags" of Mate to call my web services and delegate the WS-responses to my fault/result handlers.
I want to use the generated proxies, provided by the Flex Builder, and not the plain <WebService> or <WebServiceInvoker> tags.
I actually failed using several techniques:
<WebServiceInvoker> does not work with the generated proxies.
<AsyncMethodInvoker> needs some complicated successType that I could not get to work with the WS calls. And defining the events seems redundant to me. I want it simple and easy to read; the code will be touched by people other than me!
<MethodInvoker> can't use instances, and it also can't handle the proxies' AsyncToken
<DelegateInvoker> looked fine at first. It calls the service but doesn't fire valid result events (infinite busy cursor). Even though I can successfully bind to the XYZ_lastResult of the WS proxies, and a WS call returns valid data from the WS backend, the <faultHandlers> and <resulthandlers> are not executed. There is a solution for the DelegateInvoker that changes code in the generated proxies, which I definitely do not want to do!
So here is my question: Is there a simple(!) way of using default Flexbuilder generated proxies with the Mate Invocation tags?
It appears that your request is not that uncommon among Mate users. Check out these two threads in their forum:
http://mate.asfusion.com/forums/topic.php?id=424
http://mate.asfusion.com/forums/topic.php?id=421
The solution is to modify some bits of the auto-generated code... which in a way ruins the whole point of using code generation.

What is Reflection?

I am VERY new to ASP.NET. I come from a VB6 / ASP (classic) / SQL Server 2000 background. I am reading a lot about Visual Studio 2008 (have installed it and am poking around). I have read about "reflection" and would like someone to explain, as best as you can to an older developer of the technologies I've written above, what exactly Reflection is and why I would use it... I am having trouble getting my head around that. Thanks!
Reflection is how you can explore the internals of different Types, including members you wouldn't normally have access to (i.e. private, protected, etc.).
It's also used to dynamically load DLLs and get access to types and methods defined in them without statically compiling them into your project.
In a nutshell: Reflection is your toolkit for peeking under the hood of a piece of code.
As to why you would use it, it's generally only used in complex situations, or code analysis. The other common use is for loading precompiled plugins into your project.
Reflection lets you programmatically load an assembly, get a list of all the types in an assembly, get a list of all the properties and methods in these types, etc.
As an example:
myobject.GetType().GetProperty("MyProperty").SetValue(myobject, "wicked!", null);
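As a slightly fuller sketch of that kind of enumeration (a self-contained console program; the assembly inspected here is simply the program itself):

using System;
using System.Reflection;

class ReflectionDemo
{
    static void Main()
    {
        // List every type in the currently executing assembly,
        // then every public property of each type.
        Assembly asm = Assembly.GetExecutingAssembly();
        foreach (Type t in asm.GetTypes())
        {
            Console.WriteLine(t.FullName);
            foreach (PropertyInfo p in t.GetProperties())
                Console.WriteLine("  " + p.PropertyType.Name + " " + p.Name);
        }
    }
}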
It allows the internals of an object to be reflected to the outside world (code that is using said objects).
A practical use in statically typed languages like C# (and Java) is to allow invocation of methods/members at runtime via a string (e.g. the name of the method - perhaps you don't know at compile time which method you will need to call).
In the context of dynamic languages I haven't heard the term as much (as generally you don't worry about the above), other than perhaps to iterate through a list of methods/members, etc.
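A minimal sketch of calling a method whose name is only known at run time (the Greeter class and method name are made up for illustration):

using System;

public class Greeter
{
    public string SayHello(string name)
    {
        return "Hello, " + name;
    }
}

class Program
{
    static void Main()
    {
        object target = new Greeter();
        // Imagine this string arrives from config or user input at run time.
        string methodName = "SayHello";
        var method = target.GetType().GetMethod(methodName);
        object result = method.Invoke(target, new object[] { "world" });
        Console.WriteLine(result); // Hello, world
    }
}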
Reflection is .NET's means of manipulating or extracting information from an assembly, class or method at run time. For example, you can create a class at runtime, including its methods. As stated by monoxide, reflection is used to dynamically load assemblies as plugins or, in advanced cases, to create compilers targeting .NET, like IronPython.
Updated: You may refer to the topic on metaprogramming and its related topics for more details.
When you build any assembly in .NET (ASP.NET, Windows Forms, Command line, class library etc), a number of meta-data "definition tables" are also created within the assembly storing information about methods, fields and types corresponding to the types, fields and methods you wrote in your code.
The classes in the System.Reflection namespace in .NET allow you to enumerate and iterate over these tables, providing an "object model" for you to query and access items in them.
One common use of Reflection is providing extensibility (plug-ins) to your application. For example, Reflection allows you to load an assembly dynamically from a file path, query its types for a specific useful type (such as an Interface your application can call) and then actually invoke a method on this external assembly.
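A minimal sketch of that plug-in pattern, assuming a shared IPlugin interface that both the host and the external assemblies reference (all names here are hypothetical):

using System;
using System.IO;
using System.Reflection;

// Shared contract, defined in an assembly both sides reference.
public interface IPlugin
{
    void Execute();
}

public static class PluginLoader
{
    public static void RunAll(string pluginFolder)
    {
        foreach (string file in Directory.GetFiles(pluginFolder, "*.dll"))
        {
            Assembly asm = Assembly.LoadFrom(file);
            foreach (Type type in asm.GetTypes())
            {
                // Pick out concrete types that implement the contract.
                if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract)
                {
                    var plugin = (IPlugin)Activator.CreateInstance(type);
                    plugin.Execute();
                }
            }
        }
    }
}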
Custom attributes also go hand in hand with reflection. For example, the NUnit unit testing framework allows you to mark a test class and its test methods by adding [TestFixture] and [Test] attributes to your own code.
The NUnit test runner must then use Reflection to load your assembly, search for all occurrences of methods that have the test attribute, and actually call your tests.
This is simplifying it a lot, however it gives you a good practical example of where Reflection is essential.
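A simplified sketch of what such a test runner does under the hood (using a home-grown attribute rather than NUnit's real ones, so all names below are invented):

using System;
using System.Reflection;

// Stand-in for an attribute like NUnit's [Test].
[AttributeUsage(AttributeTargets.Method)]
public class MyTestAttribute : Attribute { }

public class CalculatorTests
{
    [MyTest]
    public void AdditionWorks()
    {
        if (1 + 1 != 2) throw new Exception("broken math");
    }
}

class MiniRunner
{
    static void Main()
    {
        Assembly asm = Assembly.GetExecutingAssembly();
        foreach (Type type in asm.GetTypes())
            foreach (MethodInfo method in type.GetMethods())
                if (method.GetCustomAttributes(typeof(MyTestAttribute), false).Length > 0)
                {
                    // Found a marked method: create the fixture and invoke it.
                    object fixture = Activator.CreateInstance(type);
                    method.Invoke(fixture, null);
                    Console.WriteLine(type.Name + "." + method.Name + " passed");
                }
    }
}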
Reflection certainly is powerful, but be aware that it allows you to completely disregard the fundamental concept of access modifiers (encapsulation) in object-oriented programming.
For example, you can easily use it to retrieve a list of private methods in a class and actually call them. For this reason you need to think carefully about how and where you use it, to avoid bypassing encapsulation and creating very tightly coupled (bad) code.
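For instance, something like the following will happily call a private method, which is exactly why it should be used sparingly (the Vault class is made up for illustration):

using System;
using System.Reflection;

public class Vault
{
    // Not part of the public API.
    private string Combination()
    {
        return "12-34-56";
    }
}

class Program
{
    static void Main()
    {
        var vault = new Vault();
        MethodInfo secret = typeof(Vault).GetMethod(
            "Combination", BindingFlags.Instance | BindingFlags.NonPublic);
        Console.WriteLine(secret.Invoke(vault, null)); // 12-34-56
    }
}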
Reflection is the process of inspecting the metadata of an application. In other words, when reading attributes you have already looked at some of the functionality that reflection offers. Reflection enables an application to collect information about itself and act on this information. Reflection is slower than normally executing static code; it can, however, give you a flexibility that static code can't provide.
