Does anybody have samples for referencing a Code First DbContext from a T4 template?
I have found some T4 examples that use a .dbml as source and also ones that reference a database. I would like to loop through and build javascript files for all the classes in the context. I am having a hard time figuring out how to reference the EnvDTE variable to get the DbContext. From there I will convert to an ObjectContext and loop through the classes to generate the code.
Any ideas or examples?
You could compile the assembly containing the Code First model down to a .dll, and then in the T4 process load that DLL and read the data from it through reflection.
We took this kind of approach in a process where we had a means of generating serialization classes from the database, but could not hook T4 into that phase; it could only run after code generation for the serialization classes was done. In that case it was easier to compile the result and then read it through reflection.
If you go with this approach you have to deal with the fact that you need to build part of the application first, then run T4 generation against that build to produce the rest. Since you're creating JavaScript files, that should make things easier.
You can make your reflection-based T4 template "preprocessed" so that you can run it from a post-build script.
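To give a rough idea, here is a minimal sketch of such a reflection-based template. The assembly path, the context name "MyDbContext", and the convention of treating DbSet&lt;T&gt; properties as entity sets are my own assumptions for illustration, not something taken from the original project:

<#@ template debug="false" hostspecific="true" language="C#" #>
<#@ assembly name="System.Core" #>
<#@ import namespace="System" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Reflection" #>
<#@ output extension=".js" #>
<#
    // Hypothetical path to the already-built Code First assembly.
    var assembly = Assembly.LoadFrom(Host.ResolvePath(@"bin\Debug\MyApp.Data.dll"));

    // Assume the context exposes its entity sets as DbSet<T> properties.
    var contextType = assembly.GetTypes().First(t => t.Name == "MyDbContext");
    var entityTypes = contextType.GetProperties()
        .Where(p => p.PropertyType.IsGenericType
                 && p.PropertyType.GetGenericTypeDefinition().Name == "DbSet`1")
        .Select(p => p.PropertyType.GetGenericArguments()[0]);

    foreach (var entityType in entityTypes)
    {
#>
function <#= entityType.Name #>() {
<#
        foreach (var property in entityType.GetProperties())
        {
#>
    this.<#= property.Name #> = null; // <#= property.PropertyType.Name #>
<#
        }
#>
}
<#
    }
#>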
EDIT: Added a case demonstration presented at a seminar.
http://abstractiondev.wordpress.com/2012/03/09/microsoft-techdays-2012-finland-adm-materials/
Download the demonstration from GitHub and look at the Advanced7.tt demonstration in the "T4 Demos.sln" solution. It parses types and properties from the given assembly name.
Related
I am new to ASP.Net MVC. I have a couple of controllers and models. They all use a set of static functions and constants which I call common code.
In my MVC project I have folders for controllers, models, views, etc.
Where is all the common code supposed to go?
Is it OK to create a Common folder and add a new class for my static functions, and do the same for global constants?
If you reuse this common code often across solutions, you might want to consider compiling it into its own class library and simply referencing the assembly.
Another thing you'll want to consider is the nature of the common functions. Are they truly just helper functions (like manipulating strings and stuff like that) or do they make more sense mixed into your business layers?
The basic rule is to keep it organized and be consistent. There's no right or wrong way to structure your application... only hundreds of thousands of opinions.
You can create a Helpers folder for your extension methods and other common utilities.
For constants, though, I suggest you create a resource file.
Remark: put all text, warning, and info messages in resources rather than writing them in code, for globalization purposes (that was the case on my project).
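For example, a minimal sketch of such a helper (the names here are purely illustrative):

// Helpers/StringHelpers.cs - a hypothetical common extension method
namespace MyMvcApp.Helpers
{
    public static class StringHelpers
    {
        public static string Truncate(this string value, int maxLength)
        {
            if (string.IsNullOrEmpty(value) || value.Length <= maxLength)
                return value;
            return value.Substring(0, maxLength) + "...";
        }
    }
}

Message strings would then live in a .resx file (for example Messages.resx) and be referenced through the generated strongly typed resource class instead of string literals.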
I'm working in a tool that is supposed to generate some Java Code to accelerate part of the development based in a swing input dialog...there is no need to get any further with it so I'm going to my problem...
I need to retrieve all the attributes from a class to check whether it is necessary to add a new one. I tried to use reflection but things started getting complicated. In order to use reflection I need to compile the class whose attributes I want, since reflection does not work directly on a .java file; a .class file is required.
The problem is that many of the classes have a lot of dependencies! Due to some design flaws some classes are highly coupled, so if I want to dynamically use a class loader to compile a class A, I would have to retrieve and compile all its dependencies, and then all the dependencies of those dependency classes!
I made a test running an existing Ant file to compile the whole project instead of the above approach, but it takes about 9 minutes to finish! From the end user's perspective, waiting 9 minutes on every run is not acceptable!
Does anyone here know a better solution?
If you want to avoid working with reflection and bytecode, it means that you will have to parse the .java files yourself with a grammar and, well, a parser based on this grammar. It is possible (especially if you do not implement the whole grammar, because many java features might be useless in your project perimeter), but I reckon this is no easy task.
There is an Apache commons Sandbox package called ClassScan. It is capable of doing the kind of source parsing you appear to require. http://commons.apache.org/sandbox/commons-classscan/. Note that it is in the Sandbox, so not part of the Commons Proper.
When migrating an Entity Framework 5.0 based project from code-first to database-first, we'll be using the standard VS2012 wizard to generate the EDMX models off the database, but are there any additional steps beyond that? I presume I'd have to delete all the classes where I've defined the code-first models, and even the Migrations folder. Any other cleanup operations beyond those two?
[Edit]:
Reporting back.
So the actual experience was somewhere between my original expectations and what Ladislav mentioned (like he said, the exact conditions are code dependent). For me the entire operation took about 15-20 minutes, mostly involving:
Branch creation (safety, in case stuff blows up!)
Removal of Code-first classes and source (I moved them outside VS2012 for a diff reference)
Creation of the EDMX model from the database (passing same namespace etc to the wizard to reduce the extent of differences)
Quick inspection of the code-first classes and the auto-generated db-first classes. This was mostly 1:1 since we used sensible names in the code first model to begin with.
Compiling and fixing each error one by one
Noticed many errors were due to EF 5's automatic pluralization vs. my own pluralization of different fields. Fixed 40+ errors via a quick case-sensitive search and replace-all.
Rerun all the tests after compiling
Merge feature branch back in!
Thanks
This may be quite complex if your entity or context classes contain any additional logic. You must:
Add the EDMX model to your project and let it generate the entity classes and the context class.
Remove the migrations.
If your original context class contains any additional code, you must move that code into a partial class with the same name, namespace, and assembly as the newly generated context class (see the sketch below).
If any of your entity classes contain additional code, you must follow the same procedure as with the context.
If the additional code is actually called from inside code that is now auto-generated, you must make some other changes, which may include changing the code generation template and using partial methods.
You must remove any database initialization or migration execution from your bootstrapper.
You must use the new connection string, which references the metadata files.
Another, more complex, option is not to use the auto-generation feature and to keep your old context and entity classes, but in that case you will have to keep them in sync with your EDMX manually.
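For the partial class step above, a minimal sketch could look like this (the context name, entity set, and helper method are hypothetical):

using System.Linq;

namespace MyApp.Data
{
    // The EDMX designer generates: public partial class MyEntities : ObjectContext { ... }
    // Your additional code moves into a partial class with the same name and namespace:
    public partial class MyEntities
    {
        // Hypothetical helper carried over from the old code-first context.
        public IQueryable<Order> GetOpenOrders()
        {
            return this.Orders.Where(o => !o.IsClosed);
        }
    }
}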
Maybe there's something obvious that I'm missing or maybe not. Suppose I have a class that is just a representation with getters/setters and no logic. I'm going to use these structures for serialization/deserialization mostly. Suppose I use that object in many, many applications. Suppose I have dozens of these objects. What's my best approach to sharing these objects?
I understand that I can compile an object into a DLL and reference that DLL. But if I have dozens of these objects, do I compile them all separately so I can use just what I need or do I make and maintain a monster DLL with all of these objects in it. Both of those approaches seem bad. I don't want to create a class library for every single class (that's stupid) and throwing them into a giant package just seems like a bad idea.
Am I missing something simple? Doesn't Java have a convention where one can create JAR files of one to many classes? Does .NET do something like that?
You need a happy middle ground.
You should be grouping related objects into individual namespaces.
You can then compile each namespace into a separate DLL. That way, whoever is using the libraries only needs to reference a single DLL per group of functionality.
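As a rough illustration (all names made up), one small class library per functional group might look like:

// MyCompany.Contracts.Billing.dll - one assembly per related group of serialization classes
namespace MyCompany.Contracts.Billing
{
    public class InvoiceDto
    {
        public int InvoiceId { get; set; }
        public decimal Amount { get; set; }
    }

    public class PaymentDto
    {
        public int PaymentId { get; set; }
        public int InvoiceId { get; set; }
    }
}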
You can have a master assembly containing all objects. Then also create separate assemblies for the different applications where you only add the ones you use as links.
You would then use Project->Add Existing Item, and then on the Add-button click the down-arrow and select "Add As Link" when you add the classes you want.
I've just finished going through the MvcMusicStore tutorial found here. It's an excellent tutorial with working source code. One of my favorite MVC v2 tutorials so far.
That tutorial is my first introduction to using ADO.NET Entity Framework and I must admit that most of it was really quick and straight-forward. However, I am worried about maintainability. How customizable is this framework when the customer requests additional features to their site that require new fields, tables and relationships?
I am very concerned about not being able to efficiently execute customer's change orders because the Entity models are basically drag-and-drop, computer generated code. My experience with code generators is not good. What if something goes haywire in the guts of the model and I'm unable to put humpty-dumpty back together?
In the long run, I wonder if using hand typed models which human-beings can read and edit is a more efficient course than using Entity Framework.
Has anyone worked enough with entity framework to say that they are comfortable using it in a very fluid development environment?
I have been using Entity Framework (v1.0) for about a year in my current project. We have hundreds of tables, all added to the edmx.
Problems we face (though not sure if the new entity framework resolves these issues)
When you are used to the VS.NET IDE, you will be used to doing all drag/drop operations from the IDE. The problem is, once your edmx hosts hundreds of tables, the IDE really stalls and you have to wait 3-4 minutes before it becomes responsive.
With so many tables, any edits you do on the edmx take long.
When you use version control, comparing a 10,000-line XML file is quite painful. Think about merging two branches, each having a 10,000-line edmx, with new tables, new associations between tables, and deleted associations, going back and forth comparing XML. You would need a good XML comparison tool if you are serious about merging two big edmx files.
For performance reasons we had to make the CSDL, MSL and SSDL embedded resources.
Your edmx has to be in sync with your DB all the time, or at least, when you try to update the edmx, it will try to sync and might throw some obscure errors if they are out of sync.
Be aware that your entities (tables/views) should always have a primary key, or else you will get obscure errors. See my other question here.
Things We did/I might consider in the future when using EF
Use multiple edmx files, with one edmx for tables that are logically grouped/linked together. Be aware that if you do this, each edmx should live in its own namespace. If you try to add two related tables (say Person and Address) to two edmx files in the same namespace, you will get a compiler error stating that the foreign key relationship is already defined. (Tip: create a folder and create the edmx under that folder. If you try to alter the namespace in the edmx without having the folder, it does not save the namespace properly the next time you open/edit it.)
Fewer tables in an edmx => a less heavy container => good.
Fewer tables in an edmx => easier to merge when merging two branches.
Be aware of the fact that the object context is not thread safe.
Your repository (or whatever DAO you use) should be responsible for creating and disposing the container it creates. Using DI frameworks, especially in a web app, complicated things for us. Web requests are served from the thread pool, and the container was not disposed properly after the web request was served because the thread itself was not disposed. The container got reused (when the thread was reused) and created a lot of concurrency issues.
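A minimal sketch of that ownership (context and entity names are hypothetical):

using System;
using System.Linq;

// The repository creates and disposes its own container instead of
// sharing one instance across threads or web requests.
public class OrderRepository : IDisposable
{
    private readonly MyEntities _context = new MyEntities();

    public Order GetById(int orderId)
    {
        return _context.Orders.First(o => o.OrderID == orderId);
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}

// Usage: one repository, and therefore one container, per request:
// using (var repository = new OrderRepository())
// {
//     var order = repository.GetById(123);
// }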
Don't trust your VS IDE. Get a good XML editor and know how to edit the edmx file (though you don't need to edit the designer). Get your hands dirty.
ALWAYS ALWAYS ALWAYS (I just cannot emphasize this enough) run a SQL profiler (and I mean on each and every step of your code) when you execute your queries. As complex as the query might look, you will be surprised to find how many times you hit the DB. Example:
var myOrders = from t in context.Orders
               where t.CustomerID == 123
               select t; // query not yet executed

if (myOrders.Count() > 0) // DB query to find the count
{
    var firstOrder = myOrders.First(); // DB query to get the first result
}

Better approach:

// Query materialized; just one hit to the DB because we call ToList().
var myOrders = (from t in context.Orders
                where t.CustomerID == 123
                select t).ToList();

if (myOrders.Count > 0) // no DB hit
{
    // do something
    var myOrder = myOrders[0]; // no DB hit
}
Know when to use tracking and no-tracking (for read-only); web apps do a lot more reads than writes. Set them properly when you initialize your container.
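With the EF 1.0 ObjectContext API that looks roughly like this (the context and entity set names are hypothetical):

using System.Data.Objects;
using System.Linq;

using (var context = new MyEntities())
{
    // Read-only scenario: skip change tracking for this entity set.
    context.Orders.MergeOption = MergeOption.NoTracking;
    var recentOrders = context.Orders.Where(o => o.CustomerID == 123).ToList();
}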
Did I forget compiled queries? Look here for more goodies.
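For completeness, a compiled query sketch (context and entity names are again hypothetical):

using System;
using System.Data.Objects;
using System.Linq;

public static class OrderQueries
{
    // Compiled once; subsequent calls reuse the cached query plan.
    public static readonly Func<MyEntities, int, IQueryable<Order>> ByCustomer =
        CompiledQuery.Compile((MyEntities context, int customerId) =>
            context.Orders.Where(o => o.CustomerID == customerId));
}

// Usage:
// var orders = OrderQueries.ByCustomer(context, 123).ToList();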
When getting thousands of rows back from your DB, make sure you use IQueryable and detach the objects from the ObjectContext so that you don't run out of memory.
Update:
Julie Lerman addresses the same problem with a similar solution. Her post also points to Ward's work on dealing with a huge number of tables.
I'm not too familiar with Entity Framework, but I believe it simply generates an EDM file which can be hand-edited. I know I've done this quite frequently with the Linq-to-SQL DBML files that the designer generates (it's often faster to hand-edit them than use the designer for small tweaks).
You know I'd be interested if any developers can provide some insight into this.
Any Entity Framework examples seem to only consist of about ten to twenty tables, which is small scale really.
What about using the EF on a database with hundreds or even a thousand tables?
Personally, I know several developers and organisations that were burned by LINQ-to-SQL and are holding off for a year or so to see what direction EF takes.
Starting with Entity Framework 4 (in Visual Studio 2010), the generated code is output from T4 (Text Template Transformation Toolkit) files which you can edit, so you have full control over what is generated. See Oleg Sych's blog, which is a mine of information about T4. Code generation is not a problem, and T4 opens so many perspectives that I can't live without it anymore.
I'm currently working on a project where we use Entity Framework 4 for the data access layer, and Scrum as the agile project management method. From one sprint to another, there are several tables added, other modified, new requirements added. When you have run once into each potential EF problem (like knowing that default values from database are not persisted by default in the .edmx file, or changing a nullable column to a non-nullable and updating the designer doesn't change the mapped property state), you're good to go.
Edit: to answer your question, it's EF 4 whose code generation is based on T4, rather than T4 supporting EF. On EF 3.5 (or EF 1.0 if you prefer), you could in theory use T4 by writing templates from scratch, parsing the EDMX file in the T4 code and generating your entities. It would be quite a lot of work considering all of this is already done by EF 4. Plus, Entity Framework 3.5 only supports one type of entity, whereas EF 4 has built-in or downloadable templates for POCO entities (that don't know anything about persistence), Self-Tracking Entities, and so on.
Considering Entity Framework itself, I think it was lacking many features in its first release, and while usable, was quite frustrating to use. EF4 is much more improved. It still lacks some basic features (like enum support), but it has become my data access layer of choice now.