XML Serialization migration to MySQL - ASP.NET

I have an ASP.NET project that uses XML serialization as its main mechanism for saving data. The project was supposed to stay small relative to the size of its data. However, the amount of data has ballooned, as it always does, and now I'm considering moving to a SQL-based alternative for managing it.
For now I have several objects defined that are simply storage classes holding the data the project works with.
public class Customer
{
    public Customer() { }
    public string Name { get; set; }
    public string PhoneNumber { get; set; }
}

public class Order
{
    public Order() { }
    public int ID { get; set; }
    public DateTime OrderDate { get; set; }
    public string Product { get; set; }
}
Something along these lines, although not quite so rudimentary. Migrating to SQL seems like a no-brainer, and I've landed on MySQL because the server is freely available. What I'm running into is that the only approach I can see right now is to have a storage class, Order, and a separate class built to load/save the data, OrderIO.
The project relies heavily on using List<> to populate the data fields on the page. I'm not using any built-in .NET controls such as DataGrid to assist in displaying the data, just simple TextBox or ComboBox controls that are populated on Page_Load.
I'm aware it would make better sense to pick an approach where the data fields could bind to the SQL data through a Repeater, but I'm not looking at a full redesign, just a change in the infrastructure that manages the data.
I would like to be able to create a class that can return an object similar to what I'm dealing with now, such as a List<>, from the SQL statements I'm executing. I'm having some trouble settling on the best approach.
Any suggestions on how best to Load/Save this data using SQL or some tutorials on ideas using the .NET framework would be helpful. This is quite a generalized question but I'm open to most ideas. Thanks.

What you need is a Data Access Layer (DAL) that takes care of running the SQL code and returning the data in the List<> form you require. I would definitely recommend reading the two series of articles by Imar Spaanjaars on Building an N-Layered Application. Note that there are two sets of series, but I linked to the second set, because it contains links to the first one.
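To make the DAL idea concrete, here is a minimal sketch of what an OrderIO-style class might look like using MySQL Connector/NET (MySql.Data). The table and column names are assumptions based on the Order class above, and the ID column is assumed to be auto-generated by the database:

using System;
using System.Collections.Generic;
using MySql.Data.MySqlClient;

public class OrderIO
{
    private readonly string _connectionString;

    public OrderIO(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Load all orders into the same kind of List<> the pages already use.
    public List<Order> LoadOrders()
    {
        var orders = new List<Order>();
        using (var conn = new MySqlConnection(_connectionString))
        using (var cmd = new MySqlCommand("SELECT ID, OrderDate, Product FROM Orders", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    orders.Add(new Order
                    {
                        ID = Convert.ToInt32(reader["ID"]),
                        OrderDate = Convert.ToDateTime(reader["OrderDate"]),
                        Product = reader["Product"].ToString()
                    });
                }
            }
        }
        return orders;
    }

    // Save a single order; parameters avoid SQL injection.
    public void SaveOrder(Order order)
    {
        using (var conn = new MySqlConnection(_connectionString))
        using (var cmd = new MySqlCommand(
            "INSERT INTO Orders (OrderDate, Product) VALUES (@date, @product)", conn))
        {
            cmd.Parameters.AddWithValue("@date", order.OrderDate);
            cmd.Parameters.AddWithValue("@product", order.Product);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

The Page_Load code can keep consuming List<Order> exactly as it does today; only the storage mechanism behind it changes.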
Also, it might be useful to know that SQL Server 2008 R2 Express Edition is free to use, but has a limit of 10 GB per database. I am not saying you shouldn't use MySQL; I just wanted to point out that a free edition of SQL Server is available in case you didn't know.

Related

Is there a need for custom Domain Models when using LINQ?

I'm working on an MVC application using LINQ to SQL to connect to my SQL Server database.
Currently when fetching data, I'm passing the properties of my LINQ objects over to a Domain Model, which I'm then creating properties of in my View Models.
For example my View Model might have the following properties:
public Models.UserModel user { get; set; }
public List<Models.CountryModel> countries { get; set; }
My Domain Models have exactly the same properties as my LINQ objects, and I copy these properties over like:
Models.UserModel user = new Models.UserModel();
user.Username = User.Username;
user.FirstName = User.FirstName;
user.LastName = User.LastName;
Where user is my Models.UserModel object, and User is my LINQ object mapped from the User database table.
As my Domain Model is exactly the same as my LINQ object, is there any advantage for me transferring this data over to a Domain Model, or would it be okay for me to just use LINQ objects in my View Model such as:
public User user { get; set; }
public List<Country> countries { get; set; }
What are the advantages of using a Domain Model? Is this purely to loosely couple with the database LINQ objects?
If there are advantages to using Domain Models, how would be best to structure these within my MVC application?
Should they be split off entirely at the "Models" folder level (for example, sub-folders "DomainModels" and "ViewModels") or sit side by side (such as "UserEditViewModel.cs" and "UserDomainModel.cs")?
As my Domain Model is exactly the same as my LINQ object, is there any advantage for me transferring this data over to a Domain Model, or would it be okay for me to just use LINQ objects in my View Model such as:
You could reference the domain models in your view models in the case where they really are exactly the same; I don't see any benefit in duplicating the properties, except that in the real world they are never the same. You always have some view-specific concerns such as validation rules or display labels. The advantage of having pure view models is that your application is no longer tied to your database structure: you could swap data access technologies without modifying the UI. I find this clear separation more maintainable and always tend to make it in my applications.
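As an illustration of those view-specific concerns, here is a hypothetical view model (the class name and attribute choices are assumptions, not from the original post) that carries validation and display metadata plus a dropdown list, none of which belongs on the LINQ entity:

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;

// LINQ to SQL entity (generated) might look like:
// User { Username, FirstName, LastName, CountryId, PasswordHash, ... }

public class UserEditViewModel
{
    [Required, StringLength(50), Display(Name = "User name")]
    public string Username { get; set; }

    [Required, Display(Name = "First name")]
    public string FirstName { get; set; }

    [Required, Display(Name = "Last name")]
    public string LastName { get; set; }

    // View-only concern: a dropdown of countries, which has no place on the entity.
    public IEnumerable<SelectListItem> Countries { get; set; }
}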

ASP.NET Visual Studio 2012 using existing SQL Database

I am using Visual Studio 2012 to create an ASP.NET web application. I tried following the Getting Started with ASP.NET 4.5 tutorial, but it creates a simple local database and all queries are written directly in the code. The database I am accessing (SQL Server 2008) has fifteen complex stored procedures that I really don't want to have to retype.
Using the DbContext example in the tutorial works fine when just grabbing all of the data from the tables, but how do I use the stored procedures that are in the database? Can someone please tell me the best way to use the stored procedures that already exist?
All of the questions (and answers) I've found so far are dealing with earlier versions of Visual Studio, and although I know that I could use these (since VS 2012 does support the backward compatibility), I want to make the best use of the software that I have and not use "best practices" from VS 2010.
If you can tell me how to use the existing stored procedures, or even direct me to a book, website, or anything else that would show this to me, I would TRULY appreciate it! Happy coding! And thanks for your time!
I had to piece together info from a few different sources, but I was able to get this working. Here are my steps (I may have done more than I needed, but it worked) just in case it may help someone else:
1. Added a class to my project called MyContext, told it that it was a DbContext, and pointed it at my existing database like this (please note that I already had the full connection string in my web.config file under <connectionStrings>):
public class MyContext : DbContext
{
    public MyContext()
        : base("name=MyDatabase")
    {
    }
}
MyDatabase is replaced with the actual name that refers to my database in the connection string of my web.config file.
2. I created a class called ProductList which exposes only the fields that are returned by my stored procedure:
public class ProductList
{
    [ScaffoldColumn(false)]
    public int ProductID { get; set; }

    [Required, StringLength(100), Display(Name = "Model")]
    public string ProductName { get; set; }
}
3. I created a user control (.ascx) which is just a Repeater. In the code-behind of the control, right beneath Page_Load, I created a method called GetMostPopular:
public IEnumerable<ProductList> GetMostPopular()
{
    var db = new MyContext();
    IEnumerable<ProductList> result = db.Database.SqlQuery<ProductList>("MostPopularProducts");
    return result;
}
Inside the <> of IEnumerable and SqlQuery is the name of the class I created. It has the exact same fields that will be returned by my stored procedure "MostPopularProducts".
4. Then in the Repeater markup, I used GetMostPopular as the SelectMethod:
<asp:Repeater runat="server" SelectMethod="GetMostPopular">
    <ItemTemplate><%# Eval("ProductName") %></ItemTemplate>
</asp:Repeater>
5. Then I just dragged and dropped the user control onto the page where I wanted it to display.
I hope this helps someone else. Happy coding!
I suggest you use the SqlClient classes directly: SqlConnection, SqlCommand, and so on. It's very easy and gives you full control over the shape of the output.
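For completeness, here is a minimal sketch of that approach, calling the same stored procedure with plain ADO.NET and mapping the rows by hand (the connection string name and column names are assumptions based on the answer above):

using System.Collections.Generic;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;

public static class ProductRepository
{
    public static List<ProductList> GetMostPopular()
    {
        var products = new List<ProductList>();
        string connectionString =
            ConfigurationManager.ConnectionStrings["MyDatabase"].ConnectionString;

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("MostPopularProducts", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure; // call the proc, not inline SQL
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    products.Add(new ProductList
                    {
                        ProductID = (int)reader["ProductID"],
                        ProductName = (string)reader["ProductName"]
                    });
                }
            }
        }
        return products;
    }
}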

Entity Framework - Including tables not mapped in data model?

I think this question is probably fairly simple, but I've been searching around and haven't been able to find what I'm looking for.
My team and I are adding a new module to our existing web application. We already have an existing data model which is hooked up to our SQL database, and it's pretty huge. So for the new module I created a new EF data model directly from our database containing just the new tables. These new tables reference some of our existing tables via foreign keys, but when I add those existing tables, all of their foreign keys need to be mapped as well, and then their tables, and their tables... and it turns into a huge mess.
My question is: instead of adding the old tables to the data model, since I'm only referencing the IDs of our existing tables for foreign key purposes, can I just do an .Include("old table") somewhere in the DataContext class, or should I go back and add those tables to the model and remove all of their relationships? Or is there some other method I'm not even aware of?
Sorry for the lack of code, this is more of a logic issue rather than a specific syntax issue.
The simple answer is no. You cannot include an entity which is not part of your model (that is, not mapped in the EDMX used by your current context).
The more complex answer is: in some very special cases you can, but it requires big changes to your development process and to the way you work with EF and the EDMX. Are you ready to maintain all EDMX files manually as XML? In that case EF offers a way to reference a whole conceptual model from another one and use one-way relations from the new model to the old model. It is a cheat because you will have multiple conceptual models (CSDL) but a single mapping file (MSL), a single storage description (SSDL), and a single context using all of them. Check this article for an example.
I'm not aware that you can use Include to reference tables outside of the EF model. To start working with EF you only need to bring in a portion of the database - if your first project is working with a discrete functional area, which it probably is. This gets around the alarming mess you see when you import an entire legacy database; it scared me when I tried to do it.
In our similar situation - a big legacy system that used stored procedures - we only added the tables we were directly working with at the time. Later on you can always add additional tables as and when you need them. Don't worry about foreign keys in the EF diagram that reference tables which aren't included; Entity Framework copes with this happily.
It does mean running two business layers, though: one for Entity Framework and one for the old-style data access. That wasn't a problem for us. In fact, from what I've read about legacy system programming it's probably the way to go - you have a business layer for your scruffy old stuff and a business layer for your sparkly new stuff, and you keep moving from the old to the new until one day the old business layer evaporates into nothing.
You have to use the [Include()] attribute on the member.
For example:
// This class allows you to attach custom attributes to properties
// of the Frame class.
//
// For example, the following marks the Xyz property as a
// required property and specifies the format for valid values:
// [Required]
// [RegularExpression("[A-Z][A-Za-z0-9]*")]
// [StringLength(32)]
// public string Xyz { get; set; }
internal sealed class FrameMetadata
{
    // Metadata classes are not meant to be instantiated.
    private FrameMetadata()
    {
    }

    [Include()]
    public EntityCollection<EventFrame> EventFrames { get; set; }
    public Nullable<int> Height { get; set; }
    public Guid ID { get; set; }
    public Layout Layout { get; set; }
    public Nullable<Guid> LayoutID { get; set; }
    public Nullable<int> Left { get; set; }
    public string Name { get; set; }
    public Nullable<int> Top { get; set; }
    public Nullable<int> Width { get; set; }
}
And the LINQ query should use the
.Include("BaseTable.IncludedTable")
syntax.
And for the entities which are not part of your model you have to create some view classes.
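For reference, Entity Framework's string-based eager loading looks roughly like this. This is only a sketch: the context and entity names are hypothetical, and, as discussed above, Include only works for entities that are actually mapped in the model:

using System.Data.Entity; // EF 4.1+ DbContext API
using System.Linq;

public static class FrameQueries
{
    public static void LoadFramesWithEvents(MyDbContext db)
    {
        // Eagerly load a mapped relation, plus a nested relation using the
        // "BaseTable.IncludedTable" path syntax, in a single query.
        var frames = db.Frames
            .Include("EventFrames")
            .Include("EventFrames.Events")
            .ToList();
    }
}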

Best Way to Write an Asp.Net Web Service To Play Well In the Wild

I am writing an API for my ASP.NET application that other developers will use. The API will basically return a list of people with their first name, last name, and id. There are lots of ways to write web services in ASP.NET, the easiest probably being to create a web service method (.asmx) that returns a DataTable. This is simple enough for other .NET developers to deal with, but I am not convinced that this is the best way to write a web service for general platform and language independence.
What is the currently accepted standard to write a web service like this that plays well in the wild today?
Some ideas that come to mind from experience:
Use WCF, not .asmx. WCF does all the same things that ASMX files do, and is generally the replacement for ASMX services (see here and here).
Write methods using simple POCO data types, like List<Person> rather than DataTable. Basic types serialize more easily and will make more sense in other programming environments since you want your service to be language independent.
Provide generic CRUD methods for managing data. Depending on how your service will be consumed, if the user needs to modify data, a simple method is to provide getBlah(), updateBlah(obj newObj), deleteBlah(obj objToDelete), etc. that use the same data types.
Hide the details that the service consumer doesn't need to know rather than blindly exposing all of your data types, structures, and field names as-is. This will make your service more robust to internal changes, and you can simplify and control what the end users see. For instance, if you have a Person class with 30 properties and only 5 are relevant to the end user, provide a layer that maps between Person and an exposed PersonSimple class. Without this layer, your end users will have to modify their code every time you change your data structure, and you will be locked down by this tight coupling. (A sketch illustrating these points follows this list.)
If security is important:
Execute your service over SSL. This protects data transferred over the wire from being sniffed.
Use authentication, either with a Login method and session, or SOAP headers. Services by default are anonymous unless there is some sort of authentication scheme. Even if you think nobody will find your service because you only provide the URL to your users, it will get out somehow, somewhere, and people will try to misuse the service when it does. Plus, you can control who can do what by different logins and authorization schemes.
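To illustrate the POCO and PersonSimple points above, here is a minimal WCF sketch. The contract and member names are assumptions, not from the original answer:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Only the fields the consumer needs are exposed, not the full internal Person.
[DataContract]
public class PersonSimple
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string FirstName { get; set; }
    [DataMember] public string LastName { get; set; }
}

[ServiceContract]
public interface IPeopleService
{
    [OperationContract]
    List<PersonSimple> GetPeople();

    [OperationContract]
    void UpdatePerson(PersonSimple person);
}

Because the contract returns a simple list of plain objects, the generated WSDL maps cleanly onto arrays and classes in other languages, which is what "plays well in the wild" here.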
I am currently working on a similar problem: a Web API service in .NET that receives data tables as input parameters, applies some operations on them (using table-valued functions), and returns some output data tables.
In your case you don't need a complex type like DataTable; you can use a list (List<>) of a simple class with fields like first name, last name, and id. Using ASP.NET Web API you could do something like the following:
1) Create a new Web API project in Visual Studio. For example (in VS 2012): C# > Web > ASP.NET MVC 4 Web Application > select "Web API" as the project template.
You will see a VS project with lots of folders, including one named Models
For help see: http://www.asp.net/web-api/overview/getting-started-with-aspnet-web-api/tutorial-your-first-web-api
2) Create a new model code file Person.cs with a class like the following:
public class Person
{
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public string[] Friends { get; set; }
}
3) Create a new controller code file PersonController.cs with methods for getting, inserting and updating records in the database. All the necessary serialization/deserialization (JSON and XML) and data binding is done automatically by the Web API environment set up by the project template.
// Get all the records of persons
public IList<Person> Get()
{
// read database into a list of persons (List<Person>)
// return List<Person>
}
Return record of a selected person:
public Person Get(int id)
{
// read database for a selected person
}
Parameter binding (reading JSON/XML content sent by an HTTP POST into an object, or into a list of objects) is also done automatically, as easily as the following:
// parameter binding: Create a Person object with content from XML/JSON
public void ReadPerson(Person p)
{
Trace.WriteLine(p.Id);
}
public void ReadPersonList(List<Person> plist)
{
Trace.WriteLine(plist.Count);
}

Thoughts on writing a "flexible" API?

I may have the wrong "pattern" here, but I think it's a fair topic.
I have an ASP.NET MVC application that calls out to a WCF service to get back the view models that will be rendered. (The reason it uses a WCF service is so that other small MVC apps can also call it for these view models - it's internal only, not publicly available, so I can change anything on either side of the service. The idea is to move the logic that was in the website closer to the server/database so the round trips aren't so costly, and to make only one round trip overall from the web server to the database server.)
I'm trying to work out the best way to return these "ViewModels" from the service. There are lots of common little bits of functionality, but each page may want to display a different subset of them (so the homepage might want a list of tables; the next page, a list of tables and the users that are available).
So what's the best way of returning the information that the page wants, hopefully without the webservice knowing about the page?
Edit:
It's been suggested below that I move the logic in-process. That would normally be a lot faster, except it's what we're moving away from, because in this case it is actually a lot slower. The reason is that the database is on one server and the web app is on another, and the web app is particularly chatty at points (there are pages where it could end up doing 2K round trips - and before it's suggested, I have no control over reducing that number), so moving the logic closer to the database is the next best way of making it more performant.
I would look at creating a ViewModel for each MVC app/view. The service can return the maximum amount of data for the "view" in a logical sense, and each MVC app uses the pieces it wants when composing the ViewModel for its view.
Your service is then only responsible for one thing: returning data specific to a view's function. The controller of each app is responsible for using or not using pieces of the returned data.
This is more flexible because your ViewModels may require different validation rules as well. ViewModels also have MVC-specific needs (SelectList, etc.) that shouldn't really be returned by a service layer. It looks like something that can be shared at a glance, but there are generally lots of small differences that make sharing ViewModels a bad idea.
class MyServiceViewResult
{
public int SomethingEveryViewNeeds { get; set; }
public bool OnlyOneViewMightNeedThis { get; set; }
}
class ViewModel1
{
public int IdProperty { get; set; }
public ViewModel1(MyServiceViewResult result)
{
IdProperty = result.SomethingEveryViewNeeds;
}
}
class ViewModel2
{
public int IdProperty { get; set; }
public bool IsAllowed { get; set; }
public ViewModel2(MyServiceViewResult result)
{
IdProperty = result.SomethingEveryViewNeeds;
IsAllowed = result.OnlyOneViewMightNeedThis;
}
}
Instead of having a web service, why don't you just implement the service as a reusable library that encapsulates the desired functionality?
This will also allow you to use polymorphism to implement customizations. WCF doesn't support polymorphism in a flexible way...
Using an in-proc service will also be a lot faster.
See this related question for outlines of a polymorphic solution: Is this a typical use case for IOC?
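A rough sketch of what that in-process, polymorphic approach could look like (all names here are hypothetical, and the data loading is stubbed out):

using System.Collections.Generic;

// Shared class library referenced directly by each MVC app - no WCF hop.
public class ViewData
{
    public IList<string> Tables { get; set; }
    public IList<string> Users { get; set; }
}

public class ViewModelService
{
    // Base behaviour: load only the data every view needs.
    public virtual ViewData GetViewData()
    {
        return new ViewData { Tables = LoadTables(), Users = new List<string>() };
    }

    protected IList<string> LoadTables()
    {
        // The single round trip to the database would go here.
        return new List<string>();
    }
}

// One app customises the service by overriding just the part it needs.
public class AdminViewModelService : ViewModelService
{
    public override ViewData GetViewData()
    {
        var data = base.GetViewData();
        data.Users = LoadUsers();
        return data;
    }

    private IList<string> LoadUsers()
    {
        return new List<string>();
    }
}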
