I may have the wrong "pattern" here, but I think it's a fair topic.
I have an ASP.NET MVC application that calls out to a WCF service to get back the ViewModels that will be rendered. (The reason it uses a WCF service is so that other small MVC apps can also call on it for these ViewModels. It's internal only, not publicly available, so I can change anything on either side of the service. The idea is to move the logic that was in the website closer to the server/database so the round trips aren't so costly, and to do only one round trip overall from the web server to the database server.)
I'm trying to work out the best form in which to return these "ViewModels" from the service. There are lots of common little bits of functionality, but each page may want to display different subsets of them (so the homepage might want a list of tables; the next page, a list of tables plus the users that are available).
So what's the best way of returning the information that the page wants, ideally without the web service knowing about the page?
Edit:
It's been suggested below that I move the logic in-process. That would normally be a lot faster, except it's exactly what we're moving away from, because in this case it is actually a lot slower. The reason is that the database is on one server and the web app is on another, and the web app is particularly chatty at points (there are pages that could end up doing 2K round trips; before it's suggested, I have no control over reducing this number), so moving the logic closer to the DB is the next best way of making it more performant.
I would look at creating a ViewModel per MVC app/view. The service could just return the maximum amount of data for the "view" in a logical sense, and each MVC app uses the information it wants when composing the ViewModel for its view.
Your service is then only responsible for one thing: returning data specific to a view's function. The controller of each app is responsible for using or not using pieces of the returned data.
This is more flexible, as your ViewModels may require different validation rules as well. ViewModels also have MVC-specific needs (SelectList, etc.) that shouldn't really be returned by a service layer. It seems like something can be shared at a glance, but there are generally lots of small differences that make sharing ViewModels a bad idea.
// Returned by the service: the superset of data the "view" might need
class MyServiceViewResult
{
    public int SomethingEveryViewNeeds { get; set; }
    public bool OnlyOneViewMightNeedThis { get; set; }
}

// Each app composes its own ViewModel from the shared result
class ViewModel1
{
    public int IdProperty { get; set; }

    public ViewModel1(MyServiceViewResult result)
    {
        IdProperty = result.SomethingEveryViewNeeds;
    }
}

class ViewModel2
{
    public int IdProperty { get; set; }
    public bool IsAllowed { get; set; }

    public ViewModel2(MyServiceViewResult result)
    {
        IdProperty = result.SomethingEveryViewNeeds;
        IsAllowed = result.OnlyOneViewMightNeedThis;
    }
}
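For instance, here is a minimal sketch of the composing side (IViewDataService and GetViewData are hypothetical names for whatever wraps the WCF client):

public class HomeController : Controller
{
    // hypothetical wrapper around the WCF service client
    private readonly IViewDataService _service;

    public HomeController(IViewDataService service)
    {
        _service = service;
    }

    public ActionResult Index()
    {
        // the service returns the superset; this controller picks what it needs
        MyServiceViewResult result = _service.GetViewData();
        return View(new ViewModel1(result));
    }
}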
Instead of having a web service, why don't you just implement the service as a reusable library that encapsulates the desired functionality?
This will also allow you to use polymorphism to implement customizations. WCF doesn't support polymorphism in a flexible way...
Using an in-proc service will also be a lot faster.
See this related question for outlines of a polymorphic solution: Is this a typical use case for IOC?
I am developing a real-time multiplayer game using SignalR. The message delivery logic is not very simple, so I could not handle it by using Groups.
For example, one message will be delivered to the users whose custom property equals some dynamic value. That means the target audience cannot be handled by
Clients.All
Clients.AllExcept
I have a mapping class something like this:
public class Player
{
    public dynamic Client { get; set; }
    public string ConnectionId { get; set; }
    public string Name { get; set; }
}
Somehow I detect all my audience and end up with a List<Player> object.
What is the best way to send a message to everyone in the list? Enumerating through the list and calling

foreach (var loopPlayer in players)
{
    loopPlayer.Client.sendMessage(message);
}
Or
List<string> ids = new List<string>();
foreach (var loopPlayer in players)
{
    ids.Add(loopPlayer.ConnectionId);
}
Clients.Clients(ids).sendMessage(message);
My concern about the first one is that it will serialize the message every time. About the second one, I don't know how it works behind the scenes.
Both approaches work, but I am concerned about performance and am trying to find the best practice. Or is there any other approach I could use?
As you said, enumerating over the list of clients serializes the message each and every time (and stores those serialized messages in internal buffers, etc.). If the message is the same, this is unnecessary CPU/memory overhead.
Clients.Clients(ids) serializes the message only once, so performance-wise it's definitely the way to go.
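As a minimal sketch (assuming players is the List<Player> from the question, the code runs inside a Hub method, and System.Linq is imported), the whole thing condenses to:

// project the connection ids once, then send a single serialized message
var ids = players.Select(p => p.ConnectionId).ToList();
Clients.Clients(ids).sendMessage(message);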
The message delivery logic is not very simple, so I could not handle it by using Groups. For example, one message will be delivered to the users whose custom property equals some dynamic value.
Groups work in scaleout scenarios out of the box, which is a huge benefit if you ever find yourself needing to scale out. So maybe try to find some way to simplify the "group assignment logic", even at the cost of delivering some messages to more clients and doing the filtering client side.
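For example, a minimal sketch of the group-based alternative (GetGroupFor is a hypothetical helper that derives a group name from your custom property):

public class GameHub : Hub
{
    public override Task OnConnected()
    {
        // assign the connection to a group keyed by the custom property
        string groupName = GetGroupFor(Context.ConnectionId); // hypothetical helper
        Groups.Add(Context.ConnectionId, groupName);
        return base.OnConnected();
    }

    public void Broadcast(string groupName, string message)
    {
        // one serialization, delivered to every member of the group;
        // over-delivered messages can be filtered client side
        Clients.Group(groupName).sendMessage(message);
    }
}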
I'm looking for a bit of experience and explanation here, given that different sources give different recommendations. I am totally new to MVC. I know this question has been asked before, but I am not (currently) using EF or Linq.
I have a SQL database with many stored procedures. Previously, when it was used with WebForms, there was a business layer that contained helper methods for calling the procedures and returning DataSets to the pages. The important part is that the procedures often interrogate about 20 tables; the pages do not simply reflect the database structure exactly (as I see in the majority of MVC tutorials):
SQL database <--> stored procedures <--> business layer <--> web forms
I want to take the best approach here to start on the right footing and learn properly but appreciate there may not be a correct answer. Therefore if you post, could you please offer some explanation as to "why"?
Should stored procedure logic (SqlCommand / business methods, etc.) go within the Model or the Controller?
One post advises neither, but retain the business layer. Another expert advises that
[Models/Entities] should not have any add-on methods outside of what's coming back from the database
If the business layer is retained, where are the methods called from (e.g. Model or Controller)?
If the above answer is "Neither", does that mean the Model part will go unused?
That almost feels like things aren't being done properly; however, in this tutorial that appears to be what happens.
Should I plug the Entity Framework into the Model layer to call the business layer?
That feels like overkill, adding all that additional logic.
Your controllers should gather the information required to build the page the user is currently viewing. That's it.
Controllers should reference classes in your business logic layer.
For example, here's your controller. All it does is translate the HTTP request and call the business logic.
public class MyController : Controller
{
    private IMyBusinessLogic _businessLogic;

    public MyController(IMyBusinessLogic businessLogic)
    {
        _businessLogic = businessLogic;
    }

    [HttpPost]
    public ActionResult UpdateAllRecords()
    {
        _businessLogic.UpdateAllRecords();
        return Json(new Success());
    }
}
And your business logic class
public class MyBusinessLogic : IMyBusinessLogic
{
    public void UpdateAllRecords()
    {
        // call SP here
        using(SqlConnection conn = new...
    }
}
There are a number of advantages to this:
Your business logic is completely separated from your UI; there's no database code in your presentation layer. This means your controller can focus on its job and the code doesn't get polluted.
You can test your controller and see what happens when your business logic succeeds, throws exceptions etc.
For extra bonus points you should look into creating a data access layer.
public class DataAccess : IDataAccess
{
    public void RunStoredProcedure(string spName)
    {
        // open a connection and execute the stored procedure here
    }
}
Now you can test that your BLL is calling and processing your SP results correctly!
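A minimal sketch of such a test, assuming the BLL is refactored to take IDataAccess through its constructor, and that NUnit and Moq are available (the stored procedure name is illustrative):

[Test]
public void UpdateAllRecords_RunsTheStoredProcedure()
{
    var dataAccess = new Mock<IDataAccess>();
    var logic = new MyBusinessLogic(dataAccess.Object); // assumes constructor injection

    logic.UpdateAllRecords();

    // verify the BLL asked the DAL for the expected stored procedure
    dataAccess.Verify(d => d.RunStoredProcedure("UpdateAllRecords"), Times.Once());
}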
Expanded following the comment questioning the models:
Ideally your model should have no logic in it at all. It should simply represent the data required to build the page. The object you're loading represents the entity in the system; the model represents the data displayed on the page. The model is often substantially lighter and may contain extra information (such as an address) that isn't present on the main entity but is displayed on the page.
For example
public class Person
{
    public int PersonID { get; set; }
    public string Firstname { get; set; }
    public string Lastname { get; set; }
    public Address Address { get; set; }
}
The model only contains the information you want to display:
public class PersonSummaryModel
{
    public int PersonID { get; set; }
    public string FullName { get; set; }
}
You then pass your model to your view to display it (perhaps in a list of FullNames in this case). Lots of people use a mapper class to convert between these two; some do it in the controller.
For example
public class PersonMapper
{
    public PersonSummaryModel Map(Person person)
    {
        return new PersonSummaryModel
        {
            PersonID = person.PersonID,
            FullName = string.Concat(person.Firstname, " ", person.Lastname)
        };
    }
}
You can also use an automatic solution such as AutoMapper to do this step for you.
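A minimal AutoMapper sketch of the same mapping (this uses the classic static API; newer AutoMapper versions prefer configuring a MapperConfiguration instance instead):

// configure once, e.g. at application startup
Mapper.CreateMap<Person, PersonSummaryModel>()
    .ForMember(m => m.FullName,
               opt => opt.MapFrom(p => string.Concat(p.Firstname, " ", p.Lastname)));

// then wherever a model is needed
PersonSummaryModel model = Mapper.Map<PersonSummaryModel>(person);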
Your controller should really only be involved with orchestrating view construction. Create a separate class library, called "Data Access Layer" or something less generic, and create a class that handles calling your stored procs, creating objects from the results, and so on. There are many opinions on how this should be handled, but perhaps the most common is a layered structure like the following:
View
  |
Controller
  |
Business Logic
  |
Data Access Layer
  |--- SQL (stored procs)
  |      - Tables
  |      - Views
  |      - etc.
  |--- Alternate data sources
         - Web services
         - Text/XML files
         - and so on.
If you feel like learning about tiers and the best way to structure them, MSDN has a great article on this:
MSDN
I am writing an API for my ASP.NET application that other developers will use. The API will basically return a list of people with their first name, last name, and id. There are lots of ways to write web services in ASP.NET, the easiest probably being to create a web service method (.asmx) that returns a DataTable. This is simple enough for other .NET developers to deal with, but I am not convinced that it is the best way to write a web service for general platform and language independence.
What is the currently accepted standard to write a web service like this that plays well in the wild today?
Some ideas that come to mind from experience:
Use WCF, not .asmx. WCF does all the same things that ASMX files do, and is generally the replacement for ASMX services (see here and here).
Write methods using simple POCO data types, like List<Person> rather than DataTable. Basic types serialize more easily and will make more sense in other programming environments since you want your service to be language independent.
Provide generic CRUD methods for managing data. Depending on how your service will be consumed, if the user needs to modify data, a simple method is to provide getBlah(), updateBlah(obj newObj), deleteBlah(obj objToDelete), etc. that use the same data types.
Hide the details that the service consumer doesn't need to know, rather than just blindly exposing all of your data types, structures, and field names as-is. This will make your service more robust for handling internal changes, and you can simplify and control what the end-users see. For instance, if you have a Person class with 30 properties and only 5 are relevant to the end-user, provide a class that interfaces between Person and an exposed PersonSimple class. Without this layer, your end-users will have to modify their code every time you change your data structure, and you will be locked down by this tight coupling.
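For example, a minimal sketch of that interfacing layer (PersonSimple and PersonTranslator are illustrative names, and the sketch assumes Person exposes Id, FirstName and LastName among its 30 properties):

// exposed by the service: only what the consumer needs
public class PersonSimple
{
    public int Id { get; set; }
    public string FullName { get; set; }
}

public static class PersonTranslator
{
    // internal translation; the 30-property Person never crosses the wire
    public static PersonSimple ToSimple(Person person)
    {
        return new PersonSimple
        {
            Id = person.Id,
            FullName = person.FirstName + " " + person.LastName
        };
    }
}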
If security is important
Execute your service over SSL. This protects data transferred over the wire from being sniffed.
Use authentication, either with a Login method and session, or SOAP headers. Services by default are anonymous unless there is some sort of authentication scheme. Even if you think nobody will find your service because you only provide the URL to your users, it will get out somehow, somewhere, and people will try to misuse the service when it does. Plus, you can control who can do what by different logins and authorization schemes.
I am currently working on a similar issue: a web API service in .NET that receives data tables as input parameters, applies some operations on them (using table-valued functions), and returns some output data tables.
In your case, you don't need to use a complex class like DataTable; you could use a list (List<>) of a simple class with fields like first name, last name, and id. Using ASP.NET Web API you could do something like the following:
1) Create a new Web API project in Visual Studio: for example (in VS 2012) C# > Web > ASP.NET MVC 4 Web Application > select "Web API" as the project template.
You will see a VS project with lots of folders, including one named Models
For help see: http://www.asp.net/web-api/overview/getting-started-with-aspnet-web-api/tutorial-your-first-web-api
2) Create a new model code file Person.cs with a class like the following:
public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string[] Friends { get; set; }
}
3) Create a new controller code file PersonController.cs with methods for getting, inserting, and updating records of the database. All the necessary serialization/deserialization (JSON and XML) and data binding is done automatically by the Web API environment set up by the project template.
// Get all the records of persons
public IList<Person> Get()
{
    // read the database into a list of persons (List<Person>)
    // and return that list
}
Return the record of a selected person:
public Person Get(int id)
{
    // read the database for the selected person
}
Parameter binding (reading JSON/XML content sent by HTTP POST into an object, or into a list of objects) is also done automatically, and is as easy as the following:
// parameter binding: create a Person object with content from XML/JSON
public void ReadPerson(Person p)
{
    Trace.WriteLine(p.Id);
}

public void ReadPersonList(List<Person> plist)
{
    Trace.WriteLine(plist.Count);
}
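For completeness, here is a minimal client-side sketch of posting to ReadPersonList from an async method (the base URL and route are assumptions; PostAsJsonAsync ships with the Web API client libraries):

using (var client = new HttpClient())
{
    client.BaseAddress = new Uri("http://localhost:12345/"); // assumed host

    var people = new List<Person>
    {
        new Person { Id = 1, FirstName = "Ada", LastName = "Lovelace" }
    };

    // the list is serialized to JSON and bound to the List<Person> parameter
    HttpResponseMessage response =
        await client.PostAsJsonAsync("api/person/ReadPersonList", people); // assumed route
    response.EnsureSuccessStatusCode();
}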
I have an ASP.NET project that uses XML serialization as the main mechanism for saving data. The project was meant to stay small relative to the size of its data. However, the amount of data has ballooned, as it always will, and now I'm considering moving to a SQL-based alternative for managing the data.
For now I have multiple objects defined that are simply storage classes for saving my data for the project to work.
public class Customer
{
    public Customer() { }

    public string Name { get; set; }
    public string PhoneNumber { get; set; }
}

public class Order
{
    public Order() { }

    public int ID { get; set; }
    public DateTime OrderDate { get; set; }
    public string Product { get; set; }
}
Something along these lines, although not so rudimentary. Migrating to SQL seems to be a no-brainer, and I've landed on using MySQL because the service is freely available. What I'm running into is that the only way I can see to do this now is to have a solution where there is a storage class, Order, and a class built to load/save the data, OrderIO.
The project relies heavily on using List<> to populate the data fields on the page. I'm not using any built-in .NET controls such as DataGrid to assist in displaying the data. Simple TextBox or ComboBox controls that are populated on Page_Load.
I'm aware it would make better sense to pick a way in which the data fields could bind to the SQL through a Repeater, but I'm not looking at a full redesign, just a change in the infrastructure used to manage the data.
I would like to be able to create a class that can return an object similar to what I'm dealing with now, such as List<>, from the SQL statements I'm executing. I'm having some trouble getting started on the best method of approach.
Any suggestions on how best to Load/Save this data using SQL or some tutorials on ideas using the .NET framework would be helpful. This is quite a generalized question but I'm open to most ideas. Thanks.
What you need is a Data Access Layer (DAL) that takes care of running the SQL code and returning the required data in the List<> format that you require. I would definitely recommend you read the two series of articles by Imar Spaanjaars on building N-layered applications. Note that there are two sets of series, but I linked to the second set because it contains links to the first one.
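As a minimal sketch of such a DAL method (using the MySql.Data Connector/NET client; the connection string handling and the table/column names are assumptions):

public static List<Order> GetOrders(string connectionString)
{
    var orders = new List<Order>();
    using (var conn = new MySqlConnection(connectionString))
    using (var cmd = new MySqlCommand("SELECT ID, OrderDate, Product FROM Orders", conn)) // assumed schema
    {
        conn.Open();
        using (MySqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                orders.Add(new Order
                {
                    ID = reader.GetInt32(0),
                    OrderDate = reader.GetDateTime(1),
                    Product = reader.GetString(2)
                });
            }
        }
    }
    return orders;
}

The page code can then keep populating its TextBox and ComboBox controls from the returned List<Order> exactly as it does today.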
Also, it might be beneficial to know that SQL Server 2008 R2 Express Edition is free to use but has a limit of 10 GB per database. I am not saying that you shouldn't use MySQL; I just wanted to inform you in case you didn't know that there is a free edition of SQL Server available.
I'm wondering if anyone can give a solid explanation (with example) of POCO (Plain Old CLR Object). I found a brief explanation on Wikipedia but it really doesn't give a solid explanation.
Instead of calling them POCOs, I prefer to call them persistence ignorant objects.
Because their job is simple, they don't need to care about what they are being used for or how they are being used.
Personally I think POCOs are just another buzzword (like Web 2.0 - don't get me started on that) for a public class with simple properties.
I've always been using these types of objects to hold onto business state.
The main benefits of POCOs are really seen when you start to use things like the repository pattern, ORMs and dependency injection.
In other words - you could create an ORM (let's say EF) which pulls back data from somewhere (db, web service, etc), then project this data into objects (POCOs).
These objects can be passed further down the app stack to the service layer, then onto the web tier.
Then if one day you decide to switch over to NHibernate, you should not have to touch your POCOs at all; the only thing that should need to change is the ORM.
Hence the term 'persistence ignorant' - they don't care what they're being used for or how they are being used.
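A minimal sketch of the idea (all names are illustrative): the layers above depend only on the POCO and an interface, so swapping the ORM means writing a new repository implementation rather than touching the POCO or its consumers.

// the POCO: data only, no persistence baggage
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerRepository
{
    Customer GetById(int id);
}

// one implementation per ORM; higher layers only ever see the interface
public class EfCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id)
    {
        // EF-specific data access would go here
        throw new NotImplementedException();
    }
}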
So to sum up, the pros:
Allows a simple storage mechanism for data; simplifies serialization and passing data between layers
Goes hand in hand with dependency injection, the repository pattern and ORMs. Flexibility.
Minimized complexity and dependencies on other layers (higher layers only care about the POCOs; POCOs don't care about anything). Loose coupling.
Simple testability (no stubbing required for domain testing).
Hope that helps.
You need to give more details, such as the context in which you are planning to use POCO. But the basic idea is that you will create simple objects containing only the data/code that is necessary. These objects would not contain any "baggage" such as annotations, extra methods, base classes, etc that might otherwise be required by (for example) a framework.
Example of a POCO:
class Person {
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string EmailAddress { get; set; }
}