As a novice to this realm, I am planning on building an MVC application. I had originally started a Web Forms application, but decided that scalability and testability would benefit more from an MVC application. I chose to switch for the added benefit of it being easier to add more features later on (instead of having code baked into Web Forms pages).
Now, a little about my application: it is an application to simulate an RPG class builder and move set. In all simplicity, users can register for a class and, depending on the other skills they register for, they can see a custom move set based on these categories. The way I am envisioning it, I will be able to go back later and add more classes and skills to the database, and users will be able to register for this new content immediately once it has been added to the project.
Everything lives in normalized tables, so many join tables exist. Each new skill or class I add will mean a handful of new tables in the database. This speaks to the way the data will be stored: everything -- all information about classes, user data, skills, etc. -- will live in the database.
I have designed all the initial database tables I will need at the start, and the functionality I need (a home page, a view skills page, a view move sets page, etc.). I am stuck at the next step: where do I go? Should I make my controllers first? Models? Views? Design my page layouts? I am asking for advice from people who have taken a similar organic approach to an MVC project. I am facing analysis paralysis on what to start on, knowing I have a lot of work ahead of me.
Thank you for taking your time to answer.
I've taken everyone's advice and am putting together a website to learn MVC: http://learnaspnetmvc.azurewebsites.net
The most important advice I can give you: just start. A big project can seem overwhelming, especially when you're looking at it like a big project. Instead, break it into small achievable tasks. Find something you can do right now, the ever-so-smallest subset of functionality, and do it. Then do the next one. And the next.
That said, I'll tell you my personal process. When I start on a new application or piece of an application, I first like to create my models. That way I can play with the interactions between them, flesh out the relationships, and think about the needs of my application in a somewhat low-pressure, easily disposable way. I also use code-first, whereas you've gone and created your database tables already. Some people prefer to do it that way. Personally, I find starting with my classes and letting those translate into an underlying data store much more organic. In a sense, it relegates the database to almost a non-existent layer. I don't have to think about what datatype things need to be, what should be indexed and what shouldn't, how querying will work, what kind of stored procedures I need, etc. Those questions have their time and place -- the nascent development stage is not that time and place. You want to give your brain a place to play with ideas, and classes are a cheap and low-friction medium. If an idea doesn't work out, throw the class away and create a new one.
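For what it's worth, at this stage a code-first model can be as small as a couple of plain classes and a context. Here's a minimal sketch -- the names are hypothetical, loosely borrowed from your class/skill domain, and Entity Framework code-first is assumed:

using System.Collections.Generic;
using System.Data.Entity;

// Plain classes; EF infers the tables, keys and join tables from them later.
public class CharacterClass
{
    public int Id { get; set; }
    public string Name { get; set; }

    // One class exposes many skills; EF turns this into the appropriate relationship.
    public virtual ICollection<Skill> Skills { get; set; }
}

public class Skill
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The context is the only piece that knows a database exists at all.
public class GameContext : DbContext
{
    public DbSet<CharacterClass> CharacterClasses { get; set; }
    public DbSet<Skill> Skills { get; set; }
}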
Once I have my models, I like to hit my controllers next. This lets me start to see my models in action. I can play around with the actual flow of my application and see how my classes actually work. I can then make changes to my models where necessary, add additional functionality, etc. I can also start playing around with view models, and figuring out what data should or should not be passed to the view, how it will need to be displayed (will I need a drop down list for that? etc.), and such. This, then, naturally leads me into my views. Again, I'm testing my thinking. With each new layer, I'm hardening the previous by getting a better and better look at how it's working.
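To make that concrete, here's a rough sketch of a controller handing a flattened view model to a view, continuing the hypothetical names from the model sketch above (the drop-down list is just one example of shaping data for the view):

using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

// The view model carries only what this view needs -- no domain plumbing.
public class SkillListViewModel
{
    public string ClassName { get; set; }
    public IEnumerable<SelectListItem> SkillOptions { get; set; }
}

public class SkillsController : Controller
{
    private readonly GameContext _db = new GameContext();

    public ActionResult Index(int classId)
    {
        var characterClass = _db.CharacterClasses.Find(classId);

        var viewModel = new SkillListViewModel
        {
            ClassName = characterClass.Name,
            SkillOptions = characterClass.Skills.Select(s =>
                new SelectListItem { Value = s.Id.ToString(), Text = s.Name })
        };

        return View(viewModel);
    }
}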
Each stage of this process is very liquid. Once I start working on my controllers, I will make changes to my models. Once I hit the views, the controllers will need to be adjusted and perhaps the models as well. You have to give yourself the freedom to screw up. Inevitably, you'll forget something, or design something in a bone-headed way, that you'll only see once you get deeper in. Again, that's the beauty of code-first. Up to this point, I don't even have a database, so any change I make is no big deal. I could completely destroy everything I have and go in a totally different way and I don't have to worry about altering tables, migrating data, etc.
Now, by this point my models are pretty static, and that's when I do my database creation and initial migration -- and even then, really, only because it's required before I can actually fire this up in a browser to see my views in action. You can always do a migration later, but once you're working with something concrete, the friction starts to increase.
I'll tend to do some tweaking to my controllers and obviously my views, now that I'm seeing them live. Once I'm happy with everything, then I start looking at optimization and refactoring -- How can I make the code more effective? More readable? More efficient? I'll use a tool like Glimpse to look at my queries, render time, etc., and then make decisions about things like stored procedures and such.
Then, it's just a lot of rinse and repeat. Notice that it's all very piecemeal. I'm not building an application; I'm building a class, and then another class, and then some HTML, etc. You focus on just that next piece, that small chunk you need to move on to the next thing, and it's much less overwhelming. So, just as I began, I'll close the same: just start. Writers have a saying that the hardest thing is the first sentence. It's not because the first sentence is really that difficult; it's because once you get that, then you write the second sentence, and the third, and before you know it, you've got pages of writing. The hardest part is in the starting. Everything flows from there.
The other answers here have great advice and important nuggets of information, but I think they do you a disservice at this stage. I'm the first to advocate best practice, proper layering of your applications, etc. But, ultimately, a complete app that follows none of this is more valuable than an incomplete app that incorporates it all. Thankfully, we're working with a malleable medium -- digital text -- and not stone. You can always change things, improve things later. You can go back and separate your app out into the proper layers, create the repositories and services and other abstractions, add in the inversion of control and dependency injection, etc. Those of us who have been doing this awhile do that stuff from the start, but that's because we've been doing this awhile. We know how to do that stuff -- a lot of times we already have classes and libraries we drop in for that stuff. For someone just starting off, or for an app in its earliest nascent stage, it can be crippling, though. Instead of just developing your app, you end up spending days or weeks poring over recommendations, practices, libraries, etc. trying to get a handle on it all, and by the end you have nothing really to show for it. Don't worry about doing things right -- just do something. Then, refactor until it's right.
As a first step in planning an MVC application, we should start with a strong model (typically plain C# classes with properties). This step is going to take most of our time, because we need to understand the business first and then the relationships between the different workflows and entities. Business models also evolve as time passes, so spend quality time building this layer, but not too much.
Once the domain (business) models are ready, and before we actually start coding the repository classes, we should define our repository contracts, which are typically interfaces. Contracts help all parties (other components) interact with each other in exactly the same way. Then we implement the contracts in the repository component, which simply pushes and pulls data to and from your persistence medium (say, a database). Remember, the repository component never has any knowledge of business logic.
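As a rough illustration of what such a contract and its data-only implementation might look like (the names are invented, reusing the hypothetical Skill and GameContext from the earlier sketch):

using System.Collections.Generic;
using System.Linq;

// The contract: other components depend on this interface, never on the concrete class.
public interface ISkillRepository
{
    Skill GetById(int id);
    IEnumerable<Skill> GetAll();
    void Add(Skill skill);
    void Remove(Skill skill);
    void Save();
}

// The implementation only pushes and pulls data; no business rules live here.
public class EfSkillRepository : ISkillRepository
{
    private readonly GameContext _db = new GameContext();

    public Skill GetById(int id) { return _db.Skills.Find(id); }
    public IEnumerable<Skill> GetAll() { return _db.Skills.ToList(); }
    public void Add(Skill skill) { _db.Skills.Add(skill); }
    public void Remove(Skill skill) { _db.Skills.Remove(skill); }
    public void Save() { _db.SaveChanges(); }
}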
Once the back end has been established, we can concentrate on the actual business process implementation. We can define one more level of contract, which describes all the business operations to be performed using the model classes. This interface is implemented by a business logic component that does the core business activity (specific methods for every business operation). This component uses the repository component to push business data to the persistence medium.
With the above steps completed, we can easily go on and build controllers. Controllers should call the business logic component's methods to get work done. Once the controllers are done, we can define our views and other UI elements such as partial views.
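A small sketch of how these pieces might line up, again with invented names and assuming a DI container (or manual wiring) supplies the constructor arguments:

using System.Collections.Generic;
using System.Web.Mvc;

// Business contract: one named method per business operation.
public interface ISkillService
{
    IEnumerable<Skill> GetSkillsForClass(int classId);
}

// The business component enforces the rules and delegates persistence to the repository.
public class SkillService : ISkillService
{
    private readonly ISkillRepository _skills;

    public SkillService(ISkillRepository skills)
    {
        _skills = skills;
    }

    public IEnumerable<Skill> GetSkillsForClass(int classId)
    {
        // Prerequisite checks, level caps and other business rules would live here.
        return _skills.GetAll();
    }
}

// The controller stays thin: it only calls the business component and returns a view.
public class ClassSkillsController : Controller
{
    private readonly ISkillService _service;

    public ClassSkillsController(ISkillService service)
    {
        _service = service;
    }

    public ActionResult Index(int classId)
    {
        return View(_service.GetSkillsForClass(classId));
    }
}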
A pictorial representation of the flow -- a simple architecture, from high level to low level -- is as follows:
Presentation Layer
Domain Logic Layer
Data Access Layer
Database
The presentation layer is the MVC project, containing views, controllers and optional view models.
The domain logic layer is a class library project which the presentation layer accesses (via a DLL or service reference). This layer contains the business logic and rules for the application.
The data access layer may contain two sub-layers:
Repository. Using a repository is best practice for any long-term application.
Entity Framework model. This communicates with the database.
The database itself you already have.
I would suggest reading through a book by Scott Millett called Professional ASP.NET Design Patterns.
ISBN: 0-470292-78-4
Scott walks through what a good ASP.NET site should look like from an architectural standpoint -- e.g. data access layers, business logic layers, presentation layers, domain events, etc.
By following industry standards, you will gain a better understanding of how to correctly put together an MVC web site.
Hope this helps.
I would suggest you build your MVC application around an ASP.NET Web API, since it will help if you go for a mobile application later.
Since you are an MVC newbie, you should download some open source MVC projects shared by seniors in the community. Study two or three projects and analyse which approach will be best for you.
A quick Google search will get you to some good projects.
e.g. Making a simple application, or ProDinner.
After that, you should also go through the MSDN tutorial on building an MVC 5 app with SSO, to enable social logins.
So I've just started playing with Meteor and am trying to get my head around the security model. It seems there are two ways to modify data.
The Meteor.call way, which seems pretty standard -- pretty much just a call to the server with its own set of business rules implemented.
Then there is the Collection.allow method, which seems very different from anything I've done before. It seems that if you put in a Collection.allow, you're saying that the client can make any write operation to that collection as long as it can get past the validations in its allow function.
That makes me feel uneasy, because it feels like a lot of freedom, and my allow function would need to be pretty long to make sure it's locked down securely enough.
For instance, mongodb has no schema, so you'd have to basically have a rule that defines which fields would be accepted and the format those fields must be in.
Wouldn't you also have to put in the business logic for every type of update that might be made to your system?
So say I had a SoccerTeam collection. There may be several situations in which I need to make a change: adding or removing a player, changing the team name, the team status changing, etc.
It seems to me that you'd have to put everything into this one massive function. It just sounds like a radical idea, but it seems Meteor.call methods would just be a lot simpler.
Am I thinking about this in the wrong manner (or for the wrong use case)? Does anyone have an example of how they might structure an allow or deny function, with a list of what I may need to check in my allow function to make my collection secure?
You are following the same line of reasoning I used in deciding how to handle data mutations when building Edthena. Out of the box, meteor provides you with the tools to make a simple tradeoff:
Do I trust the client and get a more responsive UI (latency compensation)? Or do I require strict control over data validation, but force the client to wait for an update?
I went with the latter, and exclusively used method calls for a few reasons:
I sleep better at night knowing there exists exactly one way to update each of my collections.
I found that some of my updates required side effects that only made sense to execute on the server (e.g. making denormalized updates to other collections).
At present, there isn't a clear benefit to latency compensation for our app. We found the delay for most writes was inconsequential to the user experience.
allow and deny rules are weak tools. They are essentially only good for validating ownership and other simple checks.
At the time when we first released to production (August 2013) this seemed like a radical conclusion. The meteor docs, the API, and the demos highlight the use of client-side writes, so I wasn't entirely sure I had made the right decision. A couple of months later I had my first opportunity to sit down with several of the meteor core devs - this is a summary of their reaction to my design choices:
This seems like a rational approach. Latency compensation is really useful in some contexts like mobile apps, and games, but may not be worth it for all web apps. It also makes for cool demos.
So there you have it. As of this writing, my advice for production apps would be to use client-side updates where you really need the speed, but you shouldn't feel like you are doing something wrong by making heavy use of methods.
As for the future, I'd imagine that post-1.0 we'll start to see things like built-in schema enforcement on both the client and server which will go a long way towards resolving my concerns. I see Collection2 as a significant first step in that direction, but I haven't tried it yet in any meaningful way.
Stubs
A logical follow-up question is "Why not use stubs?". I spent some time investigating this but reached the conclusion that method stubbing wasn't useful to our project for the following reasons:
I like to keep my server code on the server. Stubbing requires that I either ship all of my model code to the client or selectively repeat parts of it again. In a large app, I don't see that as practical.
I found the overhead required to separate out what may or may not run on the client to be a maintenance challenge.
In order for the stub to do anything other than reject a database mutation, you'd need to have an allow rule in place - otherwise you'd end up with a lot of UI flicker (the client allows the write but the server immediately invalidates it). But having an allow rule defeats the whole point, because a user could still write to the db from the console.
The usual allow methods I have are these:
MyCollection.allow({
  insert: function () { return false; },
  update: function () { return false; },
  remove: function () { return false; }
});
And then, I have methods which take care of all insertions. These methods perform the type checks and permission assessment. I have found that to be a much more maintainable method: completely decoupling the data layer from the code which runs on the client.
For instance, mongodb has no schema, so you'd have to basically have a rule that defines which fields would be accepted and the format those fields must be in.
Take a look at Collection2. They support schema checking at run-time before inserting documents into the Collection.
Our product is a web-based course management system. We have 10+ clients, and in the future we may get more. (ASP.NET, SQL Server)
Currently, if one of our customers needs extra functionality or customised business logic, we change the DB schema and code to meet the need.
(We only have one branch of the code base and one database schema.)
To make sure a change won't affect the other clients, we use a client flag, defined in the web.config file, so those extra fields and business logic are only applied to that particular customer's system.
if (clientId == "ABC")
{
    // Do ABC-specific stuff
}
else
{
    // Normal route
}
One of our senior colleagues said that, in this way, a small company like ours can save the resources it would take to support multiple versions.
But what I feel is, this strategy makes our code and database even harder to maintain.
Anyone there crossed similar situation? How do you handle that?
Update: If this is not the right question for SO, can someone move it to the proper Stack Exchange site?
Update 2:
You are right. The code is becoming smelly now, and I am quite sure it will be a nightmare sooner or later. Our company builds the product, and to save effort, later products for other customers are based on the previous one. I know the ideal way, as @e-j-brennan suggests, is to separate the dev team into two parts: one team works on the core product and makes it highly customisable, and the other team works on customising it for a particular client. However, since our company is so small, it is really a dilemma. :(
I think you need to decide if you sell custom software, that you tailor for each client, or 'off-the-shelf' software that is one-size-fits-all (and maybe customizable thru functionality you provide).
When you only have a handful of clients like you do now, you can get away with what you are doing, but I can almost guarantee that if you continue down this road, and your client base increases and the amount of client-specific customization increases as well, you will have a nightmare on your hands; I've been through this many times for multiple clients, and it always ends the same way. It is all manageable until it is not, and then it is a royal pain-in-the-neck that could make your life very difficult indeed.
If you decide you are a custom company, and want to have multiple versions of the software and database, that is fine; just make sure you charge the full cost for it -- i.e. factor in that you may need to maintain multiple versions of source code and databases, and factor in that upgrades are going to take many multiples of effort to roll out, as you will need to test each client's code base.
If you decide you want to be an 'off-the-shelf' type of product, then your best bet is to provide the ability for each client to customize their experience without the need for code changes -- i.e. build in the customization capability through config screens and tables that control how things work -- while everyone still uses the same underlying code and database. Much more work upfront, but it saves you boatloads of time down the road.
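As a sketch of what "customization through configuration" can look like in code (invented names, nothing specific to your product): the client check moves out of the business logic and into data looked up per client.

// Each client gets a row of settings; adding a client means adding data, not an if/else.
public class ClientSettings
{
    public string ClientId { get; set; }
    public bool ShowCustomGradeReport { get; set; }
    public int MaxCoursesPerStudent { get; set; }
}

public interface IClientSettingsProvider
{
    ClientSettings GetSettings(string clientId);
}

public class CourseEnrollment
{
    private readonly IClientSettingsProvider _settings;

    public CourseEnrollment(IClientSettingsProvider settings)
    {
        _settings = settings;
    }

    public bool CanEnroll(string clientId, int currentCourseCount)
    {
        // The rule is data-driven instead of hard-coded per client.
        var settings = _settings.GetSettings(clientId);
        return currentCourseCount < settings.MaxCoursesPerStudent;
    }
}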
I have also been in your position, and I agree it is a difficult one. In my case, I was building custom single-product sites for clients. While each site followed a similar layout and workflow, there had to be enough flexibility for each to have a wholly custom design, custom rules around shipping and coupons, and different merchant gateways and configurations.
After some years, we did end up with something maintainable. First, we created libraries to house all of our common code and put those libraries into a TFS project simply called Common. Then, we created a new TFS project for each site (not client, as many clients had multiple products/sites) and branched the applicable projects into them from Common. Next, we created a VS Template project that contained a skeleton of the site, including "design-less" views, controllers, and their action methods (remember, each site had the same basic flow). Also, each site ran on its own database, which was cloned from an otherwise unused and mostly empty Template DB.
With each site running on its own branch and DB, modifications could be made to the original flow and design that was installed by the template (which would never need to be merged back in) without affecting any other site. For customizing business methods, like shipping calculations, we could create a subclass of the common class and override where needed. Part of what enabled this was converting all our code to use Dependency Injection. Specifically, each Controller had injected Services, and each Service had injected Repositories. Merchant Processing was also coded to an interface and injected. Also worth mentioning is that this allowed us to hard-code all of the upsell logic for each site (you bought product X, so we recommend Y), which was much easier to create and maintain compared to defining complex configuration rules in our old upsell rule engine. I don't know if you have anything like that...
Sometimes we would want to make a change to the Common code itself, which was usually prompted by a specific need for a specific site. In that case, we'd make the change on that branch, merge it to Common, and then merge it to the other sites at our convenience (great for "breaking" changes or changes that also required a change to the DB). Similarly for DB changes, we would update the Template DB and then write a little script to update the other site DBs with the same schema changes (still had to be smart and careful about it).
An added benefit was that we also created Mock repositories that would be used/injected in a "Design" build configuration, which enabled the designers to jump around the application and work on screens without literally submitting themselves to the workflow. It also allowed them to start working on a site before there was anything done on the back-end, which was very important for those anxious clients who need to "see something".
10+ clients is definitely not a small number with what you're talking about. Three was pain enough for me. We had over 30 sites running at one time, maintained by three developers and two designers.
Finally, I know it's outside the scope of your question and a bit presumptuous, but getting "final" client sign-off on design before the designers actually went about implementing it (and before devs did their thing) also saved us a lot of costly rework. I know no design is final, but increasing efficiency on the implementation end gave the clients less time to change their minds about the design they approved.
I hope that at least gives you some approaches to think about.
People working with systems that have to change or be customized have developed patterns to handle such concerns.
You should definitely start by reading a good book on Inversion of Control. In short, you can build your system by defining building blocks (contracts, expressed as interfaces) and providing multiple implementations. There are many benefits to such an approach, but to mention just two (a small sketch follows this list):
- you can handle customizations by providing different implementations of the same interfaces
- you can reconfigure your application statically or dynamically, and either approach is far cleaner than your "if"
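Here is the sketch: one contract, two implementations, and a single composition point that picks the implementation per client (names are invented; in a real application the selection would typically come from configuration or an IoC container registration):

// Contract: the rest of the application depends only on this interface.
public interface IInvoiceNumberGenerator
{
    string NextInvoiceNumber();
}

// Default implementation used by most clients.
public class SequentialInvoiceNumberGenerator : IInvoiceNumberGenerator
{
    private int _counter;

    public string NextInvoiceNumber()
    {
        return (++_counter).ToString("D6");
    }
}

// Customised implementation for the one client that needs a prefix.
public class PrefixedInvoiceNumberGenerator : IInvoiceNumberGenerator
{
    public string NextInvoiceNumber()
    {
        return "ABC-" + System.Guid.NewGuid().ToString("N").Substring(0, 8);
    }
}

// Composition root: the only place that knows which client gets which implementation.
public static class InvoiceNumbering
{
    public static IInvoiceNumberGenerator For(string clientId)
    {
        return clientId == "ABC"
            ? (IInvoiceNumberGenerator)new PrefixedInvoiceNumberGenerator()
            : new SequentialInvoiceNumberGenerator();
    }
}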
When it comes to the data layer, study the repository pattern. It helps to organize data access in a way that lets you switch between different providers. It fits great with IoC.
And just a technical tip: NHibernate supports dynamic properties. You just provide additional columns in the mapping, and NHibernate can support them from the same code base. This way you can target different databases with slightly different schemas.
We've been having a discussion at work about whether to use domain objects in our views (ASP.NET MVC 2), or whether every view that requires data should be sent a ViewModel.
I was wondering if anyone had any pros/cons on this subject that they could shed some light on?
Thank you
I like to segregate my domain objects from my views. As far as I'm concerned, my domain objects are solely for the purpose of representing the domain of the application, not how the application is displayed.
The presentation layer should not contain any domain logic. Everything it displays should be predetermined by its controller. The ideal way to ensure this is always adhered to is to make sure the view only receives these flattened ViewModels.
I did ask a similar question myself. Here's a quote from the answer I accepted:
I think that there are merits to having a different design in the domain than in the presentation layer. So conceptually you are actually looking at two different models, one for the domain layer and one for the presentation layer. Each of the models is optimized for their purpose.
If I have the domain objects for Customer > Sales > Dispatch Address, then I don't want to have to deal with the object traversal in my view. I create a flattened view model that contains all of the properties. There's almost no extra work in mapping to and from this flattened view/presentation model if you use the excellent open source project AutoMapper.
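A rough sketch of that flattening, with hypothetical class shapes and the era-appropriate AutoMapper static API (newer versions use MapperConfiguration instead):

using AutoMapper;

public class Customer
{
    public string Name { get; set; }
    public Sale LatestSale { get; set; }
}

public class Sale
{
    public Address DispatchAddress { get; set; }
}

public class Address
{
    public string City { get; set; }
    public string Postcode { get; set; }
}

// Flattened view model: AutoMapper maps LatestSaleDispatchAddressCity from
// customer.LatestSale.DispatchAddress.City purely by naming convention.
public class CustomerViewModel
{
    public string Name { get; set; }
    public string LatestSaleDispatchAddressCity { get; set; }
    public string LatestSaleDispatchAddressPostcode { get; set; }
}

public static class MappingConfig
{
    public static void Register()
    {
        Mapper.CreateMap<Customer, CustomerViewModel>();
    }
}

// In a controller action: var viewModel = Mapper.Map<Customer, CustomerViewModel>(customer);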
Also, why would you want to pass an entire domain object back to a view if you can create an optimised representation of that model?
If you use NHibernate or similar, your domain objects will most likely be proxies, and serializing these doesn't work. You should always use a ViewModel and map your domain objects to DTOs within your ViewModel. Don't take shortcuts here. Setting the convention will alleviate the pain you'll suffer later on.
It's a standard pattern for a reason.
It depends. In some case it will be fine to use instances of model classes. In other cases a separate ViewModel is the better choice. In my experience it is perfectly acceptable to have different models in your domain and in your views. Or to use the domain model in the view. Do what works best for you. Do a spike for each option, see what works and then decide. You can even choose a different option for each view (and/or partial).
There are definitely going to be simple little apps where it's fine to use the same models across all layers. Generally little forms over data apps. But for a proper domain, my thoughts on the subject are to keep the domain models and view models separate because you don't want them to ever impact each other when changed.
If the domain logic needs a small change to process some new business logic on the back end, you don't want to risk that altering your view. Conversely, if marketing or someone wants to make changes to a view, you don't want those changes leaking back into your domain (having to populate fields and maintain data for no other purpose than some view somewhere is going to use it).
I have a good comparison at the moment because I'm working on two projects using the different approaches. I'm far from stating that "this is bad and this is good" just because something is written down in some pattern. I know patterns, I like patterns, but I never blindly follow them just to be right. I always use whatever I need to achieve my current goals.
In the first app, using domain objects in the views, development is very quick. A few changes in a few places and you have additional properties, form inputs, etc. You don't bother about the layers; you just extend or change the code and move on to the next problem.
In the second app, where there are separate objects for use here, there, and somewhere else, there are dozens of classes that look the same and do the same thing, and a ton of conversion code between the various versions of the same objects. Worse, some developers put some logic on "this version" of a class, and other logic on "that version". Development is very painful and requires a lot of testing afterwards. Changing a simple thing requires a lot of attention and a lot of code needs to be changed. I really don't like this app for that reason, because I have yet to see a business benefit from this approach, at least during the last year (and we have been in production for a year). This app is three to four times more expensive to develop and maintain than the first one.
So, my funny answer to the question is: it depends. If you work in a 10-20 person team, and you like to come into work, drink a few coffees, talk with friends, do a few simple things and go home, a lot of intermediate objects and conversion code will be good for you. If your goal is to be fast and cheap, if you want to focus on the business layer, new features, and quick turnaround on changes, and even more so if you are in the software business and want to cash in on your project (we do all this stuff to finally get sold, right?), then using domain objects directly in your views would probably be better.
Is obfuscation the best answer for protecting our code?
Especially in web projects, when you want to deliver your web project as libraries of code to your customer (the person who ordered it).
Edited:
At first my priority is server-side code, and second client-side code, but the main goal is this: when you want to deliver a complete web project, and you have made every piece of your code into components and DLLs, how effectively can you protect them and stop others from recovering your code from them?
Edited:
The problem is that I want to protect the code I've written for the company that ordered it; all my code is inside some DLLs. They can reverse engineer those and get my code, and I want to prevent them from doing so. Is there any way to do that or not?
I think this is a unique question, and I didn't ask what obfuscation is, nor for tools for doing it; beyond that, I think this is separate from client-server security.
Sorry if my question wasn't clear at first, but if it is really a case for deletion, no problem for me.
Also, I wanted to take a comparative look at this problem and its solutions, because I don't think obfuscation is the only possible solution; maybe there are some logical workarounds for this problem.
Maybe not the best. If you are really ambitious, you can write your own web server (plugin).
But is it worth the effort?
Software is similar to a bike in the Netherlands: there is no known protection that is 100% safe. You either use better protection than the other bikes (thieves are lazy), or you obfuscate the bike so they won't take it.
Another way to increase the level of protection is to use custom-made ActiveX code to store mission-critical algorithms. Of course, that can be reverse engineered too, but JavaScript is easier to reverse engineer.
What exactly are you trying to protect your code from?
Does your client-side code contain valuable business logic?
If not: you shouldn't bother obfuscating something that doesn't have much value. Personally, I think client-side code theft is something that people are far too concerned about. 99% of web apps don't really have anything special in terms of implementation on the client side. What you need to worry about more is someone ripping off the idea or visual look, which you obviously can't obfuscate.
If it does: you need to consider refactoring that logic out of the client side, as even with heavy obfuscation, a determined party will always be able to untangle it relatively easily. The code that adds real value to your app should ideally be running on your servers where it's considerably more difficult to get access to.
Even if people stealing your HTML markup or JavaScript were something to worry about (and it probably isn't), obfuscation doesn't really solve the problem. In my opinion it is a waste of effort and money.
Hosting a critical function as a web service is probably the most sure way to protect it. It keeps the code out of the user's hands entirely. But then you're stuck hosting a service, and your users have to be on line to use your functionality.
Obfuscators help by hiding useful names and replacing control flow with weird but logically equivalent alternatives. They might thwart an amateur, but they'll only slow down a skilled reverse engineer for a few minutes, and they won't stop someone who is determined to penetrate your secrets.
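As a purely illustrative, hand-made before-and-after (not the output of any particular tool), name obfuscation turns this:

public class DiscountCalculator
{
    public decimal ApplyLoyaltyDiscount(decimal total, int yearsAsCustomer)
    {
        return yearsAsCustomer >= 5 ? total * 0.9m : total;
    }
}

into something logically identical but much less self-explanatory:

public class a
{
    public decimal b(decimal c, int d)
    {
        return d >= 5 ? c * 0.9m : c;
    }
}

Control-flow obfuscation goes further and rewrites the method bodies themselves, but the program still has to do the same thing, which is why a determined reverse engineer can always work it back out.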
If you really want to protect your code, you should write native code using a native code compiler (C++, Delphi). Even this does not guarantee that your code is 100% safe, because any experienced developer can read assembly and essentially disassemble the native code program.
A determined hacker will always find a way to get to what they want.
The best we can do is to make it hard or painful for the would-be hacker to get at our code and the following options can help us:
Customize the CLR engine
Run an obfuscation tool over your code and use name and control flow obfuscation and string encryption
Make the application a Web-based application where all your proprietary code sits on a server somewhere
Watermark your code using your own custom techniques to "throw off" the would-be hacker
Implement techniques to prevent debugging (this is a very advanced topic!)
I really like a comment made by one of the head developers of the .NET framework where he said that he does not feel it's really the fact that others can get at our code that should be a concern to us, but rather, we should concern ourselves with the level of support we provide with our products.
So if we provide a good support base, it does not matter what the hackers do with our code, because the clients will trust us and our ability to support them using our product and not some cheap hacker-hacked program.
NO, obfuscation is not the best way to protect your code.
The tool you need to use is "copyright".
There is no (technological) way you can protect your code from someone determined enough (provided they have access to the binaries / scripts).
What you can do is prevent them from legally modifying/distributing your code.
The normal server-side code in Web projects should under no circumstances be visible to the outside world. So there is no point in obfuscating the code.
Besides that, two minor points:
JavaScript code is visible to the user and can be obfuscated. Minifying JavaScript to save bandwidth is recommended anyway, and minified JS is obfuscated to a degree as well.
Also important: on production systems, the customErrors configuration setting should be set to RemoteOnly or On to avoid showing a stack trace with too much code detail.
If your client side code has any broad value to others, it will get reverse engineered regardless of any obfuscation.
The reality is that it's likely not going to be broadly useful to many people, and there is a lot of other code out there to look at, so it's probably not worth doing more than minifying the code -- which is plenty of obfuscation, and if your code is large, it will improve download speed.
Have you considered the alternative? That it's a good thing to give somethings back to the community? I'm sure you've looked at the code of more than one site, no?