Simple explanation of "Unified Modelling Language", why is it important? - asp.net

I am new to ASP.NET and I was told that I need to know UML thoroughly to build successful software. Is this correct? I mean, can't I just "code and fix" and "model" in my brain? How important is UML, and what is the best way to learn it?

UML is a standard used to convey information about the design of an object-oriented software system in a (mostly) graphical format.
It is important as it makes communicating about such a system easier.

UML is a simple way to graphically document a system and its interactions. Its value to you is not necessarily in UML itself, but in documenting a system before coding or modifying it.
I see the advantages as:
You are forced to think about the whole design before you start building. A drawing is a lot easier to change than a whole lot of code, so mistakes at this stage are easier to rectify.
It helps you understand the system and its interactions far better than code alone can. For all the self-documenting code you can write, a good diagram will nearly always be clearer and faster to pick up.
The disadvantages are:
It takes time to produce, when you would probably rather be coding and experimenting.
It can get out of date if effort is not put into keeping it current.

I mean, can't I just "code and fix" and "model" in my brain?
You can. It's done every day. A lot of what goes through your mind at the time, and how you convey your ideas on napkins, is already a form of UML without its rigid nomenclature.
But if you're working with colleagues on big systems, there may be occasions where they speak the language.

Related

What is a good language to develop in for simple, yet customizable math programs?

I'm writing to ask for some guidance on choosing a language and a course of action in learning programming. I apologize if this type of question is inappropriate for Cross Validated; please point me to another forum if that is the case.
I've seen thread after thread with questions from newbies, asking, "What is the best language to start with?" and then it always starts a flame war or someone just answers, "There's no best language, it's best to pick one and start learning it." My question is a little bit more focused than that.
First off, I've been programming my whole life, in very limited capacities. My deepest training was in C++. Whilst in my EECS degree program, I resolved to never be a software developer because I couldn't stand not interacting with people for such long periods of time. Instead I realized I wanted to be a math teacher, and so that is the path I have taken.
But now that I'm well down that path, I've started to realize that perhaps I could develop my own software to help me in the classroom. If I want to demonstrate the Euclidean algorithm, what better way than to have a piece of software that breaks down the process? Students could run that software as part of their studies, and the advanced students might even develop programs for themselves. Or, with an iPad in hand, why not have an app that lets students take their own attendance? It would certainly streamline some of the needs of classroom management.
There's obviously a lot of great stuff already out there for math, and for education, but I want a way to more directly create things specific to my lectures. If I'm teaching a specific way of calculating a percent, I want to create an app that aligns with my teaching style, not just another calculator app that requires the student to learn twice.
The most I use in class right now is iWork Numbers/Microsoft Excel for my stats class. Students can learn the basic statistical functions, and turn some of their data into graphs.
I have dabbled a bit with R, and used Maple in college. I've started the basic tutorials for OS X/iOS development and have actually made good progress making an OS X app that takes a text string, converts it to numbers, and performs encryption using modular addition and multiplication. I sometimes use Wolfram|Alpha to save myself some time in getting quick solutions to equations or base conversions. I know of MATLAB and Mathematica, and recently people have been telling me to check into Python or Ruby. I also know basic HTML, and, while it's forgotten now, learned JavaScript and Perl in college.
If I keep on the path of Obj-C/Cocoa, I think it will have great benefits. Unfortunately, anything I produced for Mac would only be usable on a Mac, so it wouldn't be universal for all of my students. Perhaps, then, learning a web language would be better. Or, if the primary use is mathematical, perhaps my time would be better spent learning the Mathematica programming language, or R, or something based less on GUI and more on simple coding of algorithms, maybe Python or Ruby?
It seems that Mathematica already has a lot of demos for different math concepts, so "why reinvent the wheel?" is also a question I have. I think, overall, it would be good to have more control and design things the way I need. And then, if I do want to make an "Attendance" app or something else, I would already have the programming experience to more easily design something for my iPad or MacBook.
The related question to this is: what is a good language to teach my students? In his TED talk, Conrad Wolfram says one of the best ways to check the understanding of a student is to have them write a program. But if Mathematica does the math virtually automatically for them, then I'm not sure they will get the deeper experience of working out the logic for themselves, like you do when you're writing C or a traditional procedural language.
I know that programming takes time to learn, but I also know that at this point, my goal is not to be able to make an app like "Tiny Wings." With the ease of the App Store, some of my work may become an extra revenue stream, but I see myself as more of a hobbyist, and now a teacher looking to software development specifically for its ability to help me demonstrate mathematical concepts.
I think I will push ahead with Obj-C/Cocoa for OS X/iOS, but if anyone has better guidance regarding all of the other available options, it would be much appreciated. I don't think I would want to go fully to the web (I like apps), but perhaps someone could suggest a nice way of bridging what I produce in Xcode to a universal web version. For example, if you come up with an algorithm in Objective-C, is it easiest to transition it to Ruby and run it online, or is there another approach that works better?
Mathematica is pretty awesome for the first part of your question. I've used the interactive mode (Manipulate[]) for explaining things to my colleagues (and myself). It makes really nice dynamic figures and is fairly expressive (although your code can end up looking like line noise). It is very powerful, but it does far less for you than you might think. It's pretty intuitive, which is a good thing for teaching.
You could use Scala if you want an "easy" way to make a domain specific language for teaching. Python seems to confuse people as a first programming language. Objective C seems like a completely random choice to me.
Mathematica then. It's worth the price. But anything that is interpreted and has an interactive shell is probably better than a compiled language. BBC BASIC?
Nothing beats Haskell for general-purpose mathematical programming. The wiki's quite extensive and the IRC channel (#haskell on Freenode) is great for asking questions. If you statically link your binaries on compilation, you should be able to run your programs on just about any system (with a few exceptions, e.g., libgmp).
Haskell code reads (roughly) like mathematical notation once you get the hang of it, so it can really help to tie things together for your students who are motivated to write their own programs. The purely functional style can be beneficial, as well, since it focuses less on I/O and the marshalling of data (perfectly useful in applications, perhaps less so in pure math), and more on the actual creation and refinement of functions and algorithms. You can even compose functions just as you would on paper.
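For instance, here is a minimal sketch of the Euclidean algorithm the question mentions, written to expose each step so students can follow the process (the function names are invented for the example; only the Prelude is used):

    -- Step-by-step Euclidean algorithm: return every (a, b) pair visited
    -- on the way down, so each division step can be shown to students.
    euclidSteps :: Integer -> Integer -> [(Integer, Integer)]
    euclidSteps a 0 = [(a, 0)]
    euclidSteps a b = (a, b) : euclidSteps b (a `mod` b)

    -- The gcd is the first component of the final pair, composed much as
    -- it would be written on paper.
    myGcd :: Integer -> Integer -> Integer
    myGcd a b = fst (last (euclidSteps a b))

    main :: IO ()
    main = mapM_ print (euclidSteps 1071 462)  -- ends at (21, 0): gcd = 21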
If you want to get really serious, you could also look into Coq or Agda, but those might be a bit much for most classes.
For a Haskell program idea for an educator, check out this link.
A nice list of arguments can also be found at:
Eleven Reasons to use Haskell as a Mathematician and the book The Haskell Road to Logic, Maths and Programming

What does a developer need on the front end to ensure a successful project?

I have an idea for a business that requires a well designed web application. I'm not a rocket surgeon, but I'm smart enough to know that you get what you pay for and am willing to pay for talent. However, I want the development process to go as smoothly as possible and would like to know how to make that happen.
So, what information do developers need (or want) initially from the owner to avoid having to make assumptions about business (or other) requirements? Do I need to create state transition diagrams or write use cases?
Essentially, how do I take the concept in my head and package it in a way that allows the developer to do what they do best? (assuming that is creating good software. haha)
Any advice is appreciated.
Shawn
You may need to reword your question, as it is too general to get a good answer; even some rough details would be helpful.
But the better a vision you have of what you want, the smoother it will be.
I find UML diagrams too confining when you aren't going to be doing the work yourself, as you may not come up with the best design.
So if you start by designing what each page should look like, as you envision it, you can then write up use cases, which are short scenarios.
So, you may write up:
A user needs to be able to log in using OpenID.
This will tell the developer one function that you want, and who you expect to do that action.
But don't specify technologies: you may think that a SOAP service is your best bet, but upon talking about it you may find that there is a better solution.
Use cases are good points to show what you are envisioning, and give text to your page designs.
Talk to the developers. Explain what you want and why you want it. Together you make the flow charts and whatnot. Writing requirements is part of the design process, and it's a good idea to have the developers onboard as soon as possible. Start simple and small, then grow and expand while iterating.
In talking over web services before, I have found the best starting point is drawing on a sheet of paper what you think the site will look like, and add in a few arrows from things you want clickable to the pages that should result. Keep it simple, nothing too fancy, and hopefully you and the developer can come to an understanding of what you want pretty quickly.
Use cases might be best for checking off all the points later in the project about how complete your site is; I haven't really found them to be a helpful starting point, but I'm sure others disagree. (They just seem too tedious to read when actually writing code.)
Same with state transition diagrams; they are too tedious and I think most developers will assume you made mistakes in them anyway. :) Everyone else does... Unless your project hinges very tightly on the correctness of a state machine, I wouldn't really bother.
This book contains some good advice on what constitutes a good statement of requirements from a programmer's point of view. It also has the useful guideline of not trying to set the form of your requirements too early, and a substantial piece on describing the problem you are trying to solve.
I like UI mockups based on actual program/site flows, e.g. registering a customer or placing an order. Diagrams/pictures of GUIs with structured, consistent data examples are unambiguous.
I agree that UML and use cases are only really useful if everyone speaks UML and the projects are of sufficient complexity (few are).
You may want to read up on Agile/Scrum techniques. These are becoming a sort of standard and when properly managed can save weeks of development time.
I find that words don't do a good job of communicating how a system is supposed to work. Wireframes, white-board drawings/transition diagrams, and low-fidelity prototypes are great ways to communicate a concrete idea. One example of a low-fidelity prototype is a "clickable" paper prototype that allows a user to touch "buttons" on paper to go from one drawing to another. It costs very little time (cheaper), but goes a long way to communicate an idea between two parties.
Stay away from formal documentation, UML diagrams, or class (technical documentation) diagrams that don't speak to you. This is what large, risk-averse companies move toward to be more "mature". These are also byproducts of an idea that is hashed out, and it sounds like you're in the hashing out stage.

E-R to OO design strategies?

I'm still learning all of the powers of OO design and have much more experience in database (in particular, E-R) designs. Each time I approach a problem and attempt to come up with a design following OO strategies, my diagrams (UML class diagrams, for example) come out looking like an ERD. I've read/heard it's then smart to map a class to each table and work from there... But this never really seems to get me anywhere, and my designs have very high (bad) coupling, which, as I understand, is a big "no-no" in OO.
A few google searches returned a few hits on moving from E-R to OO, but nothing that really drilled it home for me. Does anyone have any materials on this topic, or have perhaps struggled with this similar problem?
To expand just a bit, my attempted OO designs tend to move towards an implied persistent data storage element which doesn't necessarily exist in an OO design.
Thanks for any guidance!
Database Systems: A Practical Approach to... is the textbook (chapters 3-4) I would recommend.
I think the fundamental differences between data (the relational data model) and programs are the main gap between E-R and OO design. You may draw a database schema in UML, but that doesn't mean the relational data model takes on any sensible meaning in the OO paradigm.
Programs, on one side, focus on processing correctly, with reusability as the guiding discipline. Data, on the other, focuses on persisting correctly, with performance as the guiding discipline.
Although there are techniques to ease the gap (like O-R mapping), the basic purposes of data and programs are not the same.
So I think OO is just a technique for abstracting the design, not the goal of the design.
I'd suspect from what you write that you need more experience with / knowledge about some core OO design principles, in particular inheritance and polymorphism. A good understanding of these concepts can help you better understand the relationships between your objects, and the ways in which they should be coupled.
Given your comments about your OO designs moving towards an implied persistent data storage element, you might also want to look into such concepts as Aspect Oriented Programming (Spring is a great tool for this). Also, look into what an ORM such as Hibernate can do, and how it does it (this may be a bit advanced, though).
There's really only one way to learn object-oriented software design: learn it from scratch. You won't find any shortcuts for converting your knowledge of another software design method into an understanding of object-oriented design. You need to start with the basics, just like everyone else: encapsulation, abstraction, is-a and has-a relationships, etc.
The E-R conceptual model can help you whenever you need to design relationships between entities. At design time you shouldn't care how they are going to be implemented: they can be converted into classes, DataTables, XML, ....
What you are asking is a bit different. In a small system, or when the business logic is not too complex, it is possible to have a domain model object that looks like the data table. In this case you can have an object per table. This pattern is called the "Table Module" pattern:
http://martinfowler.com/eaaCatalog/tableModule.html
Using NHibernate or EF or any other ORM in a system like the one mentioned earlier is a waste of resources and time, because you are adding a layer that you don't really need.
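As a rough sketch of the Table Module idea (illustrative only; the Order type and totalFor function below are invented for the example, with Haskell records standing in for table rows where an OO language would use one class wrapping a record set):

    -- Table Module sketch: the row type mirrors the table's columns, and
    -- one module gathers all the business logic that operates on those rows.
    data Order = Order { orderId :: Int, customer :: String, amount :: Double }

    -- All order-related logic lives together, next to the row type.
    totalFor :: String -> [Order] -> Double
    totalFor name = sum . map amount . filter ((== name) . customer)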

I am compiling a rules of programming mindset for my team: What are yours? [closed]

I have been working for a while on a list that helps me share the why behind a programming approach and way of thinking, as much as the how of doing something.
For this, I wanted to build a list of things that are:
best practice,
best thought,
best approach...
that help a programmer's ability to analyze, think, approach, solve and implement in the most effective way.
I have seen dozens of incredibly valuable comments in questions throughout Stack Overflow, but I couldn't find a place where we keep them together. There is already a "most controversial opinions" question on Stack Overflow. However, I'm just looking for sage insights that can be shared, help my team, and help me approach and solve problems better through better programming.
Hopefully this can be one place to gather the one- or two-liners that are concise, profound, and easy to share, repeat, and review. If we keep it to one rule per answer, it might be easiest to vote up/down.
I'll start with the first.
DRY - Don't Repeat Yourself - In code, comments or documentation.
Always leave the code a little better than when you found it.
Code does not exist until entered into a version control system.
Don't be afraid to admit "I don't know" and ask.
10 minutes asking someone could save a day pulling your hair out!
KISS - Keep it simple, stupid.
Pick the simplest solution that works.
Don't make things (too) complicated before they need to be.
Just because everyone else is using some complicated framework to solve their problem, doesn't mean you have to.
Don't reinvent the wheel
If there ought to be a function for it in the core library - there probably is.
Maintainability is important.
Write code as if the person who will end up maintaining it is crazy and knows where you live.
Someone else won't fix it.
If a problem comes to your attention, take ownership long enough to ensure it will be taken care of one way or another.
Don't optimize unless there's a demonstrable problem.
Most of the time when people try to optimize code before it's been proved necessary, they'll spend a lot of resources, make the code harder to read and maintain, and achieve no noticeable effect. Sometimes they'll even make it worse.
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."
- Donald Knuth
How hard can it be?
Don't let any problem intimidate you.
Don't Gather Requirements -- Dig for Them
Requirements rarely lie on the surface. They're buried deep beneath layers of assumptions, misconceptions, and politics
via The Pragmatic Programmer
Follow the SOLID principles:
Single Responsibility Principle (SRP)
There should never be more than one reason for a class to change.
Open-Closed Principle (OCP)
Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.
Liskov Substitution Principle (LSP)
Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.
Interface Segregation Principle (ISP)
Clients should not be forced to depend upon interfaces that they do not use.
Dependency Inversion Principle (DIP)
A. High-level modules should not depend upon low-level modules. Both should depend upon abstractions.
B. Abstractions should not depend upon details. Details should depend upon abstractions.
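As one concrete illustration of that last principle (a minimal sketch only; the Storage class, InMemory type, and report function are invented for the example, with a Haskell typeclass standing in for the abstraction an OO language would express as an interface):

    -- Dependency Inversion sketch: the high-level report code depends on
    -- the Storage abstraction, never on a concrete data source.
    class Storage s where
      fetchAll :: s -> IO [String]

    -- A low-level detail that implements the abstraction.
    newtype InMemory = InMemory [String]

    instance Storage InMemory where
      fetchAll (InMemory rows) = return rows

    -- The high-level module knows nothing about InMemory; swapping in a
    -- database-backed instance requires no change here.
    report :: Storage s => s -> IO ()
    report s = fetchAll s >>= mapM_ putStrLn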
Best Practice: Use your brain
Don't follow any trend/principle/pattern without thinking about it
I think almost everything listed under "The Zen of Python" applies to every "Rules of Programming Mindset" list. Start with 'python -c "import this"':
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
Test Driven Development (TDD) makes coders sleep better at night
Just to clarify: Some people seem to think TDD is just an incompetent coder's way of limping from A to B without borking everything up too much, and that if you know what you're doing, that means there is no need for (unit) testing methodologies. That completely misses the point of Test Driven Development. TDD is about three (update: apparently four) things:
Refactoring magic. Having a full set of tests means you can pull off otherwise insane refactoring stunts, juggling the entire structure of your application without missing even one of the two hundred crazy subtle side effects that result from it. Even the best programmers are reluctant to refactor their core classes and interfaces without good (unit) test coverage, because it's damn near impossible to track down all the little 'ripple effects' it causes without them.
Detecting pitfalls early. If you are writing tests the right way, it means forcing yourself to consider all the fringe cases. Often, this leads to better design choices once the actual development begins, because the coder has already considered some of the trickier situations that may call for a different inheritance structure or a more flexible design pattern. The need for these changes is often not apparent - or intuitive - during initial planning and analysis, but those exact changes can make the application much easier to extend and maintain down the line.
Ensuring that tests get written. TDD requires you to write the tests before writing the code. Sure, that can be a pain in the ass, since writing tests is tedious compared to writing actual code - and often takes longer, too. However, doing so is the only way to make sure the tests will be written at all. If you think you'll remember to write the tests once the code is done, you're almost always wrong.
Forcing you to write better code. Since TDD forces all code to be testable (you don't write code before there is a test for it), it requires you to write more decoupled code so that you can test the components in isolation. So TDD forces you to write better code. (Thanks, Esko)
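To make the test-first rhythm concrete, here is a minimal sketch using Haskell's HUnit (the clamp function and its tests are invented for the example): the tests are written first and pin down the fringe cases, then the implementation is written to make them pass.

    import Test.HUnit

    -- The tests come first: they force the fringe cases (below and above
    -- the range) to be considered before any implementation exists.
    tests :: Test
    tests = TestList
      [ TestCase (assertEqual "inside range" 5  (clamp 0 10 5))
      , TestCase (assertEqual "below range"  0  (clamp 0 10 (-3)))
      , TestCase (assertEqual "above range"  10 (clamp 0 10 42))
      ]

    -- The implementation is then written to make the tests pass.
    clamp :: Int -> Int -> Int -> Int
    clamp lo hi = max lo . min hi

    main :: IO ()
    main = runTestTT tests >>= print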
Google before you ask your colleague and interrupt his coding.
Less code is better than more, as long as it makes more sense than lots of code.
Habits of the lazy coder
The first time you are asked to do something, do it (right).
The second time you are asked to do it, make a tool that does it automatically.
And the third time, if the tool doesn't cut it, design a domain specific language for generating more tools.
(not to be taken too seriously)
Be a Catalyst for Change
You can't force change on people. Instead, show them how the future might be and help them participate in creating it.
via The Pragmatic Programmer
Don't Panic When Debugging
Take a deep breath and THINK! about what could be causing the bug.
via The Pragmatic Programmer
You may copy and paste to get it working, but you may not leave it that way.
Duplicated code is an intermediate step, not a final product.
It's Both What You Say and the Way You Say It
There's no point in having great ideas if you don't communicate them effectively.
via The Pragmatic Programmer
Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.
From: Coding Horror
Build Breaker Buys Lunch
Publish Early, Publish Often
Build it correct first. Make it fast second.
Frequently conduct code reviews
Code review, and consequently refactoring, is an ongoing task. Here are a few benefits of code review, in my opinion:
It improves code quality.
It helps refactor reusable code into reusable libraries.
It helps you learn from your fellow developers.
It helps you learn from your mistakes and refresh your memory about genius code you have written before.
Anything that could affect how the application runs should be treated as code, and that means putting it in version control. Especially build scripts and database schema and data (.sql) files.
Take part in open source development
If you are using open source code in your projects, remember to post your bugfixes and improvements back to the community. It's not a development best practice per se, but it's definitely a programmer mindset to strive for.
Understand the tools you use
Don't use a pattern until you've understood why you're using it; don't use a tool without knowing why; don't rely on your framework or language designer always being right for your situation, but also don't assume they're wrong until proven to be!
Convention over Configuration
Especially where conventions are strong and some flexibility can be sacrificed.

Design or prototype first? [closed]

When first approaching a project, is it best to step back and think everything through, or to just dive in, start coding, and polish at a later date? Essentially, do you design first, or try to rapidly prototype?
I have been burned by both methods. Sometimes I try to think everything through, but when I actually get down to the nitty-gritty I encounter problems that I didn't take into consideration, and sometimes when I code first I end up with code that needs to be redone to fit in with a better overall design. A lot of my problems stem from inexperience, but any advice is welcome.
Go incrementally and iteratively.
Design a bit, implement a bit.
Starting with a design, you can suffer from a tunnel effect where you don't get any real feedback until you actually implement something.
Starting without design, you can take decisions you'll regret.
The ideal situation is to be able to implement a very skeletal end-to-end version of your system that can be tested, and demonstrated to the customer.
It is always safer to design first, but this does not mean prototyping does not work. The real problem with prototyping is resisting the urge to keep the code you already wrote instead of throwing it away when the time comes to do the design.
There is no silver bullet. It seems like design first is the preferred approach. But you will not be able to predict all complications that can arise while implementing your design. Some of them could potentially be show stoppers. Plus, if you're writing for a client, it's good to be able to show something just to make sure that you're on the same page.
At my workplace we do both - we do a rapid prototype, just to get feedback and get an idea of any potential problems. Then we do a formal design and formal implementation. In most cases we are able to salvage a lot of code from the prototyping stage. I like this approach, since we usually end up with clean, maintainable code.
See Gall's Law. The key is to iterate: design a little, implement a little, test a little, then repeat until you (or your customers) are satisfied. This is the essence of the new breed of "agile" methodologies.
It depends.
Prototyping is most useful when the requirements or a solution aren't necessarily clear. As an example, I am doing a data warehousing project in an environment (large commercial insurance) where financial reconciliation is a big deal. This project has involved a large prototyping exercise to get a system that will reconcile to the financials. As the business rules surrounding this were not well documented, the prototype was instrumental in exposing all of the corner cases.
In other cases, a design-first approach might be more appropriate. This is most applicable where requirements and a sensible solution architecture are reasonably obvious.
You must have some idea of a cohesive architecture before you start working. This is especially true of large scale systems.
Prototyping could be used for particular aspects of the design, e.g. presentation layer.
I think it depends on what kind of business requirements you have up-front. If they are (relatively) detailed and complete, then I'd design based on those requirements. If you have barely anything to work with in the beginning, then prototype out and show your customer what you got, to receive further requirement info.
You should develop using agile methodologies. Simply put, you design as you go. The team, together with the product owner, defines a list of topics to develop, orders them by importance, and splits the development into iterations. Each iteration has features to be developed, and each feature is designed at the start of its iteration.
See more here.
When first approaching a project, prototype. But don't prototype everything. Prototype one important thing (one "use case" if that means anything) and "turn the inner eye to follow its path" - keep an eye out for the practical problems you encounter in trying to get that one thing done.
Now that you have some idea what it takes to do an important thing, you can design from more than just first principles.
Of course, this assumes you're working in an environment where you can turn out prototypes at minimal cost to ongoing development efforts. But if you're working in such an environment, pepper your design discussions liberally with prototypes. With any luck you may get to keep some of them.
Note that agile methods are not an excuse to avoid designing; they just encourage testing the design more frequently, and in smaller increments.
I like to sketch the design and break its elements down until reasonably sure that there are no obvious unknowns or risks; unknowns and risks are highlighted for 'spike' projects, with a time-box for determining feasibility and notes on possible alternatives if the preferred methods prove unworkable.
Once comfortable with the overall architecture, I jump into the features bottom-up (or in priority order) to complete the design, write the initial tests, then implement.
EDIT: note that the question "design or prototype first?" makes a bad assumption, i.e. that it is possible to prototype without doing any design, which of course is not the case (unless you are using the million-monkeys methodology).
Design first, unless you're willing to take the risk of throwing out all the work put into your prototype when you find it can't do what you need it to do. At a minimum, you should make some high level designs for your project that can help you make some decisions about how you're going to build your prototype so that you will have a minimum of wasted effort.
If I know what I want to build, I just go right to design.
If I'm building something for a client, then I prototype to tease out more specific requirements from the users.
Maybe not an answer but a suggestion from my experience.
In most cases I'd have been better off if I had started coding earlier. You can design until the cows come home, but if the cows are on the horizon when you start coding, you might find your careful design hard to implement in time.
