In a paragraph or less, what are the purpose and benefits of pointers?

See title. That's all I have to ask. The net doesn't have many succinct answers to this question. Please keep in mind stack vs heap. Explain as you would to a complete beginner. Just looking for the "why" not the "how".
Edit: Are pointers a way to get large objects out of the stack?

When passing a huge object from one piece of your program to another to be worked on, such as an instance of a class or something with a large amount of data like an image or video, copying every single bit of that data would be very inefficient. Instead, you can pass a tiny memory address (a pointer) that the receiving part of your program can then use to get to the object to be worked on.
Aside from that huge aspect, they offer a lot of flexibility, but I need more than a paragraph for that.
When you get into managed code like C# or Java, EVERYTHING is done with pointers/references, but it's all behind the scenes and you don't have to deal with them like you would in C++ or another similar language. But it's still crucial to understand how they work.
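Here's a minimal C# sketch to make the copy-versus-address idea concrete (the type and names are invented for illustration). A struct is a value type, so the first call copies the whole thing into the callee's stack frame, while the in parameter passes only a reference, i.e. an address, under the hood:

struct BigBlob                     // a value type: passed by copy
{
    public long A, B, C, D;        // imagine many more fields here
}

class PointerDemo
{
    // Every call copies the entire struct onto the callee's stack frame.
    static long SumByValue(BigBlob blob)
    {
        return blob.A + blob.B + blob.C + blob.D;
    }

    // 'in' passes a read-only reference: only an address crosses the call.
    static long SumByReference(in BigBlob blob)
    {
        return blob.A + blob.B + blob.C + blob.D;
    }

    static void Main()
    {
        var blob = new BigBlob { A = 1, B = 2, C = 3, D = 4 };
        System.Console.WriteLine(SumByValue(blob));      // copies all the data
        System.Console.WriteLine(SumByReference(blob));  // copies only an address
    }
}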
Edit in response to:
"why would I pass a large object around if I don't need to work on
it?"
You wouldn't. However (correct me if I'm straying from what you're asking), what you'll learn if you continue into computer science is that a piece of your program should be as simple as possible; it should do only one thing. Commonly known as the Single Responsibility Principle, this dictates that you will have many seemingly tiny parts of your program that all work together to accomplish the overarching goal. That means a lot of those tiny pieces are going to need to work on the same objects, the same data, and use the same tools to get the job done. Let's look at a hypothetical.
You're coding a simple image editing application. You're going to need a cropping tool, a paint brush tool, a selection tool, and a resize tool. Each of these tools is going to need its own place in your program (a class, or more likely many classes that work together), and each class will have many smaller pieces (methods/functions and other things) that work together to accomplish the goal of that class. Every single one of these classes and methods is most likely going to need to look at or modify the image data. With a pointer you can provide them with a memory address instead of making an entire copy of the image. That way, when one of the classes or methods makes a change, you don't need to worry about managing all those copies and making sure they all get the same change.
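A hypothetical sketch of that setup in C# (class and member names are mine): both tools store the same reference, so an edit made through one is immediately visible through the other, and no copy of the pixel data is ever made.

class ImageData
{
    public byte[] Pixels = new byte[1920 * 1080 * 4];    // ~8 MB of pixel data
}

class CropTool
{
    private readonly ImageData _image;                   // an address, not a copy
    public CropTool(ImageData image) { _image = image; }
    public void Apply() { _image.Pixels[0] = 255; }      // edits the shared data in place
}

class BrushTool
{
    private readonly ImageData _image;                   // the same address as CropTool's
    public BrushTool(ImageData image) { _image = image; }
    public void Apply() { /* sees CropTool's change immediately */ }
}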

Pointers allow pass-by-reference and shared data structures, which has two big benefits: it saves memory and CPU overhead by not making copies, and it enables complex communication patterns through changes to shared data.

Related

What draws the line between reflective programming & non-reflective programming like simple softcoding?

Not sure if this is the right place to bring up this kind of discussion, but I'm reading https://en.wikipedia.org/wiki/Reflective_programming and feel I need a bit more clarification on where the line between "reflective" and non-reflective programming really goes. There's a series of examples of reflective vs. non-reflective code towards the end of the Wikipedia page, where all the "reflective" examples seem to access data with string identifiers. But what would actually differentiate this from, say, putting a bunch of objects in a collection/array of some sort and accessing them by an index, compared to accessing them by an array of string identifiers that you can use to fetch the desired object?
In some languages you can clearly see the difference and benefit. Python and JS, for example, have eval, which lets them run arbitrarily complex code at runtime and completely change the control flow of an application, not limited to accessing mere special-type objects. But in the examples listed on the wiki page you can also find cases where the "reflection" seems limited to accessing specially declared objects by their name (at which point I'm questioning whether you can really argue the program is "modifying" itself at all, at least from a high-level, conceptual point of view).
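To make my question concrete, here's a small C# sketch (the Config type is just for illustration). Fetching an object from a dictionary by a string key feels like ordinary data lookup to me; the reflective version differs in that the string is resolved against metadata the runtime keeps about the program's own types:

using System;

class Config
{
    public string Theme { get; set; } = "dark";
}

class ReflectionDemo
{
    static void Main()
    {
        var config = new Config();

        // Non-reflective: the member is resolved at compile time.
        Console.WriteLine(config.Theme);

        // Reflective: the member is looked up at runtime by its string name,
        // through the runtime's metadata about the program's own types.
        var prop = typeof(Config).GetProperty("Theme");
        Console.WriteLine(prop.GetValue(config));
    }
}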
Does the way the underlying machinery is produced by the compiler (or the way the interpreter reads your code) affect what's considered to be reflective?
Is it the ability to redefine the contents of existing objects, or to declare new objects without a "base class"/preexisting structure created at compile time, that differentiates reflective from non-reflective code? If so, how does this square with the examples on the Wikipedia page, which don't seem to showcase this ability?
Can the meaning of "reflective programming" vary slightly depending on the scenario?
Any thoughts appreciated <3

What does a developer need on the front end to ensure a successful project?

I have an idea for a business that requires a well designed web application. I'm not a rocket surgeon, but I'm smart enough to know that you get what you pay for and am willing to pay for talent. However, I want the development process to go as smoothly as possible and would like to know how to make that happen.
So, what information do developers need (or want) initially from the owner to avoid having to make assumptions about business (or other) requirements? Do I need to create state transition diagrams or write use cases?
Essentially, how do I take the concept in my head and package it in a way that allows the developer to do what they do best? (assuming that is creating good software. haha)
Any advice is appreciated.
Shawn
You may need to reword your question, as it is too general to get a good answer; even some rough details would be helpful.
But the clearer your vision of what you want, the smoother it will go.
I find UML diagrams too confining when you aren't going to be doing the work yourself, as you may not come up with the best design.
So, if you start by designing what each page should look like, as you envision it, then you can write up use cases, which are short scenarios.
So, you may write up:
A user needs to be able to log in using OpenID.
This will tell the developer one function that you want, and who you expect to do that action.
But, don't put in technologies, as you may think that a SOAP service is your best bet, but upon talking about it you may find that there is a better solution.
Use cases are good points to show what you are envisioning, and give text to your page designs.
Talk to the developers. Explain what you want and why you want it. Together you make the flow charts and whatnot. Writing requirements is part of the design process, and it's a good idea to have the developers onboard as soon as possible. Start simple and small, then grow and expand while iterating.
In talking over web services before, I have found the best starting point is drawing on a sheet of paper what you think the site will look like, and add in a few arrows from things you want clickable to the pages that should result. Keep it simple, nothing too fancy, and hopefully you and the developer can come to an understanding of what you want pretty quickly.
Use cases might be best for checking off all the points later in the project about how complete your site is; I haven't really found them to be a helpful starting point, but I'm sure others disagree. (They just seem too tedious to read when actually writing code.)
Same with state transition diagrams; they are too tedious and I think most developers will assume you made mistakes in them anyway. :) Everyone else does... Unless your project hinges very tightly on the correctness of a state machine, I wouldn't really bother.
This book contains some good advice on what constitutes a good statement of requirements from a programmers point of view. It also has the useful guideline of not trying to set the form of your requirements too early, and a substantial piece on describing the problem you are trying to solve.
I like UI mockups based on actual program/site flows, e.g. registering a customer or placing an order. Diagrams/pictures of GUIs with structured, consistent data examples are unambiguous.
I agree that UML and use cases are only really useful if everyone speaks UML and the projects are of sufficient complexity (few are).
You may want to read up on Agile/Scrum techniques. These are becoming a sort of standard and when properly managed can save weeks of development time.
I find that words don't do a good job of communicating how a system is supposed to work. Wireframes, white-board drawings/transition diagrams, and low-fidelity prototypes are great ways to communicate a concrete idea. One example of a low-fidelity prototype is a "clickable" paper prototype that allows a user to touch "buttons" on paper to go from one drawing to another. It costs very little time (cheaper), but goes a long way to communicate an idea between two parties.
Stay away from formal documentation, UML diagrams, or class (technical documentation) diagrams that don't speak to you. This is what large, risk-averse companies move toward to be more "mature". These are also byproducts of an idea that is hashed out, and it sounds like you're in the hashing out stage.

Abstraction or not?

The other day I stumbled onto a rather old Usenet post by Linus Torvalds. It is the infamous "You are full of bull****" post, where he defends his choice of plain C for Git over something more modern.
In particular, this post made me think about the enormous number of abstraction layers that accumulate one over the other where I work. Mine is a Windows .NET environment. I must say that I like C# and the .NET environment; it really makes most things easy.
Now, I come from a very different background made of Unix technologies like C and a plethora of scripting languages; to me, also, OOP is just one, and not always the best, programming paradigm. I often struggle (in a working kind of way, of course!) with my colleagues (one in particular), because they appear to belong to the "any problem can be solved with an additional level of abstraction" church, while I'm more of the "keep it simple" school. I think there is a very different mental approach to problems that maybe comes from exposure to different cultures.
As a very simple example, for the first project I did here I needed some configuration for an application. I wrote a ten-line class to load and parse a txt file, located in the program's root dir, containing colon-separated key/value pairs, one per row. It worked.
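Something in the spirit of that ten-line class, sketched from memory in C# (the exact details differ):

using System.Collections.Generic;
using System.IO;

static class SimpleConfig
{
    // Loads colon-separated "key: value" pairs, one per row.
    public static Dictionary<string, string> Load(string path)
    {
        var values = new Dictionary<string, string>();
        foreach (var line in File.ReadAllLines(path))
        {
            var parts = line.Split(new[] { ':' }, 2);
            if (parts.Length == 2)
                values[parts[0].Trim()] = parts[1].Trim();
        }
        return values;
    }
}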
In the end, to standardize the approach to the configuration problem, we now have a library, deployed on every machine running each configured program, that calls a service which, at startup, loads an XML file containing references to other XML files, one per application, that contain the configurations themselves.
Now, it is extensible and made up of fancy reusable abstractions, providers and all, but I still think that if one day we really do happen to reuse part of it, in the time it took to build we could have written the needed code from scratch, or copy/pasted the old code and modified it.
What are your thoughts about it? Can you point out some interesting references dealing with the problem?
Thanks
Abstraction makes it easier to construct software and understand how it is put together, but it makes certain issues around performance and security harder to fully understand, because the abstraction layers introduce their own kinds of complexity.
Torvalds' position is not absurd, but he is an extremist.
Simple answer: programming languages provide data structures and ways to combine them. Use these directly at first, do not abstract. If you find you have representation invariants to maintain that are at a high risk of being broken due to a large number of usage sites possibly outside your control, then consider abstraction.
To implement this, first provide functions and convert the call sites to use them without hiding the representation. Hide the data representation only when you're satisfied your functional representation is sufficient. Make sure at this time to document the invariant being protected.
An "extreme programming" version of this: do not abstract until you have test cases that break your program. If you think the invariant can be breached, write the case that breaks it first.
Here's a similar question: https://stackoverflow.com/questions/1992279/abstraction-in-todays-languages-excited-or-sad.
I agree with @Steve Emmerson - 'Coders at Work' would give you some excellent perspective on this issue.

Repeater or ListView vs. concatenated HTML

After spending a not-insignificant amount of time converting a page that used concatenated html, like
string output = "";
output += "<ul>";
foreach (MyClass item in MyItems)
{
    output += "<li>" + item.Name + " - " + item.SomeProperty.ToString() + "</li>";
}
output += "</ul>";
literalPlaceHolder.Text = output;
to use the ListView control, I've just discovered that the original developer went back and converted the page back to using concatenated HTML. My personal feeling is that ListViews and Repeaters lend themselves to cleaner, more informative markup that can be edited by someone with less C# experience, and that they are faster and use less memory. At the very least the page should be using a StringBuilder instead of a string. Does anyone have a good argument here? I have a feeling it's going to cause a major conflict when I bring this up.
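For reference, the StringBuilder variant would look roughly like this; it produces the same markup with far fewer intermediate string allocations:

var output = new System.Text.StringBuilder();
output.Append("<ul>");
foreach (MyClass item in MyItems)
{
    output.Append("<li>").Append(item.Name).Append(" - ")
          .Append(item.SomeProperty).Append("</li>");
}
output.Append("</ul>");
literalPlaceHolder.Text = output.ToString();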
I disagree with the answer above.
First, from a technical point of view.
Using concatenated strings in the code-behind is clearly mixing the View with your Logic. If I were browsing this application, I would wonder where the output HTML comes from, since the aspx (or ascx) would be empty.
Furthermore, if you are using controls like the Repeater while he is concatenating HTML and just outputting it to the page, there will be no consistency between the different areas of your app, and it will become a mess to know where to look when a bug comes up or a feature has to be added.
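To illustrate the separation being argued for, with a Repeater the markup side could look roughly like this (the control ID is invented, and Eval assumes Name and SomeProperty are public properties), leaving the code-behind with just a data-binding call:

<asp:Repeater ID="MyItemsRepeater" runat="server">
    <HeaderTemplate><ul></HeaderTemplate>
    <ItemTemplate>
        <li><%# Eval("Name") %> - <%# Eval("SomeProperty") %></li>
    </ItemTemplate>
    <FooterTemplate></ul></FooterTemplate>
</asp:Repeater>

// Code-behind, e.g. in Page_Load:
MyItemsRepeater.DataSource = MyItems;
MyItemsRepeater.DataBind();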
I would suggest you simply ask him why he prefers to concatenate HTML, and what his reasons were for switching your code back to his way of doing things without asking first.
I also disagree from a teamwork point of view.
In a team, communication is everything. Without communication, you're doomed to fail. Don't be afraid to communicate. When there is a question, you have to ask it, and you have to make things clear. Whether you win or he wins is not the point. The point is to understand each other and to work as a team, together, not against each other.
First of all, I think it's not very cooperative for another developer to just replace your code like that, with no discussion. You will definitely have to make a good case for your position if you want to prevail in this one.
I agree that the standard ASP.Net controls are easier for less-experienced developers to deal with, if that is a concern in your situation.
I'm not sure I agree with you about StringBuilder, which has often been the source of raging debates here and elsewhere. If your list is not lengthy, there may not be sufficient justification for a StringBuilder here.
One aspect that a seasoned developer might appreciate about this particular approach is that it is easy to step through and see exactly how each item is being populated. That isn't as easy with a ListView -- you'd have to add an event to catch the items being added, and then put a breakpoint in it.
And finally, I would encourage you to choose your battles carefully. This particular example is not a major design issue. If you foresee other, larger differences of opinion in the future, you might decide to start here with a smaller issue, to establish a way of resolving these kinds of conflicts with your fellow developer. Alternately, you could consider that this is not a big enough deal to fight about, and wait for something of major importance (and I doubt that you will have to wait long). Which way you choose to proceed depends upon the personalities involved.

When is it best to change code to match standards?

I have recently been put in charge of debugging two different programs which will eventually need to share an XML parsing script, at the minimum. One was written with PureMVC, and the other was built from scratch. It originally made sense to write the one from scratch (it saved a good deal of memory), but the memory problems have since been resolved.
Porting the non-PureMVC application will take a good deal of time and effort that would not otherwise need to be spent, but it will make documentation and code-sharing easier. It will also lower the overall learning curve. With that in mind:
1. What should be taken into account when considering whether it is best to move things to one standard?
(On a related note)
Some of the code is a little odd. Because the interpreting app had to convert commands from one syntax to another, it made sense to have an interpreter object. Because there needed to be communication with the external environment, it made more sense to have one object interact with the environment, and for that object to deal with the interpreter exclusively.
Effectively, an anti-Singleton was created. The object would only interface with the interpreter, and that's it. If a member of another class were to try to call one of its public methods, the object would raise an Exception.
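One way to picture the guard, as a rough C# sketch (the actual implementation differed, and the names here are invented):

using System;
using System.Diagnostics;

class EnvironmentGateway
{
    public void Send(string command)
    {
        // Walk one frame up the call stack and reject foreign callers.
        // (Stack walks are fragile under inlining and release optimizations,
        // which is part of what makes this pattern so odd.)
        var caller = new StackTrace().GetFrame(1)?.GetMethod()?.DeclaringType;
        if (caller != typeof(Interpreter))
            throw new InvalidOperationException(
                "Only the Interpreter may talk to the environment.");
        // ... interact with the external environment here ...
    }
}

class Interpreter
{
    private readonly EnvironmentGateway _gateway = new EnvironmentGateway();

    public void Run(string rawCommand)
    {
        _gateway.Send(Translate(rawCommand));
    }

    private string Translate(string rawCommand)
    {
        return rawCommand; // syntax conversion would go here
    }
}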
There are better ways to accomplish this, but it is definitely a bit odd. There are more standard means of accomplishing the same thing, though they often involve the creation of classes or class files which are extraordinarily large. The only solution which I could find that was standards compliant would involve as much commenting and explanation as is currently required, if not more. Considering this:
2. If some code is quirky but effective, is it better to change it to make it less quirky, even if that makes it more unwieldy?
In my opinion this type of refactoring is often not considered in schedules and can only be done when there is extra time.
More often than not, the criterion for shipping code is if it works, not necessarily if it's the best possible code solution.
So in answer to your question, I try and refactor when I have time to do so. Priority One still remains to produce a functional piece of code.
Things to take into account:
Does it work as-is?
As Galwegian notes, this is the only criterion in many shops. However, IMO just as important is:
How skilled are the programmers who are going to maintain it? Have they ever encountered nonstandard code? Compare the cost of their time to learn it (including the cost of delayed dot releases) to the cost of your time to refactor it.
If you're maintaining it, then instead consider:
How much time will dealing with the nonstandard code cost you over the intended lifecycle of the codebase (e.g., the time between now and when the whole thing is rewritten)?
That's hard to guess, but consider that many codebases FAR outlive the usefulness envisioned by their original authors. (Y2K anyone?) I've gradually developed a sense of when a refactoring is worthwhile and when it's not, mostly by erring on the side of "not" too often and regretting it later.
Only change it if you need to be making changes anyway. But less quirky is always a good goal. Most of the time spent on a particular piece of software is in maintenance, so if you can do something to make that easier, you'll be reducing the overall time spent on that piece of code. Nonetheless, don't change something if it's working and doesn't need any modifications.
If you have time, now. If you don't have time and it can be avoided, later.
