Goto is considered harmful, but did anyone attempt to make code using goto reusable and maintainable?

Everyone is aware of Dijkstra's letter to the editor, "Go To Statement Considered Harmful" (available both as an HTML transcript and as a PDF). I was wondering whether anyone has attempted to find a way to make code using gotos reusable, maintainable, and not harmful, either by adding other language extensions or by designing a language that allows for gotos.
The reason I ask is that it occurs to me that code written in assembly language often used gotos and global variables to make the program work well within a limited space. The Atari 2600, for example, had 128 bytes of RAM, and the program was loaded from a ROM cartridge. In that case it was better to use unstructured programming, and to make the most of the freedoms it allows, in order to fit the program into a very limited space.
When you compare this with a game programmed today without the use of gotos, the game takes up much more space.
Then it occurs to me that it might be possible to program with gotos if some rules or other language changes were made to support them; the negative effects of gotos could then be reduced or eliminated. Has anyone tried to make gotos NOT considered harmful by creating a language, or a set of rules to follow, under which gotos are not harmful?
If no one has looked for a way to use gotos in a non-harmful way, then perhaps we adopted structured programming unnecessarily, based solely on this paper? Perhaps there is another solution which allows the use of gotos without the downside.

Comparing gotos to structured programming is comparing a situation where the programmer has to remember what every label in the code actually means and does, and where it is, to a situation where the conditional branches are explicitly described.
As for the advantages of the goto statement regarding the space a program might take: I think that games today are big because of the graphics and sound resources they use. That is, they have to display something like 1,000,000 polygons. The cost of a goto compared to that is completely negligible.
Moreover, structured control statements are ultimately compiled into goto ("jmp") instructions by the compiler when it outputs assembly.
To answer the question: it might be possible to make goto less harmful by creating naming and syntax conventions. Enforcing those conventions as rules, however, is pretty much what structured programming already does.
Linus Torvalds once argued that goto can make source code clearer, but goto is useful in such rare, special cases that I would not dare use it as a programmer.
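To make the compilation point concrete, here is a minimal C sketch (a hypothetical example): a structured loop and the hand-written goto version a compiler effectively reduces it to.

    /* Structured version. */
    int sum_structured(const int *a, int n)
    {
        int sum = 0;
        int i = 0;
        while (i < n) {
            sum += a[i];
            i++;
        }
        return sum;
    }

    /* Roughly what the compiler emits: a test and conditional jumps. */
    int sum_with_goto(const int *a, int n)
    {
        int sum = 0;
        int i = 0;
    loop:
        if (i >= n)
            goto done;
        sum += a[i];
        i++;
        goto loop;
    done:
        return sum;
    }

The two functions compile to essentially the same instructions; the difference is only in what the source obliges the reader to keep track of.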

This question is somewhat related to yours, since I think this is one of the most common situations where a goto is needed.
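For reference, the common situation usually meant here is centralized error-handling cleanup in C, the pattern Torvalds defends in kernel code. A minimal sketch, with hypothetical function and resource names:

    #include <stdio.h>
    #include <stdlib.h>

    /* Acquire two resources; on any failure, release whatever was
       acquired so far, in reverse order, through a single exit path. */
    int process_file(const char *path)
    {
        int ret = -1;
        FILE *f = fopen(path, "r");
        if (f == NULL)
            goto out;

        char *buf = malloc(4096);
        if (buf == NULL)
            goto out_close;

        /* ... work with f and buf ... */
        ret = 0;

        free(buf);
    out_close:
        fclose(f);
    out:
        return ret;
    }

The gotos only ever jump forward to cleanup labels, which keeps the control flow easy to audit.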

Related

Abstraction or not?

The other day I stumbled onto a rather old Usenet post by Linus Torvalds. It is the infamous "You are full of bull****" post, in which he defends his choice of plain C for Git over something more modern.
In particular, this post made me think about the enormous number of abstraction layers that accumulate on top of one another where I work. Mine is a Windows .NET environment. I must say that I like C# and the .NET environment; it really makes most things easy.
Now, I come from a very different background made of Unix technologies like C and a plethora of scripting languages; to me, also, OOP is just one programming paradigm, and not always the best. I often struggle (in a working kind of way, of course!) with my colleagues (one in particular), because they appear to belong to the "any problem can be solved with an additional level of abstraction" church, while I'm more of the "keep it simple" school. I think there is a very different mental approach to problems that maybe comes from exposure to different cultures.
As a very simple example, for the first project I did here I needed some configuration for an application. I made a 10-line class to load and parse a text file, located in the program's root directory, containing colon-separated key/value pairs, one per row. It worked.
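Something like this minimal sketch (shown in C here purely for illustration; the original was a small .NET class, and the names are hypothetical):

    #include <stdio.h>
    #include <string.h>

    /* Read "key:value" pairs, one per line, from a config file. */
    int load_config(const char *path)
    {
        char line[256];
        FILE *f = fopen(path, "r");
        if (f == NULL)
            return -1;
        while (fgets(line, sizeof line, f) != NULL) {
            char *sep = strchr(line, ':');
            if (sep == NULL)
                continue;                          /* skip malformed rows */
            *sep = '\0';
            char *key = line;
            char *value = sep + 1;
            value[strcspn(value, "\r\n")] = '\0';  /* strip trailing newline */
            printf("%s = %s\n", key, value);       /* real code would store these */
        }
        fclose(f);
        return 0;
    }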
In the end, to standardize the approach to the configuration problem, we now have a library, installed on every machine running each configured program, that calls a service that, at startup, loads an XML file containing references to other XML files, one per application, which contain the configurations themselves.
Now, it is extensible and made up of fancy reusable abstractions, providers and all, but I still think that if one day we really do reuse part of it, in the time it took to build we could have written the needed code from scratch, or copy/pasted the old code and modified it.
What are your thoughts about it? Can you point out some interesting reference dealing with the problem?
Thanks
Abstraction makes it easier to construct software and understand how it is put together, but it complicates fully understanding certain issues around performance and security, because the abstraction layers introduce certain kinds of complexity.
Torvalds' position is not absurd, but he is an extremist.
Simple answer: programming languages provide data structures and ways to combine them. Use these directly at first, do not abstract. If you find you have representation invariants to maintain that are at a high risk of being broken due to a large number of usage sites possibly outside your control, then consider abstraction.
To implement this, first provide functions and convert the call sites to use them without hiding the representation. Hide the data representation only when you're satisfied your functional representation is sufficient. Make sure at this time to document the invariant being protected.
An "extreme programming" version of this: do not abstract until you have test cases that break your program. If you think the invariant can be breached, write the case that breaks it first.
Here's a similar question: https://stackoverflow.com/questions/1992279/abstraction-in-todays-languages-excited-or-sad.
I agree with Steve Emmerson: 'Coders at Work' would give you some excellent perspective on this issue.

How do I use Declarations (type, inline, optimize) in Scheme?

How do I declare the types of the parameters in order to circumvent type checking?
How do I tell the compiler to optimize for speed, to make the function run as fast as possible, as in (optimize speed (safety 0))?
How do I make an inline function in Scheme?
How do I use an unboxed representation of a data object?
And finally are any of these important or necessary? Can I depend on my compiler to make these optimizations?
thanks,
kunjaan.
You can't do any of these in any portable way.
You can get a "sort of" inlining using macros, but it's almost always to try to do that. People who write Scheme (or any other language) compilers are usually much better than you in deciding when it is best to inline a function.
You can't make values unboxed; some Scheme compilers will do that as an optimization, but not in any way that is visible (because it is an optimization -- so it should preserve the semantics).
As for your last question, the answer is very subjective. Some people cannot sleep at night without knowing exactly how many CPU cycles some function uses. Some people don't care and are fine with trusting the compiler to optimize things reasonably well. At least at the stage where you're more a student of the language than an implementor, it is better to stick with the latter group.
If you want to help the compiler, consider reducing the number of top-level definitions where possible.
If the compiler sees a function at top-level, it's very hard for it to guess how that function might be used or modified by the program.
If a function is defined within the scope of a function that uses it, the compiler's job becomes much simpler.
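A rough analogy in C, for readers more familiar with it (a hypothetical sketch): a static function's call sites are all visible within one translation unit, so the compiler can inline or specialize it freely, while an externally visible function must be kept callable from anywhere.

    /* Externally visible: unknown callers may exist in other files,
       so the compiler must keep a general standalone version. */
    int scale_public(int x) { return x * 3; }

    /* File-local: every call site is in this translation unit, so the
       compiler is free to inline it and drop the standalone copy. */
    static int scale_local(int x) { return x * 3; }

    int sum_scaled(int a, int b) { return scale_local(a) + scale_local(b); }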
There is a section about this in the Chez Scheme manual:
http://www.scheme.com/csug7/use.html#./use:h4
Apparently Chez is one of the fastest Scheme implementations there is. If it needs this sort of "guidance" to make good optimizations, I suspect other implementations can't live without it either (or they just ignore it altogether).

When is it best to change code to match standards?

I have recently been put in charge of debugging two different programs which will eventually need to share an XML parsing script, at minimum. One was written with PureMVC, and the other was built from scratch. Building the second from scratch originally made sense (it saved a good deal of memory), but the memory problems have since been resolved.
Porting the non-PureMVC application will take a good deal of time and effort that would not otherwise be needed, but it will make documentation and code-sharing easier, and it will lower the overall learning curve. With that in mind:
1. What should be taken into account when considering whether it is best to move things to one standard?
(On a related note)
Some of the code is a little odd. Because the interpreting app had to convert commands from one syntax to another, it made sense to have an interpreter object. Because there needed to be communication with the external environment, it made more sense to have one object interact with the environment, and for that object to deal with the interpreter exclusively.
Effectively, an anti-Singleton was created. The object would only interface with the interpreter, and that's it. If a member of another class were to try to call one of its public methods, the object would raise an Exception.
There are better ways to accomplish this, but it is definitely a bit odd. There are more standard means of accomplishing the same thing, though they often involve the creation of classes or class files which are extraordinarily large. The only solution which I could find that was standards compliant would involve as much commenting and explanation as is currently required, if not more. Considering this:
2. If some code is quirky but effective, is it better to change it to make it less quirky, even if that makes it more unwieldy?
In my opinion this type of refactoring is often not considered in schedules and can only be done when there is extra time.
More often than not, the criterion for shipping code is if it works, not necessarily if it's the best possible code solution.
So in answer to your question, I try to refactor when I have time to do so. Priority one still remains producing a functional piece of code.
Things to take into account:
Does it work as-is?
As Galwegian notes, this is the only criterion in many shops. However, IMO just as important is:
How skilled are the programmers who are going to maintain it? Have they ever encountered nonstandard code? Compare the cost of their time to learn it (including the cost of delayed dot releases) to the cost of your time to refactor it.
If you're maintaining it, then instead consider:
How much time will dealing with the nonstandard code cost you over the intended lifecycle of the codebase (e.g., the time between now and when the whole thing is rewritten)?
That's hard to guess, but consider that many codebases FAR outlive the usefulness envisioned by their original authors. (Y2K anyone?) I've gradually developed a sense of when a refactoring is worthwhile and when it's not, mostly by erring on the side of "not" too often and regretting it later.
Only change it if you need to be making changes anyway. But less quirky is always a good goal. Most of the time spent on a particular piece of software is in maintenance, so if you can do something to make that easier, you'll be reducing the overall time spent on that piece of code. Nonetheless, don't change something if it's working and doesn't need any modifications.
If you have time, now. If you don't have time and it can be avoided, later.

I am compiling a rules of programming mindset for my team: What are yours? [closed]

I have been working on a list for a while that helps me share the why of a programming approach and way of thinking as much as the how of doing something.
For this, I wanted to build a list of things that are:
best practice,
best thought,
best approach...
that help a programmer's ability to analyze, think, approach, solve and implement in the most effective way.
I have seen dozens of incredibly valuable comments in questions throughout Stack Overflow, but I couldn't find a place where we keep them together. There is the "most controversial opinion" question on Stack Overflow, but here I'm just looking for sagely insights that can be shared and that help my team and me approach and solve problems better through better programming.
Hopefully this can be one place to gather the one or two liners that are concise, profound and easy to share, repeat, review. If we keep it to one rule per answer it might be easiest to vote up/down.
I'll start with the first.
DRY - Don't Repeat Yourself - In code, comments or documentation.
Always leave the code a little better than when you found it.
Code does not exist until entered into a versioning control system.
Don't be afraid to admit "I don't know" and ask.
10 minutes asking someone could save a day pulling your hair out!
KISS - Keep it simple, stupid.
Pick the simplest solution that works.
Don't make things (too) complicated before they need to be.
Just because everyone else is using some complicated framework to solve their problem, doesn't mean you have to.
Don't reinvent the wheel
If there ought to be a function for it in the core library - there probably is.
Maintainability is important.
Write code as if the person who will end up maintaining it is crazy and knows where you live.
Someone else won't fix it.
If a problem comes to your attention, take ownership long enough to ensure it will be taken care of one way or another.
Don't optimize unless there's a demonstrable problem.
Most of the time when people try to optimize code before it's been proved necessary, they'll spend a lot of resources, make the code harder to read and maintain, and achieve no noticeable effect. Sometimes they'll even make it worse.
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."
- Donald Knuth
How hard can it be?
Don't let any problem intimidate you.
Don't Gather Requirements -- Dig for Them
Requirements rarely lie on the surface. They're buried deep beneath layers of assumptions, misconceptions, and politics
via The Pragmatic Programmer
Follow the SOLID principles:
Single Responsibility Principle (SRP): There should never be more than one reason for a class to change.
Open-Closed Principle (OCP): Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.
Liskov Substitution Principle (LSP): Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.
Interface Segregation Principle (ISP): Clients should not be forced to depend upon interfaces that they do not use.
Dependency Inversion Principle (DIP): A. High-level modules should not depend upon low-level modules; both should depend upon abstractions. B. Abstractions should not depend upon details; details should depend upon abstractions.
Best Practice: Use your brain
Don't follow any trend/principle/pattern without thinking about it
I think almost everything that is listed under "The Zen of Python" applies to every "Rules of Programming Mindset" list. Start with 'python -c "import this"':
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
Test Driven Development (TDD) makes coders sleep better at night
Just to clarify: Some people seem to think TDD is just an incompetent coder's way of limping from A to B without borking everything up too much, and that if you know what you're doing, that means there is no need for (unit) testing methodologies. That completely misses the point of Test Driven Development. TDD is about three (update: apparently four) things:
Refactoring magic. Having a full set of tests means you can pull off otherwise insane refactoring stunts, juggling the entire structure of your application without missing even one of the two hundred crazy subtle side effects that result. Even the best programmers are reluctant to refactor their core classes and interfaces without good (unit) test coverage, because it's damn near impossible to track down all the little 'ripple effects' they cause without them.
Detecting pitfalls early. If you are writing tests the right way, it means forcing yourself to consider all the fringe cases. Often, this leads to better design choices once the actual development begins, because the coder has already considered some of the trickier situations that may call for a different inheritance structure or a more flexible design pattern. The need for these changes is often not apparent - or intuitive - during initial planning and analysis, but those exact changes can make the application much easier to extend and maintain down the line.
Ensuring that tests get written. TDD requires you to write the tests before writing the code. Sure, that can be a pain in the ass, since writing tests is tedious compared to writing actual code - and often takes longer, too. However, doing so is the only way to make sure the tests will be written at all. If you think you'll remember to write the tests once the code is done, you're almost always wrong.
Forcing you to write better code. Since TDD forces all code to be testable (you don't write code before there is a test for it), it requires you to write more decoupled code so that you can test components in isolation. So TDD forces you to write better code. (Thanks, Esko)
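A toy end-to-end illustration of the test-first step, sketched in C with plain assert (a real project would use a test framework; clamp is a hypothetical example):

    #include <assert.h>

    int clamp(int v, int lo, int hi);   /* declared, not yet written */

    /* Written first: the test pins down behavior, including the fringe
       cases (below the range, above it, exactly on a bound). */
    static void test_clamp(void)
    {
        assert(clamp(5, 0, 10) == 5);
        assert(clamp(-3, 0, 10) == 0);
        assert(clamp(42, 0, 10) == 10);
        assert(clamp(0, 0, 10) == 0);
    }

    /* Written second, to make the test pass. */
    int clamp(int v, int lo, int hi)
    {
        if (v < lo) return lo;
        if (v > hi) return hi;
        return v;
    }

    int main(void) { test_clamp(); return 0; }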
Google before you ask your colleague and interrupt his coding.
Less code is better than more, as long as it makes more sense than lots of code.
Habits of the lazy coder
The first time you are asked to do something, do it (right).
The second time you are asked to do it, make a tool that does it automatically.
And the third time, if the tool doesn't cut it, design a domain specific language for generating more tools.
(not to be taken too seriously)
Be a Catalyst for Change
You can't force change on people. Instead, show them how the future might be and help them participate in creating it.
via The Pragmatic Programmer
Don't Panic When Debugging
Take a deep breath and THINK! about what could be causing the bug.
via The Pragmatic Programmer
You may copy and paste to get it working, but you may not leave it that way.
Duplicated code is an intermediate step, not a final product.
It's Both What You Say and the Way You Say It
There's no point in having great ideas if you don't communicate them effectively.
via The Pragmatic Programmer
Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.
From: Coding Horror
Build Breaker Buys Lunch
Publish Early, Publish Often
Build it correct first. Make it fast second.
Frequently conduct code reviews
Code review, and consequently refactoring, is an ongoing task. Here are a few of the benefits of code review, in my opinion:
It improves code quality.
It helps refactor reusable code into reusable libraries.
It helps you learn from your fellow developers.
It helps you learn from your mistakes and refresh your memory about genius code you have written before.
Anything that could affect how the application runs should be treated as code, and that means putting it in version control. Especially build scripts and database schema and data (.sql) files.
Take part in open source development
If you are using open source code in your projects, remember to post your bugfixes and improvements back to the community. It's not a development best practice per se, but it's definitely a programmer mindset to strive for.
Understand the tools you use
Don't use a pattern until you've understood why you're using it; don't use a tool without knowing why; don't rely on your framework or language designer always being right for your situation, but also don't assume they're wrong until proven to be!
Convention over Configuration
Especially where conventions are strong and some flexibility can be sacrificed.

What are the practical advantages of learning Assembly? [closed]

Most people suggest that learning assembly is essential: it's important to know the underlying workings of the computer, and so forth. But what I'm looking for are some practical suggestions that will make the effort of learning assembly worth it.
What are your suggestions? What am I missing out on by not learning Assembly and pointers/memory management in general?
I think the main practical advantage to learning low-level things like assembly language, pointers, and memory management is that when you're writing or reviewing high-level code you're better able to instinctively or subconsciously spot performance issues or other pitfalls.
An average developer might write a simple loop and think, "This code iterates over a set of integers and writes each to the console."
An expert developer might write the same loop and think, "This code iterates over a set of integers, and has to box each element to call the ToString method and ToString has to format the string in base 10 which is somewhat non-trivial, and then both the boxed integer and the formatted string will soon be eligible for garbage collection as no references will remain, and the first time this method runs, it will need to be JIT'ed..." and so on.
9 times out of 10, it may not matter. But that 1 time out of 10, the expert developer is likely to notice a problem in code that the average developer would never think to consider.
Pointers/memory management are more general than assembly language. You need to understand them for C and C++ as well, which you might need if you have to maintain code written in C.
For assembly language, it is sometimes useful to read the assembler code that the C compiler generates, to find out whether it generates correct and efficient code.
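For instance, compiling a trivial function and reading the result (a hypothetical example; the exact output varies by compiler, flags, and target):

    /* add.c -- compile and inspect with: gcc -O2 -S -masm=intel add.c */
    int add(int a, int b)
    {
        return a + b;
    }

    /* Representative x86-64 output at -O2:
       add:
           lea     eax, [rdi+rsi]   ; one instruction, no stack frame
           ret
    */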
You need to learn to read assembly so you can figure out what goes wrong when a complex statement bombs out. The CPU debug window shouldn't be a mysterious place.
This is sort of one of those questions that will always be asked: "Why should I know anything?" Well, perhaps you could get a job doing something besides building the next generic CRUD application. If you want to do any sort of systems development, having a working knowledge of assembly is very helpful, if not vital. As for what you're "missing out" on: perhaps you are missing out on actually knowing how computers work. Some people think this is desirable. Some people don't. Some people build processors. Some people dig ditches. It's all a matter of personal preference :)
I think it's great to learn new languages. It opens my mind. Some languages are more mind-opening than others. I'd say assembler is one of those. It forces you to think about stuff like the call stack and instruction pointer. And it'll make you appreciate higher level languages even more. Another fun language to learn is PostScript.
I don't think you need to learn assembly for anything practical. However, it will ensure that you understand the real roots of what you are doing as a developer. In essence, assembly programming is a discipline for learning chip logic and architecture. I haven't programmed assembly in over two decades but it still informs the kinds of choices I make when programming C#.
But what I'm looking for are some practical suggestions that will make the effort of learning Assembly to be worth it.
Learn what assembly is.
Really learn how to read (and understand) small fragments of it: how to walk/step through it in your mind.
Perhaps too, step through some of it with a debugger (including seeing memory and registers being changed).
Ideally, find some annotated assembly.
But, don't bother to learn how to write assembly: instead, learning to write C or C++ is probably 'low' enough for most practical purposes.
Well, on a practical level, I did a class in 6502 assembler when I was first learning to code in the early 80s. I also did some 8088 assembler. It's been of occasional use over the years since, but I can't say it's got me out of a hole on more than one or two occasions in 25 years. Grokking C at a pretty fundamental level is of far more use. YMMV; it's certainly helpful as background, but as a direct practical benefit? Marginal, really.
Perversely, though, one thing that has proved useful is at an even lower level. I did a class on chip design (NAND gates and the like), and as part of that was taught formal Boolean logic in some depth. That's been massively useful ever since - it's surprising how many coders don't really know what they are doing with ANDs, ORs and NOTs :-)
Pointers and memory management are really a different question than assembly. If you want to do C/C++, then you need to learn pointers and memory management, because those are part of the language. But, even if you plan to use nothing but (say) Java all your life, you should learn something about memory management to keep from writing a memory leak despite the GC, and pointers are just the difference between atomic types and object references. You need the concepts or you'll write programs that don't work!
Practical reasons for learning assembly: debugging and optimization. Even if you don't write any assembly, one of these days you may need to optimize C/C++ code for performance. In that case, you'll need to be able to read the assembly for your inner loop, even if you never need to write another line of it.
Ultimately, I think your distinction between "knowing the underlying workings of your computer" and "practical suggestions that will make the effort of learning assembly worth it" is a false one. Ignorance does not pay. Learning how your computer works is a practical suggestion worth the effort!
I have a prophecy: someday soon, your program will run far too slowly to be practical, and crash intermittently with an out-of-memory exception. On that day, the sheer screaming anxiety of not knowing what the hell is going on or where to start looking in order to fix it will refund your karma debt, with interest...
These days many assembly languages are actually fairly high level.
And it's always been true that if you learn 'C', that's close enough to assembly to get most of the learning benefits.
edit: thinking about this a bit more, in Knuth's books he describes an idealised assembly language. You won't go far wrong learning that, and reading those books.
Another practical reason I can think of is reverse engineering application code to modify it, for educational purposes ONLY, since this is widely used by crackers to bypass shareware protections like time limits or serial numbers.
An application like win32Dasm can convert executables into assembly code that can later be modified with a hex editor like hiew. You can learn quite a lot about the flow of the program.
I think learning about computer architecture, in conjunction of assembly, would open your mind quite a bit.
It would help explain lots of performance issues - e.g., a parser is slow because there are lots of branches, the pipeline gets flushed very easily, and the branch predictor cannot compensate for everything.
Also, different architectures have their quirks. Someone once described an assembly trick to swap two registers in place using XORs. It works, and it runs great on an in-order execution core (recent examples being the Intel Atom and the VIA C7 in netbooks), but not so great on out-of-order cores.
Knowing that may help you detect poorly compiled code by inspecting it in assembly, and possibly write code in a higher-level language to sidestep the imperfections of compiler optimizers. I'm not trying to diss them, but they just can't be perfectly tuned.
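For reference, the XOR swap mentioned above looks like this in C (a classic sketch; note it breaks if both operands are the same object, and the three XORs form a serial dependency chain, which is exactly why out-of-order cores gain nothing over a plain temporary):

    /* Swap two values without a temporary. Each XOR depends on the
       result of the previous one, so the steps cannot overlap. */
    void xor_swap(unsigned *a, unsigned *b)
    {
        if (a == b)
            return;   /* guard: x ^ x would zero the value */
        *a ^= *b;
        *b ^= *a;
        *a ^= *b;
    }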
The biggest practical advantage to learning assembly is performance. You can optimize to near perfection when it's required.
