Specific to the world of programming, what does "Turtles all the way down" mean? [closed] - idioms

I hear this phrase often and do not fully understand its meaning. What does it mean? And if possible, is there an example?
Thank you!

One use of this expression refers to a style of programming where there is a very deep call stack. You might see a method called Grobble and wonder what it does, so you open up the definition and see this:
class FooHandler
{
    void Grobble(Foo foo)
    {
        foo.Grobble();
    }
}
So then you look at Foo.Grobble:
class Foo
{
    FooImpl _fooImpl;
    void Grobble()
    {
        _fooImpl.Grobble();
    }
}
That takes you to FooImpl, which looks like this:
class FooImpl
{
    void Grobble()
    {
        this.Grobble(false);
    }
    // etc...
}
After going deeper and deeper into the code, still unable to see the end, you are entitled to exclaim in frustration, "It's turtles all the way down!"
The reference is to the metaphor of the Earth being on the back of a turtle. What is the turtle standing on? Another turtle... etc.

This usually refers to self-hosting programming languages, where the interpreter or compiler is written in the same language that is being interpreted/compiled. It can also refer to the libraries for the language being written in the language itself.
Smalltalk and Lisp are well known for this kind of thing.
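As a hedged, toy illustration (my own sketch, not from the original answer): below is a tiny evaluator for Python arithmetic expressions written in Python itself, using only the standard-library ast module. It is nowhere near a real self-hosted interpreter, but it shows the shape of the idea: the language's own machinery processing the language.

import ast
import operator

# Map AST operator node types to the functions that implement them.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def evaluate(source):
    """Evaluate a Python arithmetic expression, in Python."""
    tree = ast.parse(source, mode="eval")
    return eval_node(tree.body)

def eval_node(node):
    if isinstance(node, ast.Constant):   # a literal number
        return node.value
    if isinstance(node, ast.BinOp):      # e.g. left + right
        return OPS[type(node.op)](eval_node(node.left), eval_node(node.right))
    raise ValueError("unsupported expression")

print(evaluate("1 + 2 * 3"))  # 7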

Turtles all the way down is a phrase sometimes used to refer to infinite recursion. For example, what is the least integer? There isn't one. -2 is less than -1, -3 is less than -2, and so on. There is no bottom. The original source of the quote was an answer to the question "If the world is on the back of a turtle, what does the turtle stand on?". Anyway, it doesn't have any programming-specific meaning that I know of.
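Still, as a quick hedged sketch in Python (my own), here is the flavor of it: a recursive "definition" with no base case never bottoms out.

def one_less(n):
    # There is always something smaller, so this never returns:
    # each call stands on another call, turtles all the way down.
    return one_less(n - 1)

# one_less(0) recurses until Python raises RecursionError.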

It is sometimes used to refer to 'pure' object-oriented (or functional) languages. In Java, C#, C++, Objective-C, and Delphi, among others, there are native types (such as int) that don't behave like objects. The illusion is maintained much better in Smalltalk.
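A hedged illustration in Python, where the illusion also largely holds: even an integer literal is a full object with methods and a class, unlike a primitive int in Java.

# In Python, even a literal integer is an object with methods.
print((5).bit_length())       # 3 (binary 101)
print(isinstance(5, object))  # True
print(type(5).__mro__)        # (<class 'int'>, <class 'object'>)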

Related

Programming language to process large data for R [closed]

Recently I got some time to learn data visualization, as a kind of replacement for Excel's charts. My choice is R (with ggplot2), and I have started to learn it.
In "R in a Nutshell", Joseph Adler puts it this way:
Typically, I use a tool like Perl to preprocess large files before using them in R.
I'd suggest using a scripting language like Perl, Python, or Ruby to preprocess large, complex text files and turn them into a digestible form. (As a side note, I usually write out lists of field names and lengths in Excel and then use Excel formulas to create the R or Perl code to load them.)
The idea behind this is the Unix philosophy: let each tool do its job well, and let the tools work together. So in the long run, I plan to learn:
R for visualization, and
another programming language for data processing.
The question arises: which language should I learn?
I don't have a computer science background, and Perl is too difficult for me. I did some searching online and found that Haskell and Clojure are pretty interesting. Since there are a lot of programmer-statisticians here, I would like to know: which one works well with R for large-scale data processing?
Nick
I don't really like having too many tools in a workflow; if I can get away with just using R, I prefer that. Otherwise you either end up having to run several tools in series by hand, which makes the analysis more work to rerun, or you spend time interfacing the different tools, which takes time and introduces its own set of problems.
For a beginning programmer, just sticking with R has another advantage: you spend all your time learning one language, rather than becoming a jack of all trades but master of none.
I use several programming languages next to each other (R, Python, IDL, Fortran), but for data processing I tend to want to stick to pure R if I can help it.
My personal tool of choice in this space is Incanter.
It combines:
Statistical / visualisation features inspired by R
The use of Clojure as a general purpose programming language
Runs on the JVM and can access all the Java libraries: a big bonus if you want to integrate with other systems or use directly in production.
Overall it's not yet as sophisticated as R from a purely statistical perspective, but IMHO Clojure is a much nicer and more capable general purpose language. The whole package is therefore more useful if you want to build production apps using the data.
I would go with Python, mostly because:
It's easier to read/understand.
The R-Python bridge lets you integrate the two languages very easily.
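To make that concrete, here is a hedged sketch (the file names and field layout are made up) of the kind of Python preprocessing step being suggested: boil a large, messy text file down to a tidy CSV that R can load directly.

import csv

# Hypothetical input: a large raw log; output: a tidy CSV for R.
with open("big_raw_log.txt") as src, open("tidy.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(["user", "value"])            # header row for read.csv()
    for line in src:
        fields = line.split()
        if len(fields) < 3 or fields[2] == "NA":  # skip malformed rows
            continue
        writer.writerow([fields[0], fields[2]])

# In R: df <- read.csv("tidy.csv")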

What exactly is 'How to do and What to do' or 'Focus on results, not steps' in functional programming [closed]

I'm having a difficult time understanding the concept "Don't think about how to do it, think about what to do" (focus on results, not steps) in functional programming. Let's say we have a language that treats functions as first-class citizens but has no built-in functions for iteration (such as forAll in Scala). In that case, we first have to create a function that tells how to iterate over a given data structure, don't we? So if the language itself does not provide enough functions, then apart from having functions as first-class citizens, it would be pretty much the same as coding in an imperative style, wouldn't it?
Please correct me if I'm wrong. The following are the resources that I referred to:
Video lecture
Some articles
"How" and "what" are two sides of the same coin, and sometimes it can be useful to think in one way or another. Thinking about "what" to do is a good way to think about recursion, especially for people without much experience writing recursive functions. "How" implies a series of steps, while "what" implies a series of definitions. Let's consider your iteration example (I'm going to use Haskell, since I don't know Scala, but the concepts ought to be directly translatable).
Haskell's iteration function is called map, but suppose we wanted to write it ourselves. If we didn't have much experience with recursion, it might be difficult to imagine how to write a function (map) that applies another function (f) to every element of a list ("mapping" f over the list). Here's the type signature of the function we want:
map :: (a -> b) -> [a] -> [b]
So let's try thinking about "what" a function applied to every element of a list is. It is the function applied to the first element, followed by the function mapped over the rest of the list. So let's code that:
map f (firstElement:restOfList) = (f firstElement):(map f restOfList)
Now we're nearly finished. The only thing left is to handle the base case. What if the list is empty? Well clearly any function mapped over an empty list is an empty list, so we'll code that:
map _ [] = []
And we're done! Now if you can think in terms of "how" and write the above code, go right ahead (as I've gained more experience, I tend to do that more often). However thinking in terms of "what" is a useful technique if you find yourself stuck.
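For comparison, here is a hedged translation of the same "what"-style definition into Python (my addition; the names are mine), spelling out the empty and non-empty cases by hand:

def my_map(f, xs):
    # What is f mapped over an empty list? An empty list.
    if not xs:
        return []
    # What is f mapped over (first element + rest)? f applied to the
    # first element, followed by f mapped over the rest.
    return [f(xs[0])] + my_map(f, xs[1:])

print(my_map(lambda x: x * 2, [1, 2, 3]))  # [2, 4, 6]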

Has anyone tried to break a bit even smaller? [closed]

I was reading a book to learn more about ASM, and the author happened to comment on bits. The exact quote is:
A bit is the indivisible atom of information. There is no half-a-bit, and no bit-and-a-half. (This has been tried. It works badly. But that didn't stop it from being tried.)
My question is: when has this been tried? What was the outcome? How did it go badly? It's bothering me that Google isn't helping me find an answer about the cases where someone tried to make half a bit and use(?) it.
Thanks if you can find out when this happened.
Yes. That's what arithmetic coding (a type of compression) is about. It allows information to be stored in fractional bits.
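A hedged toy sketch of that idea (my own, using exact Fractions and no actual bit-level output): each symbol narrows an interval, and the number of bits needed to name a point inside the final interval is about -log2(width), which need not be a whole number per symbol.

from fractions import Fraction
from math import log2

# Toy model: two symbols with skewed probabilities.
PROBS = {"a": Fraction(3, 4), "b": Fraction(1, 4)}

def encode(message):
    """Narrow the interval [low, low + width) once per symbol."""
    low, width = Fraction(0), Fraction(1)
    for sym in message:
        start = Fraction(0)
        for s, p in PROBS.items():
            if s == sym:
                low, width = low + start * width, width * p
                break
            start += p
    return low, width

_, width = encode("aaaa")
print(log2(1 / width) / 4)  # ~0.415 bits per symbol: less than one bit each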
I believe that in the specific example you're talking about, that the author was merely being tongue in cheek, and not referring to any actual attempt to split bits.
A bit, as defined by present-day computers, is a binary value, 0 or 1. That is the 'atom' of information, because in binary logic you cannot represent anything other than that using a single bit; to represent anything else, like 0.5, you need more bits.
However, in multilevel electronics a 'bit' (really, a storage cell) can take more than two values. If someone makes a computer whose cells each take a value between 0 and 9, then each cell stores more than just 0/1. Perhaps the author meant this. Attempts to make computers with multilevel logic have failed, 'miserably': electronics has not been able to figure out how to do that in a reliable, cost-effective fashion. (Mind the arithmetic, though: a cell with 1024 distinguishable levels stores log2(1024) = 10 bits, so a 1024-bit memory would need about 103 such cells instead of 1024 two-level ones, roughly ten times fewer, not 1024 times.)
Though admittedly, at a physical level a bit would still remain a bit. That is one wire going into a chip, one gate input, one memory cell. If you divide that one wire, input, or cell into two, you get two wires/inputs/cells, NOT half a wire/input/cell. So you get two bits.
I believe the author tries to state a metaphysical fact with humour.
Data is commonly stored using multilevel voltages in magnetic discs and flash memory. However, one can calculate the "optimal" base of a number system to be e = exp(1) ≈ 2.718, which AFAIK hasn't been "tried", while the ternary (base-3) system is quite common in fast parallel arithmetic algorithms and works better than base-2 in many applications.
Also, as omnifarious states, arithmetic/range encoding can be seen as a method of using fractional bits: e.g. if there are only three possible messages (e.g. 001, 010, 100), they can be stored in two bits while "leaving one quarter of the space" unused.
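A hedged sketch of that fractional-bit accounting (my own numbers): pack a long sequence of three-way choices into a single integer, and the cost per choice approaches log2(3) ≈ 1.585 bits rather than 2.

from math import ceil, log2

def pack(trits):
    """Pack a list of base-3 digits (0, 1, 2) into one integer."""
    n = 0
    for t in trits:
        n = n * 3 + t
    return n

trits = [0, 2, 1, 1, 0, 2, 2, 1] * 16      # 128 three-way choices
bits_needed = ceil(log2(3 ** len(trits)))  # bits to store the packed integer
print(bits_needed / len(trits))            # ~1.586 bits per choice, not 2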

VB.NET Application Blocks [closed]

What are the limitations of using ApplicationBlocks (An Introduction and Overview of the Microsoft Application Blocks) for ASP.NET/VB.NET applications? I have found lots of websites that talk about the benefits e.g. divorcing the data tier from the web tier, but I cannot find a web page that discusses the limitations.
I don't think you can really get a plain list of disadvantages. The Microsoft Enterprise Library is a good library: well documented, rich, and with tons of features.
You should change your question to "When do I not need to use it?". Of course, that question should be repeated for each block. I'll try to summarize a little.
For every block, you should consider not using the library when you do not need its complexity. Features don't come without a cost, and the most obvious one is complexity (first in deployment and configuration). If your users have to change the application's configuration, you may need to provide some tool or a lot of documentation. Complexity can be hidden in code too; even if the EL designers tried to make everything easy, it can't be as easy as a raw solution.
The second important disadvantage is obviously speed. Layers of abstraction aren't free, and you'll pay a speed cost for them. In some cases you may not care (simple logging, for example), but it may be a problem in other cases (so again the answer is "it depends"). Think, for example, of the Unity Application Block: you get all the power of dependency injection, but you pay a real cost for it.
So when should you use it? In my opinion, a big strength of this library is that you do not need to use it all together: you can pick the blocks you need when you need them. It's very common, for example, to use Logging and Exception Handling, but you may never need Unity at all. The Data Access Application Block is a very thin layer above ADO.NET; it simplifies many common tasks, but you do not get the level of abstraction you have with, for example, LINQ to SQL or Entities (though note they have very different purposes), so you should consider it only if nothing else suits your needs.
Finally, in my opinion you should consider each block and use it if and only if you really need all the complexity that comes with an enterprise-level library. It's not designed for small single-user applications (even if you may sometimes find that a specific Application Block is great for a specific task). Its drawbacks aren't small: complexity and speed can't be ignored (both for short-term solutions and long-term maintenance plans). Moreover, if you really need all its power, you'll find it's not as easy as a "ready-to-go" solution: to get that control, you'll need to write more code.

What is abstraction? [closed]

I see abstraction in processes.
I see abstraction in data.
I see that abstraction is losing the unimportant details.
I see that abstraction is giving a group of elements a name and treating them as one unit. (But I don't know why that is considered abstraction, so please clarify this particular point.)
I know there are also levels of abstraction, and although the name implies something, I don't have a practical example and can't think of a specific one. I'm confused about the definition of abstraction.
Can somebody write a comprehensive article? Scratch that. Can somebody give a comprehensive answer?
EDIT:-
Thank you for your answers.
However, I was looking for a generalized answer.
For example, I'm reading an article in which procedures are considered abstractions.
However, the answers here (so far) are about abstract classes in C# and Java.
Thank you again.
Abstraction is the technique of hiding implementation. At its core, there's not much more to the answer than that. The bulk of abstraction's meaning comes from how and why it is used.
It is used in the following scenarios:
Reduce complexity (create a simple interface).
Allow the implementation to be modified without impacting its users.
Create a common interface to support polymorphism (treating all implementations of the abstracted layer the same).
Force users to extend the implementation rather than modify it.
Support cross-platform code by changing the implementation per platform.
Quite simply, abstraction is the art of limiting your dependencies. The less you depend on, the more abstract you are. For example, if you write a DirectX renderer, then you're abstracted from what graphics card vendor and model you're running on. If you write another layer, you can be insulated from what OS you're running on.
Abstraction is hiding the details of specific implementations and sharing common details among implementations. An example is java.util.List, java.util.ArrayList, and java.util.LinkedList: List is the parent (the abstraction), while ArrayList and LinkedList are specific implementations.
You want to do this whenever you have shared code between different classes, so that you don't repeat your code, which is bad practice.
Abstraction is very useful for code reuse, dynamic behavior, and standardization. For example, if a method you are using accepts a List, you can pass it any object that has List as an ancestor. Inside that method, different implementations can behave differently depending on the type of the passed object, so you can achieve dynamic behavior at run-time. This is a very useful technique when you design a framework.
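As a hedged illustration of that parent/implementation pattern (my own example, in Python rather than Java):

from abc import ABC, abstractmethod
from math import pi

class Shape(ABC):                        # the abstraction
    @abstractmethod
    def area(self):
        ...

class Circle(Shape):                     # one specific implementation
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return pi * self.radius ** 2

class Square(Shape):                     # another implementation
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

def total_area(shapes):                  # depends only on the abstraction
    return sum(shape.area() for shape in shapes)

print(total_area([Circle(1), Square(2)]))  # ~7.14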
I am not sure if you are supposed to recommend books here (if not, let me know and I will delete my post), but I like Pro C# by Troelsen. An abstract class is like an interface, except that interfaces do not allow you to define constructors. It is for generalizing. For example, I have a grid in which I want to display some user fields. The fields can be text, enumeration, single-value, or multi-value. I built an abstract class FieldDef with an abstract string property DispValue, and the various field types inherit from FieldDef. In the grid I have a simple string to display; then, when the user updates a field, properties and methods specific to the field type are exposed. The other example is that all mammals have common properties, but as you drill down you expose more properties; still, there is a single generalized view (interface) of all mammals, and by inheriting from mammals there is a way to search across them and display the properties common to all mammals.
