What does it mean if someone refers to something as BootStrap? - css

I hear the term "BootStrap" thrown around a lot, but I'm not really sure what it refers to. I know there is a bootstrap CSS, but what exactly does the term mean?

Literally, a bootstrap is a tab on the sides or back of boots that helps you to pull them on. Putting on your shoes or boots is usually the last step of getting dressed; similarly, in programming it's been applied to the initialization or start-up step of a program.
See also the Wikipedia entry for bootstrapping:
Bootstrapping or booting refers to a group of metaphors which refer to a self-sustaining process that proceeds without external help.
[.. in Software Loading] booting is the process of starting a computer, specifically in regards to starting its software. The process involves a chain of stages, in which at each stage a smaller simpler program loads and then executes the larger more complicated program of the next stage. It is in this sense that the computer "pulls itself up by its bootstraps", i.e. it improves itself by its own efforts
[.. in Software Development] bootstrapping can also refer to the development of successively more complex, faster programming environments. The simplest environment will be, perhaps, a very basic text editor (e.g., ed) and an assembler program. Using these tools, one can write a more complex text editor, and a simple compiler for a higher-level language and so on, until one can have a graphical IDE and an extremely high-level programming language.
A shoehorn is another means to help you don footwear but it's idiomatically come to mean cramming something into a tight space.

In computer science, Bootstrap (or more commonly "boot") generally refers to the setup/start/initialization step of a process. It can mean many things depending on the context: starting a physical machine, setting up variables and services for an application to use, or even laying the CSS groundwork for a website to build on.
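As a minimal sketch of that application-level sense (all names and values below are made up for illustration, not taken from any particular framework):

    #include <iostream>
    #include <string>

    // Illustrative "bootstrap" phase: configuration and services are set up
    // before the program's real work begins. Names here are hypothetical.
    struct Config {
        std::string host = "localhost";
        int port = 8080;
    };

    Config loadConfig() {
        // A real program would read a file or environment variables here.
        return Config{};
    }

    void startServices(const Config& cfg) {
        std::cout << "connecting to " << cfg.host << ":" << cfg.port << "\n";
    }

    int main() {
        Config cfg = loadConfig();   // 1. read configuration
        startServices(cfg);          // 2. initialize services
        // 3. ...only now does the application proper start
        return 0;
    }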

Bootstrapping lets you create even a complex design of your own with minimal configuration, rather than developing it from scratch.

Related

What is the main idea of OpenFOAM?

I just want to get the main idea/principle of OpenFOAM and how you create a simulation; please let me know where I go wrong.
So basically you have an object that interacts with a gas or liquid and you want to simulate this. You create a model of the object, mesh it, specify where the gas flows in and out and which surfaces are walls, set the other relevant parameters, and then run the program (with the appropriate time step, etc.)?
OpenFOAM is an open-source C++ library that implements the finite volume method (FVM), which is widely used in CFD.
What you have described is a rough picture of some of the applications of CFD. The specifics you listed might not always be the case (e.g. the fluid might not necessarily be a gas, and so on).
The main stages of a CFD problem are: geometry creation, mesh generation, preprocessing, solving, and postprocessing.
There might be more stages added depending on the resolution and other specifics of the case.
Now OpenFOAM is an open-source (free for all) tool written in C++ that helps solve CFD problems. If the problem is simple and routine and you have access to a commercial solver such as ANSYS Fluent, you can use that, since it is easier and much less work. However, if the problem is specific and there are customized criteria, OpenFOAM is a nice tool.
Being written in C++, it is object oriented, and there are many different solvers already written and available to use, so you will not have to write all the schemes and everything on your own from scratch.
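To give a flavour of what that looks like, here is a heavily simplified sketch loosely modelled on OpenFOAM's icoFoam tutorial solver; it only builds inside an OpenFOAM source tree (with wmake), and the details are approximate rather than exact:

    // Rough sketch of the shape of an OpenFOAM solver, loosely based on the
    // icoFoam tutorial solver. Simplified; not compilable outside OpenFOAM.
    #include "fvCFD.H"   // OpenFOAM's finite-volume CFD umbrella header

    int main(int argc, char *argv[])
    {
        #include "setRootCase.H"   // parse the case directory from argv
        #include "createTime.H"    // time-stepping controls (controlDict)
        #include "createMesh.H"    // read the mesh from constant/polyMesh
        #include "createFields.H"  // read fields such as U and p from 0/

        while (runTime.loop())     // march through the time steps
        {
            // Assemble and solve the discretised momentum equation; a real
            // solver adds a pressure-correction (PISO) loop around this.
            solve
            (
                fvm::ddt(U) + fvm::div(phi, U) - fvm::laplacian(nu, U)
             == -fvc::grad(p)
            );

            runTime.write();       // write fields for postprocessing
        }
        return 0;
    }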
However, my main advice to you is to read more about CFD to get a clear understanding; there are dozens of good books available.

Beginner Software RE help, RAM Addresses, library loading, where to start?

To start this off, I use OS X, which is a UNIX-based system.
I have beginner-level theoretical knowledge of C++ and would like to expand my knowledge through software reverse engineering. Every guide I get into seems to jump in halfway, and I seem to be missing a giant gap of information required to start. My end goal is to successfully build a working dylib for any application. Where do I start with learning RAM addresses, how do they work, how are libraries loaded, and what the actual hell do I start reading, on what subject? Everything I've learned so far has had a distinct starting point and certain syntax, but everything here uses terminology I don't understand. I find myself branching off more and more because an article uses a keyword I don't understand; I google it, the next article uses five more I don't understand, and I just get stuck. The application in question does not have changing memory addresses, but I would also like to learn how to compensate for that using offsets.
Where do I start?!
Before you get started with reverse engineering you'll need more than a theoretical knowledge of the C language. Forget C++ for now; C is simpler, and it's so low level that once you master it you'll understand how programs work under the hood. Get yourself a copy of The C Programming Language by Brian Kernighan and Dennis Ritchie and go through the whole book.
Once you feel comfortable writing C programs, get familiar with UNIX standards like POSIX and then move on to OS X specific stuff. The best resource for OS X programming is http://developer.apple.com. The link below explains how dynamic libraries work on OS X. Once you get a good understanding of C it will all make sense.
https://developer.apple.com/library/mac/documentation/DeveloperTools/Conceptual/DynamicLibraries/100-Articles/OverviewOfDynamicLibraries.html
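To make "building a working dylib" concrete, here is a minimal illustrative sketch (file names and compiler commands are examples, not taken from the linked article): a tiny library, plus a host program that loads it at run time through the POSIX dlopen API.

    // mylib.cpp -- a tiny library with one exported function.
    // Build on OS X (illustrative):  clang++ -dynamiclib mylib.cpp -o libmylib.dylib
    extern "C" int add(int a, int b) { return a + b; }

    // host.cpp -- loads the dylib at run time via the POSIX dlopen API.
    // Build (illustrative):  clang++ host.cpp -o host
    #include <dlfcn.h>
    #include <cstdio>

    int main() {
        void* handle = dlopen("./libmylib.dylib", RTLD_NOW);   // map the library
        if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

        // Look the symbol up by name and cast it to the right function type.
        auto add = reinterpret_cast<int (*)(int, int)>(dlsym(handle, "add"));
        if (!add) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

        std::printf("%d\n", add(2, 3));   // prints 5
        dlclose(handle);
        return 0;
    }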

Adding calculation power to flex application

I have been tasked with making several Flex-driven visualizations of immense Excel spreadsheets. The client wants the finished product to be as self-contained as possible. The problem is that Flex doesn't offer as much computing horsepower as is needed. Is there an easy (or not so easy) way to accomplish this? I am just trolling for pointers. Thanks in advance!
If you don't mind doing it the hard way, I have two options for you:
Pixel Bender: a tool originally designed for creating complex, CPU-intensive graphic filters and offloading those calculations to the hardware, but it can be used for number crunching too. Here's an article that covers that topic: Using Pixel Bender with Flash Builder 4 as a number crunching engine. The language may not be like anything you're used to; I had a hard time wrapping my head around it.
Alchemy: a tool that compiles C or C++ code so it can be executed in the Flash VM. I am not certain how much performance can be gained for simple number crunching, but if you know C, this might be a path to investigate.
The first thing that comes to mind is building a web service that does the hard work, but that would not be a self-contained product.
Apart from that, take a look at apparat - http://code.google.com/p/apparat - which allows various optimizations, access to low-level AVM2 code - http://code.google.com/p/apparat/wiki/AsmExpansion - and more. I do not think the AS3/Flex compiler is that bad at math, though. Try writing a sample math function and testing it in different languages.

Use cases for self-modifying code?

On a Von Neumann architecture, program and data are both stored in memory, so a program can modify itself. Is this useful for a programmer? Could you give some examples?
Metamorphism
One (questionable) use case that comes to my mind is metamorphic computer viruses. These are malicious pieces of software that conceal themselves from signature-based detection by rewriting their own machine code into a semantically equivalent representation that looks different.
Trampolining
Another (more complex, but also more common) use case is trampolining, a technique based on dynamic code generation to solve certain problems with nested function calls.
JIT compilation
The most common use of dynamic code generation that I can think of is JIT (just-in-time) compilation. Code for modern platforms like .NET or Java is not compiled into native machine code but into an intermediate language (bytecode). This bytecode is then interpreted when the program is executed (by a virtual machine written for the target architecture). At the same time, a background process checks which parts of the code are executed very often. These parts then have a good chance of being dynamically compiled into native machine language for maximum performance. All this happens during the run time of the program!
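As a bare-bones illustration of the mechanism underneath (x86-64 and POSIX mmap only; real JIT compilers are enormously more sophisticated, and some systems refuse writable-and-executable memory):

    // Minimal sketch of dynamic code generation on x86-64 (POSIX only).
    // We write the machine code for "mov eax, 42; ret" into a page that is
    // both writable and executable, then call it as a function.
    // Note: some systems (e.g. recent macOS on Apple Silicon, hardened Linux
    // configs) refuse W+X mappings; this is purely an illustration.
    #include <sys/mman.h>
    #include <cstring>
    #include <cstdio>

    int main() {
        unsigned char code[] = {
            0xB8, 0x2A, 0x00, 0x00, 0x00,  // mov eax, 42
            0xC3                           // ret
        };

        void* mem = mmap(nullptr, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) { perror("mmap"); return 1; }

        std::memcpy(mem, code, sizeof(code));          // "data" becomes "code"
        auto fn = reinterpret_cast<int (*)()>(mem);    // treat the bytes as a function
        std::printf("%d\n", fn());                     // prints 42

        munmap(mem, 4096);
        return 0;
    }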
Security implications
One thing to keep in mind is that the possibility to interpret data as code is useful for exploiting security holes in computer software, which is why the trend in modern hardware and operating systems is to enable and, if possible, even enforce the separation of code and data (also see NX bit and DEP).
I can best answer this by referring you to an answer to a similar (exceptionally well written and answered) question, also on StackOverflow - Homoiconic and "unrestricted" self modifying code + Is lisp really self modifying?. The answer focuses on Lisp, a family of languages known for taking "code is data" to the next level, and explores the uses of that in AI.

Abstraction or not?

The other day I stumbled onto a rather old Usenet post by Linus Torvalds. It is the infamous "You are full of bull****" post, in which he defends his choice of plain C for Git over something more modern.
In particular this post made me think about the enormous amount of abstraction layers that accumulate one over the other where I work. Mine is a Windows .Net environment. I must say that I like C# and the .Net environment, it really makes most things easy.
Now, I come from a very different background made of Unix technologies like C and a plethora of scripting languages; to me, OOP is also just one, and not always the best, programming paradigm. I often struggle (in a working kind of way, of course!) with my colleagues (one in particular), because they appear to belong to the "any problem can be solved with an additional level of abstraction" church, while I'm more of the "keep it simple" school. I think there is a very different mental approach to problems, which maybe comes from exposure to different cultures.
As a very simple example, for the first project I did here I needed some configuration for an application. I made a 10-line class to load and parse a txt file, located in the program's root dir, containing colon-separated key/value pairs, one per row. It worked.
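(Roughly this sort of thing, sketched here in C++ purely for illustration; it is not the actual class:)

    // Illustrative sketch of the kind of tiny loader described above:
    // reads "key: value" pairs, one per line, into a map.
    #include <fstream>
    #include <map>
    #include <string>

    std::map<std::string, std::string> loadConfig(const std::string& path) {
        std::map<std::string, std::string> cfg;
        std::ifstream in(path);
        std::string line;
        while (std::getline(in, line)) {
            auto pos = line.find(':');
            if (pos == std::string::npos) continue;   // skip malformed lines
            cfg[line.substr(0, pos)] = line.substr(pos + 1);
        }
        return cfg;
    }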
In the end, to standardize the approach to the configuration problem, we now have a library, installed on every machine that runs a configured program, which calls a service that, at startup, loads an XML file containing references to other XML files, one per application, which contain the configurations themselves.
Now, it is extensible and made up of fancy reusable abstractions, providers and all, but I still think that, if one day we really do reuse part of it, in the time it took to build we could have written the needed code from scratch, or copy/pasted the old code and modified it.
What are your thoughts about it? Can you point out some interesting references dealing with the problem?
Thanks
Abstraction makes it easier to construct software and to understand how it is put together, but it can make it harder to fully understand certain performance and security issues, because the abstraction layers themselves introduce complexity.
Torvalds' position is not absurd, but he is an extremist.
Simple answer: programming languages provide data structures and ways to combine them. Use these directly at first, do not abstract. If you find you have representation invariants to maintain that are at a high risk of being broken due to a large number of usage sites possibly outside your control, then consider abstraction.
To implement this, first provide functions and convert the call sites to use them without hiding the representation. Hide the data representation only when you're satisfied your functional representation is sufficient. Make sure at this time to document the invariant being protected.
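A tiny sketch of that progression (illustrative code; the invariant being protected here is that a list stays sorted):

    #include <algorithm>
    #include <cassert>
    #include <vector>

    // Stage 1: the representation stays visible; we only add functions and
    // route every call site through them.
    // Invariant being protected: 'values' is always in ascending order.
    struct SortedList {
        std::vector<int> values;   // still public at this stage
    };

    void insertSorted(SortedList& s, int x) {
        auto it = std::lower_bound(s.values.begin(), s.values.end(), x);
        s.values.insert(it, x);
    }

    // Stage 2 (later, once insertSorted covers every usage site): make
    // 'values' private so the insertion function is the only mutation path,
    // and document the invariant next to it.

    int main() {
        SortedList s;
        insertSorted(s, 3);
        insertSorted(s, 1);
        insertSorted(s, 2);
        assert((s.values == std::vector<int>{1, 2, 3}));
        return 0;
    }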
An "extreme programming" version of this: do not abstract until you have test cases that break your program. If you think the invariant can be breached, write the case that breaks it first.
Here's a similar question: https://stackoverflow.com/questions/1992279/abstraction-in-todays-languages-excited-or-sad.
I agree with @Steve Emmerson - 'Coders at Work' would give you some excellent perspective on this issue.
