Has anyone tried to break a bit even smaller? [closed]

I was reading a book to learn more about ASM, and the author happened to comment on bits; the exact quote is:
A bit is the indivisible atom of information. There is no half-a-bit, and no bit-and-a-half. (This has been tried. It works badly. But that didn't stop it from being tried.)
My question is: when has this been tried? What was the outcome? How did it go badly? It's bothering me that Google isn't helping me find any account of a case where someone tried to make half a bit and use(?) it.
Thanks if you can find out when this happened.

Yes. That's what arithmetic coding (a type of compression) is about. It allows information to be stored in fractional bits.
I believe that in the specific example you're talking about, the author was merely being tongue-in-cheek, and not referring to any actual attempt to split bits.
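To make the fractional-bit idea concrete, here is a minimal sketch using plain base-3 packing rather than a real arithmetic coder (the symbol values are arbitrary):

```python
import math

# Ten 3-valued symbols packed into one integer occupy about
# 10 * log2(3) ~= 15.85 -> 16 bits, not the 20 bits of a naive
# 2-bits-per-symbol encoding: each symbol costs ~1.585 bits.
symbols = [2, 2, 1, 0, 1, 2, 0, 1, 1, 2]
packed = 0
for s in symbols:
    packed = packed * 3 + s

print(packed.bit_length())                     # 16
print(math.ceil(len(symbols) * math.log2(3)))  # 16
```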

A bit, as defined by present day computers, is a binary value 0 or 1. That is the 'atom' of information, because in binary logic you cannot represent anything other than that using a single 'bit' - to represent anything else, like 0.5, you need more 'bits'.
However, for multilevel electronics, the 'bit' would have multiple values. If someone makes a computer whose electronics let each storage element take a value between 0 and 9, then you have an element that can store more than just 0/1. Perhaps the author meant this. Attempts to make computers with multi-level bits have failed, 'miserably': electronics has not figured out how to do that in a reliable, cost-effective fashion. E.g. if someone could figure that out, then a single cell taking on values from 0-1023 would hold log2(1024) = 10 bits, so a memory built from such cells would need a tenth as many cells as a binary one (just theoretically, if everything else remained constant).
Though admittedly, at a physical level a bit would still remain a bit. That is one wire going into a chip. That is one gate input. That is one memory cell. If you divide that one wire, input, or cell into two, you get two wires/inputs/cells, NOT half a wire/input/cell. So you get two bits.
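To put a number on that density gain, a quick sketch of the arithmetic above:

```python
import math

# Bits per storage cell grow only logarithmically with the number of
# distinguishable levels per cell:
for levels in (2, 4, 8, 1024):
    print(f"{levels:5d} levels -> {math.log2(levels):2.0f} bits per cell")
# 2 -> 1, 4 -> 2, 8 -> 3, 1024 -> 10
```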

I believe the author is stating a metaphysical fact with humour.
Data is commonly stored using multilevel voltages in magnetic discs and flash memory. However, one can calculate the "optimal" base of a number system to be e = exp(1) ≈ 2.718..., which AFAIK hasn't been "tried", while the ternary (base-3) system is quite common in fast parallel arithmetic algorithms and works better than base-2 in many applications.
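For the curious, that "optimal base" figure comes from radix economy; a quick sketch (N and the candidate bases are my own picks):

```python
import math

# Radix economy: the "cost" of representing values up to N in base b is
# roughly b * log_b(N) = (b / ln b) * ln N, which over real-valued bases
# is minimized at b = e (~37.6 here). Among integers, base 3 narrowly
# beats base 2:
N = 10**6
for b in (2, 3, 4, 10):
    print(b, round(b * math.log(N, b), 1))  # 2->39.9, 3->37.7, 4->39.9, 10->60.0
```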
Also, as omnifarious states, arithmetic/range encoding can be seen as a method of using fractional bits: if there are only three possible messages (e.g. 001, 010, 100), those can be stored in two bits, "leaving one quarter of the space" unused.

Related

Domain-specific languages for solving algebra problems?

Udacity is offering a lockdown deal for nanodegrees. I'm trying to figure out which of their two pricing plans is more worthwhile depending on how long it takes to complete the course. I'll present the math problem in its entirety here, but if you'd like to skip to the software part, feel free to jump down to the bold section below.
You can pay $226/month for three months, all upfront, or pay $399/month for as long as needed starting with the second month.
Algebraically, this looks like the following, with the first deal on the left and the second on the right, and m representing the current time in months. The floor function is used to account for the first month being free, and future months being paid by the month (i.e. 1 month and 1 day is the same price as 2 months):

$3 \cdot 226 \le 399 \cdot \lfloor m \rfloor$
P.S. Please add MathJax to Stack Overflow. Making this and the following LaTeX images was much harder than it needs to be.
We can solve this particular example fairly trivially by multiplying out the left side and then dividing by the $399 on the right:

$678 \le 399 \cdot \lfloor m \rfloor \implies \lfloor m \rfloor \ge \tfrac{678}{399} \approx 1.70$

Finally, since $\lfloor m \rfloor$ is an integer, we can see that this is only true when the unfloored m is past the floor minimum (floor floor? hehe): $\lfloor m \rfloor \ge 2$, i.e. $m \ge 2$.
So if the course takes at least two months to complete, it is worth going with the $226/month deal. Otherwise, one is best off taking the month-by-month. Now this example is small and easy, but this got me thinking about the process here, and there really should be tools that do this for you. I'm sure plenty of industries have need for mechanisms for solving these types of problems with domain-specific languages. Do such languages exist?
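For what it's worth, a general-purpose CAS already gets close; here is a minimal SymPy sketch of the same calculation (the plan model follows the question, but the variable names and modeling choices are my own, and SymPy is a library rather than a true DSL):

```python
import sympy as sp

m = sp.symbols('m', positive=True)  # course length in months
upfront = 3 * 226                   # $226/month for three months, all upfront
monthly = 399 * sp.floor(m)         # $399 per started month, first month free

# Since floor(m) is an integer, the upfront plan wins at the smallest
# integer n with 399*n >= 678:
n_min = sp.ceiling(sp.Rational(upfront, 399))
print(n_min, monthly.subs(m, n_min))  # 2 798 -> upfront ($678) wins from month 2 on
```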

Radio Frequency Triangulation (Positioning) [closed]

I'd like to know if it's possible to somehow triangulate (or otherwise) get a location of a moving object inside a defined area, let's say, 200m x 200m, by using radio waves.
I've been looking at some transceivers and the range shouldn't be impossible (budget doesn't really matter). What would I need? Is there some reading material out there about this?
What I thought about was having a few "antennas" positioned around the defined area, listening for the RF signal from the moving object, and then somehow calculating the distance from the object to each antenna, getting the exact location of the object by combining the data from all antennas.
Is this somehow possible? Anyone care to point me in the right direction?
Thanks a lot guys.
Edit: Forgot to mention that the accuracy wouldn't have to be so precise, maybe ~15cm?
Phased antenna arrays are used for beamforming: sending a signal in a certain direction and estimating direction of arrival.
DOA and several antenna arrays can be used for localization, which is what you are looking for. This source explains that 2D localization can be performed with 3 receivers using only the TDOA information.
I'm not sure if it's practical or applicable to the problem you want to solve, just an avenue for investigation.
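If you want to play with the TDOA idea numerically, here is a small synthetic sketch (made-up receiver positions, noise-free measurements, and a reasonable initial guess, assuming NumPy/SciPy):

```python
import numpy as np
from scipy.optimize import least_squares

# Three receivers at known positions; the unknown transmitter is at true_p.
# Each TDOA measurement is the difference in distance to receiver i vs
# receiver 0 (time difference times propagation speed).
rx = np.array([[0.0, 0.0], [200.0, 0.0], [0.0, 200.0]])
true_p = np.array([120.0, 80.0])
d = np.linalg.norm(rx - true_p, axis=1)
tdoa = d[1:] - d[0]                  # what the hardware would report

def residuals(p):
    r = np.linalg.norm(rx - p, axis=1)
    return (r[1:] - r[0]) - tdoa

est = least_squares(residuals, x0=np.array([100.0, 100.0])).x
print(est)                           # ~ [120.  80.]
```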
Well, a similar question was posted at this link; you can definitely give it a shot: https://electronics.stackexchange.com/questions/8690/signal-triangulation
Personally I think, if your target area really is within 200m x 200m, you can take a look at RFID-based solutions. Passive RFID systems utilize something called a Received Signal Strength Indicator (RSSI) to determine how close an object is to an RFID reader. RSSI can't tell you the exact range, but you can surely find out whether the object is getting nearer or farther (a rough sketch of the distance estimate follows below). I have seen RFID systems used to identify the loading of trucks in an area roughly the same size as your requirement.
The only caution is that if you are using multiple tags on an object for the directivity of the target, RFID won't be so accurate, as the RSSI levels from different tags won't give a conclusive result.
A phased array system is highly accurate, but it's a tad costly to implement.
You can find some reference documents in this article. It has good collection of RF ranging and Direction Finding manuals.
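To make the RSSI idea concrete, here is a log-distance path-loss sketch; the reference power and path-loss exponent below are assumptions that a real deployment would have to calibrate:

```python
def rssi_to_distance(rssi_dbm: float,
                     rssi0_dbm: float = -40.0,  # assumed reading at d0 = 1 m
                     n: float = 2.0) -> float:  # assumed path-loss exponent
    """Log-distance path loss: RSSI drops ~10*n*log10(d) dB from rssi0."""
    return 10 ** ((rssi0_dbm - rssi_dbm) / (10 * n))

print(rssi_to_distance(-60.0))  # 10.0 -> roughly 10 m with these constants
```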
There are quite a lot of academic papers out there on this, e.g. a scholar search, and products, e.g. ekahau. The simplest to have a go with is probably trilateration with hardware that reports an RSSI, which you use to infer distance (sketched below). Time difference of signal arrival is a step up in both accuracy and trickiness.
A lot of these techniques are quite sensitive to the environment: 15cm accuracy in open space, with sufficient receivers, is doable. If you add walls, furniture and people it gets harder. Then you need to survey the site for what a beacon in that location looks like; add in the variation depending on where a user with the device is (big bags of water block radio); and then interpolate between places.
You have an arduino tag on your question. I'm not sure what the significance of this is, but as above - do check what data you can get from your hardware.
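And here is one way the trilateration step can look once you have distance estimates, as a linear least-squares sketch (anchor positions and the target are made up, assuming NumPy):

```python
import numpy as np

# Subtracting the first range equation ||x - a_0||^2 = d_0^2 from the
# others cancels the quadratic term and leaves a linear system A x = b.
anchors = np.array([[0.0, 0.0], [200.0, 0.0], [0.0, 200.0], [200.0, 200.0]])
true_p = np.array([60.0, 150.0])
d = np.linalg.norm(anchors - true_p, axis=1)   # pretend these came from RSSI

A = 2 * (anchors[1:] - anchors[0])
b = (d[0]**2 - d[1:]**2
     + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(est)                                     # ~ [ 60. 150.]
```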
I am not sure about RF and antennas, BUT, with multiple cameras (whose relative positions are known) looking at the same object, this can be achieved using structure from motion.

What is the address space of a process? [closed]

Can anybody tell me what is meant by an address space?
Why is it called that?
And what about virtual memory?
Thanks in advance
Regards
Pavankumar
I think address space refers to a segment.
In real mode (Intel's XT and 286), a segment is just a way to make a program independent of its place in memory. When a program gets compiled, the addresses (of variables, labels, functions) are hardcoded into the program. This way it would be difficult to load two programs at the same time, because they would all want to use the same addresses.
We need to use relative addresses instead of absolute ones. The resolution between relative and physical addresses is done relative to segments. If one program is loaded starting from segment 0x200 and another program is loaded starting from 0x600, they can freely use the same address (for example 0x41) because it will be relative to their respective segments. In our case (real mode), the segment 0x200 will be translated to the physical address 0x2000 (by multiplying it by 0x10), and after adding the relative address, the resulting physical address will be 0x2041.
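The translation rule is simple enough to sketch in a couple of lines (illustration only):

```python
def real_mode_physical(segment: int, offset: int) -> int:
    """Real-mode translation: physical = segment * 0x10 + offset."""
    return (segment << 4) + offset

# Two programs can both use the relative address 0x41 without clashing:
print(hex(real_mode_physical(0x200, 0x41)))  # 0x2041
print(hex(real_mode_physical(0x600, 0x41)))  # 0x6041
```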
There are many segments which can be used. Data operations by default are made relative to the program's Data Segment (held in the DS register of the CPU) and code operations are made relative to the Code Segment (held in the CS register). Stack addresses are resolved to physical addresses using the Stack Segment (SS register).
But in real mode you can freely use the segments, you can access other program's segments or enter arbitrary values which will be resolved to arbitrary physical addresses.
In protected mode the whole concept changed. Segments do not hold addresses any more. They hold selectors. They only refer to an element in a table, where the real base addresses are held. The table also contains limits, so you can no longer address ANY physical address, only inside the portion of memory which was given to your program by the OS. This introduces the concept of ownership of memory blocks by processes.
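A toy model of that lookup, with entirely made-up table entries, might look like this:

```python
# A selector no longer holds an address; it indexes a descriptor table
# whose entries hold the real base address and a limit.
descriptor_table = {
    0x08: {"base": 0x0010_0000, "limit": 0xFFFF},  # hypothetical entries
    0x10: {"base": 0x0020_0000, "limit": 0x7FFF},
}

def translate(selector: int, offset: int) -> int:
    entry = descriptor_table[selector]
    if offset > entry["limit"]:
        raise MemoryError("#GP: offset beyond the segment limit")
    return entry["base"] + offset

print(hex(translate(0x08, 0x41)))  # 0x100041
# translate(0x08, 0x10000) would raise: the process cannot leave its block.
```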
I think this is enough for a start; feel free to read more on Wikipedia or other good sources. The topic is well documented.

What's a good approach to sounding convincing when talking about software design/engineering? [closed]

I've had a few instances where I've been asked about my approach to software design both in job interviews and in less formal settings. Lots of buzz words invariably come up: waterfall model, agile development, design patterns, UML, test-driven development, requirements documents, user acceptance testing etc. etc. ad infinitum.
My answer is invariably that the best approach depends on the project at hand. Using the waterfall model with a design specification document employing UML diagrams for a 3 page brochureware site is probably overkill. Likewise jumping straight to hammering out code on a life support control system is not a good idea.
In a short space of time the questioner will start to get this suspicious look in their eye as they start thinking "he won't give a straight answer because he doesn't understand the concepts, must be a cowboy". I've found it's better to stick to just talking about "formal" software engineering processes along the lines of: always use the waterfall model (call it SLC for extra points), gather a 50 page requirements document, turn it into a 100 page specification document with heavy use of UML and design patterns, hammer away at code for 6 months...
So my quandry is what approach should I use to sound convincing? Talk about my experiences on different projects or regurgitate Sommerville?
Be confident and do not fart.
"How to sound convincing?" isn't really a software question, and just because you've added "when talking about software" doesn't make it so. You could make the topic anything and the answer is essentially the same.
You didn't ask, "When is a waterfall approach preferred to an agile approach?" or anything specifically related to software. (Although I'm sure that question has been asked previously).
This question should be closed. But I'll answer anyway, because I think it's interesting.
First, you don't want to "sound convincing." You want to BE convincing.
The best way to be convincing is to be confident. A confident person is persuasive.
Confidence is communicated in a number of ways, many of them non-verbal. Confidence or a lack of confidence is silently inferred by observers based on the speaker's:
Eye contact. This is #1. Lack of eye contact, looking around the room when speaking = bad. Steady eye contact (without a Charlie Manson "LOCK ON" effect) = good.
Physical comportment. Neat clothing, sitting up straight, hands calm = good. Slouching, messy appearance = bad.
Body language. Turned away, folded arms = bad, defensive. Directly facing your partner, arms open and relaxed = good, non-threatened and non-threatening.
Tone, volume, and rate of speech. Calm and measured, with good forceful volume = good. Rushed, staccato = not as good. Too loud = not good. Too quiet = not good.
Appropriate formality. "Dude, this one project I was on was sooo friggin' radical." = bad. "I can speak from my professional experience..." = good.
Empathic attractiveness. If they like you, they will believe you. That means using the person's name immediately after learning it; "Hello, John" is better than "Hello". Let them see your hands. Remain positive and constructive.
Willingness to engage. A direct response = good. Diversionary tactics = bad. No one diverts on purpose, but you will do so reflexively if you lack confidence in yourself around a particular topic.
All this comes from practice. People judge you, implicitly and instinctively, within 7 seconds of first meeting you. Therefore it's important to smile, deliver a firm handshake (but not too firm), introduce yourself in a professional manner, and exchange pleasantries in the very first moment you meet a potential client, employer, or boss.
The last thing that is super important to confidence is competence. You must be comfortable with your own competence. You must have a solid belief in your own competence in order to project that to others. If you are in doubt of your own abilities, that will be communicated, in some way, no matter how hard you try not to do so.
If you lack confidence in this or that particular area of questioning, read up on it, discuss it, understand it better. And then, you will gain confidence.
It is vital to establish from the start that you know what you're talking about. So you need to open with "I have worked on small projects and big projects, I have used waterfall and agile approaches, and " - the killer phrase - " in my professional opinion it is important to suit the weight of the methodology to the scale of the project in question." Which gives you a springboard for your "on the one hand, on the other hand" routine.
The other thing to make clear is that the choice of a specific methodology is less important than choosing one and sticking with it. Also that in waterfall, just as much as in agile, people, delivering code, collaboration and responding to change are the keys to success.
That "suspicious look in their eye" is either because:
they don't know what they are talking about
you don't know what you are talking about
We don't know which one it is, so it is hard to give advice. Make sure that:
you don't talk around in circles and end up confusing everyone
you don't talk too much (this will lose you an interview real quick)
explain the concepts in a simple straight forward way
try to use actual examples that you've worked with when talking about these concepts. Talking about a whole range of projects like life support systems and brochure web sites would show that your knowledge is purely academic.
don't try to know everything, even if you do.

What techniques can you use to encode data on a lossy one-way channel?

Imagine you have a channel of communication that is inherently lossy and one-way. That is, there is some inherent noise that is impossible to remove that causes, say, random bits to be toggled. Also imagine that it is one way - you cannot request retransmission.
But you need to send data over it regardless. What techniques can you use to send numbers and text over that channel?
Is it possible to encode numbers so that even with random bit twiddling they can still be interpreted as values close to the original (lossy transmission)?
Is there a way to send a string of characters (ASCII, say) in a lossless fashion?
This is just for fun. I know you can use Morse code or any very low-frequency binary communication. I know about parity bits and checksums to detect errors and retrying. I know that you might as well use an analog signal. I'm just curious whether there are any interesting computer-sciency techniques to send this stuff over a lossy channel.
Depending on some details that you don't supply about your lossy channel, I would recommend first using a Gray code to ensure that single-bit errors result in small differences (to cover your desire for loss mitigation in lossy transmission), and then possibly also encoding the resulting stream with some "lossless" (== tries to be loss-less ;-) encoding.
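A minimal sketch of the standard binary-reflected Gray code (nothing here is specific to any particular channel):

```python
def to_gray(n: int) -> int:
    """Binary-reflected Gray code: consecutive integers differ in one bit."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

print([to_gray(i) for i in range(8)])  # [0, 1, 3, 2, 6, 7, 5, 4]
assert all(from_gray(to_gray(i)) == i for i in range(256))
```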
Reed-Solomon and variants thereof are particularly good if your noise episodes are prone to occur in small bursts (several bit mistakes within, say, a single byte), which should interoperate well with Gray coding (since multi-bit mistakes are the killers for the "loss mitigation" aspect of Gray, designed to degrade gracefully for single-bit errors on the wire). That's because R-S is intrinsically a block scheme, and multiple errors within one block are basically the same as a single error in it, from R-S's point of view;-).
R-S is particularly awesome if many of the errors are erasures -- to put it simply, an erasure is a symbol that has most probably been mangled in transmission, BUT for which you DO know the crucial fact that it HAS been mangled. The physical layer, depending on how it's designed, can often have hints about that fact, and if there's a way for it to inform the higher layers, that can be of crucial help. Let me explain erasures a bit...:
Say for a simplified example that a 0 is sent as a level of -1 volt and a 1 is sent as a level of +1 volt (wrt some reference wave), but there's noise (physical noise can often be well-modeled, ask any competent communication engineer;-); depending on the noise model the decoding might be that anything -0.7 V and down is considered a 0 bit, anything +0.7 V and up is considered a 1 bit, anything in-between is considered an erasure, i.e., the higher layer is told that the bit in question was probably mangled in transmission and should therefore be disregarded. (I sometimes give this as one example of my thesis that sometimes abstractions SHOULD "leak" -- in a controlled and architected way: the Martelli corollary to Spolsky's Law of Leaky Abstractions!-).
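In code, that decision rule might look like the following sketch; the ±0.7 V thresholds are just the example numbers from the paragraph above:

```python
def decode_sample(voltage: float) -> str:
    """Map a received level to '0', '1', or an erasure flag that tells
    the higher layer to disregard the symbol."""
    if voltage <= -0.7:
        return "0"
    if voltage >= 0.7:
        return "1"
    return "erasure"  # probably mangled in transmission

print([decode_sample(v) for v in (-0.9, 0.1, 0.8)])  # ['0', 'erasure', '1']
```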
A R-S code with any given redundancy ratio can be about twice as effective at correcting erasures (errors the decoder is told about) as it can be at correcting otherwise-unknown errors -- it's also possible to mix both aspects, correcting both some erasures AND some otherwise-unknown errors.
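That trade-off can be stated directly: an R-S code with r parity symbols can correct e unknown errors and f erasures whenever 2e + f <= r. A sketch (the helper is mine, not a real codec):

```python
def rs_can_correct(parity: int, errors: int, erasures: int) -> bool:
    """Erasures cost half as much as unknown errors: 2e + f <= r."""
    return 2 * errors + erasures <= parity

print(rs_can_correct(parity=16, errors=8, erasures=0))   # True:  8 unknown errors
print(rs_can_correct(parity=16, errors=0, erasures=16))  # True:  16 erasures
print(rs_can_correct(parity=16, errors=5, erasures=6))   # True:  a mix
print(rs_can_correct(parity=16, errors=9, erasures=0))   # False: too many
```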
As the cherry on top, custom R-S codes can be (reasonably easily) designed and tailored to reduce the probability of uncorrected errors to below any required threshold θ given a precise model of the physical channel's characteristics in terms of both erasures and undetected errors (including both probability and burstiness).
I wouldn't call this whole area a "computer-sciency" one, actually: back when I graduated (MSEE, 30 years ago), I was mostly trying to avoid "CS" stuff in favor of chip design, system design, advanced radio systems, &c -- yet I was taught this stuff (well, the subset that was already within the realm of practical engineering use;-) pretty well.
And, just to confirm that things haven't changed all that much in one generation: my daughter just got her MS in telecom engineering (strictly focusing on advanced radio systems) -- she can't design just about any serious program, algorithm, or data structure (though she did just fine in the mandatory courses on C and Java, there was absolutely no CS depth in those courses, nor elsewhere in her curriculum -- her daily working language is matlab...!-) -- yet she knows more about information and coding theory than I ever learned, and that's before any PhD level study (she's staying for her PhD, but that hasn't yet begun).
So, I claim these fields are more EE-y than CS-y (though of course the boundaries are ever fuzzy -- witness the fact that after a few years designing chips I ended up as a SW guy more or less by accident, and so did a lot of my contemporaries;-).
This question is the subject of coding theory.
Probably one of the better-known methods is to use Hamming code. It might not be the best way of correcting errors on large scales, but it's incredibly simple to understand.
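Here is a from-scratch Hamming(7,4) sketch showing the single-error correction (the bit layout follows the textbook convention; the test values are mine):

```python
def hamming74_encode(d: list[int]) -> list[int]:
    """Encode 4 data bits into a 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(c: list[int]) -> list[int]:
    """Locate and flip a single flipped bit using the parity syndrome."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4  # 1-based position of the error, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                         # simulate one toggled bit on the channel
assert hamming74_correct(word) == hamming74_encode([1, 0, 1, 1])
```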
There is the redundant encoding used in optical media that can recover bit-loss.
ECC is also used in hard disks and RAM.
The TCP protocol can handle quite a lot of data loss with retransmissions.
Either turbo codes or low-density parity-check codes for general data, because these come closest to approaching the Shannon limit - see Wikipedia.
You can use Reed-Solomon codes.
See also the Sliding Window Protocol (which is used by TCP).
Although this includes dealing with packets being re-ordered or lost altogether, which was not part of your problem definition.
As Alex Martelli says, there's lots of coding theory in the world, but Reed-Solomon codes are definitely a sweet spot. If you actually want to build something, Jim Plank has written a nice tutorial on Reed-Solomon coding. Plank has a professional interest in coding with a lot of practical expertise to back it up.
I would go for some of these suggestions, followed by sending the same data multiple times. That way you can hope for different errors to be introduced at different points in the stream, and you may be able to infer the desired number far more easily.
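For instance, with three repetitions a bitwise majority vote recovers every bit that survives in at least two of the copies; a minimal sketch:

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority over three received copies: each bit takes the
    value seen in at least two of the transmissions."""
    return (a & b) | (a & c) | (b & c)

# Three noisy copies of 0b10110010, each with a different flipped bit:
print(bin(majority_vote(0b10110011, 0b10100010, 0b10110010)))  # 0b10110010
```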
