How is my simple latch diagram wrong, and why are flip-flop latches correct?

I'm having a hard time understanding how flip-flops actually flip states, and I'm wondering why such a design is so commonly used when, in my current opinion, a simpler design could suffice.
I'm hoping that after I show you my version of a latch diagram, someone can point out its flaws, which may help me understand why a flip-flop latch is better.
I was reading a book and bumped into some "general" form of latch:
https://i.imgur.com/nkldf4u.png (sorry, I don't have the reputation to insert images)
I've been at it for about 2 hours trying to truly grasp the mechanism. Seeing that I can't, I've drawn my own version of a latch:
https://i.imgur.com/fFgpNzR.png
The blue diagram, the one from the book, is harder to follow because some gates will switch twice when an input switches once: since a gate's output is fed back as an input to the same gate, the output may change based on its own previous value.
My version of the diagram, the one in black, uses a more programmatic approach. I take the current state C and decide whether it differs from the input state, outputting the result as A. I feed A into an AND gate with the enable wire to decide whether both criteria are met, producing B. Finally, I use a XOR to change the state and output it as C.
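To make the behavior concrete, here is that single update step from my diagram written out in Haskell (just a sketch of my own wiring; (/=) on Bool acts as XOR):

-- D is the data input, E the enable wire, C the currently stored state;
-- the result is the next stored state
step :: Bool -> Bool -> Bool -> Bool
step d e c = c /= b       -- the final XOR flips the state when B is set
  where
    a = c /= d            -- XOR: does the stored state differ from the input?
    b = a && e            -- AND with the enable wire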
I'm hoping someone can tell me why this is bad, what I haven't taken into consideration, or why a more complex mechanism is needed.
Thank you in advance.

As far as I can tell, your latch implementation should work.
However, there is more to low-level digital design than just gate count. In real circuits, not all gates are created equal: the actual implementation of these gates can make some more "costly" than others (usually measured in area/transistor count and routing complexity). In typical CMOS implementations, NAND gates are really cheap (only 4 transistors for a two-input NAND), so a lot of primitives use NAND (or NOR) as a building block for more complex designs. XOR is generally a more complicated gate to implement; most CMOS implementations I've seen use 8 transistors. Without going through and optimizing your design, it might take at least 20 transistors to implement, while the latch design from the textbook only takes 16 - a 20%+ savings in area per bit stored, which is quite significant.
There is a lot more at play here than just transistor count as well: things like transistor sizing, routing and trace sizing, power considerations, and glitch protection all come up when actually implementing a design, so even this simple analysis is incomplete and might be missing reasons for the textbook implementation vs. yours (or vice versa).
Asynchronous sequential logic (which is what latch/flip-flop implementations are) can be difficult to understand, which is why most circuits use higher-level constructs and treat these details as black boxes. That also creates a nice abstraction where the actual implementation doesn't matter, so long as the properties of the element are preserved.

Related

Why do I have more flash (STM32F103RCT6) than what my datasheet says?

I'm writing firmware for an STM32F103RCT6 microcontroller, which has 256 KB of flash according to the datasheet.
Because of a mistake of mine, I was writing some data at 0x0807F800, which according to the reference manual is the last page of a high-density device. (The reference manual makes no distinction between different sizes of 'high-density devices' in the memory layout.)
The data that I wrote was being read back with no errors, so I did some tests: I wrote 512 KB of random data, read it back, compared the files, and they matched!
(screenshot of the matching file hashes)
I did some research but couldn't find any similar experiences.
Is that extra flash reliable? Is this some kind of industry maneuver?
I would not recommend using this extra flash memory for anything that matters.
It is not guaranteed to be present on other chips with the same part number. If used in a product, that would be a major problem. Even if a sample is successful now, the manufacturer could change the design or processes in the future and take it away.
While it might be perfectly fine on your chip, it could also be prone to corruption if there are weak memory cells.
A common practice in the semiconductor industry is to have several parts that share a common die design. After manufacturing, the dies are tested and sorted. A die might have a defect in a peripheral, so is used as a part that doesn't have that peripheral. Alternatively, it might be perfectly good, but used as a lesser part for business reasons (i.e., supply and demand).
Often, the unused features are disabled by cutting traces, burning fuses, or special programming at the factory, but it's possible extra features might be left intact if they have no negative effects and are unlikely to be observed.
If this is only for one-off use or experimentation, and corruption is an acceptable condition, I don't really see a harm in using it.

Specification for a Functional Reactive Programming language

I am looking at messing around with creating a functional reactive framework at some point. I have read quite a lot about it and seen a few examples, but I wanted to get a clear idea of what this framework would HAVE to do to be considered an FRP extension/DSL. I'm not really concerned with implementation problems or specifics, but more with what would be desired in a perfect-world situation.
What would be the key operations and qualities of an ideal functional reactive programming language?
I'm glad you're starting by asking about a specification rather than implementation first.
There are a lot of ideas floating around about what FRP is.
From the very start in the early 90's (when I was working in interactive graphics at Sun Microsystems and then Microsoft Research), it has been about two properties (a) denotative and (b) temporally continuous.
Many folks drop both of these properties and identify FRP with various implementation notions, all of which are beside the point in my perspective.
To reduce confusion, I would like to see the term "functional reactive programming" replaced by the more accurate & descriptive "denotative, continuous-time programming" (DCTP), as suggested by Jake McArthur in a conversation last year.
By "denotative", I mean founded on a precise, simple, implementation-independent, compositional semantics that exactly specifies the meaning of each type and building block.
The compositional nature of the semantics then determines the meaning of all type-correct combinations of the building blocks.
For me, denotative is the heart & essence of functional programming, and is what enables precise & tractable reasoning and thus a foundation for correctness, derivation, and optimization.
Peter Landin recommended "denotative" as a substantive replacement to the fuzzier term "functional" and a way to distinguish deeply/genuinely functional programming from merely functional-looking notations.
See this comment for some Landin quotes and a paper reference.
About continuous time, see the post Why program with continuous time? and my quote in AshleyF's answer on this page.
I'm surprised over & over by hearing the claim that the idea of continuous time is somehow unnatural or impossible to implement, considering the discrete nature of computers.
This line of thinking strikes me as bizarre, especially when coming from Haskellers, for a few reasons:
Using lazy functional languages, we casually program with infinite data on finite machines. We get lovely modularity as a result, as illustrated in John Hughes's classic paper Why Functional Programming Matters.
There are many examples of programming in continuous space, for instance, vector graphics, but also things like Pan.
I like my programs to reflect how I think about the problem space rather than the machine that executes the programs, and I tend to expect other high-level language programmers to share that preference.
("A programming language is low level when its programs require attention to the irrelevant." - Alan Perlis)
I've been making libraries for programming with continuous time since TBAG and ActiveVRML (the first DCTP/FRP system) and later Fran.
It's easy to implement correctly.
A few different approaches are described in the paper Functional Implementations of Continuous Modeled Animation.
Implementing continuous time efficiently (and still correctly!) is another matter, especially avoidance of recomputing unchanging values.
(See the paper Push-pull functional reactive programming.)
For related remarks, please see my answer to The difference between Reactive and Functional-Reactive programming and to What is (functional) reactive programming? Update: For more on why continuous time matters, see these notes. Update: See also, my 2015 talk The essence and origins of FRP (and the related talks linked there).
Good luck with your exploration, and please let me know if you have any questions.
My contact info is on my home page.
I assume you've probably seen Matthias Felleisen’s talk on Functional I/O and read his paper. I think his is a very pragmatic and beautiful approach. Hopefully you've also stumbled onto some of Conal Elliott's excellent work.
My personal requirements would be that the system is completely pure. That is, all behavior is defined by pure world->world functions, and all realization or visualization is defined by world->visual functions, where visual is some static description of the output from the system.
My other primary feature would be a historical debugger. It should be relatively trivial to maintain a history of world states and be able to replay from any point in time.
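Because the step from one world to the next is pure, keeping a history is trivial. A minimal sketch of the idea in Haskell (my own names; it assumes a per-tick step function):

-- every intermediate world, oldest first; replaying is just re-rendering a prefix
history :: (input -> world -> world) -> world -> [input] -> [world]
history step w0 inputs = scanl (flip step) w0 inputs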
One area of extremely interesting research (I believe an unsolved problem) would be to use continuous time rather than iterating the world->world functions upon some discrete clock ticks. I once did a few blog posts on FRP and Conal Elliott left the following thought provoking comment:
I like denotative/functional approaches, for composability & semantic clarity. For the same reasons, I prefer continuous time & space over discrete time & space. In all of these cases, the less machine-like formulation nicely separates the what from the how of its machine-based presentation.
Solve that and you'll be a hero!
Well, unless by a perfect world you mean telepathic computers (yikes!), you'll require some way to process user I/O - I'll assume something like orthogonal persistence has subsumed the more boring file I/O...
Let's start with input...because it already has one solution. From page 4 of 11 in Conal Elliott's and Paul Hudak's pioneering paper Functional Reactive Animation:
lbp, rbp : Time → Event (Event ())
which, in Haskell, would look something like:
-- read left and right mouse button-press events
lbp, rbp :: Time -> Event (Event ())
So for input from the keyboard:
kbd :: Time -> Event Char.
Other inputs can be dealt with in similar fashion.
So...what about output? The actual word doesn't appear anywhere in the paper (neither does "I/O" for that matter) - we'll have to figure this one out ourselves. But this time, it's our Haskell translation:
lbp, rbp :: Time -> Event (Event ())
providing the hint - Event () - the unit-event. That can serve as the result of sending a Char off, to appear somewhere on your screen:
viewChar :: Char -> Time -> Event ()
Again, other outputs can be dealt with using similar techniques.
...what's that - it isn't denotative?
Because viewChar is...what - impure?
If so, that means lbp and rbp are also impure - are you sure about this?
Alright...let's have a type for taking in a series of those mouse button-presses, or other events:
type Intake a = [a]
lbp, rbp :: Intake (Event (Event ()))
Is that any better? Good! Well, sort of - what happens if the mouse is unplugged? That could put parts of a program into a spin waiting for input (and using [] would permanently end the series - no more button presses!).
We need to change Intake:
data Intake a = None (Intake a) | Next a (Intake a)
Now unplugging the mouse results in None … appearing, which a program can detect and react to accordingly, e.g. by yielding its OS thread, suspending itself, etc.
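For instance, a small consumer sketch of my own (a real program would suspend on None rather than spin, as the comment notes):

nextValue :: Intake a -> (a, Intake a)
nextValue (None rest)   = nextValue rest   -- device idle: yield/suspend here in practice
nextValue (Next x rest) = (x, rest)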
So, what about output? Well, output devices can often be unplugged too. Taking a hint from Intake:
data Outlet a = Wait (Outlet a) | Went (… (Outlet a) …)
It's similar to unplugging an input device - upon encountering Wait …, a program can pause transmission.
So what should the type of Went be? Well, an Outlet accepts values incrementally, to allow Wait … to appear if needed - so accepting each value should present us with the rest of the Outlet. Therefore:
data Outlet a = Wait (Outlet a) | Went (a -> Outlet a)
Bringing that altogether:
data Intake a = None (Intake a) | Next a (Intake a)
lbp, rbp :: Intake (Event (Event ()))
data Outlet a = Wait (Outlet a) | Went (a -> Outlet a)
viewChar :: Outlet Char
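And to see Outlet in use, a small driver sketch of my own: it pushes a list of values into an outlet, retrying whenever the device says Wait (a real program would pause instead):

push :: [a] -> Outlet a -> Outlet a
push xs     (Wait out) = push xs out      -- device busy: pause transmission here
push []     out        = out
push (x:xs) (Went k)   = push xs (k x)    -- hand over one value, continue with the rest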
So is all this valid? If you're not sure, see section 20.4.2 (page 86 of 263) of Fudgets - Purely Functional Processes with applications to Graphical User Interfaces by Magnus Carlsson and Thomas Hallgren - if Intake and Outlet look dubious then so is what can be seen there, in the paper...

How to identify that code is over abstracted?

What measures can be used to identify that code is over-abstracted and very hard to understand, and what can be done to reduce over-abstraction?
"Simplicity over complexity, complexity over complicatedness"
So - there's a benefit to abstracting something only if you are "de-leveling" complicatedness to complexity. The reasons to do so can vary: better modularity, better encapsulation, etc.
Identifying over-abstraction is a chicken-and-egg problem. In order to reduce over-abstraction, you need to understand the actual reason behind the lines of code. That includes understanding the idea of the particular abstraction itself (as opposed to calling it over-abstracted out of a lack of understanding). And that's not enough - you need to know a better, simpler solution in order to prove that it's over-abstracted.
If you are looking for a tool that could do this in your place - look no more; only a mind can reliably judge that.
I will give an answer that will get a LOT of down votes!
If the code is written in an OO language, it is necessarily heavily over-abstracted. The purer the language, the worse the problem.
Abstraction should be used with great caution. If in doubt, always use concrete data structures. (You can always abstract later; that's easier than de-abstraction. :)
You must be very certain you have the right abstraction in your current context, and you must be very sure that concept will stand the test of change. Abstraction has a high price in performance of both the code and the coder.
Some weak tests for over-abstraction: if the data structure is a product type (a struct in C) and the programmer has written get and set methods for each field, they have utterly failed to provide any real abstraction, disabled operators like C's increment for no purpose, and simply not understood that the struct field names are already the abstract representation of a product. Duplicating and laming up the interface is not a good idea.
A good test for the product case is whether there exist any data invariants to maintain. For example, a pair of integers representing a rational number is almost sufficient; there's little need for any abstraction, because all pairs are valid except when the denominator is zero. However, for performance reasons one may choose to maintain an invariant: typically, the denominator is required to be greater than zero, and the numerator and denominator to be relatively prime. To ensure the invariant, the product representation is encapsulated: the initial value is protected by a constructor, and the methods are constrained to maintain the invariant.
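A sketch of that encapsulation in Haskell (names mine; in a real module only mkRat, not the Rat constructor, would be exported):

data Rat = Rat { num :: Integer, den :: Integer } deriving Show

-- smart constructor: rejects a zero denominator and normalizes so the
-- denominator is positive and the fraction is fully reduced
mkRat :: Integer -> Integer -> Maybe Rat
mkRat _ 0 = Nothing
mkRat n d = Just (Rat (signum d * n `div` g) (abs d `div` g))
  where g = gcd n d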
To fix code I recommend these steps:
Document the representation invariants the abstraction is maintaining
Remove the abstraction (methods) if you can't find strong invariants
Rewrite the code that used those methods to access the data directly.
This procedure only works for low level abstraction, i.e. abstraction of small values by classes.
Over-abstraction at a higher level is much harder to deal with. Ideally you'd refactor the code repeatedly, checking after each step that it continues to work. However, this will be hard, and sometimes a major rewrite is required rather than a refinement. It's probably not worth it unless the abstraction is so far off base that it is not tenable to continue maintaining it.
Download Magento and have a look at the code, read some documents on it and have a look at their ERD: http://www.magentocommerce.com/wiki/_media/doc/magento---sample_database_diagram.png?cache=cache
I'm not joking - this is over-abstraction. Trying to please everyone and cover every base is a terrible idea, and it makes life extremely difficult for everyone.
Personally I would say that "What is the ideal level of abstraction?" is a subjective question.
I don't like code that uses a new line for every atomic operation, but I also don't like 10 nested operations within one line.
I like the use of recursive functions, but I don't appreciate recursion for the sole sake of recursion.
I like generics, but I don't like (nested) generic functions that e.g. use different code for each specific type that's expected...
It is a matter of personal opinion as well as common sense. Does this answer your question?
I completely agree with what @ArnisLapsa wrote:
"Simplicity over complexity, complexity over complicatedness"
And that
an abstraction is used to "de-level" those, from complicated to complex
(and from complex to simpler)
Also, as stated by @MartinHemmings, a good abstraction is quite subjective, because we don't all think the same way. And in fact our way of thinking changes with time, so something that someone finds simple might look complex to others, and may even become simpler with more experience. E.g., a monadic operation is trivial for a functional programmer but can be seriously confusing for others. Similarly, a design with mutable objects communicating with each other can feel natural to some and untrackable to others.
That being said, I would like to add a couple of indicators. Note that this applies to abstractions used in a code base, not "paradigm abstractions" such as everything-is-a-function or everything-is-designed-as-objects. So:
To the people it concerns, the abstraction should be conceptually simpler than the alternatives, without looking at the implementation. If you find that thinking through all the possible cases is simpler than reasoning with the abstraction, then the abstraction is not suitable (for you).
Its implementation should reason only about the abstraction, not about the specific cases it will be used for. As soon as the implementation has parts made for specific cases, that indicates an "unfit" abstraction. And increasing generalization to cope with each new case is going the wrong way (and tends to lead to the next issue).
A very common indicator of over-abstraction that I have found (and actually fallen for) is an abstraction that represents more than what is needed, now. As much as possible, an abstraction should allow you to do exactly what is required, and nothing more. For example, say you're thinking of, or already have, a "2D point" abstraction for which you can define the operators you need. Then you have another need that could really be a "4D point", similar to the 2D one. Don't reach for an "N-dimensional point" abstraction, especially on the theory that you might need it later. Maybe you'll never have anything other than 2D and 4D (because the rest stays "a good idea" in the backlog forever), but instead some requirement pops up to convert 4D points into pairs of 2D points - and that's going to be hard to generalize to N dimensions. So each abstraction can be checked to see that it covers, and only covers, the actual needs. In my point example, the "N-dimensional" complexity is only ever exercised by the 2D and 4D cases (and 4D might not even be used that much).
Finally, from a more global point of view, a code base that contains many unrelated abstractions is an indicator that the dev team tends to abstract every little issue, so many of those abstractions probably are, or have become, over-abstractions.

Immutable game object, basic functional programming question

I'm in the process of trying to 'learn more of' and 'learn lessons from' functional programming and the idea of immutability being good for concurrency, etc.
As a thought exercise, I imagined a simple game where a Mario-esque character can run and jump around, with enemies that shoot at him...
Then I tried to imagine this being written functionally using immutable objects.
This raised some questions that puzzled me (being an imperative OO programmer).
1) If my little guy at position x10,y100 moves right 1 unit do I just re-instantiate him using his old values with a +1 to his x position (e.g x11,y100)?
2) (If my first assumption is correct)
If my input thread moves the little guy right 1 unit, and my enemy AI thread shoots the little guy, and the enemy-AI thread resolves before the input thread, then my guy will lose health and then, upon the input thread resolving, gain it back and move right...
Does this mean I can't fire-&-forget my threads even with immutability?
Do I need to send my threads off to do their thing, and then new() up the little guy synchronously once I have the results of both threaded operations? Or is there a simple 'functional' solution?
This is a slightly different threading problem than I face on a day to day basis.
Usually I have to decide whether I care about what order threads resolve in. In the above case I technically don't care whether he takes damage or moves first, but I do care if race conditions during instantiation cause one thread's data to be totally lost.
3) (Again, if my first assumption is correct) Does constantly instantiating new instances of an object (e.g. the Mario guy) have a horrible overhead that makes it a very serious/important design decision?
EDIT
Sorry for this additional edit; I wasn't sure what good practice is here regarding follow-up questions...
4) If immutability is something I should strive for, even jumping through hoops to instantiate new versions of objects that have changed... and if I instantiate my guy every time he moves (only with a different position), don't I have exactly the same problems as I would if he were mutable? In as much as something that referenced him at one point in time is actually looking at old values?.. The more I dig into this, the more my head spins, as generating new versions of the same thing with differing values just seems like mutability via hack. :¬?
I guess my question is: how should this work, and how is it beneficial over just mutating his position?
for(ever) // simplified game-loop update or "tick" method
{
    if (Keyboard.IsDown(Key.Right))
        guy = new Guy(guy) { Location = new Point(guy.Location.X + 1, guy.Location.Y) };
}
Also confusing is: the above code means that the guy variable is mutable! (even if the object's properties are not)
4.5) Is that at all possible with a totally immutable guy?
Thanks,
J.
A couple comments on your points:
1) Yes, maybe. To reduce overhead, a practical design will probably end up sharing a lot of state between these instances. For example, perhaps your little guy has an "Equipment" structure which is also immutable. The new copy and the old copy can reference the same "Equipment" structure safely, since it's immutable; you only have to copy a reference, not the whole thing. This is a common advantage you only get thanks to immutability - if "Equipment" were mutable, you couldn't share the reference, since if it changed, your "old" version would change too.
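A sketch of that sharing in Haskell (types are mine): the record update below builds a new Guy whose gear field points at the very same Equipment value, which is safe precisely because Equipment can never change.

data Equipment = Equipment { items :: [String] } deriving Show
data Guy = Guy { pos :: (Int, Int), gear :: Equipment } deriving Show

-- only the position is fresh; the Equipment is shared, not copied
moveRight :: Guy -> Guy
moveRight g = g { pos = (fst (pos g) + 1, snd (pos g)) }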
2) In a game, the most practical solution to this issue would probably be to have a global "clock" and have this sort of processing happen once, at a clock tick. Note that your exact scenario would still be a problem if you didn't write it in a functional style: Suppose H0 is the health at time T. If you passed H0 to a function which made a decision about health at time T, you took damage at time T+1, and then the function returned at time T+5, it might have made the wrong decision based on your current health.
3) In a language that encourages functional programming, object instantiation is often made as cheap as possible. I know that on the JVM, creating small objects on the heap is so fast that it's rarely a performance consideration in any practical situation at all, and in C# I've never encountered a situation where it was a concern either.
If my little guy at position x10,y100 moves right 1 unit do I just re-instantiate him using his old values with a +1 to his x position (e.g. x11,y100)?
Well, not necessarily. You could instantiate the guy once and change his position during play. You can model this with agents: the guy is an agent, so is the AI, so is the render thread, and so is the user.
When the AI shoots the guy, it sends him a message; when the user presses an arrow key, that sends another message; and so on.
let rec guyAgent (guy, position, health) =
    let messages = receiveMessages()
    let (newPosition, newHealth) = process(messages)
    sendMessage(renderer, (guy, newPosition, newHealth))
    guyAgent (guy, newPosition, newHealth)
"Everything" is immutable now (actually, under the hood the agent's dipatch queue does have some mutable state probably).
If immutability is something I should strive for and even jump through hoops of instantiating new versions of objects that have changed... And if I instantiate my guy every time he moves (only with a different position) don't I have exactly the same problems as I would if he was mutable?
Well, yes. Looping with mutable values and recursing with immutable ones is equivalent.
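For instance, the loop from the question, sketched as recursion over an immutable value (toy types of my own standing in for the real input API):

data Guy = Guy { x :: Int, y :: Int } deriving Show
data Input = RightKey | NoKey deriving Eq

-- hypothetical input source, standing in for Keyboard.IsDown
readInput :: IO Input
readInput = return NoKey

gameLoop :: Guy -> IO ()
gameLoop guy = do
  i <- readInput
  let guy' = if i == RightKey then guy { x = x guy + 1 } else guy
  gameLoop guy'   -- passing the new value to the next call replaces mutation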
Edit:
For agents, the wiki is always helpful.
Luca Bolognese has an F# implementation of agents.
This book (called by some The Intelligent Agent Book), though targeting the AI applications (instead of having a SW engineering point of view) is excellent.
If everything in the global system state, outside the current stack frame, is immutable, then unless one gives another thread a reference to something on the stack (VERY DANGEROUS), there won't be any way for threads to affect each other. You could fire and forget, or simply not bother firing in the first place, and the effect would be the same.
Assuming there are some parts of the global state that are mutable, one useful pattern is:
Do
Latch a mutable reference to an immutable object
Generate a new object based upon the latched reference
Loop While CompareExchange fails.
The compare-exchange should update the mutable reference to the new object if it still points to the old one. This avoids the overhead of locking when there is no concurrent access, but it may perform worse than locking if many threads are trying to update the same object and generating a new instance from the latched reference is slow. One advantage of this approach is that there is no danger of deadlock, though in some situations livelock can occur.
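In Haskell, for example, this latch/generate/compare-exchange retry loop is what atomicModifyIORef' performs under the hood in GHC, so the pattern collapses to a one-liner (toy position type mine):

import Data.IORef

-- one shared mutable reference to an immutable (x, y) value
moveRight :: IORef (Int, Int) -> IO ()
moveRight ref = atomicModifyIORef' ref (\(x, y) -> ((x + 1, y), ()))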
Another functional approach to this sort of problem is to take a step back and separate out the idea of state from the idea of your little guy.
Your state will include your little guy's position, as well as the position of your baddie and its shot; then you have some functions that take some or all of the state and do things like generating the next state and drawing the screen.
The timing issues you're talking about when things you want to parallelize depend on each other are real problems that won't magically go away, although the solutions may be more or less convenient in different languages.
Several suggestions have already been made, and there are a variety of concurrency solutions. The central clock and agents would work, as would software transactional memory, mutexes, or CSP (Go-style channels), and probably others. The best approach is going to depend on the specifics of the problem, and to a certain extent on personal taste.
As for the head-spinning, try not to get too caught up in whether a thing is changing or not. The point of immutability is not that things don't change, it's that you can create pure functions so that your program is easier to reason about.
For example, an OO program might have a drawing function that iterates over all the objects in a scene, and asks them all to draw themselves, where a functional program might have a function that takes a state and draws a frame.
The end result would be the same scene, but the way the logic and the state is organised is very different.
I, for one, find it much easier to work on when you have all the data over here, in one big input lump, and all the drawing logic over there, encapsulated in some functions. There are some pretty clear architectural wins too - serialization, testing, and swapping out front ends all get a lot easier with this sort of structure.
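As a minimal sketch of that shape in Haskell (all names mine):

data World = World { guyX :: Int, shotX :: Int } deriving Show
data Input = MoveLeft | MoveRight | Idle

-- pure transition: the whole game state goes in, the next state comes out
step :: Input -> World -> World
step MoveRight w = w { guyX = guyX w + 1 }
step MoveLeft  w = w { guyX = guyX w - 1 }
step Idle      w = w

-- pure rendering: a frame is just a value describing the scene
render :: World -> String
render w = "guy at x=" ++ show (guyX w)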
Not everything in your program should be immutable. A player's position is something you would expect to be mutable. His name, maybe not.
Immutability is good, but you should perhaps rethink your approach and use more deliberate concurrency solutions rather than simply "immutabilizing" everything. Consider this:
Thread AI gets copy of your position
You move three units to the left.
The AI shoots you based on your old position, and hits... which shouldn't happen!
Also, most gaming is done in "game ticks" - there's not much multithreading going on!

What techniques can you use to encode data on a lossy one-way channel?

Imagine you have a channel of communication that is inherently lossy and one-way. That is, there is some inherent noise that is impossible to remove that causes, say, random bits to be toggled. Also imagine that it is one way - you cannot request retransmission.
But you need to send data over it regardless. What techniques can you use to send numbers and text over that channel?
Is it possible to encode numbers so that even with random bit-twiddling they can still be interpreted as values close to the original (lossy transmission)?
Is there a way to send a string of characters (ASCII, say) in a lossless fashion?
This is just for fun. I know you can use Morse code or any very low-frequency binary communication. I know about parity bits and checksums for detecting errors and retrying, and I know that you might as well use an analog signal. I'm just curious whether there are any interesting computer-sciency techniques for sending this stuff over a lossy channel.
Depending on some details that you don't supply about your lossy channel, I would recommend first using a Gray code to ensure that single-bit errors result in small differences (to cover your desire for loss mitigation in lossy transmission), and then possibly also encoding the resulting stream with some "lossless" (== tries to be loss-less;-) encoding.
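A sketch of the standard binary-reflected Gray code in Haskell (names mine): consecutive integers get codewords that differ in exactly one bit.

import Data.Bits (shiftR, xor)

toGray :: Int -> Int
toGray n = n `xor` (n `shiftR` 1)

-- invert by folding the shifted bits back in, highest first
fromGray :: Int -> Int
fromGray g = go g (g `shiftR` 1)
  where
    go n 0 = n
    go n s = go (n `xor` s) (s `shiftR` 1)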
Reed-Solomon and variants thereof are particularly good if your noise episodes are prone to occur in small bursts (several bit mistakes within, say, a single byte), which should interoperate well with Gray coding (since multi-bit mistakes are the killers for the "loss mitigation" aspect of Gray, designed to degrade gracefully for single-bit errors on the wire). That's because R-S is intrinsically a block scheme, and multiple errors within one block are basically the same as a single error in it, from R-S's point of view;-).
R-S is particularly awesome if many of the errors are erasures -- to put it simply, an erasure is a symbol that has most probably been mangled in transmission, BUT for which you DO know the crucial fact that it HAS been mangled. The physical layer, depending on how it's designed, can often have hints about that fact, and if there's a way for it to inform the higher layers, that can be of crucial help. Let me explain erasures a bit...:
Say, for a simplified example, that a 0 is sent as a level of -1 volt and a 1 is sent as a level of +1 volt (wrt some reference wave), but there's noise (physical noise can often be well-modeled; ask any competent communication engineer;-). Depending on the noise model, the decoding might be that anything -0.7 V and below is considered a 0 bit, anything +0.7 V and above is considered a 1 bit, and anything in between is considered an erasure, i.e., the higher layer is told that the bit in question was probably mangled in transmission and should therefore be disregarded. (I sometimes give this as one example of my thesis that sometimes abstractions SHOULD "leak" - in a controlled and architected way: the Martelli corollary to Spolsky's Law of Leaky Abstractions!-)
A R-S code with any given redundancy ratio can be about twice as effective at correcting erasures (errors the decoder is told about) as it can be at correcting otherwise-unknown errors -- it's also possible to mix both aspects, correcting both some erasures AND some otherwise-unknown errors.
As the cherry on top, custom R-S codes can be (reasonably easily) designed and tailored to reduce the probability of uncorrected errors to below any required threshold θ given a precise model of the physical channel's characteristics in terms of both erasures and undetected errors (including both probability and burstiness).
I wouldn't call this whole area a "computer-sciency" one, actually: back when I graduated (MSEE, 30 years ago), I was mostly trying to avoid "CS" stuff in favor of chip design, system design, advanced radio systems, &c -- yet I was taught this stuff (well, the subset that was already within the realm of practical engineering use;-) pretty well.
And, just to confirm that things haven't changed all that much in one generation: my daughter just got her MS in telecom engineering (strictly focusing on advanced radio systems) -- she can't design just about any serious program, algorithm, or data structure (though she did just fine in the mandatory courses on C and Java, there was absolutely no CS depth in those courses, nor elsewhere in her curriculum -- her daily working language is matlab...!-) -- yet she knows more about information and coding theory than I ever learned, and that's before any PhD level study (she's staying for her PhD, but that hasn't yet begun).
So, I claim these fields are more EE-y than CS-y (though of course the boundaries are ever fuzzy -- witness the fact that after a few years designing chips I ended up as a SW guy more or less by accident, and so did a lot of my contemporaries;-).
This question is the subject of coding theory.
Probably one of the better-known methods is the Hamming code. It might not be the best way of correcting errors at larger scales, but it's incredibly simple to understand.
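A toy Hamming(7,4) sketch in Haskell (classic bit layout, names mine): three parity bits protect four data bits, and any single flipped bit per 7-bit block can be located and corrected.

import Data.Bits (xor)

-- encode 4 data bits into 7; parity bits sit at positions 1, 2 and 4
encode :: (Int, Int, Int, Int) -> [Int]
encode (d1, d2, d3, d4) = [p1, p2, d1, p4, d2, d3, d4]
  where p1 = d1 `xor` d2 `xor` d4
        p2 = d1 `xor` d3 `xor` d4
        p4 = d2 `xor` d3 `xor` d4

-- decode 7 bits back to 4: the syndrome is the 1-based position of a
-- single error, or 0 for a clean word
decode :: [Int] -> (Int, Int, Int, Int)
decode [b1, b2, b3, b4, b5, b6, b7] = (c3, c5, c6, c7)
  where s1  = b1 `xor` b3 `xor` b5 `xor` b7
        s2  = b2 `xor` b3 `xor` b6 `xor` b7
        s4  = b4 `xor` b5 `xor` b6 `xor` b7
        pos = s1 + 2 * s2 + 4 * s4
        [_, _, c3, _, c5, c6, c7] =
          [ if i == pos then 1 - b else b
          | (i, b) <- zip [1 ..] [b1, b2, b3, b4, b5, b6, b7] ]
decode _ = error "decode: expected exactly 7 bits"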
There is the redundant encoding used in optical media that can recover bit-loss.
ECC is also used in hard disks and RAM.
The TCP protocol can handle quite a lot of data loss with retransmissions.
Either turbo codes or low-density parity-check codes for general data, because these come closest to approaching the Shannon limit - see Wikipedia.
You can use Reed-Solomon codes.
See also the Sliding Window Protocol (which is used by TCP).
Although this includes dealing with packets being re-ordered or lost altogether, which was not part of your problem definition.
As Alex Martelli says, there's lots of coding theory in the world, but Reed-Solomon codes are definitely a sweet spot. If you actually want to build something, Jim Plank has written a nice tutorial on Reed-Solomon coding. Plank has a professional interest in coding with a lot of practical expertise to back it up.
I would go with some of these suggestions, combined with sending the same data multiple times. That way you can hope for different errors to be introduced at different points in the stream, and you may be able to infer the desired number much more easily.
