What is the error-correcting capacity of a polar decoder of block length N and message width K?

This question is about the decoding stage.
I would like to know the error-correcting capacity of a polar decoder with parameters (N, K).

Related

Why does GNAT raise Storage_Error for a record type with a default discriminant?

The following declarations raise a storage error:
type Vector is array (Integer range <>) of Integer;

type Array_Starting_At_One (Max : Integer := 0) is
   record
      Mat : Vector (1 .. Max);
   end record;

X : Array_Starting_At_One;
I can't figure out why. If I constrain the object explicitly, as in X : Array_Starting_At_One (Max => 0);, the error disappears, although the Array_Starting_At_One type declaration still triggers a warning that creating such objects may raise Storage_Error.
I am not even trying to store a single bit, so this error doesn't make any sense to me:
raised STORAGE_ERROR : System.Memory.Alloc: object too large
When a variable is declared using the default discriminant, the discriminant can later be changed via an assignment to the whole record. This means that the compiler (at least GNAT does this) allocates, on the stack, enough room to hold an object with any discriminant value up to the maximum allowed (Integer'Last in this case).
Either increase your stack size (not necessarily recommended), or use a different subtype more suited to your problem:
subtype Foo is Integer range 1..10;
type Vector is array (Foo range <>) of Integer;
Certainly any array with an index range of 1..Integer'Last could raise Storage_Error. The compiler is warning you of this possibility. Try restricting the index range for your array type such as:
subtype Indices is Integer range 0 .. 1024;
type Vector is array (Indices range <>) of Integer;

type Array_Ending_At (Max : Indices := 0) is
   record
      Mat : Vector (1 .. Max);
   end record;
As you have noticed, this is a compiler-specific issue. The Janus/Ada compiler would accept your code without complaint or run-time exception.
Your variable X is an unconstrained object; the discriminant (and so the size of the object) can be changed by full-record assignment. The ARM (Ada Reference Manual) is silent about how such things should be implemented. GNAT has chosen to allocate enough space for the largest possible value; since X is allocated on the stack, there will not be enough space for it with GNAT's default stack size (if you allocate it on the heap, you might not have a problem). Janus instead allocates only enough space for the current value, which may result in X.Mat being implicitly allocated on the heap.
The Janus way is more developer-friendly and acceptable for most applications, where the developer doesn't care where things are allocated, but there are situations where implicit heap allocation cannot be accepted, so the GNAT way is more general.

Reed Solomon error correction for QR-Code

Since a QR code uses Reed-Solomon for error correction, am I right that, with certain levels of corruption, a QR code reader could theoretically return wrong results?
If yes, are there other levels of integrity checks (checksums etc.) that would prevent that?
You can do a web search for "QR Code ISO" to find a PDF version of the document. I found one here:
https://www.swisseduc.ch/informatik/theoretische_informatik/qr_codes/docs/qr_standard.pdf
There are multiple strengths of error correction in the standard, and to avoid mis-correction, in some cases some of the "parity" bytes are used only for error detection, not for error correction. This is shown in table 13 of the PDF linked above. The entries marked with a "b" are the cases where some of the parity bytes are used only for error detection. For example, the very first entry in table 13 is (26,19,2)b, meaning 26 total bytes, 19 data bytes, and 2 correctable bytes. Of the 26 - 19 = 7 parity bytes, 4 are used for correction (each corrected byte requires 2 parity bytes unless the hardware can flag "erasures"), and 3 are used for detection only.
If the error correction calculates an invalid location (one that is "outside" the range of valid locations), that is flagged as a detected error. If the number of unique calculated locations is less than the number of assumed errors used to calculate those locations (a duplicate or non-existent root), that is also flagged as a detected error. For the higher levels of error correction, the odds of all the calculated locations being valid for bad data are so small that none of the parity bytes are reserved for error detection only; these cases don't have the "b" in their table 13 entries.
The choices made for the various levels of error correction result in a very small chance of a bad result, but it's always possible.
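To make the byte accounting concrete, here is a small Python sketch (the function name and the printed example are my own illustration, applying the rule stated above that each corrected byte consumes 2 parity bytes when no erasure information is available):

def parity_split(total_bytes, data_bytes, correctable_bytes):
    # Split the parity bytes of a table 13 entry (total, data, correctable)
    # into bytes used for correction and bytes left for detection only.
    parity = total_bytes - data_bytes
    used_for_correction = 2 * correctable_bytes
    detection_only = parity - used_for_correction
    return parity, used_for_correction, detection_only

# The (26,19,2)b entry from table 13:
print(parity_split(26, 19, 2))   # (7, 4, 3): 7 parity bytes, 4 for correction, 3 detection-only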
are there other levels of integrity checks (checksums etc.) that would prevent that?
A QR code reader could flag bytes where any of the bits were not clearly 0 or 1 (like a shade of grey on a black/white code) as potential "erasures", which would lower the odds of a bad result. I don't know if this is done.
When generating a QR code, a mask is chosen to even out the ratio of light and dark areas in the code. After correction, evidence that the wrong mask was chosen could be flagged as a detected error, but I'm not sure whether the "best" mask is always chosen when a code is printed, so I don't know if a check for the "best" mask is used.

How to calculate the loss event rate p in TFRC (TCP-friendly rate control) when there is no packet loss?

When there is packet loss, I know how to calculate p (I read the method in the RFC document).
But when there is no packet loss, how do I calculate it? The document says nothing about this case.
If the loss event rate p is zero, the denominator of the TFRC throughput equation is 0.
The equation (the TCP throughput equation from RFC 5348) is:
X_Bps = s / ( R*sqrt(2*b*p/3) + t_RTO*(3*sqrt(3*b*p/8))*p*(1 + 32*p^2) )
and the document is rfc5348 : https://www.rfc-editor.org/rfc/rfc5348
I know nothing about TFRC, so this is pure guesswork.
I suppose the available-bandwidth calculation is based on the loss event rate. If no packets have been lost so far, you have zero information about the available bandwidth. Usually in this case a congestion-avoidance algorithm increases the bitrate until packet drops start to occur.
In other words, if there have been no packet drops so far, you can assume your available bandwidth is unlimited and use the maximum representable value for the computed rate. This follows directly from the formula, since division by zero gives infinity in math.
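As a concrete sketch of why p = 0 breaks the equation, here is a minimal Python version of the throughput formula above (the function name and the p <= 0 guard are my own; the t_RTO = 4*R simplification is the one suggested in RFC 5348, and how the no-loss start-up phase is really handled is specified in that RFC, not here):

from math import sqrt, inf

def tfrc_rate(s, R, p, t_RTO=None, b=1):
    # s: segment size (bytes), R: round-trip time (s), p: loss event rate,
    # t_RTO: retransmission timeout, b: packets acknowledged per ACK.
    if p <= 0:
        # No loss events yet: the equation diverges, so there is no
        # loss-based limit on the rate at this point.
        return inf
    if t_RTO is None:
        t_RTO = 4 * R
    denom = R * sqrt(2 * b * p / 3) + t_RTO * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p)
    return s / denom  # bytes per second

print(tfrc_rate(s=1460, R=0.1, p=0.01))  # a finite rate
print(tfrc_rate(s=1460, R=0.1, p=0.0))   # inf: no loss-based limit yet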

Incompatible dimensions for vector multiplication (3dfim+, AFNI)

I am trying to run 3dfim+ to calculate a correlation map for a region of interest. I de-obliqued the data (the functional and the ROI mask, which was converted to the same functional space) with 3drefit -deoblique (because I was getting an error message about the data being oblique). However, the de-obliqued functional image now doesn't have the time dimension, and the 1D file I get when extracting the time series from an ROI mask applied to this functional has only one number in it (884.727 [264 voxels]). Isn't it supposed to be a column of averaged voxel intensities in this ROI, one for each time point? Where did all the volumes go? When I try to run 3dfim+ I receive the following error:
Matrix error: Incompatible dimensions for vector multiplication: 0x0 X 1 ** Memory usage: chunks=84 bytes=621942 Standard error: ++ 3dfim+: AFNI version=AFNI_2011_05_26_1457 (Jun 22 2011) [64-bit] ++ Authored by: B. Douglas Ward 3dfim+ main Return code: 1 Interface Fim failed to run.
The command I try to run is
3dfim+ -input /path/mcorr_001_brain_3drefit.nii.gz -ideal_file /path/mcorr_001_brain_3drefit_maskave.1D -out Correlation -bucket corr_map.nii.gz
I tried to run the same command with the original (oblique) data. In addition to the standard warning:
"WARNING: If you are performing spatial transformations on an oblique dset, such as /path/roi_sphere_30_21_18_flirt.nii.gz, or viewing/combining it with volumes of differing obliquity, you should consider running: 3dWarp -deoblique on this and other oblique datasets in the same session."
the error "Matrix error: Incompatible dimensions for vector multiplication: 0x0 X 1" still appears, so it doesn't seem that deobliquing caused this problem!
Does anyone have any idea what might cause this problem?
Check your input file (mcorr_001_brain_3drefit.nii.gz) with 3dinfo to see if it has a time axis (TR information). If not, simply add a time axis back with 3drefit.
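As a supplementary check outside AFNI (my own suggestion, assuming the nibabel package is installed and using the file path from the question), a few lines of Python can show whether the NIfTI dataset still has a fourth (time) dimension:

import nibabel as nib  # assumes nibabel is installed

img = nib.load("/path/mcorr_001_brain_3drefit.nii.gz")  # path from the question
print(img.shape)               # a time series should have 4 entries: (x, y, z, t)
print(img.header.get_zooms())  # for 4D data the last value is the TR
if len(img.shape) < 4:
    print("No time axis: the dataset has been collapsed to a single volume.")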

What is CRC? And how does it help in error detection?

What is CRC? And how does it help in error detection?
CRC is a non-secure hash function designed to detect accidental changes to raw computer data, and is commonly used in digital networks and storage devices such as hard disk drives.
A CRC-enabled device calculates a short, fixed-length binary sequence, known as the CRC code, for each block of data and sends or stores them both together. When a block is read or received the device repeats the calculation; if the new CRC code does not match the one calculated earlier, then the block contains a data error and the device may take corrective action such as requesting the block be sent again.
Source: Wikipedia
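A quick way to see this behaviour in practice (my own illustration, not part of the quoted text) is Python's built-in CRC-32 from the zlib module: the stored checksum matches only if the data is unchanged.

import zlib

data = b"hello, world"
stored_crc = zlib.crc32(data)              # computed by the sender / writer

# The receiver / reader recomputes and compares:
print(zlib.crc32(data) == stored_crc)      # True: block accepted

corrupted = b"hellp, world"                # single-character change
print(zlib.crc32(corrupted) == stored_crc) # False: error detected, e.g. request retransmission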
CRC stands for Cyclic Redundancy Check.
It helps in error detection as follows.
The scheme uses these polynomials:
b(x) -> transmitted code word
q(x) -> quotient
i(x) -> information polynomial
r(x) -> remainder polynomial
g(x) -> generator polynomial
step 1: compute x^(n-k) * i(x)
step 2: r(x) = (x^(n-k) * i(x)) mod g(x)
step 3: b(x) = (x^(n-k) * i(x)) XOR r(x)
This gives the transmitted code word b(x), which the sender transmits to the receiver. At the receiver, b(x) is divided by g(x); if the remainder r(x) equals 0, no error is detected, otherwise an error occurred in the code word during transmission.
In this way CRC is helpful in error detection.
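Here is a minimal sketch of that scheme over bit strings (my own illustration; the 4-bit generator polynomial, the message, and the helper names are arbitrary choices, not from the answer): the sender appends r(x) to the shifted message, and the receiver checks that dividing b(x) by g(x) leaves a zero remainder.

def mod2_div(dividend, divisor):
    # Polynomial long division over GF(2) on bit strings; returns the remainder.
    bits = list(dividend)
    dlen = len(divisor)
    for i in range(len(bits) - dlen + 1):
        if bits[i] == '1':
            for j in range(dlen):
                bits[i + j] = '0' if bits[i + j] == divisor[j] else '1'
    return ''.join(bits[-(dlen - 1):])

def crc_encode(i_bits, g_bits):
    # b(x) = x^(n-k)*i(x) XOR r(x): shift the message and append the remainder.
    shifted = i_bits + '0' * (len(g_bits) - 1)   # x^(n-k) * i(x)
    r_bits = mod2_div(shifted, g_bits)           # r(x)
    return i_bits + r_bits                       # b(x)

def crc_check(b_bits, g_bits):
    # Receiver side: no error is detected iff b(x) mod g(x) == 0.
    return set(mod2_div(b_bits, g_bits)) <= {'0'}

g = "1011"                                  # example generator: x^3 + x + 1
codeword = crc_encode("11010011101100", g)
print(crc_check(codeword, g))               # True: clean transmission
corrupted = "0" + codeword[1:]              # flip the first bit
print(crc_check(corrupted, g))              # False: error detected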
Cyclic Redundancy Check is a hash function that computes a short check value for a given input, and it always produces the same value for the same input. If the input changes from the original, a different CRC checksum will almost certainly be generated (different inputs can produce the same CRC, but that is very unlikely for accidental corruption). So if you have an input and a checksum, you can calculate a new checksum from the input and compare the two. If they match, the input has very probably not changed.
