I would like to generate an "incompressible" data sequence of X MBytes through an algorithm. I want it that way in order to create a program that measures the network speed through a VPN connection (avoiding the VPN's built-in compression).
Can anybody help me? Thanks!
PS. I need an algorithm. I have used a file compressed to the point that it cannot be compressed any further, but now I need to generate the data sequence from scratch, programmatically.
White noise data is truly random and thus incompressible.
Therefore, you should find an algorithm that generates it (or an approximation).
Try this in Linux:
# dd if=/dev/urandom bs=1024 count=10000 2>/dev/null | bzip2 -9 -c -v > /dev/null
(stdin): 0.996:1, 8.035 bits/byte, -0.44% saved, 10240000 in, 10285383 out.
You might try any kind of random number generation though...
One simple approach to creating statistically hard-to-compress data is just to use a random number generator. If you need it to be repeatable, fix the seed. Any reasonably good random number generator will do. Ironically, the result is incredibly compressible if you know the random number generator: the only information present is the seed. However, it will defeat any real compression method.
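For example, here is a minimal C# sketch of this approach (the file name, size, and seed are arbitrary placeholders; System.Random is not cryptographically strong, but it is usually more than enough to defeat a general-purpose compressor):

using System;
using System.IO;

class RandomPayload
{
    static void Main()
    {
        const int megabytes = 10;                // "X MBytes" of output
        var rng = new Random(12345);             // fixed seed => repeatable data
        var chunk = new byte[1024 * 1024];       // generate 1 MB at a time

        using (var output = File.Create("payload.dat"))
        {
            for (int i = 0; i < megabytes; i++)
            {
                rng.NextBytes(chunk);            // fill the chunk with pseudo-random bytes
                output.Write(chunk, 0, chunk.Length);
            }
        }
    }
}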
Other answers have pointed out that random noise is incompressible, and good encryption functions have output that is as close as possible to random noise (unless you know the decryption key). So a good approach could be to just use random number generators or encryption algorithms to generate your incompressible data.
Genuinely incompressible (by any compression algorithm) bitstrings exist (for certain formal definitions of "incompressible"), but even recognising them is computationally undecidable, let alone generating them.
It's worth pointing out, though, that "random data" is only incompressible in the sense that there is no compression algorithm that can achieve a compression ratio better than 1:1 on average over all possible random data. However, for any particular randomly generated string, there may be a particular compression algorithm that does achieve a good compression ratio. After all, any compressible string should be a possible output of a random generator, including stupid things like all zeroes, however unlikely.
So while the possibility of getting "compressible" data out of a random number generator or an encryption algorithm is probably vanishingly small, I would want to actually test the data before I use it. If you have access to the compression algorithm(s) used in the VPN connection that would be best; just randomly generate data until you get something that won't compress. Otherwise, just running it through a few common compression tools and checking that the size doesn't decrease would probably be sufficient.
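As a rough sketch of such a check in C# (GZip here is just a stand-in for whatever compressor the VPN might actually use), compress the candidate buffer in memory and verify that it does not shrink:

using System.IO;
using System.IO.Compression;

static bool LooksIncompressible(byte[] data)
{
    using (var compressed = new MemoryStream())
    {
        // leaveOpen: true so compressed.Length can be read after the GZipStream is flushed
        using (var gzip = new GZipStream(compressed, CompressionLevel.Optimal, true))
        {
            gzip.Write(data, 0, data.Length);
        }
        // if gzip could not make it smaller, it is good enough for the speed test
        return compressed.Length >= data.Length;
    }
}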
You have a couple of options:
1. Use a decent pseudo-random number generator
2. Use an encryption function like AES (implementations found everywhere)
Algorithm
1. Come up with whatever key you want. All zeroes is fine.
2. Create an empty block.
3. Encrypt the block using the key.
4. Output the block.
5. If you need more data, go to step 3.
If done correctly, the datastream you generate will be mathematically indistinguishable from random noise.
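Here is a minimal C# sketch of those steps (the key, chunk size, and file name are arbitrary placeholders): with an all-zero plaintext and CBC chaining, every output block is simply the previous block encrypted again, which is exactly the loop described above:

using System;
using System.IO;
using System.Security.Cryptography;

class AesPayload
{
    static void Main()
    {
        const long targetBytes = 10L * 1024 * 1024;   // X MBytes of output

        using (var aes = Aes.Create())
        {
            aes.Key = new byte[16];                   // all zeroes is fine, per step 1
            aes.IV = new byte[16];
            aes.Mode = CipherMode.CBC;                // with zero plaintext: C_i = E_k(C_{i-1})
            aes.Padding = PaddingMode.None;

            using (var file = File.Create("payload.dat"))
            using (var crypto = new CryptoStream(file, aes.CreateEncryptor(), CryptoStreamMode.Write))
            {
                var zeros = new byte[4096];           // the plaintext is all zeroes
                for (long written = 0; written < targetBytes; written += zeros.Length)
                {
                    crypto.Write(zeros, 0, zeros.Length);
                }
            }
        }
    }
}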
The following program (C/POSIX) produces incompressible data quickly; it should be in the gigabytes-per-second range. I'm sure it's possible to use the general idea to make it even faster (maybe using djb's ChaCha core with SIMD?).
/* public domain, 2013 */
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>
#define R(a,b) (((a) << (b)) | ((a) >> (32 - (b))))
static void salsa_scrambler(uint32_t out[16], uint32_t x[16])
{
int i;
/* This is a quickly mutilated Salsa20 of only 1 round */
x[ 4] ^= R(x[ 0] + x[12], 7);
x[ 8] ^= R(x[ 4] + x[ 0], 9);
x[12] ^= R(x[ 8] + x[ 4], 13);
x[ 0] ^= R(x[12] + x[ 8], 18);
x[ 9] ^= R(x[ 5] + x[ 1], 7);
x[13] ^= R(x[ 9] + x[ 5], 9);
x[ 1] ^= R(x[13] + x[ 9], 13);
x[ 5] ^= R(x[ 1] + x[13], 18);
x[14] ^= R(x[10] + x[ 6], 7);
x[ 2] ^= R(x[14] + x[10], 9);
x[ 6] ^= R(x[ 2] + x[14], 13);
x[10] ^= R(x[ 6] + x[ 2], 18);
x[ 3] ^= R(x[15] + x[11], 7);
x[ 7] ^= R(x[ 3] + x[15], 9);
x[11] ^= R(x[ 7] + x[ 3], 13);
x[15] ^= R(x[11] + x[ 7], 18);
for (i = 0; i < 16; ++i)
out[i] = x[i];
}
#define CHUNK 2048
int main(void)
{
uint32_t bufA[CHUNK];
uint32_t bufB[CHUNK];
uint32_t *input = bufA, *output = bufB;
int i;
/* Initialize seed */
srand(time(NULL));
for (i = 0; i < CHUNK; i++)
input[i] = rand();
while (1) {
for (i = 0; i < CHUNK/16; i++) {
salsa_scrambler(output + 16*i, input + 16*i);
}
write(1, output, sizeof(bufA));
{
uint32_t *tmp = output;
output = input;
input = tmp;
}
}
return 0;
}
A very simple solution is to generate a random string and then compress it.
An already compressed file is incompressible.
For copy-paste lovers, here is some C# code to generate files with (almost) incompressible content. The heart of the code is the MD5 hashing algorithm, but any cryptographically strong hash algorithm (one with a good random distribution in its output) does the job (SHA-1, SHA-256, etc.).
It simply uses the file number's bytes (a 32-bit little-endian signed integer on my machine) as the hash function's initial input, then rehashes and concatenates the output until the desired file size is reached. So the file content is deterministic (the same number always generates the same output), randomly distributed "junk" as far as the compression algorithm under test is concerned.
using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
class Program {
static void Main( string [ ] args ) {
GenerateUncompressableTestFiles(
outputDirectory : Path.GetFullPath( "." ),
fileNameTemplate : "test-file-{0}.dat",
fileCount : 10,
fileSizeAsBytes : 16 * 1024
);
byte[] bytes = GetIncompressibleBuffer( 16 * 1024 );
}//Main
static void GenerateUncompressableTestFiles( string outputDirectory, string fileNameTemplate, int fileCount, int fileSizeAsBytes ) {
using ( var md5 = MD5.Create() ) {
for ( int number = 1; number <= fileCount; number++ ) {
using ( var content = new MemoryStream() ) {
var inputBytes = BitConverter.GetBytes( number );
while ( content.Length <= fileSizeAsBytes ) {
var hashBytes = md5.ComputeHash( inputBytes );
content.Write( hashBytes );
inputBytes = hashBytes;
if ( content.Length >= fileSizeAsBytes ) {
var file = Path.Combine( outputDirectory, String.Format( fileNameTemplate, number ) );
File.WriteAllBytes( file, content.ToArray().Take( fileSizeAsBytes ).ToArray() );
}
}//while
}//using
}//for
}//using
}//GenerateUncompressableTestFiles
public static byte[] GetIncompressibleBuffer( int size, int seed = 0 ) {
using ( var md5 = MD5.Create() ) {
using ( var content = new MemoryStream() ) {
var inputBytes = BitConverter.GetBytes( seed );
while ( content.Length <= size ) {
var hashBytes = md5.ComputeHash( inputBytes );
content.Write( hashBytes );
inputBytes = hashBytes;
if ( content.Length >= size ) {
return content.ToArray().Take( size ).ToArray();
}
}//while
}//using
}//using
return Array.Empty<byte>();
}//GetIncompressibleBuffer
}//class
I just created a (very simple and not optimized) C# console application that creates incompressible files.
It scans a folder for text files (extension .txt) and, for each text file, creates a binary file (extension .bin) with the same name and size.
Hope this helps someone.
Here is the C# code:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
var files = Directory.EnumerateFiles(@"d:\MyPath\To\TextFile\", "*.txt");
var random = new Random();
foreach (var fileName in files)
{
var fileInfo = new FileInfo(fileName);
var newFileName = Path.GetDirectoryName(fileName) + @"\" + Path.GetFileNameWithoutExtension(fileName) + ".bin";
using (var f = File.Create(newFileName))
{
long bytesWritten = 0;
while (bytesWritten < fileInfo.Length)
{
f.WriteByte((byte)random.Next());
bytesWritten++;
}
f.Close();
}
}
}
}
}
For example, the hash value generated by SHA-256 is used directly to encrypt (XOR) a particular byte array, like this:
void Run(CryptoArray& plain, CryptoArray& key)
{
    // SHA-256 of the key gives a 32-byte (4 x 64-bit) keystream
    uint8_t *hash = Sha256::FromArray(key.getArray(), key.getSize());
    size_t round = plain.getSize() / 8;
    size_t remain = plain.getSize() % 8;
    // XOR the data with the hash, 8 bytes at a time, cycling through the 4 hash words
    for (size_t i = 0; i < round; i++)
        ((uint64_t *)plain.getArray())[i] ^= ((uint64_t *)hash)[i % 4];
    // XOR any remaining tail bytes
    uint8_t *arr = plain.getArray() + round * 8;
    for (size_t i = 0; i < remain; i++)
        arr[i] ^= hash[i];
}
If a 2^2048-bit computer is invented (or big integers are used) in the distant future, this problem can be parallelized and easily solved. However, I know that both this simple hash-based crypto function and the common cryptographic functions are intended to make it difficult to recover the password.
If so, have developers added additional features to these algorithms (AES, DES, ...) to increase the time it takes to obtain the password? If not, is there a fatal problem with using the hash function alone? None of the programs I have seen use a hash function like this.
I think reading up on the differences between Hashes and Encryption algorithms will be helpful.
I wrote a function to convert a hex string representation (like "x00") of some binary data to the data itself.
How to improve this code?
QByteArray restoreData(const QByteArray &data, const QString prepender = "x")
{
QByteArray restoredData = data;
return QByteArray::fromHex(restoredData.replace(prepender, ""));
}
How to improve this code?
Benchmark before optimizing this. Do not do premature optimization.
Beyond the main point: Why would you like to optimize it?
1) If you are really so concerned about performance that this (performance-wise negligible) code matters, you would not use Qt in the first place, because Qt is inherently slow compared to a well-optimized framework.
2) If you are not that concerned about performance, then you should keep readability and maintainability in mind as the leading principle, in which case your code is fine.
You have not shown any real-world example of why exactly you want to optimize, either. This feels like an academic question without much practical use to me. It would be interesting to know more about the motivation.
That being said, several improvements, which are also optimizations, could be made in your code, but then again: they are not made for optimization's sake, but rather for logical reasons.
1) "Prepender" is a bad name; such a thing is usually called a "prefix" in English.
2) You want to use a plain character ('x') as opposed to a QString for a single-character pattern.
3) Similarly, the replacement can simply be an empty string literal ("") rather than a QString.
4) I would pass classes like that by reference rather than by value, even if they are CoW (implicitly shared).
5) I would not even use an argument here for the prefix since it is always the same, so it does not really fit the definition of variable.
6) It is needless to create an interim variable explicitly.
7) Make the function inline.
Therefore, you would be writing something like this:
QByteArray restoreData(QByteArray data)
{
return QByteArray::fromHex(data.replace('x', ""));
}
Your code has a performance problem because of replace(). Replace itself is not very fast, and creating the intermediate QByteArray object slows the code down even more. If you are really concerned about performance, you can copy the QByteArray::fromHex implementation from the Qt sources and modify it for your needs. Luckily, its implementation is quite self-contained. I only changed / 2 to / 3 and added a --i line to skip the "x" characters.
QByteArray myFromHex(const QByteArray &hexEncoded)
{
QByteArray res((hexEncoded.size() + 1)/ 3, Qt::Uninitialized);
uchar *result = (uchar *)res.data() + res.size();
bool odd_digit = true;
for (int i = hexEncoded.size() - 1; i >= 0; --i) {
int ch = hexEncoded.at(i);
int tmp;
if (ch >= '0' && ch <= '9')
tmp = ch - '0';
else if (ch >= 'a' && ch <= 'f')
tmp = ch - 'a' + 10;
else if (ch >= 'A' && ch <= 'F')
tmp = ch - 'A' + 10;
else
continue;
if (odd_digit) {
--result;
*result = tmp;
odd_digit = false;
} else {
*result |= tmp << 4;
odd_digit = true;
--i;
}
}
res.remove(0, result - (const uchar *)res.constData());
return res;
}
Test:
qDebug() << QByteArray::fromHex("54455354"); // => "TEST"
qDebug() << myFromHex("x54x45x53x54"); // => "TEST"
This code can behave unexpectedly when hexEncoded is malformed (e.g. "x54x45x5" will be converted to "TU"). You can fix this somehow if it's a problem.
I find it difficult to translate a binary number into a picture; I use a pixmap.
The conversion to binary is correct, but when I display it using this code, it does not actually work.
This is my code:
if (binaryNumber[0]==1)ui->led16->setPixmap(QPixmap("../../picture/ball-yellow.png"));
else ui->led16->setPixmap(QPixmap("../../picture/ball-gray.png"));
if (binaryNumber[1]==1) ui->led15->setPixmap(QPixmap("../../picture/ball-yellow.png"));
else ui->led15->setPixmap(QPixmap("../../picture/ball-gray.png"));
if (binaryNumber[2]==1)ui->led14->setPixmap(QPixmap("../../picture/ball-yellow.png"));
else ui->led14->setPixmap(QPixmap("../../picture/ball-gray.png"));
if (binaryNumber[3]==1)ui->led13->setPixmap(QPixmap("../../picture/ball-yellow.png"));
else ui->led13->setPixmap(QPixmap("../../picture/ball-gray.png"));
if (binaryNumber[4]==1)ui->led12->setPixmap(QPixmap("../../picture/ball-yellow.png"));
else ui->led12->setPixmap(QPixmap("../../picture/ball-gray.png"));
bool ok2 = false;
QByteArray binaryNumber = QByteArray::number(DO.toLongLong(&ok2, 16), 2);
qDebug()<<binaryNumber<<binaryNumber[0]<<binaryNumber[1]<<binaryNumber[2]<<binaryNumber[3];
For example:
binaryNumber =1011
binaryNumber[0] = 1
binaryNumber[1] = 0
binaryNumber[2] = 1
binaryNumber[3] = 1
but when
binaryNumber =100
binaryNumber[0] = 1
binaryNumber[1] = 0
binaryNumber[2] = 0
So when I use the pixmaps, the lit LEDs do not correspond to the binary number, because index [0] refers to a different bit when the length is different.
Is there any simple code for this?
Your use of a QByteArray to store bits of a number is unnecessary. In C/C++, you can access the bits directly by doing a bitwise AND (&) with a mask.
template <typename T> static void setPixmap(T * p, int value, int bitNo)
{
const bool bit = value & (1<<bitNo);
p->setPixmap(bit ? QPixmap("../../picture/ball-yellow.png")
: QPixmap("../../picture/ball-gray.png"));
}
void Class::setDisplay(int val)
{
setPixmap(ui->led12, val, 0);
setPixmap(ui->led13, val, 1);
setPixmap(ui->led14, val, 2);
setPixmap(ui->led15, val, 3);
setPixmap(ui->led16, val, 4);
}
Note that QByteArray::number() returns alphanumeric characters ('0' = 48, '1' = 49 etc.), not characters with the numerical values 0, 1 etc. This is an important difference!
If you do binaryNumber = QByteArray::number(value, 2), this returns a byte array like for example "1010". Thus, binaryNumber[0] == '1', NOT binaryNumber[0] == 1:
if (binaryNumber[0]=='1')ui->led16->setPixmap(QPixmap("../../picture/ball-yellow.png"));
else ui->led16->setPixmap(QPixmap("../../picture/ball-gray.png"));
if (binaryNumber[1]=='1')ui->led15->setPixmap(QPixmap("../../picture/ball-yellow.png"));
else ui->led15->setPixmap(QPixmap("../../picture/ball-gray.png"));
if (binaryNumber[2]=='1')ui->led14->setPixmap(QPixmap("../../picture/ball-yellow.png"));
else ui->led14->setPixmap(QPixmap("../../picture/ball-gray.png"));
if (binaryNumber[3]=='1')ui->led13->setPixmap(QPixmap("../../picture/ball-yellow.png"));
else ui->led13->setPixmap(QPixmap("../../picture/ball-gray.png"));
if (binaryNumber[4]=='1')ui->led12->setPixmap(QPixmap("../../picture/ball-yellow.png"));
else ui->led12->setPixmap(QPixmap("../../picture/ball-gray.png"));
Note that your code is riddled with redundant lines, which makes for poor-quality software. You should try to write the code above in a loop, or at least move the pixmaps out. Moving the pixmap initialisations into static variables or into the constructor of the containing class gives a small performance boost, too.
So your class could look similar to this: (I only included the relevant parts, of course, there also has to be the code for the UI stuff.)
class LEDNumberView
{
private:
// member variables:
QPixmap bitOn;
QPixmap bitOff;
// helper function
inline QPixmap getBitPixmap(bool bitVal)
{
return bitVal ? bitOn : bitOff;
}
public:
// constructor
LEDNumberView()
{
QString path = "../../picture/ball-%1.png";
bitOn = QPixmap(path.arg("yellow"));
bitOff = QPixmap(path.arg("gray"));
}
// call whenever you want to change the binary number displayed by the LEDs
void setBinaryNumber(int value)
{
QByteArray binaryNumber = QByteArray::number(value, 2);
ui->led16->setPixmap(getBitPixmap(binaryNumber[0] == '1'));
ui->led15->setPixmap(getBitPixmap(binaryNumber[1] == '1'));
ui->led14->setPixmap(getBitPixmap(binaryNumber[2] == '1'));
ui->led13->setPixmap(getBitPixmap(binaryNumber[3] == '1'));
ui->led12->setPixmap(getBitPixmap(binaryNumber[4] == '1'));
ui->led11->setPixmap(getBitPixmap(binaryNumber[5] == '1'));
}
};
To combine the answer of Kuba Ober with mine, write the setBinaryNumber function as he suggested. It's up to you which method of binary conversion you prefer: bit manipulation (his method) or converting to a byte string and working with that (yours).
I have a USB weighing scale from stamps.com (Model 510: http://www.stamps.com/postage-online/digital-postage-scales/).
I was able to find the drivers online to make it work stand-alone, but my next question is how to read the weight of the object on the scale from my classic ASP page / VBScript.
Does anyone have any suggestions where I should begin my search?
I'm not sure if this is applicable to your specific model, but there's an article at http://nicholas.piasecki.name/blog/2008/11/reading-a-stamps-com-usb-scale-from-c-sharp/ where the author has written C# code to read from the scale, since it conforms to basic USB HID (human interface device) standards. The author made use of Mike O'Brien's HID library: https://github.com/mikeobrien/HidLibrary
They start off by getting the raw bytes:
HidDeviceData inData;
HidDevice[] hidDeviceList;
HidDevice scale;
hidDeviceList = HidDevices.Enumerate(0x1446, 0x6A73);
if (hidDeviceList.Length > 0)
{
int waitTries;
scale = hidDeviceList[0];
waitTries = 0;
scale.Open();
if (scale.IsConnected)
{
inData = scale.Read(250);
for (int i = 0; i < inData.Data.Length; ++i)
{
Console.WriteLine("Byte {0}: {1:X}", i, inData.Data[i]);
}
}
scale.Close();
scale.Dispose();
}
They then go on to reverse-engineer the payload and construct a function to get the weight in ounces:
private void GetStampsComModel2500iScaleWeight(out decimal? ounces, out bool? isStable)
{
HidDeviceData inData;
HidDevice[] hidDeviceList;
HidDevice scale;
isStable = null;
ounces = null;
hidDeviceList = HidDevices.Enumerate(0x1446, 0x6A73);
if (hidDeviceList.Length > 0)
{
int waitTries;
scale = hidDeviceList[0];
waitTries = 0;
scale.Open();
// For some reason, the scale isn't always immediately available
// after calling Open(). Let's wait for a few milliseconds before
// giving up.
while (!scale.IsConnected && waitTries < 10)
{
Thread.Sleep(50);
waitTries++;
}
if (scale.IsConnected)
{
inData = scale.Read(250);
ounces = (Convert.ToDecimal(inData.Data[4]) +
Convert.ToDecimal(inData.Data[5]) * 256) / 10;
isStable = inData.Data[1] == 0x4;
}
scale.Close();
scale.Dispose();
}
}
In order to read the weight from your classic ASP page / VBScript (on the server, right?), the easiest solution looks to be turning the working C# class into a COM component. There are tutorials you can follow to create the C# COM component and register it on the server; then you would call it from VBScript like this:
Dim app
Set app = Server.CreateObject("MyScaleComponent")
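For illustration, here is a rough sketch of what such a COM-visible wrapper might look like. This part is my own assumption, not from the article: the class name matches the ProgId used in the VBScript above, GetWeightInOunces is a made-up method name, and the private method is meant to hold the HidLibrary-based code shown earlier in this answer:

using System;
using System.Runtime.InteropServices;

// Hypothetical COM wrapper; compile into a class library and register on the server with:
//   regasm /codebase MyScaleComponent.dll
[ComVisible(true)]
[ProgId("MyScaleComponent")]
[ClassInterface(ClassInterfaceType.AutoDual)]
public class MyScaleComponent
{
    // Returns the weight in ounces, or -1 if the scale could not be read.
    public double GetWeightInOunces()
    {
        decimal? ounces;
        bool? isStable;
        GetStampsComModel2500iScaleWeight(out ounces, out isStable);
        return ounces.HasValue ? (double)ounces.Value : -1.0;
    }

    private void GetStampsComModel2500iScaleWeight(out decimal? ounces, out bool? isStable)
    {
        // paste the HidLibrary-based implementation from the answer above here
        ounces = null;
        isStable = null;
    }
}

The VBScript above would then read the value with something like weight = app.GetWeightInOunces() (again, the method name is just my placeholder).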
For research purposes, I am trying to modify H.264 motion vectors (MVs) for each P- and B-frame prior to motion compensation during the decoding process. I am using FFmpeg for this purpose. An example of a modification is replacing each MV with its original spatial neighbors and then using the resultant MVs for motion compensation, rather than the original ones. Please direct me appropriately.
So far, I have been able to do a simple modification of MVs in the file /libavcodec/h264_cavlc.c. In the function ff_h264_decode_mb_cavlc(), modifying the mx and my variables (for instance, by increasing their values) modifies the MVs used during decoding.
For example, as shown below, the mx and my values are increased by 50, thus lengthening the MVs used in the decoder.
mx += get_se_golomb(&s->gb)+50;
my += get_se_golomb(&s->gb)+50;
However, in this regard, I don't know how to access the neighbors of mx and my for my spatial mean analysis that I mentioned in the first paragraph. I believe that the key to doing so lies in manipulating the array, mv_cache.
Another experiment that I performed was in the file libavcodec/error_resilience.c. Based on the guess_mv() function, I created a new function, mean_mv(), that is executed in ff_er_frame_end() within the first if-statement. That first if-statement exits the function ff_er_frame_end() if one of its conditions is a zero error-count (s->error_count == 0). However, I decided to insert my mean_mv() function at this point so that it is always executed when there is a zero error-count. This experiment somewhat yielded the results I wanted: I could start seeing artifacts in the top portions of the video, but they were restricted to just the upper-right corner. I'm guessing that my inserted function is not running to completion so as to meet playback deadlines, or something similar.
Below is the modified if-statement. The only addition is my function, mean_mv(s).
if(!s->error_recognition || s->error_count==0 || s->avctx->lowres ||
s->avctx->hwaccel ||
s->avctx->codec->capabilities&CODEC_CAP_HWACCEL_VDPAU ||
s->picture_structure != PICT_FRAME || // we dont support ER of field pictures yet, though it should not crash if enabled
s->error_count==3*s->mb_width*(s->avctx->skip_top + s->avctx->skip_bottom)) {
//av_log(s->avctx, AV_LOG_DEBUG, "ff_er_frame_end in er.c\n"); //KG
if(s->pict_type==AV_PICTURE_TYPE_P)
mean_mv(s);
return;
And here's the mean_mv() function I created based on guess_mv().
static void mean_mv(MpegEncContext *s){
//uint8_t fixed[s->mb_stride * s->mb_height];
//const int mb_stride = s->mb_stride;
const int mb_width = s->mb_width;
const int mb_height= s->mb_height;
int mb_x, mb_y, mot_step, mot_stride;
//av_log(s->avctx, AV_LOG_DEBUG, "mean_mv\n"); //KG
set_mv_strides(s, &mot_step, &mot_stride);
for(mb_y=0; mb_y<s->mb_height; mb_y++){
for(mb_x=0; mb_x<s->mb_width; mb_x++){
const int mb_xy= mb_x + mb_y*s->mb_stride;
const int mot_index= (mb_x + mb_y*mot_stride) * mot_step;
int mv_predictor[4][2]={{0}};
int ref[4]={0};
int pred_count=0;
int m, n;
if(IS_INTRA(s->current_picture.f.mb_type[mb_xy])) continue;
//if(!(s->error_status_table[mb_xy]&MV_ERROR)){
//if (1){
if(mb_x>0){
mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index - mot_step][0];
mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index - mot_step][1];
ref [pred_count] = s->current_picture.f.ref_index[0][4*(mb_xy-1)];
pred_count++;
}
if(mb_x+1<mb_width){
mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index + mot_step][0];
mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index + mot_step][1];
ref [pred_count] = s->current_picture.f.ref_index[0][4*(mb_xy+1)];
pred_count++;
}
if(mb_y>0){
mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index - mot_stride*mot_step][0];
mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index - mot_stride*mot_step][1];
ref [pred_count] = s->current_picture.f.ref_index[0][4*(mb_xy-s->mb_stride)];
pred_count++;
}
if(mb_y+1<mb_height){
mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index + mot_stride*mot_step][0];
mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index + mot_stride*mot_step][1];
ref [pred_count] = s->current_picture.f.ref_index[0][4*(mb_xy+s->mb_stride)];
pred_count++;
}
if(pred_count==0) continue;
if(pred_count>=1){
int sum_x=0, sum_y=0, sum_r=0;
int k;
for(k=0; k<pred_count; k++){
sum_x+= mv_predictor[k][0]; // Sum all the MVx from MVs avail. for EC
sum_y+= mv_predictor[k][1]; // Sum all the MVy from MVs avail. for EC
sum_r+= ref[k];
// if(k && ref[k] != ref[k-1])
// goto skip_mean_and_median;
}
mv_predictor[pred_count][0] = sum_x/k;
mv_predictor[pred_count][1] = sum_y/k;
ref [pred_count] = sum_r/k;
}
s->mv[0][0][0] = mv_predictor[pred_count][0];
s->mv[0][0][1] = mv_predictor[pred_count][1];
for(m=0; m<mot_step; m++){
for(n=0; n<mot_step; n++){
s->current_picture.f.motion_val[0][mot_index + m + n * mot_stride][0] = s->mv[0][0][0];
s->current_picture.f.motion_val[0][mot_index + m + n * mot_stride][1] = s->mv[0][0][1];
}
}
decode_mb(s, ref[pred_count]);
//}
}
}
}
I would really appreciate some assistance on how to go about this properly.
It's been a long time since I was in touch with FFmpeg's code internals.
However, given my experience with FFmpeg's internal horrors (you would know what I mean), I would rather give you some simple, pragmatic advice.
Suggestion #1
The best possibility is that, when the motion vector of each of the blocks is identified, you create your own additional array inside the FFmpeg encoder context (a.k.a. s) which will store all of them. When your algorithm runs, it will pick up the values from there.
Suggestion #2
Another thing I read (I am not sure if I read it right):
"the mx and my values are increased by 50"
I think 50 is a very large motion vector offset. And usually, the allowed range of motion vectors in the encoding is quite restrictive. If you alter things by +/- 8 (or even +/- 16) it might just be OK, but +50 could be so high that the end result may not come out properly.
I didn't quite understand your objective with mean_mv() and what failure you expect from it. Please rephrase a bit.