How unique is UUID?

How safe is it to use a UUID to uniquely identify something (I'm using it for files uploaded to the server)? As I understand it, it is based on random numbers. However, it seems to me that given enough time, it would eventually repeat itself, just by pure chance. Is there a better system or a pattern of some type to alleviate this issue?

Very safe:
the annual risk of a given person being hit by a meteorite is
estimated to be one chance in 17 billion, which means the
probability is about 0.00000000006 (6 × 10⁻¹¹), equivalent to the odds
of creating a few tens of trillions of UUIDs in a year and having one
duplicate. In other words, only after generating 1 billion UUIDs every
second for the next 100 years, the probability of creating just one
duplicate would be about 50%.
Caveat:
However, these probabilities only hold when the UUIDs are generated
using sufficient entropy. Otherwise, the probability of duplicates
could be significantly higher, since the statistical dispersion might
be lower. Where unique identifiers are required for distributed
applications, so that UUIDs do not clash even when data from many
devices is merged, the randomness of the seeds and generators used on
every device must be reliable for the life of the application. Where
this is not feasible, RFC4122 recommends using a namespace variant
instead.
Source: The Random UUID probability of duplicates section of the Wikipedia article on Universally unique identifiers (link leads to a revision from December 2016 before editing reworked the section).
Also see the current section on the same subject on the same Universally unique identifier article, Collisions.

If by "given enough time" you mean 100 years and you're creating them at a rate of a billion a second, then yes, you have a 50% chance of having a collision after 100 years.

There is more than one type of UUID, so "how safe" depends on which type (which the UUID specifications call "version") you are using.
Version 1 is the time-based plus MAC address UUID. The 128 bits contain 48 bits for the network card's MAC address (which is uniquely assigned by the manufacturer) and a 60-bit clock with a resolution of 100 nanoseconds. That clock wraps in 3603 A.D., so these UUIDs are safe at least until then (unless you need more than 10 million new UUIDs per second or someone clones your network card). I say "at least" because the clock starts at 15 October 1582, so you have about 400 years after the clock wraps before there is even a small possibility of duplications.
Version 4 is the random number UUID. There are six fixed bits and the rest of the UUID is 122 bits of randomness. See Wikipedia or other analyses that describe how very unlikely a duplicate is.
Version 3 uses MD5 and Version 5 uses SHA-1 to create those 122 bits, instead of a random or pseudo-random number generator. So in terms of safety it is just like Version 4: uniqueness is a statistical matter (as long as you make sure what the digest algorithm is processing is always unique).
Version 2 is similar to Version 1, but with a smaller clock so it is going to wrap around much sooner. But since Version 2 UUIDs are for DCE, you shouldn't be using these.
So for all practical problems they are safe. If you are uncomfortable with leaving it up to probabilities (e.g., you are the type of person who worries about the Earth getting destroyed by a large asteroid in your lifetime), just make sure you use a Version 1 UUID and it is guaranteed to be unique (in your lifetime, unless you plan to live past 3603 A.D.).
So why doesn't everyone simply use Version 1 UUIDs? That is because Version 1 UUIDs reveal the MAC address of the machine it was generated on and they can be predictable -- two things which might have security implications for the application using those UUIDs.
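For a quick illustration, java.util.UUID in the Java standard library covers two of these versions out of the box (a sketch; the JDK ships Version 4 and Version 3 generators but no Version 1, and the name passed to the Version 3 generator here is made up):

import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class UuidDemo {
    public static void main(String[] args) {
        // Version 4: 122 random bits, uniqueness is statistical
        UUID v4 = UUID.randomUUID();
        // Version 3: MD5 of a name, deterministic for the same input
        UUID v3 = UUID.nameUUIDFromBytes(
                "example.com/file-42".getBytes(StandardCharsets.UTF_8));
        System.out.println(v4 + " is version " + v4.version());
        System.out.println(v3 + " is version " + v3.version());
    }
}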

The answer to this may depend largely on the UUID version.
Many UUID generators use a version 4 random number. However, many of these use a pseudo-random number generator (PRNG) to produce them.
If a poorly seeded PRNG with a small period is used to generate the UUID, I would say it's not very safe at all. Some random number generators also have poor variance, i.e. they favour certain numbers more often than others. This isn't going to work well.
Therefore, it's only as safe as the algorithms used to generate it.
On the flip side, if you know the answers to those questions, then I think a version 4 UUID should be very safe to use. In fact I'm using it to identify blocks on a network block file system and so far have not had a clash.
In my case, the PRNG I'm using is a Mersenne Twister, and I'm being careful with how it's seeded, which is from multiple sources including /dev/urandom. The Mersenne Twister has a period of 2^19937 − 1. It's going to be a very, very long time before I see a repeated UUID.
So pick a good library or generate it yourself and make sure you use a decent PRNG algorithm.
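As a sketch of what "generate it yourself" can look like (my own illustration, not the poster's code), here a version 4 UUID is assembled in Java from SecureRandom, which the platform seeds from sources such as /dev/urandom, with the six fixed bits set by hand:

import java.nio.ByteBuffer;
import java.security.SecureRandom;
import java.util.UUID;

public class RandomUuid {
    private static final SecureRandom RNG = new SecureRandom(); // CSPRNG, OS-seeded

    public static UUID next() {
        byte[] bytes = new byte[16];
        RNG.nextBytes(bytes);
        bytes[6] = (byte) ((bytes[6] & 0x0f) | 0x40); // four version bits -> 4
        bytes[8] = (byte) ((bytes[8] & 0x3f) | 0x80); // two variant bits -> RFC 4122
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        return new UUID(buf.getLong(), buf.getLong()); // high and low 64 bits
    }

    public static void main(String[] args) {
        System.out.println(next());
    }
}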

For UUID4 I make it that there are approximately as many IDs as there are grains of sand in a cube-shaped box with sides 369,000 km long. That's a box with sides ~2 1/2 times longer than Jupiter's diameter.
Working so someone can tell me if I've messed up units:
volume of a grain of sand: 0.00947 mm^3 (Guardian)
UUID4 has 122 random bits -> 5.3e36 possible values (Wikipedia)
volume of that many grains of sand: 5.0191e34 mm^3, or 5.0191e25 m^3
side length of a cubic box with that volume: 3.69e8 m, or 369,000 km
diameter of Jupiter: 139,820 km (Google)
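As a quick sanity check of that arithmetic (my own working, reusing the figures above):

import java.math.BigDecimal;
import java.math.BigInteger;

public class SandMath {
    public static void main(String[] args) {
        BigInteger ids = BigInteger.ONE.shiftLeft(122);   // 2^122 possible UUID4 values
        BigDecimal grainMm3 = new BigDecimal("0.00947");  // one grain of sand, in mm^3
        BigDecimal volumeM3 = new BigDecimal(ids)
                .multiply(grainMm3)
                .movePointLeft(9);                        // 1 m^3 = 1e9 mm^3
        double sideKm = Math.cbrt(volumeM3.doubleValue()) / 1000.0;
        System.out.printf("volume %.3e m^3, cube side %.0f km%n", volumeM3, sideKm);
        // prints a cube side of roughly 369,000 km
    }
}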

I concur with the other answers. UUIDs are safe enough for nearly all practical purposes¹, and certainly for yours.
But suppose (hypothetically) that they aren't.
Is there a better system or a pattern of some type to alleviate this issue?
Here are a couple of approaches:
Use a bigger UUID. For instance, instead of 128 random bits, use 256 or 512 or ... Each bit you add to a type-4 style UUID will halve the probability of a collision, assuming that you have a reliable source of entropy².
Build a centralized or distributed service that generates UUIDs and records each and every one it has ever issued. Each time it generates a new one, it checks that the UUID has never been issued before. Such a service would be technically straightforward to implement (I think) if we assumed that the people running the service were absolutely trustworthy, incorruptible, etcetera. Unfortunately, they aren't ... especially when there is the possibility of governments' security organizations interfering. So, this approach is probably impractical, and may be³ impossible in the real world.
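A minimal sketch of the first approach in Java (the names are mine): a 256-bit random identifier, rendered as hex.

import java.security.SecureRandom;

public class BigId {
    private static final SecureRandom RNG = new SecureRandom();

    public static String next256() {
        byte[] bytes = new byte[32];               // 256 bits of entropy
        RNG.nextBytes(bytes);
        StringBuilder hex = new StringBuilder(64);
        for (byte b : bytes) {
            hex.append(String.format("%02x", b)); // two hex digits per byte
        }
        return hex.toString();
    }

    public static void main(String[] args) {
        System.out.println(next256());
    }
}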
1 - If uniqueness of UUIDs determined whether nuclear missiles got launched at your country's capital city, a lot of your fellow citizens would not be convinced by "the probability is extremely low". Hence my "nearly all" qualification.
2 - And here's a philosophical question for you. Is anything ever truly random? How would we know if it wasn't? Is the universe as we know it a simulation? Is there a God who might conceivably "tweak" the laws of physics to alter an outcome?
3 - If anyone knows of any research papers on this problem, please comment.

Quoting from Wikipedia:
Thus, anyone can create a UUID and use it to identify something with reasonable confidence that the identifier will never be unintentionally used by anyone for anything else
It goes on to explain in pretty good detail how safe it actually is. So to answer your question: yes, it's safe enough.

UUID schemes generally use not only a pseudo-random element, but also the current system time, and some sort of often-unique hardware ID if available, such as a network MAC address.
The whole point of using UUID is that you trust it to do a better job of providing a unique ID than you yourself would be able to do. This is the same rationale behind using a 3rd party cryptography library rather than rolling your own. Doing it yourself may be more fun, but it's typically less responsible to do so.

Been doing it for years. Never run into a problem.
I usually set up my DBs to have one table that contains all the keys and the modified dates and such. I haven't run into a problem with duplicate keys, ever.
The only drawback is that when you are writing queries to find some information quickly, you end up doing a lot of copying and pasting of the keys. You don't have the short, easy-to-remember IDs anymore.

Here's a testing snippet for you to test its uniqueness.
inspired by #scalabl3's comment
Funny thing is, you could generate 2 in a row that were identical, of course at mind-boggling levels of coincidence, luck and divine intervention, yet despite the unfathomable odds, it's still possible! :D Yes, it won't happen. just saying for the amusement of thinking about that moment when you created a duplicate! Screenshot video! – scalabl3 Oct 20 '15 at 19:11
If you feel lucky, check the checkbox; it only checks the currently generated IDs. If you want a history check, leave it unchecked.
Please note, you might run out of RAM at some point if you leave it unchecked. I tried to make it CPU-friendly so you can abort quickly when needed: just hit the run snippet button again or leave the page.
Math.log2 = Math.log2 || function(n){ return Math.log(n) / Math.log(2); }
Math.trueRandom = (function() {
var crypt = window.crypto || window.msCrypto;
if (crypt && crypt.getRandomValues) {
// if we have a crypto library, use it
var random = function(min, max) {
var rval = 0;
var range = max - min;
if (range < 2) {
return min;
}
var bits_needed = Math.ceil(Math.log2(range));
if (bits_needed > 53) {
throw new Error("We cannot generate numbers larger than 53 bits.");
}
var bytes_needed = Math.ceil(bits_needed / 8);
var mask = Math.pow(2, bits_needed) - 1;
// 7776 -> (2^13 = 8192) -1 == 8191 or 0x00001111 11111111
// Create byte array and fill with N random numbers
var byteArray = new Uint8Array(bytes_needed);
crypt.getRandomValues(byteArray);
var p = (bytes_needed - 1) * 8;
for(var i = 0; i < bytes_needed; i++ ) {
rval += byteArray[i] * Math.pow(2, p);
p -= 8;
}
// Use & to apply the mask and reduce the number of recursive lookups
rval = rval & mask;
if (rval >= range) {
// Integer out of acceptable range
return random(min, max);
}
// Return an integer that falls within the range
return min + rval;
}
return function() {
var r = random(0, 1000000000) / 1000000000;
return r;
};
} else {
// From http://baagoe.com/en/RandomMusings/javascript/
// Johannes Baagøe <baagoe#baagoe.com>, 2010
function Mash() {
var n = 0xefc8249d;
var mash = function(data) {
data = data.toString();
for (var i = 0; i < data.length; i++) {
n += data.charCodeAt(i);
var h = 0.02519603282416938 * n;
n = h >>> 0;
h -= n;
h *= n;
n = h >>> 0;
h -= n;
n += h * 0x100000000; // 2^32
}
return (n >>> 0) * 2.3283064365386963e-10; // 2^-32
};
mash.version = 'Mash 0.9';
return mash;
}
// From http://baagoe.com/en/RandomMusings/javascript/
function Alea() {
return (function(args) {
// Johannes Baagøe <baagoe#baagoe.com>, 2010
var s0 = 0;
var s1 = 0;
var s2 = 0;
var c = 1;
if (args.length == 0) {
args = [+new Date()];
}
var mash = Mash();
s0 = mash(' ');
s1 = mash(' ');
s2 = mash(' ');
for (var i = 0; i < args.length; i++) {
s0 -= mash(args[i]);
if (s0 < 0) {
s0 += 1;
}
s1 -= mash(args[i]);
if (s1 < 0) {
s1 += 1;
}
s2 -= mash(args[i]);
if (s2 < 0) {
s2 += 1;
}
}
mash = null;
var random = function() {
var t = 2091639 * s0 + c * 2.3283064365386963e-10; // 2^-32
s0 = s1;
s1 = s2;
return s2 = t - (c = t | 0);
};
random.uint32 = function() {
return random() * 0x100000000; // 2^32
};
random.fract53 = function() {
return random() +
(random() * 0x200000 | 0) * 1.1102230246251565e-16; // 2^-53
};
random.version = 'Alea 0.9';
random.args = args;
return random;
}(Array.prototype.slice.call(arguments)));
};
return Alea();
}
}());
Math.guid = function() {
// the fixed '4' in the template is the UUID version field
return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
var r = Math.trueRandom() * 16 | 0,
v = c == 'x' ? r : (r & 0x3 | 0x8); // 'y' becomes the RFC 4122 variant digit (8, 9, a or b)
return v.toString(16);
});
};
function logit(item1, item2) {
console.log("Do "+item1+" and "+item2+" equal? "+(item1 == item2 ? "OMG! take a screenshot and you'll be epic on the world of cryptography, buy a lottery ticket now!":"No they do not. shame. no fame")+ ", runs: "+window.numberofRuns);
}
numberofRuns = 0;
function test() {
window.numberofRuns++;
var x = Math.guid();
var y = Math.guid();
var test = x == y || historyTest(x,y);
logit(x,y);
return test;
}
historyArr = [];
historyCount = 0;
function historyTest(item1, item2) {
if(window.luckyDog) {
return false;
}
for(var i = window.historyCount - 1; i > -1; i--) {
logit(item1,window.historyArr[i]);
if(item1 == window.historyArr[i]) {
return true;
}
logit(item2,window.historyArr[i]);
if(item2 == window.historyArr[i]) {
return true;
}
}
window.historyArr.push(item1);
window.historyArr.push(item2);
window.historyCount+=2;
return false;
}
luckyDog = false;
document.body.onload = function() {
document.getElementById('runit').onclick = function() {
window.luckyDog = document.getElementById('lucky').checked;
var val = document.getElementById('input').value
if(val.trim() == '0') {
var intervaltimer = window.setInterval(function() {
var test = window.test();
if(test) {
window.clearInterval(intervaltimer);
}
},0);
}
else {
var num = parseInt(val);
if(num > 0) {
var intervaltimer = window.setInterval(function() {
var test = window.test();
num--;
if(num < 0 || test) {
window.clearInterval(intervaltimer);
}
},0);
}
}
};
};
Please input how often the calculation should run. Set to 0 for forever. Check the checkbox if you feel lucky.<BR/>
<input type="text" value="0" id="input"><input type="checkbox" id="lucky"><button id="runit">Run</button><BR/>

I don't know if this matters to you, but keep in mind that GUIDs are globally unique, but substrings of GUIDs aren't.

I should mention I bought two external Seagate drives on Amazon, and they had the same device UUID, but differing PARTUUID. Presumably the cloning software they used to format the drives just copied the UUID as well.
Obviously UUID collisions are much more likely to happen due to a flawed cloning or copying process than from random coincidence. Bear that in mind when calculating UUID risks.

Related

How to find a prime number in O(1) runtime

I got this question in an interview
Please provide a solution to check if a number is a prime number using
a loop of one - O(1). The input number can be between 1 and 10,000
only.
I said that it's impossible unless you have stored all prime numbers up to 10,000. Now I am not entirely sure whether my answer was correct. I tried to search for an answer on the internet, and the best I came up with was the AKS algorithm, with a run-time of O((log n)^6).
It is doable using an SoE (Sieve of Eratosthenes). Its result is an array of bools, usually encoded as single bits in a BYTE/WORD/DWORD array for better storage density. Also, usually only the odd numbers are stored, since the even numbers except 2 are all non-prime. Usually a true value means the number is not prime...
So the naive O(1) C++ code for checking x would look like:
bool SoE[10001]; // precomputed sieve array
int x = 27; // any x <0,10000>
bool x_is_prime = !SoE[x];
if the SoE is encoded as an 8-bit BYTE array, you need to tweak the access a bit:
BYTE SoE[1251]; // precomputed sieve array ceil(10001/8)
int x = 27; // any x <0,10000>
BYTE x_is_prime = !(SoE[x>>3] & (1<<(x&7))); // a set bit means "not prime"
Of course constructing the SoE is not O(1)!!! Here is an example that heavily uses it to speed up my IsPrime function:
Prime numbers by Eratosthenes quicker sequential than concurrently?
YES!
You can use the Sieve of Eratosthenes to check if a number is prime or not.
However, you will have to precompute it up to a certain value and store the results in an array, and then each query can be answered in O(1).
If you do not want to precompute (building the sieve takes O(n log log n) time), then you can use this concept:
if P is a prime number greater than 3, then P^2 - 1 is divisible by 24.
Note that this is a necessary condition, not a sufficient one: 25^2 - 1 = 624 is divisible by 24, yet 25 is not prime, so this test can only rule numbers out.
So in the case of C++, if the given number is less than or equal to 10^9, we can use this concept.
The source for this concept can be found at www.brilliant.org
public static boolean prime(int n) {
if(n < 2)
return false;
if(n % 2 == 0)
return n == 2;
// trial division by odd candidates up to sqrt(n); not O(1),
// but correct for the whole 1..10,000 range in the question
for(int d = 3; d * d <= n; d += 2)
if(n % d == 0)
return false;
return true;
}
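To make the precompute-then-look-up approach concrete, here is a minimal Java sketch of my own (class and method names are illustrative): it builds the sieve once for the 1..10,000 range from the question, then answers each query with a single array read.

public class SieveLookup {
    static final int LIMIT = 10000;
    // composite[i] == true means i is NOT prime (same convention as above)
    static final boolean[] composite = new boolean[LIMIT + 1];
    static {
        composite[0] = composite[1] = true;
        for (int p = 2; p * p <= LIMIT; p++) {
            if (!composite[p]) {
                for (int m = p * p; m <= LIMIT; m += p) {
                    composite[m] = true; // mark every multiple of p
                }
            }
        }
    }
    // O(1) per query once the sieve has been built
    public static boolean isPrime(int n) {
        return n >= 2 && n <= LIMIT && !composite[n];
    }
    public static void main(String[] args) {
        System.out.println(isPrime(9973)); // true: 9973 is prime
        System.out.println(isPrime(9999)); // false: 9999 = 3 * 3333
    }
}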

What is the difference between these two implementations of a recursive algorithm?

I am doing a leetcode problem.
A robot is located at the top-left corner of a m x n grid (marked 'Start' in the diagram below).
The robot can only move either down or right at any point in time. The robot is trying to reach the bottom-right corner of the grid (marked 'Finish' in the diagram below).
How many possible unique paths are there?
So I tried this implementation first and got a "Time Limit Exceeded" error (I forgot the exact term, but it means the implementation is too slow). So I changed it to version 2, which uses an array to save intermediate results. I honestly don't know how the recursion works internally and why these two implementations have such different efficiency.
version 1(slow):
class Solution {
// int res[101][101]={{0}};
public:
int uniquePaths(int m, int n) {
if (m==1 || n==1) return 1;
else{
return uniquePaths(m-1,n) + uniquePaths(m,n-1);
}
}
};
version2 (faster):
class Solution {
int res[101][101]={{0}};
public:
int uniquePaths(int m, int n) {
if (m==1 || n==1) return 1;
else{
if (res[m-1][n]==0) res[m-1][n] = uniquePaths(m-1,n);
if (res[m][n-1]==0) res[m][n-1] = uniquePaths(m,n-1);
return res[m-1][n] + res[m][n-1];
}
}
};
Version 1 is slower because you are calculating the same data again and again. I'll try to explain this with a different problem, but I guess you know Fibonacci numbers. You can calculate any Fibonacci number with the following recursive algorithm:
fib(n):
if n == 0 then return 0
if n == 1 then return 1
return fib(n-1) + fib(n-2)
But what are you actually calculating? If you want to find fib(5) you need to calculate fib(4) and fib(3); then to calculate fib(4) you need to calculate fib(3) again, and each fib(3) recomputes fib(2), so the same subproblems are solved over and over.
The same situation occurs in your code. You compute uniquePaths(m,n) even if you have calculated it before. To avoid that, your second version uses an array to store computed data, so it doesn't have to be computed again when res[m][n] != 0.
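To make the memoization idea concrete on the Fibonacci example, here is a short Java sketch of mine (not from the answer): the cache guarantees each fib(n) is computed only once.

import java.util.HashMap;
import java.util.Map;

public class Fib {
    static final Map<Integer, Long> memo = new HashMap<>();

    static long fib(int n) {
        if (n <= 1) return n;
        Long cached = memo.get(n);
        if (cached != null) return cached; // subproblem already solved
        long value = fib(n - 1) + fib(n - 2);
        memo.put(n, value);
        return value;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // 12586269025, instant thanks to the cache
    }
}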

Minimum number of jumps required to climb stairs

I recently had an interview with Microsoft for an internship and I was asked this question in the interview.
It's basically like this: you have 2 parallel staircases and both staircases have n steps. You start from the bottom and you may move upwards on either of the staircases. Each step on the staircase has a penalty attached to it.
You can also move across both the staircases with some other penalty.
I had to find the minimum penalty that will be imposed for reaching the top.
I tried writing a recurrence relation but I couldn't get anywhere because of the number of variables involved.
I recently read about dynamic programming and I think this question is related to that.
With some googling, I found that this question is the same as
https://www.hackerrank.com/contests/frost-byte-final/challenges/stairway
Can you please give a solution or an approach for this problem ?
Create two arrays to keep track of the minimal cost to reach every position. Fill both arrays with huge numbers (e.g. 1000000000) and the start of the arrays with the cost of the first step.
Then iterate over all possible steps, and use an inner loop to iterate over all possible jumps.
foreach step in (0, N) {
// we're now sure we know minimal cost to reach this step
foreach jump in (1,K) {
// updating minimal costs here
}
}
Now, every time we reach the updating step there are 4 possible moves to consider:
from A[step] to A[step+jump]
from A[step] to B[step+jump]
from B[step] to A[step+jump]
from B[step] to B[step+jump]
For each of these moves you need to compute the cost. Because you already know the optimal cost to reach A[step] and B[step], this is easy. It's not guaranteed this new move is an improvement, so only update the target cost in your array if the new cost is lower than the cost already there.
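A minimal Java sketch of that double loop (my own illustration; the cost arrays a and b, the maximum jump length k, and a fixed staircase-switching penalty c are assumptions, since the problem statement above doesn't pin them down):

import java.util.Arrays;

public class TwoStaircases {
    // a[i], b[i]: penalty of step i on staircase A / B
    // k: maximum jump length, c: extra penalty for switching staircases
    static int minPenalty(int[] a, int[] b, int k, int c) {
        int n = a.length;
        final int INF = Integer.MAX_VALUE / 2; // large, but safe to add to
        int[] costA = new int[n], costB = new int[n];
        Arrays.fill(costA, INF);
        Arrays.fill(costB, INF);
        costA[0] = a[0]; // pay for the first step you stand on
        costB[0] = b[0];
        for (int step = 0; step + 1 < n; step++) {          // cost to reach 'step' is final here
            for (int jump = 1; jump <= k && step + jump < n; jump++) {
                int next = step + jump;
                // four possible moves: stay on A or B, or switch (paying c)
                costA[next] = Math.min(costA[next],
                        Math.min(costA[step], costB[step] + c) + a[next]);
                costB[next] = Math.min(costB[next],
                        Math.min(costB[step], costA[step] + c) + b[next]);
            }
        }
        return Math.min(costA[n - 1], costB[n - 1]);
    }

    public static void main(String[] args) {
        int[] a = {1, 9, 9, 1};
        int[] b = {2, 1, 1, 5};
        System.out.println(minPenalty(a, b, 2, 1)); // 5: start on B, switch to A at the top
    }
}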
Isn't this just a directed graph search? Any kind of simple pathfinding algorithm could handle this. See
https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
https://en.wikipedia.org/wiki/A*_search_algorithm
Just make sure you enforce the direction of the stairs (up only) and account for the penalties (edge weights).
Worked solution
Of course, you could do it with dynamic programming, but I wouldn't be the one to ask for that...
import java.io.*;
import java.util.*;
public class Main {
public static int csmj(int []a,int src,int[] dp){
if(src>=a.length){
return Integer.MAX_VALUE-1;
}
if(src==a.length-1){
return 0;
}
if(dp[src]!=0){
return dp[src];
}
int count=Integer.MAX_VALUE-1;
for(int i=1;i<=a[src];i++){
count = Math.min( count , csmj(a,src+i,dp)+1 );
}
dp[src] = count;
return count;
}
public static void main(String args[] ) throws Exception {
Scanner s = new Scanner(System.in);
int n = s.nextInt();
int a[] = new int[n];
for(int i=0;i<n;i++){
a[i] = s.nextInt();
}
int minJumps = csmj(a,0,new int[n]);
System.out.println(minJumps);
}
}
Bro, you can have a look at that solution; that's my intuition for it.

OutOfMemoryError during heuristic search

I'm writing a program to solve an 8-tile sliding puzzle for an AI class. In theory this is pretty easy, but the number of node states generated is pretty large (estimated 180,000 or so). We're comparing different heuristic functions in class, so my code has to be able to handle even some very inefficient ones. I'm getting "OutOfMemoryError: Java heap space" when using Java's PriorityQueue class. Here's the relevant code within my solver function (the error is on the openList.add(temp); line):
public void solve(char[] init,int searchOrder)
{
State initial = new State(init,searchOrder); //create initial state
openList = new PriorityQueue<State>(); //create open list
closedList = new LinkedList<State>(); // create closed list
generated = new HashSet<State>(); //Keeps track of all nodes generated to cut down search time
openList.add(initial); //add initial state to the open list
State expanded,temp = null,solution = null; //State currently being expanded
int nodesStored = 0, nodesExpanded = 0;
boolean same; //used for checking for state redundancy
TreeGeneration:
while(openList.size() > 0)
{
expanded = openList.poll();
closedList.addLast(expanded);
for (int k = 0; k < 4; k++)
{
if (k == 0)
{
temp = expanded.moveLeft();
}
else if (k == 1)
{
temp = expanded.moveRight();
}
else if (k == 2)
{
temp = expanded.moveAbove();
}
else
{
temp = expanded.moveBelow();
}
if(temp.isSolution())
{
solution = temp;
nodesStored = openList.size() + closedList.size();
nodesExpanded = closedList.size();
break TreeGeneration;
}
if(!generated.contains(temp))
{
// System.out.println(temp.toString());
openList.add(temp); // error here
generated.add(temp);
}
// System.out.println(openList.toString());
}
}
}
Am I doing something wrong here, or should I be using something else to handle this quantity of data? Thanks.
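One thing worth checking before reaching for a bigger heap (an assumption on my part, since the State class isn't shown): generated.contains(temp) only filters out repeated states if State overrides equals and hashCode. With the Object defaults, every generated node looks new, the open list grows without bound, and an OutOfMemoryError is exactly what you'd see. A minimal sketch, assuming the board is stored in a char[] field called tiles:

import java.util.Arrays;

public class State {
    private final char[] tiles; // assumed board representation

    public State(char[] tiles) {
        this.tiles = tiles;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof State)) return false;
        return Arrays.equals(tiles, ((State) o).tiles); // same board layout
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(tiles); // must be consistent with equals
    }
}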
By default, the JVM starts with a fairly small heap (64 MB on many older JVMs); you can increase this amount by passing a parameter like the one below:
java -Xmx1024m YOUR_CLASS
This gives 1024 MB of heap space; you can change the amount of memory as you need.
If you are using NetBeans, it doesn't scale heap space automatically; you can achieve this by following the steps below:
1. Right-click on your project
2. Navigate to Set Configuration -> Customize
3. Add -Xmx256m to VM Options, then click Ok
Now, you can run your project with custom heap space.

Modifying motion vectors in ffmpeg H.264 decoder

For research purposes, I am trying to modify H.264 motion vectors (MVs) for each P- and B-frame prior to motion compensation during the decoding process. I am using FFmpeg for this purpose. An example of a modification is replacing each MV with its original spatial neighbors and then using the resultant MVs for motion compensation, rather than the original ones. Please direct me appropriately.
So far, I have been able to do a simple modification of MVs in the file /libavcodec/h264_cavlc.c. In the function, ff_h264_decode_mb_cavlc(), modifying the mx and my variables, for instance, by increasing their values modifies the MVs used during decoding.
For example, as shown below, the mx and my values are increased by 50, thus lengthening the MVs used in the decoder.
mx += get_se_golomb(&s->gb)+50;
my += get_se_golomb(&s->gb)+50;
However, in this regard, I don't know how to access the neighbors of mx and my for my spatial mean analysis that I mentioned in the first paragraph. I believe that the key to doing so lies in manipulating the array, mv_cache.
Another experiment that I performed was in the file libavcodec/error_resilience.c. Based on the guess_mv() function, I created a new function, mean_mv(), that is executed in ff_er_frame_end() within the first if-statement. That first if-statement exits the function ff_er_frame_end() if one of the conditions is a zero error-count (s->error_count == 0). However, I decided to insert my mean_mv() function at this point so that it is always executed when there is a zero error-count. This experiment somewhat yielded the results I wanted, as I could start seeing artifacts in the top portions of the video, but they were restricted to just the upper-right corner. I'm guessing that my inserted function is not completing in time to meet playback deadlines or something.
Below is the modified if-statement. The only addition is my function, mean_mv(s).
if(!s->error_recognition || s->error_count==0 || s->avctx->lowres ||
s->avctx->hwaccel ||
s->avctx->codec->capabilities&CODEC_CAP_HWACCEL_VDPAU ||
s->picture_structure != PICT_FRAME || // we dont support ER of field pictures yet, though it should not crash if enabled
s->error_count==3*s->mb_width*(s->avctx->skip_top + s->avctx->skip_bottom)) {
//av_log(s->avctx, AV_LOG_DEBUG, "ff_er_frame_end in er.c\n"); //KG
if(s->pict_type==AV_PICTURE_TYPE_P)
mean_mv(s);
return;
}
And here's the mean_mv() function I created based on guess_mv().
static void mean_mv(MpegEncContext *s){
//uint8_t fixed[s->mb_stride * s->mb_height];
//const int mb_stride = s->mb_stride;
const int mb_width = s->mb_width;
const int mb_height= s->mb_height;
int mb_x, mb_y, mot_step, mot_stride;
//av_log(s->avctx, AV_LOG_DEBUG, "mean_mv\n"); //KG
set_mv_strides(s, &mot_step, &mot_stride);
for(mb_y=0; mb_y<s->mb_height; mb_y++){
for(mb_x=0; mb_x<s->mb_width; mb_x++){
const int mb_xy= mb_x + mb_y*s->mb_stride;
const int mot_index= (mb_x + mb_y*mot_stride) * mot_step;
int mv_predictor[5][2]={{0}}; // one spare slot for the mean of the neighbours
int ref[5]={0};
int pred_count=0;
int m, n;
if(IS_INTRA(s->current_picture.f.mb_type[mb_xy])) continue;
//if(!(s->error_status_table[mb_xy]&MV_ERROR)){
//if (1){
if(mb_x>0){
mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index - mot_step][0];
mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index - mot_step][1];
ref [pred_count] = s->current_picture.f.ref_index[0][4*(mb_xy-1)];
pred_count++;
}
if(mb_x+1<mb_width){
mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index + mot_step][0];
mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index + mot_step][1];
ref [pred_count] = s->current_picture.f.ref_index[0][4*(mb_xy+1)];
pred_count++;
}
if(mb_y>0){
mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index - mot_stride*mot_step][0];
mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index - mot_stride*mot_step][1];
ref [pred_count] = s->current_picture.f.ref_index[0][4*(mb_xy-s->mb_stride)];
pred_count++;
}
if(mb_y+1<mb_height){
mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index + mot_stride*mot_step][0];
mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index + mot_stride*mot_step][1];
ref [pred_count] = s->current_picture.f.ref_index[0][4*(mb_xy+s->mb_stride)];
pred_count++;
}
if(pred_count==0) continue;
if(pred_count>=1){
int sum_x=0, sum_y=0, sum_r=0;
int k;
for(k=0; k<pred_count; k++){
sum_x+= mv_predictor[k][0]; // Sum all the MVx from MVs avail. for EC
sum_y+= mv_predictor[k][1]; // Sum all the MVy from MVs avail. for EC
sum_r+= ref[k];
// if(k && ref[k] != ref[k-1])
// goto skip_mean_and_median;
}
mv_predictor[pred_count][0] = sum_x/k;
mv_predictor[pred_count][1] = sum_y/k;
ref [pred_count] = sum_r/k;
}
s->mv[0][0][0] = mv_predictor[pred_count][0];
s->mv[0][0][1] = mv_predictor[pred_count][1];
for(m=0; m<mot_step; m++){
for(n=0; n<mot_step; n++){
s->current_picture.f.motion_val[0][mot_index + m + n * mot_stride][0] = s->mv[0][0][0];
s->current_picture.f.motion_val[0][mot_index + m + n * mot_stride][1] = s->mv[0][0][1];
}
}
decode_mb(s, ref[pred_count]);
//}
}
}
}
I would really appreciate some assistance on how to go about this properly.
It's been a long time since I was in touch with FFmpeg's code internals.
However, given my experience with FFmpeg's internal horrors (you would know what I mean), I would rather give you some simple pragmatic advice.
Suggestion #1
The best possibility is that when the motion vectors of each of the blocks are identified, you can create your own additional array inside the FFmpeg encoder context (a.k.a. s) which will store all of them. When your algorithm runs, it will pick up the values from there.
Suggestion #2
Another thing I read (I am not sure if I read it right):
the mx and my values are increased by 50
I think 50 is a very large motion vector, and usually the range allowed for motion vectors in the encoding is restricted beforehand. If you alter things by +/- 8 (or even +/- 16) it might just be OK, but +50 could be so high that the end result may not encode things properly.
I didn't quite understand your objective with mean_mv() and what failure you expect from it. Please re-phrase a bit.
