ASN1 OBJECT_IDENTIFIER decoding

I have the following ASN1 data
Sequence
  Sequence
    ObjectIdentifier
  Sequence
    Sequence
      Integer
      Integer
    Sequence
      Integer
      Integer
My goal is to get the encoded integer values. My code so far is the following:
ByteQueue queue(inputLen);
queue.Put2(input, inputLen, 0, false);

BERSequenceDecoder outer(queue);
BERSequenceDecoder discard(outer); // unnecessary sequence with object_identifier
BERSequenceDecoder obj(discard,
    CryptoPP::ASNTag::OBJECT_IDENTIFIER | CryptoPP::ASNIdFlag::UNIVERSAL);
BERSequenceDecoder parent(outer); // BER decode error
for (int i = 0; i < 2; i++) {
    BERSequenceDecoder dataSequence(parent);
    Integer i1, i2;
    i1.BERDecode(dataSequence);
    i2.BERDecode(dataSequence);
}
The problem is, I don't know how to properly get past the object_identifier part; at least I think that is the problem. I'm getting a BER decode error on the fourth decoder object.
Also, am I initializing the ByteQueue correctly? The Put2 method doesn't seem like the correct way, but I didn't find any other methods.

ByteQueue queue(inputLen);
queue.Put2(input, inputLen, 0, false);
You could also do something like:
ArraySource as(input, inputLen, false /*pumpAll*/);
as.TransferTo(queue);
Or, if you just want to copy them:
as.CopyTo(queue);
Problem is, I don't know how to properly get past the object_identifier part...
I would probably do something like:
byte b;
if (as.Peek(b) == 1 && b == /*some tag*/)
    as.Skip(n);
Or:
byte b;
if (as.Peek(b) == 1 && b == /*some tag*/)
{
    as.Skip(1); // consume the tag octet before decoding the length
    lword length;
    bool definiteLength;
    if (!BERLengthDecode(as, length, definiteLength))
        throw BERDecodeErr();
    as.Skip(length);
}
The source files with the goodies like the above are asn.h and asn.cpp. Other functions you might be interested in include BERDecodeOctetString and BERDecodeBitString.
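Outside Crypto++, the same walk can be expressed with .NET's System.Formats.Asn1 reader, which may help to see the intended structure. A minimal sketch, assuming the exact layout from the question and a byte[] input:

using System.Formats.Asn1;
using System.Numerics;

var outer = new AsnReader(input, AsnEncodingRules.BER).ReadSequence();
var oidHolder = outer.ReadSequence();          // the unnecessary wrapper sequence
string oid = oidHolder.ReadObjectIdentifier(); // consumes (i.e. skips past) the OID
var parent = outer.ReadSequence();
for (int i = 0; i < 2; i++)
{
    var pair = parent.ReadSequence();
    BigInteger i1 = pair.ReadInteger();
    BigInteger i2 = pair.ReadInteger();
}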


Modified function not working as intended without recursion

I have a recursive function which iterates through directory trees, listing the file names located in them.
Here is the function:
void WINAPI SearchFile(PSTR Directory)
{
    HANDLE hFind;
    WIN32_FIND_DATA FindData;
    char SearchName[1024], FullPath[1024];

    memset(SearchName, 0, sizeof(SearchName));
    memset(&FindData, 0, sizeof(WIN32_FIND_DATA));
    sprintf(SearchName, "%s\\*", Directory);

    hFind = FindFirstFile(SearchName, &FindData);
    if (hFind != INVALID_HANDLE_VALUE)
    {
        while (FindNextFile(hFind, &FindData))
        {
            if (FindData.cFileName[0] == '.')
            {
                continue;
            }
            memset(FullPath, 0, sizeof(FullPath));
            sprintf(FullPath, "%s\\%s", Directory, FindData.cFileName);
            if (FindData.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
            {
                MessageBoxA(NULL, FullPath, "Directory", MB_OK);
                SearchFile(FullPath);
            }
            else
            {
                MessageBoxA(NULL, FullPath, "File", MB_OK);
            }
        }
        FindClose(hFind);
    }
}
I then modified the function to use goto label; instead of the recursive call, but the modified version only lists a single directory tree before ending. There are obviously differences between the two functions, but I don't understand what's making them act differently. Does anyone know why I am having this problem?
To understand the error quickly, look at this line in the modified version:
goto label;
//SearchFile(FullPath);
At this point hFind contains valid data, and FindClose(hFind) still needs to be called for it. But once goto label; executes, you overwrite hFind with hFind = FindFirstFile(SearchName, &FindData);. So you never close the original hFind, and you can never return to continue iterating a folder after descending into one of its sub-folders. This is the key point: you need to save the original hFind before descending into a sub-directory and restore it afterwards. A recursive call does this automatically, because every sub-directory is enumerated in its own stack frame with its own hFind. That is why recursion is the natural solution here.
It is still possible to convert the recursion to a loop, because we always call ourselves from a single place and return to that single place. So instead of saving a return address on the stack, we can perform an unconditional jump (goto) back to a known location, provided we save and restore the per-level state ourselves.
The code also has some additional errors: you never check for string buffer overflow; 1024 is hard-coded as the maximum length even though a file path can be up to 32768 characters; you don't check for reparse points, so you can enter an infinite loop; you use FindFirstFile instead of FindFirstFileEx; etc.
Correct code to enumerate sub-folders in a loop could be as follows:
void DoEnum(PCWSTR pcszRoot)
{
    SIZE_T FileNameLength = wcslen(pcszRoot);

    // initial check for . and ..
    switch (FileNameLength)
    {
    case 2:
        if (pcszRoot[1] != '.') break;
    case 1:
        if (pcszRoot[0] == '.') return;
    }

    static const WCHAR mask[] = L"\\*";
    WCHAR FileName[MAXSHORT + 1];

    if (_countof(FileName) < FileNameLength + _countof(mask))
    {
        return;
    }

    ULONG dwError;
    HANDLE hFindFile = 0;
    WIN32_FIND_DATA FindData{};

    enum { MaxDeep = 0x200 };

    //++ stack
    HANDLE hFindFileV[MaxDeep];
    PWSTR pszV[MaxDeep];
    char prefix[MaxDeep + 1];
    //-- stack

    ULONG Level = MaxDeep;

    memset(prefix, '\t', MaxDeep);
    prefix[MaxDeep] = 0;

    PWSTR psz = FileName;
    goto __enter;

__loop:
    hFindFile = FindFirstFileEx(FileName, FindExInfoBasic, &FindData, FindExSearchNameMatch, 0, FIND_FIRST_EX_LARGE_FETCH);
    if (hFindFile != INVALID_HANDLE_VALUE)
    {
        do
        {
            pcszRoot = FindData.cFileName;

            // skip . and ..
            switch (FileNameLength = wcslen(pcszRoot))
            {
            case 2:
                if (pcszRoot[1] != '.') break;
            case 1:
                if (pcszRoot[0] == '.') continue;
            }

            if (FindData.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
            {
                if ((SIZE_T)(FileName + _countof(FileName) - psz) < FileNameLength + _countof(mask))
                {
                    continue;
                }
__enter:
                memcpy(psz, pcszRoot, (1 + FileNameLength) * sizeof(WCHAR));

                if (FindData.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT)
                {
                    DbgPrint("%sreparse point: <%S>\n", prefix + Level, pcszRoot);
                }
                else
                {
                    if (Level)
                    {
                        DbgPrint("%s<%S>\n", prefix + Level, psz);
                        hFindFileV[--Level] = hFindFile;
                        pszV[Level] = psz;
                        memcpy(psz += FileNameLength, mask, sizeof(mask));
                        psz++;
                        goto __loop;
__return:
                        *--psz = 0;
                        psz = pszV[Level];
                        hFindFile = hFindFileV[Level++];
                        DbgPrint("%s</%S>\n", prefix + Level, psz);
                    }
                }
            }
            else
            {
                DbgPrint("%s[%u%u] %S\n", prefix + Level, FindData.nFileSizeLow, FindData.nFileSizeHigh, pcszRoot);
            }

            if (!hFindFile)
            {
                // top level exit
                return;
            }

        } while (FindNextFile(hFindFile, &FindData));

        if ((dwError = GetLastError()) == ERROR_NO_MORE_FILES)
        {
            dwError = NOERROR;
        }

        FindClose(hFindFile);
    }
    else
    {
        dwError = GetLastError();
    }

    if (dwError)
    {
        DbgPrint("<%S> err = %u\n", FileName, dwError);
    }

    goto __return;
}
The reason for the difference is the control flow introduced by goto label. In the recursive version, after a recursive call completes, execution returns to the call site and continues: you go on executing while (FindNextFile(hFind, &FindData)). But when you use goto label, you jump out of the original loop and restart from the label, which leads to what you described: listing a single directory tree before ending.
If you rewrite your modified code as the following iterative version, you can see why the problem occurs.
void fun()
{
    char* Directory = "D:\\test";
    HANDLE hFind;
    WIN32_FIND_DATA FindData;
    char SearchName[1024], FullPath[1024];
    char LastName[1024] = "";

    while (1)
    {
        memset(SearchName, 0, sizeof(SearchName));
        memset(&FindData, 0, sizeof(WIN32_FIND_DATA));
        sprintf(SearchName, "%s\\*", Directory);
        if (strcmp(SearchName, LastName) == 0)
        {
            return;
        }
        strcpy(LastName, SearchName);
        hFind = FindFirstFile(SearchName, &FindData);
        if (hFind != INVALID_HANDLE_VALUE)
        {
            while (FindNextFile(hFind, &FindData))
            {
                if (FindData.cFileName[0] == '.')
                {
                    continue;
                }
                memset(FullPath, 0, sizeof(FullPath));
                sprintf(FullPath, "%s\\%s", Directory, FindData.cFileName);
                if (FindData.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
                {
                    MessageBoxA(NULL, Directory, "Directory", MB_OK);
                    char cArray[1024];
                    memset(cArray, 0, sizeof(cArray));
                    sprintf(cArray, "%s", FullPath);
                    Directory = cArray;
                    break;
                }
                else
                {
                    MessageBoxA(NULL, FullPath, "File", MB_OK);
                }
            }
            FindClose(hFind);
        }
    }
}
So you cannot achieve the same purpose as recursion by simply using goto; to keep the traversal state you would effectively have to rebuild the call stack yourself. Of course, there is also a way to traverse directories non-recursively using a queue, which is a more systematic approach.
One of the key things that you obtain from recursion is a separate set of local variables for each call to the recursive function. When a function calls itself, and in the recursive call modifies local variables, those local-variable changes do not (directly) affect the local variables of the caller. In your original program, this applies to variables hFind, FindData, SearchName, and FullPath.
If you want similar behavior in a non-recursive version of the function then you need to manually preserve the state of your traversal of one level of the tree when you descend to another level. The goto statement doesn't do any such thing -- it just redirects the control flow of your program. Although there are a few good use cases for goto in C, they are uncommon, and yours is not one of them.
There are several ways to implement manually preserving state, but I would suggest
creating a structure type in which to store those data that characterize the state of your traversal of a particular level. Those appear to be only hFind and FindData -- it looks like the other locals don't need to be preserved. Maybe something like this, then:
struct dir_state {
    HANDLE hFind;
    WIN32_FIND_DATA FindData;
};
Dynamically allocating an array of structures of that type.
unsigned depth_limit = DEFAULT_DEPTH_LIMIT;
struct dir_state *traversal_states
    = malloc(depth_limit * sizeof(*traversal_states));
if (traversal_states == NULL) // ... handle allocation error ...
Tracking the depth of your tree traversal, and for each directory you process, using the array element whose index is the relative depth of that directory.
// For example:
traversal_states[depth].hFind
    = FindFirstFile(SearchName, &traversal_states[depth].FindData);
// etc.
Remembering the size of the array, so as to be able to reallocate it larger if the traversal descends too deep for its current size.
// For example:
if (depth >= depth_limit) {
    depth_limit = depth_limit * 3 / 2;
    struct dir_state *temp
        = realloc(traversal_states, depth_limit * sizeof(*traversal_states));
    if (temp == NULL) {
        // handle error, discontinuing traversal
        // (must not fall through to the assignment below)
    }
    traversal_states = temp;
}
Also, use an ordinary for, while, or do loop instead of a backward-jumping goto. There will be a few details to work out to track when to use FindFirstFile and when FindNextFile (which you would still have with goto), but I'm sure you can sort it out.
Details are left as an exercise.
Unless it's necessary due to memory or processing constraints, or infinite-recursion edge cases that would be a complication to introduce, there really isn't much need to avoid recursion here, since it leads to a readable and elegant solution.
I also want to point out that in "modern" C, any solution using a goto is likely not a solution you want, since gotos are so often confusing to use and lead to memory issues (we have loops now to make all of that much simpler).
Instead of the gotos, I would suggest implementing a stack of directories, as sketched below. Wrap the printing logic in a while or do-while, and as you iterate over the files, add any directories to the stack. At every new iteration, pop and walk the directory at the head of the stack. The loop condition just needs to check whether the directory stack is empty before continuing its block.
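A minimal sketch of that stack-based approach, written in C# with System.IO for brevity rather than the Win32 API the question uses:

using System;
using System.Collections.Generic;
using System.IO;

static void SearchFiles(string root)
{
    var pending = new Stack<string>();
    pending.Push(root);

    while (pending.Count > 0) // loop until the directory stack is empty
    {
        string dir = pending.Pop();
        foreach (string entry in Directory.EnumerateFileSystemEntries(dir))
        {
            if (Directory.Exists(entry))
                pending.Push(entry);      // descend into it on a later iteration
            else
                Console.WriteLine(entry); // "print" the file
        }
    }
}

A production version would also guard against reparse points and access-denied errors, as discussed above.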

Implementing an IObservable to compute digits of Pi

This is an academic exercise; I'm new to Reactive Extensions and trying to get my head around the technology. I set myself a goal of making an IObservable that returns successive digits of Pi (I happen to be really interested in Pi right at the moment for unrelated reasons). Reactive Extensions contains operators for making observables, and the guidance they give is that you should "almost never need to create your own IObservable". But I can't see how I can do this with the ready-made operators and methods. Let me elucidate a bit more.
I was planning to use an algorithm that would involve the expansion of a Taylor series for Arctan. To get the next digit of Pi, I'd expand a few more terms in the series.
So I need the series expansion going on asynchronously, occasionally throwing out the next computed digit to the IObserver. I obviously don't want to restart the computation from scratch for each new digit.
Is there a way to implement this behaviour using RX's built-in operators, or am I going to have to code an IObservable from scratch? What strategy suggests itself?
For something like this, the simplest method would be to use a Subject. A Subject is both an IObservable and an IObserver, which sounds a bit strange, but it allows you to use it like this:
class PiCalculator
{
    private readonly Subject<int> resultStream = new Subject<int>();

    public IObservable<int> ResultStream
    {
        get { return resultStream; }
    }

    public void Start()
    {
        // Whatever the algorithm actually is
        for (int i = 0; i < 1000; i++)
        {
            resultStream.OnNext(i);
        }
    }
}
So inside your algorithm, you just call OnNext on the subject whenever you want to produce the next value.
Then to use it, you just need something like:
var piCalculator = new PiCalculator();
piCalculator.ResultStream.Subscribe(n => Console.WriteLine(n));
piCalculator.Start();
Simplest way is to create an Enumerable and then convert it:
IEnumerable<int> Pi()
{
    // algorithm here
    for (int i = 0; i < 1000; i++)
    {
        yield return i;
    }
}
Usage (for a cold observable, that is every new 'subscription' starts creating Pi from scratch):
var cold = Pi().ToObservable(Scheduler.ThreadPool);
cold.Take(5).Subscribe(Console.WriteLine);
If you want to make it hot (everyone shares the same underlying calculation), you can just do this:
var hot = cold.Publish().RefCount();
Which will start the calculation after the first subscriber, and stop it when they all disconnect. Here's a simple test:
hot.Subscribe(p => Console.WriteLine("hot1: " + p));
Thread.Sleep(5);
hot.Subscribe(p => Console.WriteLine("hot2: " + p));
Which should show hot1 printing only for a little while, then hot2 joining in after a short wait but printing the same numbers. If this was done with cold, the two subscriptions would each start from 0.
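If you want an actual digit producer to plug into either answer above, here is a sketch of Gibbons' unbounded spigot algorithm as an IEnumerable<int> (a different algorithm from the Arctan series in the question, but convenient because it streams digits one at a time); it yields 3, 1, 4, 1, 5, 9, ... indefinitely:

using System.Collections.Generic;
using System.Numerics;

static IEnumerable<int> PiDigits()
{
    // state of Gibbons' unbounded spigot; all exact integer arithmetic
    BigInteger q = 1, r = 0, t = 1, k = 1, n = 3, l = 3;
    while (true)
    {
        if (4 * q + r - t < n * t)
        {
            // n is a confirmed digit: emit it, then shift and rescale the state
            yield return (int)n;
            (q, r, n) = (10 * q, 10 * (r - n * t), 10 * (3 * q + r) / t - 10 * n);
        }
        else
        {
            // otherwise consume another term of the underlying series
            (q, r, t, k, n, l) = (q * k, (2 * q + r) * l, t * l, k + 1,
                (q * (7 * k + 2) + r * l) / (t * l), l + 2);
        }
    }
}

Combined with the answer above, PiDigits().ToObservable(Scheduler.ThreadPool) then emits genuine digits of Pi.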

display records which exist in file2 but not in file1

Log file1 contains records of customers (name, id, date) who visited yesterday.
Log file2 contains records of customers (name, id, date) who visited today.
How would you display customers who visited yesterday but not today?
Constraint: don't use an auxiliary data structure, because the files contain millions of records. [So, no hashes.]
Is there a way to do this using Unix commands?
Here's an example, but check the man page of comm for the exact options you want. To show only the lines unique to yesterday's file, suppress the lines unique to today (-2) and the lines common to both (-3):
comm -23 <(sort -u yesterday) <(sort -u today)
The other tool you can use is diff; lines prefixed with < are unique to yesterday's file:
diff <(sort -u yesterday) <(sort -u today)
I was personally going to build a data structure holding the records of visits, but I can see how you'd do it another way too.
In pseudocode it looks something like Python, though it could be rewritten in Perl or a shell script or ...
import fileinput
import subprocess

for line in fileinput.input(['file2']):
    # split out the data; for the sake of it I'm assuming name\tid\tdate
    fields = line.rstrip('\n').split('\t')
    cust_id = fields[1]
    # grep -q exits with status 0 if the id occurs anywhere in file1
    found = subprocess.run(['grep', '-q', cust_id, 'file1']).returncode == 0
    if not found:
        print(fields)  # it wasn't in file1
That's not perfect and not tested, so treat it appropriately, but it gives you the gist of how you'd use Unix commands. That said, as sfussenegger points out, C/C++ (if that's what you're using) should be able to handle pretty large files.
Disclaimer: this is a not-so-neat solution (repeatedly calling grep) to match the requirements of the question. If I were doing it, I would use C.
Is a customer identified by id? Is it an int or long? If the answer to both questions is yes, an array with 10,000,000 integers shouldn't take more than 10M*4 = 40MB memory - not a big deal on decent hardware. Simply sort and compare them.
btw, sorting an array with 10M random ints takes less than 2 seconds on my machine - again, nothing to be afraid of.
Here's some very simple Java code:
public static void main(final String args[]) throws Exception {
    // elements in each log file
    int count = 10000000;

    // "read" our log files
    Random r = new Random();
    int[] a1 = new int[count];
    int[] a2 = new int[count];
    for (int i = 0; i < count; i++) {
        a1[i] = Math.abs(r.nextInt());
        a2[i] = Math.abs(r.nextInt());
    }

    // start timer
    long start = System.currentTimeMillis();

    // sort logs
    Arrays.sort(a1);
    Arrays.sort(a2);

    // counters for each array
    int i1 = 0, i2 = 0, i3 = 0;
    // initial values
    int n1 = a1[0], n2 = a2[0];
    // result array
    int[] a3 = new int[count];

    try {
        while (true) {
            if (n1 == n2) {
                // we found a match, save value if unique and increment counters
                if (i3 == 0 || a3[i3 - 1] != n1) a3[i3++] = n1;
                n1 = a1[i1++];
                n2 = a2[i2++];
            } else if (n1 < n2) {
                // n1 is lower, increment its counter (next value is higher)
                n1 = a1[i1++];
            } else {
                // n2 is lower, increment its counter (next value is higher)
                n2 = a2[i2++];
            }
        }
    } catch (ArrayIndexOutOfBoundsException e) {
        // don't try this at home - it's not the prettiest way to leave the loop!
    }

    // we found our results
    System.out.println(i3 + " common clients");
    System.out.println((System.currentTimeMillis() - start) + "ms");
}
Result:
// sample output on my machine:
46308 common clients
3643ms
As you see, it's quite efficient for 10M records in each log.
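For the question as literally asked (yesterday's records that are missing from today's), the same sorted-merge idea works without the exception exit. A sketch in C#, assuming both files are pre-sorted (e.g. with sort -u) and hold one record per line:

using System.Collections.Generic;
using System.IO;

static IEnumerable<string> OnlyInFirst(string firstPath, string secondPath)
{
    using var first = new StreamReader(firstPath);
    using var second = new StreamReader(secondPath);
    string a = first.ReadLine(), b = second.ReadLine();

    while (a != null)
    {
        // once the second file is exhausted, everything left in the first is unmatched
        int cmp = b == null ? -1 : string.CompareOrdinal(a, b);
        if (cmp < 0) { yield return a; a = first.ReadLine(); }              // only in first
        else if (cmp == 0) { a = first.ReadLine(); b = second.ReadLine(); } // in both
        else { b = second.ReadLine(); }                                     // only in second
    }
}

This streams both files with constant memory, which also satisfies the no-auxiliary-data-structure constraint.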

Filehelpers ExcelStorage.ExtractRecords fails when first cell is empty

When the first cell of an Excel sheet imported using ExcelStorage.ExtractRecords is empty, the process fails. I.e., if the data starts at col 1, row 2, and cell (2,1) has an empty value, the method fails.
Does anybody know how to work around this? I've tried adding a FieldNullValue attribute to the mapping class with no luck.
Here is a sample project that shows the code with problems.
Hope somebody can help me or point me in some direction.
Thank you!
It looks like you have stumbled upon an issue in FileHelpers.
What is happening is that the ExcelStorage.ExtractRecords method uses an empty cell check to see if it has reached the end of the sheet. This can be seen in the ExcelStorage.cs source code:
while (CellAsString(cRow, mStartColumn) != String.Empty)
{
    try
    {
        recordNumber++;
        Notify(mNotifyHandler, mProgressMode, recordNumber, -1);
        colValues = RowValues(cRow, mStartColumn, RecordFieldCount);
        object record = ValuesToRecord(colValues);
        res.Add(record);
    }
    catch (Exception ex)
    {
        // Code removed for this example
    }
}
So if the start column of any row is empty then it assumes that the file is done.
Some options to get around this:
Don't put any empty cells in the first column position.
Don't use excel as your file format -- convert to CSV first.
See if you can get a patch from the developer or patch the source yourself.
The first two are workarounds (and not really good ones). The third option might be the best but what is the end of file condition? Probably an entire row that is empty would be a good enough check (but even that might not work in all cases all of the time).
Thanks to Tuzo's help, I figured out a way to work around this.
I added a method to the ExcelStorage class to change the while loop's end condition. Instead of looking only at the first cell for an empty value, I check whether all cells in the current row are empty; if they are, the loop ends. This is the change to the while part of ExtractRecords:
while (!IsEof(cRow, mStartColumn, RecordFieldCount))
instead of
while (CellAsString(cRow, mStartColumn) != String.Empty)
IsEof is a method that checks whether the whole row is empty:
private bool IsEof(int row, int startCol, int numberOfCols)
{
    bool isEmpty = true;
    string cellValue = string.Empty;
    // note: assumes startCol == 1; for other start columns the upper
    // bound should be startCol + numberOfCols - 1
    for (int i = startCol; i <= numberOfCols; i++)
    {
        cellValue = CellAsString(row, i);
        if (cellValue != string.Empty)
        {
            isEmpty = false;
            break;
        }
    }
    return isEmpty;
}
Of course, if the user leaves an empty row between two data rows, the rows after it will not be processed, but I think this is a good base to keep working from.
Thanks
I needed to be able to skip blank lines, so I've added the following code to the FileHelpers library. I've taken Sebastian's IsEof code, renamed the method to IsRowEmpty, and changed the loop in ExtractRecords from ...
while (CellAsString(cRow, mStartColumn) != String.Empty)
to ...
while (!IsRowEmpty(cRow, mStartColumn, RecordFieldCount) || !IsRowEmpty(cRow+1, mStartColumn, RecordFieldCount))
I then changed this ...
colValues = RowValues(cRow, mStartColumn, RecordFieldCount);
object record = ValuesToRecord(colValues);
res.Add(record);
to this ...
bool addRow = true;
if (Attribute.GetCustomAttribute(RecordType, typeof(IgnoreEmptyLinesAttribute)) != null
    && IsRowEmpty(cRow, mStartColumn, RecordFieldCount))
{
    addRow = false;
}
if (addRow)
{
    colValues = RowValues(cRow, mStartColumn, RecordFieldCount);
    object record = ValuesToRecord(colValues);
    res.Add(record);
}
What this gives me is the ability to skip single empty rows. The file will be read until two successive empty rows are found.
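For reference, the IgnoreEmptyLinesAttribute tested above is the standard FileHelpers attribute applied to the mapping class; a hypothetical record type (field names invented for illustration) might look like:

using FileHelpers;

[DelimitedRecord(",")]
[IgnoreEmptyLines]
public class Customer
{
    public string Name;
    public int Id;
}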

How to Combine Two GUID Values

I want to combine two GUID values and generate a 32-bit alphanumeric value (it can be done by using hashing).
Not pretty, but it works:
private static Guid MungeTwoGuids(Guid guid1, Guid guid2)
{
    const int BYTECOUNT = 16;
    byte[] destByte = new byte[BYTECOUNT];
    byte[] guid1Byte = guid1.ToByteArray();
    byte[] guid2Byte = guid2.ToByteArray();

    for (int i = 0; i < BYTECOUNT; i++)
    {
        destByte[i] = (byte)(guid1Byte[i] ^ guid2Byte[i]);
    }
    return new Guid(destByte);
}
}
and yes, I can deal with the non-unique-guarantee in my case
What about splitting the GUIDs into 2 chunks of 8 bytes each, converting them to ulong (8 bytes), XOR-combining them, and then concatenating the 2 results?
public static Guid Combine(this Guid x, Guid y)
{
    byte[] a = x.ToByteArray();
    byte[] b = y.ToByteArray();
    return new Guid(BitConverter.GetBytes(BitConverter.ToUInt64(a, 0) ^ BitConverter.ToUInt64(b, 8))
        .Concat(BitConverter.GetBytes(BitConverter.ToUInt64(a, 8) ^ BitConverter.ToUInt64(b, 0))).ToArray());
}
You can't convert 2 128-bit GUIDs into a 16-bit or 32-bit value and maintain uniqueness. For your stated application (using the value in a URL) this doesn't seem to make sense, as a given value in the URL could map to any number of GUID combinations. Have you considered this?
The best approach would be to use a URL-shortening lookup, where you generate a unique ID and map it to the GUIDs as needed, similarly to bit.ly or tinyurl.com.
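A minimal sketch of that lookup idea; the class and token scheme are hypothetical, and a real implementation would persist the mapping in a database rather than in memory:

using System;
using System.Collections.Concurrent;

class GuidPairShortener
{
    private readonly ConcurrentDictionary<string, (Guid A, Guid B)> _map =
        new ConcurrentDictionary<string, (Guid A, Guid B)>();

    public string Shorten(Guid a, Guid b)
    {
        // 8 hex chars as the URL token; collision handling omitted for brevity
        string token = Guid.NewGuid().ToString("N").Substring(0, 8);
        _map[token] = (a, b);
        return token;
    }

    public (Guid A, Guid B) Resolve(string token) => _map[token];
}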
var a = Guid.NewGuid();
var b = Guid.NewGuid();
var hashOfXor = Xor(a, b).GetHashCode();
public static Guid Xor(Guid a, Guid b)
{
    // note: requires compiling with /unsafe
    unsafe
    {
        Int64* ap = (Int64*)&a;
        Int64* bp = (Int64*)&b;

        ap[0] ^= bp[0];
        ap[1] ^= bp[1];

        return *(Guid*)ap;
    }
}
I actually did have a need to merge two GUIDs to create a third GUID, where the third GUID (not necessarily unique) would be the same regardless of the order in which the two original GUIDs were supplied. Since a byte-wise XOR is commutative, it gives exactly that.
So I came up with this:
public static Guid Merge(Guid guidA, Guid guidB)
{
    var aba = guidA.ToByteArray();
    var bba = guidB.ToByteArray();
    var cba = new byte[aba.Length];

    for (var ix = 0; ix < cba.Length; ix++)
    {
        cba[ix] = (byte)(aba[ix] ^ bba[ix]);
    }
    return new Guid(cba);
}
Assuming you want to generate a 32-byte value, you can just concatenate the GUIDs, since they are 16 bytes each. If you really need a 32-bit value, the only solution I see is generating your own 32-bit values and storing the related GUIDs in a database so you can retrieve them later.
In .NET Core 3 we can use Sse2/Span<T> to speed things up and avoid all allocations. Essentially this code treats a Guid as 2 consecutive Int64 values and performs the XOR on them; SSE2 performs the XOR in a single processor instruction (SIMD).
public static Guid Xor(this Guid a, Guid b)
{
    if (Sse2.IsSupported)
    {
        var result = Sse2.Xor(Unsafe.As<Guid, Vector128<long>>(ref a), Unsafe.As<Guid, Vector128<long>>(ref b));
        return Unsafe.As<Vector128<long>, Guid>(ref result);
    }

    var spanA = MemoryMarshal.CreateSpan(ref Unsafe.As<Guid, long>(ref a), 2);
    var spanB = MemoryMarshal.CreateSpan(ref Unsafe.As<Guid, long>(ref b), 2);
    spanB[0] ^= spanA[0];
    spanB[1] ^= spanA[1];
    return b;
}
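For completeness, the namespaces the snippet above relies on (the Unsafe class ships in the System.Runtime.CompilerServices.Unsafe package):

using System;
using System.Runtime.CompilerServices; // Unsafe
using System.Runtime.InteropServices;  // MemoryMarshal
using System.Runtime.Intrinsics;       // Vector128<T>
using System.Runtime.Intrinsics.X86;   // Sse2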
It depends on the platform and the details of what you are trying to do. In .NET/C# you could just take a very simple approach:
var result = g1.GetHashCode() ^ g2.GetHashCode();
I would use a UUID5 (name-based) to combine two GUIDs; see https://stackoverflow.com/a/5657517/7556646
Guid g1 = new Guid("6164742b-e171-471b-ad6f-f98a78c5557e");
Guid g2 = new Guid("acbc41aa-971c-422a-bd42-bbcefa32ffb4");
Guid g12 = Create(IsoOidNamespace, g1.ToString() + g2.ToString(), 5); // Create from the answer linked above
In this example g12 would be: e1ccaee5-ea5e-55c6-89a5-fac02043326e.
There's no native support in the .NET Framework for creating these, but code implementing the algorithm is posted on GitHub.
See also the following .NET Fiddle: https://dotnetfiddle.net/VgHLtz
Why not try a simple operator, i.e. AND, OR, XOR, etc., to combine the two? XOR would be your best bet here, I would imagine, as it has the nice property that XORing the result with either of the two inputs gives you the other.
Edit: having just looked at this solution, there is a problem with it: the values would have to be normalised. Take a look at Vinay's answer for a better solution.
Here's a one-liner for you:
g1.ToByteArray().Concat(g2.ToByteArray()).GetHashCode()
(Be aware that GetHashCode here is object.GetHashCode on the enumerable itself, not a hash of its contents, so the result won't actually depend on the GUID values.)
public static string Merge(Guid one, Guid two)
{
    return new List<Guid>() { one, two }
        .OrderBy(x => x.GetHashCode())
        .Select(y => y.ToString().ToLowerInvariant())
        .Aggregate((a, b) => $"{a}_{b}");
}
So in my situation I needed to make sure that the two GUIDs could be merged regardless of the order in which they were supplied, so they have to be ordered first. That was step one. Then it's simply selecting the GUIDs as strings and, for consistency (super important), lower-casing them with ToLowerInvariant(). Finally the two strings are concatenated using the Aggregate function.
