I have an IP address 10.0.2.0.
The next IP after a block of 64 (10.0.2.0 to 10.0.2.63) is 10.0.2.64.
After that (10.0.2.64 to 10.0.2.127) comes 10.0.2.128, and so on.
How do I calculate the nth one?
I had assumed roughly:
a = (n*64) mod 256
b = 255/n
10.0.2+b.a
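In other words, the trick is to treat the whole dotted quad as a single 32-bit integer, add n*64 to it, and convert back; the carries into the higher octets then happen automatically.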
Here's the final solution (in JavaScript):
function incrementIp(ip, nips) {
    var input = ip.split(".");
    // Pack the four octets into a single 32-bit integer.
    var addr = (input[0] << 24) | (input[1] << 16) | (input[2] << 8) | (input[3] << 0);
    // Advance by the requested number of addresses.
    addr += nips;
    // Unpack back into dotted-quad form.
    return (addr >> 24 & 0xff) + "." + (addr >> 16 & 0xff) + "." + (addr >> 8 & 0xff) + "." + (addr & 0xff);
}
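For example, incrementIp("10.0.2.0", 3 * 64) returns "10.0.2.192" (the block start for n = 3), and incrementIp("10.0.2.0", 256) carries into the next octet and returns "10.0.3.0".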
I'm trying to traverse a graph with an interval. Each edge has a property ("interval") storing intervals. I'm using withSack to propagate the intersection of the intervals to the next step. If there is no intersection, the traversal should stop.
For example:
V1          e1          V2          e2          V3          e3          V4
O-----------------------O-----------------------O-----------------------O
^
      [[1,3],[5,7]]              [[4,6]]                 [[7,9]]
       e1.interval             e2.interval             e3.interval
If I start traversal from V1 with interval [2,8], I want it to return
V1: [[2,3],[5,7]]
V2: [[5,6]]
Notice that V3 and V4 are not included, since the intersected interval on e2 stops at 6.
I'm using the TinkerPop Java API, and for this purpose I defined a method that returns the intersections of the intervals and tried to use it with withSack(Lambda.biFunction(...)). The function contains a while loop with curly braces ({}), and I think that causes a problem in the Gremlin server's script engine. The exception I'm getting is this:
Script28.groovy: 1: expecting '}', found 'return' # line 1, column 520.
get(j).get(1)) i++; else j++;}return int
I'm passing the function as a string to Lambda.biFunction(...) like this:
"x, y -> " +
"List<List<Long>> intersections = new ArrayList();" +
"if (x.isEmpty() || y.isEmpty()) return new ArrayList<>();" +
"int i = 0, j = 0;" +
"while (i < x.size() && j < y.size()) {" +
"long low = Math.max(x.get(i).get(0), y.get(j).get(0));" +
"long high = Math.min(x.get(i).get(1), y.get(j).get(1));" +
"if (low <= high) intersections.add(Arrays.asList(low, high));" +
"if (x.get(i).get(1) < y.get(j).get(1)) i++; else j++;" +
"}" +
"return intersections;";
For readability, here is the original function:
public List<List<Long>> intersections(List<List<Long>> x, List<List<Long>> y) {
List<List<Long>> intersections = new ArrayList();
if (x.isEmpty() || y.isEmpty()) {
return new ArrayList<>();
}
int i = 0, j = 0;
while (i < x.size() && j < y.size()) {
long low = Math.max(x.get(i).get(0), y.get(j).get(0));
long high = Math.min(x.get(i).get(1), y.get(j).get(1));
if (low <= high) {
intersections.add(Arrays.asList(low, high));
}
if (x.get(i).get(1) < y.get(j).get(1)) {
i++;
} else {
j++;
}
}
return intersections;
}
I have two questions:
How do I pass a complex lambda function like this to Gremlin Server?
Is there a better way to accomplish this?
The string of your lambda needs to take the form of a Groovy closure. For a multiline, multi-argument script like yours, you need to wrap curly braces around it:
withSack(Lambda.biFunction(
"{ x, y -> " +
" intersections = []\n" +
" if (x.isEmpty() || y.isEmpty()) return []\n" +
" i = 0\n" +
" j = 0\n" +
" while (i < x.size() && j < y.size()) {\n" +
" def low = Math.max(x[i][0], y[j][0])\n" +
" def high = Math.min(x[i][1], y[j][1])\n" +
" if (low <= high) intersections.add(Arrays.asList(low, high))\n" +
" if (x[i][1] < y[j][1]) i++; else j++\n" +
" }\n" +
" return intersections\n" +
"}"))
I also converted your Java to Groovy (hopefully correctly), which ends up being a little more succinct, but that part should be unnecessary.
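As a usage note (the sack syntax here is standard Gremlin, but the exact wiring is an assumption, not something from the original question): such a closure is typically applied at each hop with outE().sack(Lambda.biFunction(...)).by('interval').inV(), so the sack always carries the running intersection, and a filter on an empty sack (e.g. filter(sack().unfold())) will cut the traversal off when nothing remains.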
I want to use two BulkSendApplications to send from node_0 to node_1 and from node_2 to node_3.
My code for creating the applications looks the following:
// node_0 to node_1
uint16_t rcv_port = 50000;
PacketSinkHelper sink1 ("ns3::TcpSocketFactory",
Address (InetSocketAddress (Ipv4Address::GetAny (), rcv_port)));
ApplicationContainer sinkApps1 = sink1.Install (node_1);
sinkApps1.Start (MilliSeconds (0));
sinkApps1.Stop (MilliSeconds (start_sending_traffic + traffic_duration));
BulkSendHelper source1 (
"ns3::TcpSocketFactory",
Address (InetSocketAddress (node_1, rcv_port)));
ApplicationContainer sourceApps1 = source1.Install (node_0);
source1.SetAttribute ("MaxBytes", UintegerValue (0));
sourceApps1.Start (MilliSeconds (start_sending_traffic));
sourceApps1.Stop (MilliSeconds (start_sending_traffic + traffic_duration));
// node_2 to node_3
uint16_t rcv_port2 = 50001;
PacketSinkHelper sink2 ("ns3::TcpSocketFactory",
Address (InetSocketAddress (Ipv4Address::GetAny (), rcv_port2)));
ApplicationContainer sinkApps2 = sink2.Install (node_3);
sinkApps2.Start (MilliSeconds (0));
sinkApps2.Stop (MilliSeconds (start_sending_traffic + traffic_duration));
BulkSendHelper source2 (
"ns3::TcpSocketFactory",
Address (InetSocketAddress (node_3, rcv_port2)));
ApplicationContainer sourceApps2 = source2.Install (node_2);
source2.SetAttribute ("MaxBytes", UintegerValue (0));
sourceApps2.Start (MilliSeconds (start_sending_traffic));
sourceApps2.Stop (MilliSeconds (start_sending_traffic + traffic_duration));
The code is the same for both, just with the correct nodes. If I run this code, node_0->node_1 is transmitting the correct amount of traffic while node_2->node_3 is not transmitting anything.
I am checking the number of bits received with the following code snippet:
Ptr<PacketSink> s1 = DynamicCast<PacketSink> (sinkApps1.Get (0));
std::cout << "Total Bits Received: " << s1->GetTotalRx () * 8 << std::endl;
Ptr<PacketSink> s2 = DynamicCast<PacketSink> (sinkApps2.Get (0));
std::cout << "Total Bits Received: " << s2->GetTotalRx () * 8 << std::endl;
Does anyone know what is going wrong here? Am I using the constructors the wrong way?
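Not from the original thread, but two things are worth double-checking. First, the snippet passes the receiving node itself (node_1, node_3) to InetSocketAddress, which expects an Ipv4Address, so presumably the real code resolves the address somewhere, and it is easy to resolve the wrong one. A minimal sketch of looking it up explicitly with the standard ns-3 API (the interface and address indices are assumptions about the topology):
// Resolve node_3's first assigned IPv4 address (interface 0 is loopback,
// so interface 1, address index 0 is usually the first real interface).
Ptr<Ipv4> ipv4 = node_3->GetObject<Ipv4> ();
Ipv4Address remoteAddr = ipv4->GetAddress (1, 0).GetLocal ();
BulkSendHelper source2 ("ns3::TcpSocketFactory",
                        Address (InetSocketAddress (remoteAddr, rcv_port2)));
Second, confirm that routing is actually populated between node_2 and node_3 (e.g. Ipv4GlobalRoutingHelper::PopulateRoutingTables after all addresses are assigned); a sink that is reachable for one pair but not the other often comes down to routing rather than the application setup.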
I wrote a simple piece of code in which I add hexadecimal values together, multiplied by 0x1, 0x100, and so on.
uid = (nuidPICC[0] * 0x1000000);
uid = uid + (nuidPICC[1] * 0x10000);
uid = uid + (nuidPICC[2] * 0x100);
uid = uid + nuidPICC[3];
When I pass the numbers D1, 55, BF, 2D, the result is D154BF2D, but with some combinations of numbers it works correctly. I am using Arduino IDE 1.8.5. Can you explain?
The code you have is working perfectly, so there's rather little to explain, if you assume that it is doing what you want it to do. If it's not doing what you want, then we can guess at what you wanted; but then it could be that you wanted it to make you a nice cup of tea, and in that case this won't be much help.
Since you don't give the definitions of the variables, I will assume you have something like:
int8_t nuidPICC[] = { 0xD1,0x55,0xBF,0x2D };
printf("nuidPICC = { %d, %d, %d, %d }\n", nuidPICC[0], nuidPICC[1], nuidPICC[2], nuidPICC[3]);
int32_t uid = (nuidPICC[0] * 0x1000000);
uid = uid + (nuidPICC[1] * 0x10000);
uid = uid + (nuidPICC[2] * 0x100);
uid = uid + nuidPICC[3];
printf("uid = %d * %d + %d * %d + %d * %d + %d = %d\n",
nuidPICC[0], 0x1000000,
nuidPICC[1], 0x10000,
nuidPICC[2], 0x100,
nuidPICC[3], uid);
printf("%d in hex is %08x\n",uid,uid);
which outputs
nuidPICC = { -47, 85, -65, 45 }
uid = -47 * 16777216 + 85 * 65536 + -65 * 256 + 45 = -782975187
-782975187 in hex is d154bf2d
And you can verify that it does exactly what you asked it to.
However, given the values you're multiplying by, you seem to be trying to assemble a bit mask from four signed bytes.
Multiplying an int8_t by an integer literal promotes it to an int, and so
int32_t x = int8_t(0xbf) * 0x100;
printf("0xbf * 0x100 = %d or 0x%08x\n",x,x);
0xbf * 0x100 = -16640 or 0xffffbf00
Those leading 0xffff bytes are 'sign extension', and they cause the next higher byte to differ from what you would get by just shifting and combining the bits.
If you want to combine signed bytes, you need to mask off the sign extension:
uid = (nuidPICC[0] << 24);
uid = uid | (nuidPICC[1] << 16) & 0xff0000;
uid = uid | (nuidPICC[2] << 8) & 0xff00;
uid = uid | nuidPICC[3] & 0xff;
or
uid = ( 0xffffffd1 << 24)
| ( ( 0x00000055 << 16 ) & 0xff0000 )
| ( ( 0xffffffbf << 8 ) & 0xff00 )
| ( 0x0000002d & 0xff )
= 0xd155bf2d
but usually it's easier to use unsigned bytes for bit masks as they don't have sign extension:
uint8_t nuidPICC[] = { 0xD1,0x55,0xBF,0x2D };
uint32_t uid = (nuidPICC[0] << 24);
uid = uid | (nuidPICC[1] << 16);
uid = uid | (nuidPICC[2] << 8);
uid = uid | nuidPICC[3];
printf("uid = ( 0x%x << %d) | ( 0x%x << %d ) | ( 0x%x << %d ) | 0x%x = 0x%x\n",
nuidPICC[0], 24,
nuidPICC[1], 16,
nuidPICC[2], 8,
nuidPICC[3], uid);
uid = ( 0x000000d1 << 24)
| ( 0x00000055 << 16 )
| ( 0x000000bf << 8 )
| 0x0000002d
= 0xd155bf2d
I have video conference software that works with H.264 multicast streams, and now I need to make it work with an IP camera that provides an RTSP control interface.
I requested the H.264 stream and it's coming via UDP multicast; the packets are in RTP format.
So, according to some research, I need to remove the RTP header from the UDP payload to get the data I want, and I need to reconstruct the I-frames because they may be fragmented.
I'm using Qt and the class QUdpSocket.
QByteArray IDR;
while(socket->hasPendingDatagrams())
{
int pendingDataSize = socket->pendingDatagramSize();
char * data = (char *) malloc(pendingDataSize);
socket->readDatagram(data, pendingDataSize);
int fragment_type = data[12] & 0x1F;
int nal_type = data[13] & 0x1F;
int start_bit = data[13] & 0x80;
int end_bit = data[13] & 0x40;
//If it is an I Frame
if (((fragment_type == 28) || (fragment_type == 29)) && (nal_type == 5))
{
if(start_bit == 128 && end_bit == 64)
{
char nalByte = (data[12] & 0xE0) | (data[13] & 0x1F);
data[13] = nalByte;
char * dataWithoutHeader = data + 13;
uint8_t* datagramToQueue = (uint8_t*) queue_malloc(pendingDataSize - 13);
memcpy(datagramToQueue, dataWithoutHeader, pendingDataSize - 13);
f << "\nI Begin + I End\n";
}
if(start_bit == 128)
{
f << "\nI Begin\n";
char nalByte = (data[12] & 0xE0) | (data[13] & 0x1F);
data[13] = nalByte;
char * dataWithoutHeader = data + 13;
IDR.append(dataWithoutHeader, pendingDataSize - 13);
}
if(end_bit == 64)
{
f << "\nI End\n";
char* dataWithoutHeader = data + 13;
IDR.append(dataWithoutHeader, pendingDataSize - 13);
datagramToQueue = (uint8_t*) queue_malloc(IDR.size());
memcpy(datagramToQueue, IDR.data(), IDR.size());
queue_enqueue(this->encodedQueue, datagramToQueue, IDR.size(),0, NULL);
IDR.clear();
}
if(start_bit != 128 && end_bit != 64)
{
char* dataWithoutHeader = data + 13;
IDR.append(dataWithoutHeader, pendingDataSize - 13);
}
f << "\nI\n";
continue;
}
f << "\nP\n";
uint8_t* datagramToQueue = (uint8_t*) queue_malloc(pendingDataSize - 12);
char* datacharMid = data + 13;
memcpy(datagramToQueue, datacharMid, pendingDataSize - 12);
queue_enqueue(this->encodedQueue, datagramToQueue, pendingDataSize - 12,0, NULL);
}
To decode the stream, my software has two implementations: the first uses FFMPEG and the second the Intel Media SDK.
Both identify the parameters of the video. FFMPEG shows as a result a little strip of video in which I can make out some of the things in front of the camera, while the rest of the image is a solid mass with the colors of my scene. The Intel Media SDK produces a pink screen with some gray lines moving around.
So, can someone tell me if there is some mistake in my packet parser? The order in which fragment_type, start_bit, and end_bit arrive just doesn't make much sense to me.
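Not something stated in the original post, but one common cause of output like this is feeding the decoder bare NAL payloads: decoders such as FFmpeg generally expect an Annex-B byte stream, i.e. each NAL unit prefixed with a start code (00 00 00 01), and they need the SPS/PPS (often delivered out-of-band in the RTSP SDP as sprop-parameter-sets) before the first IDR frame. A minimal sketch of adding the start code during reassembly, reusing the variable names from the snippet above:
// Prefix each reconstructed NAL unit with the Annex-B start code
// before handing it to the decoder.
static const char startCode[4] = { 0x00, 0x00, 0x00, 0x01 };
QByteArray nal;
nal.append(startCode, 4);                             // Annex-B start code
nal.append(dataWithoutHeader, pendingDataSize - 13);  // NAL header + payload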
I'm very interested in cryptography, and since I like programming too, I decided to write a little program to encrypt files using the XTEA encryption algorithm.
I took my inspiration from Wikipedia, and wrote this function to do the encryption (to save space, I won't post the deciphering function, as it is almost the same):
void encipher(long *v, long *k)
{
long v0 = v[0], v1 = v[1];
long sum = 0;
long delta = 0x9e3779b9;
short rounds = 32;
for(uint32 i = 0; i<rounds; i++)
{
v0 += (((v1 << 4) ^ (v1 >> 5)) + v1) ^ (sum + k[sum & 3]);
sum += delta;
v1 += (((v0 << 4) ^ (v0 >> 5)) + v0) ^ (sum + k[(sum>>11) & 3]);
}
v[0] = v1;
v[1] = v1;
}
Now, to use it, I wrote this code:
long data[2]; // v0 and v1, 64bits
data[0] = 1;
data[1] = 1;
long key[4]; // 4 * 4 bytes = 16bytes = 128bits
*key = 123; // sets the key
cout << "READ: \t\t" << data[0] << endl << "\t\t" << data[1] << endl;
encipher(data, key);
cout << "ENCIPHERED: \t" << data[0] << endl << "\t\t" << data[1] << endl;
decipher(data, key);
cout << "DECIPHERED: \t" << data[0] << endl << "\t\t" << data[1] << endl;
I always get either a run-time crash or wrong deciphered text.
I do understand the basics of the program, but I don't really know what is wrong with my code. Why are the enciphered data[0] and data[1] the same? And why is the deciphered data completely different from the starting data? Am I using the types wrong?
I hope you can help me solve my problem :)
Jan
The problem is here:
v[0] = v1; // should be v[0] = v0
v[1] = v1;
Also, you only set the first 4 bytes of the key. The remaining 12 bytes are uninitialized.
Try something like this:
key[0] = 0x12345678;
key[1] = 0x90ABCDEF;
key[2] = 0xFEDCBA09;
key[3] = 0x87654321;
The fixed code gives me this output:
READ: 1
1
ENCIPHERED: -303182565
-1255815002
DECIPHERED: 1
1
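As for the "am I using the types wrong?" part: XTEA is defined over unsigned 32-bit words, and long is 64 bits on many platforms (e.g. 64-bit Linux), which silently changes the arithmetic. As a sketch (following the well-known Wikipedia reference implementation rather than the code above), the round functions with fixed-width types look like this:
#include <cstdint>

void encipher(uint32_t v[2], const uint32_t k[4])
{
    uint32_t v0 = v[0], v1 = v[1], sum = 0, delta = 0x9E3779B9;
    for (unsigned i = 0; i < 32; i++) {
        v0 += (((v1 << 4) ^ (v1 >> 5)) + v1) ^ (sum + k[sum & 3]);
        sum += delta;
        v1 += (((v0 << 4) ^ (v0 >> 5)) + v0) ^ (sum + k[(sum >> 11) & 3]);
    }
    v[0] = v0;  // v0 here, not v1: the bug the answer above points out
    v[1] = v1;
}

void decipher(uint32_t v[2], const uint32_t k[4])
{
    // Run the rounds in reverse, starting from the final sum.
    uint32_t v0 = v[0], v1 = v[1], delta = 0x9E3779B9, sum = delta * 32;
    for (unsigned i = 0; i < 32; i++) {
        v1 -= (((v0 << 4) ^ (v0 >> 5)) + v0) ^ (sum + k[(sum >> 11) & 3]);
        sum -= delta;
        v0 -= (((v1 << 4) ^ (v1 >> 5)) + v1) ^ (sum + k[sum & 3]);
    }
    v[0] = v0;
    v[1] = v1;
}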