What controls MPI_Barrier execution time? - mpi

This code:
#include <mpi.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    for (unsigned int iter = 0; iter < 1000; iter++)
        MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}
takes very long to run with MPICH 3.1.4. Here are the wall-clock times (in seconds) for different MPI implementations.
On a laptop with 2 CPU cores (4 hardware threads):
| MPI size | MPICH 1.4.1p1 | openmpi 1.8.4 | MPICH 3.1.4 |
|----------|---------------|---------------|-------------|
| 2 | 0.01 | 0.39 | 0.01 |
| 4 | 0.02 | 0.39 | 0.01 |
| 8 | 0.14 | 0.45 | 27.28 |
| 16 | 0.34 | 0.53 | 71.56 |
On a desktop with 4 CPU cores (8 hardware threads):
| MPI size | MPICH 1.4.1p1 | openmpi 1.8.4 | MPICH 3.1.4 |
|----------|---------------|---------------|-------------|
| 2 | 0.00 | 0.41 | 0.00 |
| 4 | 0.01 | 0.41 | 0.01 |
| 8 | 0.07 | 0.45 | 2.57 |
| 16 | 0.36 | 0.54 | 61.76 |
What explains such a difference, and how can it be controlled?

You are running with an MPI size greater than the number of processors available. Since MPI programs are normally spawned so that each process is handled by a single processor, running with MPI size == 16 on your 8-processor machine means each processor is responsible for two processes. This will not make the program any faster; in fact, as you have seen, it makes it slower. The way around it is either to use a machine with more processors, or to ensure that you run your code with MPI size <= the number of processors available.

Related

Get memory, cpu and disk usage for each tenant in Openstack

I am looking for the CPU, memory and disk consumption of each tenant in OpenStack, and their relationship to the users, instances and flavors in use. Horizon only shows memory and CPU utilization globally. Is it possible to get this with OpenStack commands?
My OpenStack is based on Rocky.
Any ideas will be really appreciated.
The only thing I know is
openstack limits show --absolute --project <Project_ID/Tenant_ID>
see also https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/limits.html
In the output you have information such as totalCoresUsed, which represents the number of cores used by the selected project.
Example:
root@openstack-controller:~# openstack limits show --absolute --project 416f937f505f4ff6b623c48a61228a86
+--------------------------+-------+
| Name | Value |
+--------------------------+-------+
| maxTotalInstances | 10 |
| maxTotalCores | 20 |
| maxTotalRAMSize | 51200 |
| maxSecurityGroups | 10 |
| maxTotalFloatingIps | 10 |
| maxServerMeta | 128 |
| maxImageMeta | 128 |
| maxPersonality | 5 |
| maxPersonalitySize | 10240 |
| maxSecurityGroupRules | 20 |
| maxTotalKeypairs | 100 |
| maxServerGroups | 10 |
| maxServerGroupMembers | 10 |
| totalRAMUsed | 2560 |
| totalCoresUsed | 7 |
| totalInstancesUsed | 7 |
| totalFloatingIpsUsed | 0 |
| totalSecurityGroupsUsed | 1 |
| totalServerGroupsUsed | 0 |
| maxTotalVolumes | 10 |
| maxTotalSnapshots | 10 |
| maxTotalVolumeGigabytes | 1000 |
| maxTotalBackups | 10 |
| maxTotalBackupGigabytes | 1000 |
| totalVolumesUsed | 5 |
| totalGigabytesUsed | 7 |
| totalSnapshotsUsed | 0 |
| totalBackupsUsed | 0 |
| totalBackupGigabytesUsed | 0 |
+--------------------------+-------+
The quotas, and thus the limitations, are bound to projects and not to users, so I don't know if it is possible to get a relationship by users. The only idea I would have is a simple bash script which iterates over all instances and volumes of a project and collects the information of each resource by the user who created it.
Update 30.7.2020:
I found a better solution now, which also makes it possible to get the resource usage per user of a project. It comes with the new placement component in the Stein release of OpenStack (tested on the Train release).
Installation of the openstack-client extension: pip install osc-placement
Resource usage of a project:
openstack resource usage show --os-placement-api-version 1.9 <PROJECT_ID>
Resource usage of a specific user within a project:
openstack resource usage show --os-placement-api-version 1.9 --user-id <USER_ID> <PROJECT_ID>
Example:
openstack resource usage show --os-placement-api-version 1.9 --user-id 98378bd3cdd94218bf7b6ef4ec80e74a 7733616a513444c2a106243db318b0dd
+----------------+-------+
| resource_class | usage |
+----------------+-------+
| VCPU | 3 |
| MEMORY_MB | 768 |
| DISK_GB | 9 |
+----------------+-------+

How to derive the BOOTP chaddr field from a MAC address?

In a DHCP packet there is a field for the client hardware address, but its format is not the same as a MAC address string like "fa:16:3e:6f:1a:9d".
If I know an interface's MAC address "fa:16:3e:6f:1a:9d", how do I fill in chaddr from it?
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| op (1) | htype (1) | hlen (1) | hops (1) |
+---------------+---------------+---------------+---------------+
| xid (4) |
+-------------------------------+-------------------------------+
| secs (2) | flags (2) |
+-------------------------------+-------------------------------+
| ciaddr (4) |
+---------------------------------------------------------------+
| yiaddr (4) |
+---------------------------------------------------------------+
| siaddr (4) |
+---------------------------------------------------------------+
| giaddr (4) |
+---------------------------------------------------------------+
| |
| chaddr (16) |
| |
| |
+---------------------------------------------------------------+
| |
| sname (64) |
+---------------------------------------------------------------+
| |
| file (128) |
+---------------------------------------------------------------+
| |
| options (variable) |
+---------------------------------------------------------------+
See https://www.ietf.org/rfc/rfc2131.txt
4.4.1 Initialization and allocation of network address
...
The client MUST include its hardware address in the 'chaddr'
field, if necessary for delivery of DHCP reply messages.
The first six bytes of chaddr contain the hardware address; the rest are zeros (for Ethernet, htype is 1 and hlen is 6, so only 6 of the 16 bytes are used). One can inspect the contents of BOOTP/DHCP packets, for example on Linux, with dhcpdump.

How to decode the application extension block of GIF?

How to decode the application extension block of GIF?
0000300: 73e7 d639 bdad 10ad 9c08 b5a5 0021 ff0b s..9.........!..
0000310: 4e45 5453 4341 5045 322e 3003 0100 0000 NETSCAPE2.0.....
0000320: 21f9 0409 1900 f600 2c00 0000 0016 01b7 !.......,.......
The part `21 ff 0b` followed by `4e45 5453 4341 5045 322e 30` ("NETSCAPE2.0") is known, but what is `03 01 00 00 00`?
The following describes the GIF Netscape Application Extension block, taken from here.
The block is 19 bytes long. The first 14 bytes belong to the general
Application Extension format; its syntax is described in the GIF89a
Specification, section "26. Application Extension".
Syntax
0 | 0x21 | Extension Label
+---------------+
1 | 0xFF | Application Extension Label
+---------------+
2 | 0x0B | Block Size
+---------------+
3 | |
+- -+
4 | |
+- -+
5 | |
+- -+
6 | |
+- NETSCAPE -+ Application Identifier (8 bytes)
7 | |
+- -+
8 | |
+- -+
9 | |
+- -+
10 | |
+---------------+
11 | |
+- -+
12 | 2.0 | Application Authentication Code (3 bytes)
+- -+
13 | |
+===============+ --+
14 | 0x03 | Sub-block Data Size |
+---------------+ |
15 | 0x01 | Sub-block ID |
+---------------+ | Application Data Sub-block
16 | | |
+- -+ Loop Count (2 bytes) |
17 | | |
+===============+ --+
18 | 0x00 | Block Terminator
You already know the data up to NETSCAPE2.0. The next byte, 0x03, gives the length of the next data sub-block, which for this extension is always 3 bytes. The following 0x01 is the sub-block ID; the Netscape block has only one data sub-block, and its ID is 1.
The following 2 bytes specify the loop count in little-endian order: how many times the image frames should be repeated. Here it is 0, and 0 means loop forever.
The last byte, 0x00, terminates the data block. So when we meet a 0x00 where a sub-block length should be, we know there are no sub-blocks left and we should stop reading the block.

Is there a way to show partitions on Cloudera impala?

Normally, I can do show partitions <table> in Hive. But when it is a Parquet table, Hive does not understand it. I can go to HDFS and check the directory structure, but that is not ideal. Is there a better way to do this?
I am using Impala 1.4.0 and I can see partitions.
From the impala-shell give the command:
show partitions <mytablename>
I have something looking like this:
+-------+-------+-----+-------+--------+---------+--------------+---------+
| year | month | day | #Rows | #Files | Size | Bytes Cached | Format |
+-------+-------+-----+-------+--------+---------+--------------+---------+
| 2013 | 11 | 1 | -1 | 3 | 25.87MB | NOT CACHED | PARQUET |
| 2013 | 11 | 2 | -1 | 3 | 24.84MB | NOT CACHED | PARQUET |
| 2013 | 11 | 3 | -1 | 2 | 19.05MB | NOT CACHED | PARQUET |
| 2013 | 11 | 4 | -1 | 3 | 23.63MB | NOT CACHED | PARQUET |
| 2013 | 11 | 5 | -1 | 3 | 26.56MB | NOT CACHED | PARQUET |
Alternatively, you can look at your table directly in HDFS. Tables are normally found under one of these paths:
/user/hivestore/warehouse/<mytablename> or
/user/hive/warehouse/<mytablename>
Unfortunately, no. An issue is open for it, though, so checking manually seems to be the only option right now.

Hexadecimal Multiplication

Is there any shortcut or quick way of multiplying two small hexadecimal numbers without converting them to decimal, like in the pen-and-paper method?
Thanks,
Kiran
When you learn to multiply in base 10, you're taught to memorize multiplication-tables. The base 10 table is as follows:
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
--+---+---+---+---+---+---+---+---
2 | 4 | 6 | 8 |10 |12 |14 |16 |18
--+---+---+---+---+---+---+---+---
3 | 6 | 9 |12 |15 |18 |21 |24 |27
--+---+---+---+---+---+---+---+---
etc...
When you're multiplying in other bases, you perform the same shortcuts, using a different multiplication-table (base 16):
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | A | B | C | D | E | F
--+---+---+---+---+---+---+---+---+---+---+---+---+---+---
2 | 4 | 6 | 8 | A | C | E |10 |12 |14 |16 |18 |1A |1C |1E
--+---+---+---+---+---+---+---+---+---+---+---+---+---+---
3 | 6 | 9 | C | F |12 |15 |18 |1B |1E |21 |24 |27 |2A |2D
etc...
Longhand binary math is done the same way as longhand decimal; when adding, you just carry at 2 instead of 10.
1010110 x 101
Add these partial products:
1010110 ones column
00000000 twos column (binary 10s)
101011000 fours column (binary 100s)
=========
110101110
You didn't mention a platform/language/etc.
EDIT: OP clarified "pen and paper" after I wrote this.
Windows calculator has hex, octal, and binary modes.
But ultimately, numbers in a computer are base 2. Tools/languages which support decimal, hexadecimal, etc. are doing so for the convenience of the ape sitting at the keyboard, but in the computer's memory the number ends up being base 2.
For instance, in C the following two statements are the same (after lexing):
int x = 0xf * 0xf0;  // hexadecimal
int x = 017 * 0360;  // octal
int x = 15 * 240;    // decimal
The different notations are for the convenience of the programmer, but in the machine these numbers are all represented the same way.
Using Linux? You can use dc to do hex math. Set the input and output radix to 16 and you're good to go.
