intel_pt data cannot be imported properly into Intel VTune 2018

I am using Intel VTune 2018 to profile and derive control-flow dependencies using the Intel PT PMU on the following system:
Kernel: 4.15.0-13-generic, 64-bit Ubuntu
CPU: Intel® Core™ i7-7820X @ 3.60GHz × 16
I started with the following commands:
1- amplxe-perf record -o a.perf -T -e intel_pt// -- ps
PID TTY TIME CMD
21471 pts/1 00:00:00 amplxe-perf
21472 pts/1 00:00:00 ps
58693 pts/1 00:00:00 sudo
58694 pts/1 00:00:00 su
58695 pts/1 00:00:00 bash
[ perf record: Woken up 2 times to write data ]
[ perf record: Captured and wrote 3.154 MB a.perf ]
2- amplxe-cl -import a.perf -r folder
amplxe: Importing a new result 100 % done
amplxe: Using result path `/home/amad/May2/folder'
amplxe: Executing actions 12 % Loading 'a.perf' file
amplxe: Error: Cannot load data file `/home/amad/May2/folder/data.0/a.perf' (Data file is corrupted).
amplxe: Executing actions 50 % done
amplxe: Error: 0x4000001e (Cannot load raw collector data)
Although the intel_pt data could not be imported, data for other kernel PMU events such as "cpu-cycles" and "instructions" is handled properly:
1- amplxe-perf record -o p.perf -T -e cpu-cycles,instructions -- ps
PID TTY TIME CMD
8410 pts/0 00:00:00 sudo
8458 pts/0 00:00:00 amplxe-perf
8467 pts/0 00:00:00 ps
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.024 MB p.perf (96 samples) ]
2- amplxe-cl -import p.perf -r r2
amplxe: Importing a new result 100 % done
amplxe: Using result path `/home/amad/r2'
amplxe: Executing actions 19 % Resolving information for `libprocps.so.6.0.0'
amplxe: Warning: Cannot locate debugging information for file `/lib/x86_64-linux-gnu/libprocps.so.6.0.0'.
amplxe: Executing actions 21 % Resolving information for `vmlinux'
amplxe: Warning: Cannot locate debugging information for the Linux kernel. Source-level analysis will not be possible. Function-level analysis will be limited to kernel symbol tables. See the Enabling Linux Kernel Analysis topic in the product online help for instructions.
amplxe: Executing actions 75 % Generating a report
Collection and Platform Info
----------------------------
Parameter r2
---------------- ------------------------------------
Operating System 4.15.0-13-generic
Computer Name amad-pc
Result Size 2766877
Collector Type Driverless Perf per-process sampling
CPU
---
Parameter r2
----------------- ----------
Frequency 3600000000
Logical CPU Count 16
Summary
-------
Elapsed Time: 0.011
Paused Time: 0.0
CPU Time: 0.011
Average CPU Utilization: 0.897
Event summary
-------------
Hardware Event Type Hardware Event Count:Self Hardware Event Sample Count:Self Events Per Sample
------------------- ------------------------- -------------------------------- -----------------
cpu-cycles 40521584 45 4000
instructions 36302909 51 4000
amplxe: Executing actions 100 % done
What is wrong with the intel_pt data?
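As a sanity check (my own suggestion, not part of the original commands): the trace can be decoded with perf itself before blaming the importer. If this also fails, the capture, rather than VTune, is at fault. This assumes the amplxe-perf wrapper passes the standard --itrace option through to perf:
# Decode the PT trace, synthesizing instruction samples every 100us.
amplxe-perf script -i a.perf --itrace=i100us | head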

Related

mpirun running job serially with only one core

I have installed MPICH 4.1 on an Ubuntu machine using the GNU compilers. Initially I ran one job successfully using mpirun on 36 cores, but now when I try to run the same job it runs serially on only one core. The output of mpirun -np 36 ./wrf.exe is now:
starting wrf task 0 of 1
starting wrf task 0 of 1
starting wrf task 0 of 1
starting wrf task 0 of 1
starting wrf task 0 of 1
starting wrf task 0 of 1
Running mpivars gives this error:
Abort(470406415): Fatal error in internal_Init_thread: Other MPI error, error stack:
internal_Init_thread(67): MPI_Init_thread(argc=0x7fff8044f34c, argv=0x7fff8044f340, required=0, provided=0x7fff8044f350) failed
MPII_Init_thread(222)...: gpu_init failed
But the machine does not have a GPU.
The MPI version command gives:
HYDRA build details:
Version: 4.1
Release Date: Fri Jan 27 13:54:44 CST 2023
CC: gcc
Configure options: '--disable-option-checking' '--prefix=/home/MODULES' '--cache-file=/dev/null' '--srcdir=.' 'CC=gcc' 'CFLAGS= -O2' 'LDFLAGS=' 'LIBS=' 'CPPFLAGS= -DNETMOD_INLINE=__netmod_inline_ofi__ -I/home/MODULES/mpich-4.1/src/mpl/include -I/home/MODULES/mpich-4.1/modules/json-c -D_REENTRANT -I/home/MODULES/mpich-4.1/src/mpi/romio/include -I/home/MODULES/mpich-4.1/src/pmi/include -I/home/MODULES/mpich-4.1/modules/yaksa/src/frontend/include -I/home/MODULES/mpich-4.1/modules/libfabric/include'
Process Manager: pmi
Launchers available: ssh rsh fork slurm ll lsf sge manual persist
Topology libraries available: hwloc
Resource management kernels available: user slurm ll lsf sge pbs cobalt
Demux engines available: poll select
What could be the possible reason for this?
Thanks in advance.
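One hedged workaround (my own sketch, not from the original post): the error stack points at MPICH's GPU initialization, and MPICH 4.x exposes a CVAR to skip that step entirely, assuming your build honors it:
# Skip GPU initialization in MPICH 4.x before launching.
export MPIR_CVAR_ENABLE_GPU=0
mpirun -np 36 ./wrf.exe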

Creating ceph-volume OSDs fails: "not found: device not cleared"

I have a problem creating volumes.
I am deploying Ceph with docker-compose.
In the monitor container's bash shell I type:
ceph-volume inventory
and get:
/dev/vdb 20.00 GB True True
/dev/vdc 20.00 GB True True
/dev/vdd 20.00 GB True True
/dev/vda 100.00 GB True False
I try to create the volumes with:
ceph-volume lvm batch --bluestore /dev/vdb /dev/vdc /dev/vdd
and get:
--> DEPRECATION NOTICE
--> You are using the legacy automatic disk sorting behavior
--> The Pacific release will change the default to --no-auto
--> passed data devices: 3 physical, 0 LVM
--> relative data size: 1.0
Total OSDs: 3
Type Path LV Size % of device
data /dev/vdb 20.00 GB 100.00%
data /dev/vdc 20.00 GB 100.00%
data /dev/vdd 20.00 GB 100.00%
--> The above OSDs would be created if the operation continues
--> do you want to proceed? (yes/no) yes
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ef8b1bb4-71b4-40b3-8476-891fda560ffa
Running command: /usr/sbin/vgcreate --force --yes ceph-42e0a3bb-5c3c-4f99-ab37-a651e23b0a60 /dev/vdb
stdout: Physical volume "/dev/vdb" successfully created.
stdout: Volume group "ceph-42e0a3bb-5c3c-4f99-ab37-a651e23b0a60" successfully created
Running command: /usr/sbin/lvcreate --yes -l 5119 -n osd-block-ef8b1bb4-71b4-40b3-8476-891fda560ffa ceph-42e0a3bb-5c3c-4f99-ab37-a651e23b0a60
stderr: /dev/ceph-42e0a3bb-5c3c-4f99-ab37-a651e23b0a60/osd-block-ef8b1bb4-71b4-40b3-8476-891fda560ffa: not found: device not cleared
Aborting. Failed to wipe start of new LV.
--> Was unable to complete a new OSD, will rollback changes
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
stderr: purged osd.0
--> RuntimeError: command returned non-zero exit status: 5
Does anyone know how to fix this?
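A hedged guess (my addition, not from the original thread): inside a container LVM often cannot synchronize with udev, so the freshly created LV's device node never appears and wiping its start fails. A commonly suggested mitigation is to relax udev handling in the container's /etc/lvm/lvm.conf:
# /etc/lvm/lvm.conf inside the container.
# Assumption: no udev daemon is reachable there, so let LVM manage /dev itself.
activation {
    udev_sync = 0
    udev_rules = 0
}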

I can't mount CephFS on my computer. How can I solve this problem?

I have a CephFS file system that I need to mount.
I have two pools, cephfs_data and cephfs_meta.
The output of ceph -s is:
cluster:
    id: 9f3e7f80-4515-4b5f-92f0-4eb49f3cbf44
    health: HEALTH_OK
services:
    mon: 2 daemons, quorum mon1,osd0
    mgr: osd0(active), standbys: mon1
    mds: mycephfs-1/1/1 up {0=mon1=up:active}
    osd: 1 osds: 1 up, 1 in
data:
    pools: 3 pools, 72 pgs
    objects: 24 objects, 35 KiB
    usage: 1.1 GiB used, 837 GiB / 838 GiB avail
    pgs: 72 active+clean
I created a user with these properties:
[client.foo]
key = AQA4d5xdlAklBxAA+Q5T+b3HLAxj2kRKzXUOSA==
caps mds = "allow r"
caps mon = "allow r"
caps osd = "allow rw tag cephfs data=mycephfs"
When I try to run this command:
sudo mount -t fuse.ceph conf=/etc/ceph/ceph.conf /mnt/cephfs/
this happens:
mount: /mnt/cephfs: wrong fs type, bad option, bad superblock on conf=/etc/ceph/ceph.conf, missing codepage or helper program, or other error.
or
when I try to run this command:
sudo mount.ceph mon1:6789:/ /mnt/cephfs/
this happens:
mount error 110 = Connection timed out
or
when I try to run this command:
sudo ceph-fuse -n client.foo /mnt/cephfs/
this happens:
ceph-fuse[64711]: starting ceph client
2019-10-21 16:21:17.329932 7f58cedbb500 -1 init, newargv = 0x55a6c11f0340 newargc=9
and it hangs indefinitely; I never see "starting fuse".
Where is my mistake? Which way should I follow?
The syntax of your commands is incorrect.
You can mount the CephFS using
mount -t ceph mon1:6789:/ /mnt/ceph -o name=foo,secretfile=/path/to/keyring/file
There are many options you can use for the mount; they are listed in the mount.ceph documentation.
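For the FUSE client the equivalent would be something like the following (a sketch; the keyring path and monitor address are assumptions, adjust to your setup):
# Assumes client.foo's keyring was saved under /etc/ceph.
sudo ceph-fuse -n client.foo -k /etc/ceph/ceph.client.foo.keyring -m mon1:6789 /mnt/cephfs/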

Altera Quartus falsely says ModelSim isn't installed

Installed Quartus 13.0 with ModelSim on Fedora 22 64-bit. I run Quartus in 32-bit mode because I get lots and lots of problems otherwise. However, I can start Quartus, create a project, synthesize it, fire up the simulation window and configure the input signals. Then, when I click the button to launch ModelSim, it starts doing its job, but ends up with:
ModelSim-Altera was not found. Please install ModelSim-Altera which is included with the Quartus II installer, or use the Quartus II Simulator instead by selecting "Simulation > Options > Quartus II Simulator"
This is simply not true; I can start ModelSim myself by running vsim. The full output follows. Any suggestion that resolves this will get a +1, and no suggestion that makes sense will be punished by me.
Device family: Cyclone II
Running quartus eda_testbench
>> quartus_eda --gen_testbench --check_outputs=on --tool=modelsim_oem --format=verilog grindar -c grindar {--vector_source=/home/johan/Projects/Studies/vhdl/labs/lab1/and_grind.vwf} {--testbench_file=./simulation/qsim/grindar.vt}
PID = 20951
*******************************************************************
Running Quartus II 32-bit EDA Netlist Writer
Version 13.0.1 Build 232 06/12/2013 Service Pack 1 SJ Web Edition
Processing started: Sat Sep 12 20:31:33 2015
Command: quartus_eda --gen_testbench --check_outputs=on --tool=modelsim_oem --format=verilog grindar -c grindar --vector_source=/home/johan/Projects/Studies/vhdl/labs/lab1/and_grind.vwf --testbench_file=./simulation/qsim/grindar.vt
Selected device EP2C35F672C6 for design "grindar"
Generated Verilog Test Bench File ./simulation/qsim/grindar.vt for simulation
Quartus II 32-bit EDA Netlist Writer was successful. 0 errors, 0 warnings
Peak virtual memory: 318 megabytes
Processing ended: Sat Sep 12 20:31:34 2015
Elapsed time: 00:00:01
Total CPU time (on all processors): 00:00:01
Running quartus eda_func_netlist
>> quartus_eda --functional=on --simulation --tool=modelsim_oem --format=verilog grindar -c grindar
PID = 20953
*******************************************************************
Running Quartus II 32-bit EDA Netlist Writer
Version 13.0.1 Build 232 06/12/2013 Service Pack 1 SJ Web Edition
Processing started: Sat Sep 12 20:31:36 2015
Command: quartus_eda --functional=on --simulation=on --tool=modelsim_oem --format=verilog grindar -c grindar
Selected device EP2C35F672C6 for design "grindar"
Generated file grindar.vo in folder "/home/johan/Projects/Studies/vhdl/labs/lab1/simulation/modelsim/" for EDA simulation tool
Quartus II 32-bit EDA Netlist Writer was successful. 0 errors, 0 warnings
Peak virtual memory: 318 megabytes
Processing ended: Sat Sep 12 20:31:37 2015
Elapsed time: 00:00:01
Total CPU time (on all processors): 00:00:01
*******************************************************************
ModelSim-Altera was not found. Please install ModelSim-Altera which is included with the Quartus II installer, or use the Quartus II Simulator instead by selecting "Simulation > Options > Quartus II Simulator"
Please check whether the path to the ModelSim binary is correctly specified under
Tools -> Options
I am on Windows, but hopefully the settings are the same under Linux.
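A quick additional check (my own assumption about the usual failure mode): make sure vsim is actually reachable from the shell that launches Quartus, since a PATH set only in another shell's profile won't be seen:
# Verify the launcher shell can find ModelSim.
which vsim
vsim -version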

Can I decide how much memory to allocate in an LSF queue?

Is there any option to decide how much memory I can allocate in LSF?
I tried
bsub -R "rusage[mem=10000]" sleep 1000s
But when I check resource usage using "bjobs -l", I get this:
Job <203180>, User <xxxxx>, Project <default>, Status <RUN>, Queue <medium>,
Job Priority <50>, Command <sleep 1000s>
Thu Apr 12 09:49:56: Submitted from host <xxxx>, CWD <xx>, Requested Resources <rusage[mem=10000]>;
Thu Apr 12 09:49:58: Started on <xxxx>, Execution Home <xxxx>, Execution CWD <xxxxx>;
Thu Apr 12 09:49:58: Resource usage collected.
MEM: 3 Mbytes; SWAP: 16 Mbytes; NTHREAD: 1
PGID: 28231; PIDs: 28231
Where am I wrong?
bsub -R "rusage[mem=10000]" will initially reserve 10000 MB of memory.
Whereas:
"MEM: 3 Mbytes" is the total resident memory usage of all currently running processes in your job.
"SWAP: 16 Mbytes" is the total virtual memory usage of all currently running processes in your job.
The values "3 Mbytes" and "16 Mbytes" may change during the runtime.
On my system we use -M: for example, bsub -M 1 requests a 1 GB memory limit, and the job is killed if it goes above that limit.
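Putting both together (a sketch; note that the unit of -M depends on the cluster's LSF_UNIT_FOR_LIMITS setting, so check with your administrator):
# Reserve 10000 MB for scheduling and enforce a hard memory limit.
# Assumption: LSF_UNIT_FOR_LIMITS=MB on this cluster.
bsub -R "rusage[mem=10000]" -M 10000 sleep 1000s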
