Timeout while proving WP goals using Alt-Ergo in Frama-C

I was trying to verify the correctness of the program below using Frama-C. I am a new user of Frama-C.
PROBLEM:
Input the basic salary of an employee and calculate the gross salary according to the following:
Basic Salary <= 10000 : HRA = 20%, DA = 80%
Basic Salary <= 20000 : HRA = 25%, DA = 90%
Basic Salary > 20000 : HRA = 30%, DA = 95%
#include <limits.h>
/*@ requires sal >= 0 && sal <= INT_MAX/2;
    ensures \result > sal && \result <= INT_MAX;
    behavior sal1:
      assumes sal <= 10000;
      ensures \result == sal+(sal*0.2*0.8);
    behavior sal2:
      assumes sal <= 20000;
      ensures \result == sal+(sal*0.25*0.9);
    behavior sal3:
      assumes sal > 20000;
      ensures \result == sal+(sal*0.3*0.95);
    complete behaviors sal1,sal2,sal3;
*/
double salary(double sal) {
  if (sal <= 10000)      { return (sal + (sal*0.2*0.8));  }
  else if (sal <= 20000) { return (sal + (sal*0.25*0.9)); }
  else                   { return (sal + (sal*0.3*0.95)); }
}
What mistake am I making here? Should the precondition be more precise?
Console message:
[wp] [Alt-Ergo 2.3.3] Goal typed_salary_ensures : Timeout (Qed:57ms) (10s) (cached)
[wp] [Alt-Ergo 2.3.3] Goal typed_salary_assert_rte_is_nan_or_infinite_3 : Timeout (Qed:20ms) (10s) (cached)
[wp] [Alt-Ergo 2.3.3] Goal typed_salary_assert_rte_is_nan_or_infinite_6 : Timeout (Qed:2ms) (10s) (cached)
[wp] [Alt-Ergo 2.3.3] Goal typed_salary_assert_rte_is_nan_or_infinite_5 : Timeout (Qed:2ms) (10s) (cached)
[wp] [Alt-Ergo 2.3.3] Goal typed_salary_assert_rte_is_nan_or_infinite_4 : Timeout (Qed:17ms) (10s) (cached)
[wp] [Alt-Ergo 2.3.3] Goal typed_salary_assert_rte_is_nan_or_infinite_7 : Timeout (Qed:15ms) (10s) (cached)
[wp] [Alt-Ergo 2.3.3] Goal typed_salary_sal1_ensures : Timeout (Qed:33ms) (10s) (cached)
[wp] [Alt-Ergo 2.3.3] Goal typed_salary_assert_rte_is_nan_or_infinite_9 : Timeout (Qed:2ms) (10s) (cached)
[wp] [Alt-Ergo 2.3.3] Goal typed_salary_assert_rte_is_nan_or_infinite_8 : Timeout (Qed:4ms) (10s) (cached)
[wp] [Alt-Ergo 2.3.3] Goal typed_salary_sal2_ensures : Timeout (Qed:42ms) (10s) (cached)
[wp] [Alt-Ergo 2.3.3] Goal typed_salary_sal3_ensures : Timeout (Qed:35ms) (10s) (cached)

Automated theorem provers generally behave quite poorly when confronted with floating-point computations (see e.g. this report). If you really, really need them, you may want to install Gappa, which is specialized for that, or hope that using CVC4, Z3 and Alt-Ergo together (as opposed to just Alt-Ergo) leaves at least one prover able to discharge each proof obligation (a sample command line is sketched below). But I'd advise sticking to integer arithmetic, e.g. using cents as the unit so that you only manipulate integers when computing percentages (EDIT: since you're multiplying percentages, this would mean working with 1/10000 units, but that still shouldn't be a problem). In fact, if you insist on doubles, the requirement that values be less than INT_MAX does not make much sense.
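For instance, assuming Z3 and CVC4 are installed and detected by Why3, a command along the following lines asks WP to try all three provers on every goal, with a larger timeout (the source file name is a placeholder; adjust the prover names to whatever your Why3 configuration reports):
frama-c -wp -wp-rte -wp-prover alt-ergo,z3,cvc4 -wp-timeout 30 salary.c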
In the same vein, if you use an integer type, it's probably easier to go for an unsigned one, which automatically fulfills the requirement of having a non-negative salary.
Finally, your specification is ambiguous: for any salary less than or equal to 10000, you have two distinct formulas for the result. The assumes clause of behavior sal2 should probably read: assumes 10000 < sal <= 20000;
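Putting these remarks together, a sketch of an integer version could look as follows. The type, the function name gross_salary and the overflow bound are illustrative; the salary would be expressed in a sub-unit such as cents (with the 10000/20000 thresholds scaled accordingly), and the ensures clauses simply repeat the integer expressions the code computes, so rounding from integer division is part of the contract rather than something the provers have to reason about.
#include <limits.h>

/* Combined rates: 20%*80% = 16%, 25%*90% = 22.5%, 30%*95% = 28.5%.
   The precondition only rules out wrap-around in sal * 285. */
/*@ requires sal <= ULONG_MAX / 300;
    assigns \nothing;
    behavior sal1:
      assumes sal <= 10000;
      ensures \result == sal + (sal * 16) / 100;
    behavior sal2:
      assumes 10000 < sal <= 20000;
      ensures \result == sal + (sal * 225) / 1000;
    behavior sal3:
      assumes sal > 20000;
      ensures \result == sal + (sal * 285) / 1000;
    complete behaviors;
    disjoint behaviors;
*/
unsigned long gross_salary(unsigned long sal) {
  if (sal <= 10000)      return sal + (sal * 16) / 100;
  else if (sal <= 20000) return sal + (sal * 225) / 1000;
  else                   return sal + (sal * 285) / 1000;
}
Because each ensures restates exactly what the code returns, such goals are usually discharged by Qed or Alt-Ergo without any floating-point reasoning.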

Related

Openmpi 4.0.5 fails to distribute tasks to more than 1 node

We are having trouble with openmpi 4.0.5 on our cluster: It works as long as only 1 node is requested, but as soon as more than 1 is requested (e.g. mpirun -np 24 ./hello_world with --ntasks-per-node=12) it crashes and we get the following error message:
--------------------------------------------------------------------------
There are not enough slots available in the system to satisfy the 2
slots that were requested by the application:
./hello_world
Either request fewer slots for your application, or make more slots
available for use.
A "slot" is the Open MPI term for an allocatable unit where we can
launch a process. The number of slots available are defined by the
environment in which Open MPI processes are run:
1. Hostfile, via "slots=N" clauses (N defaults to number of
processor cores if not provided)
2. The --host command line parameter, via a ":N" suffix on the
hostname (N defaults to 1 if not provided)
3. Resource manager (e.g., SLURM, PBS/Torque, LSF, etc.)
4. If none of a hostfile, the --host command line parameter, or an
RM is present, Open MPI defaults to the number of processor cores
In all the above cases, if you want Open MPI to default to the number
of hardware threads instead of the number of processor cores, use the
--use-hwthread-cpus option.
Alternatively, you can use the --oversubscribe option to ignore the
number of available slots when deciding the number of processes to
launch.
--------------------------------------------------------------------------
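For reference, a hostfile with the "slots=N" clauses mentioned in point 1 of the message would look something like this for two of our nodes (matching --ntasks-per-node=12):
node36 slots=12
node37 slots=12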
I have tried using --oversubscribe, but this will still only use 1 node, even though smaller jobs would run that way. I have also tried specifically requesting nodes (e.g. -host node36,node37), but this results in the following error message:
[node37:16739] *** Process received signal ***
[node37:16739] Signal: Segmentation fault (11)
[node37:16739] Signal code: Address not mapped (1)
[node37:16739] Failing at address: (nil)
[node37:16739] [ 0] /lib64/libpthread.so.0(+0xf5f0)[0x2ac57d70e5f0]
[node37:16739] [ 1] /lib64/libc.so.6(+0x13ed5a)[0x2ac57da59d5a]
[node37:16739] [ 2] /usr/lib64/openmpi/lib/libopen-rte.so.12(orte_daemon+0x10d7)[0x2ac57c6c4827]
[node37:16739] [ 3] orted[0x4007a7]
[node37:16739] [ 4] /lib64/libc.so.6(__libc_start_main+0xf5)[0x2ac57d93d505]
[node37:16739] [ 5] orted[0x400810]
[node37:16739] *** End of error message ***
The cluster has 59 nodes. Slurm 19.05.0 is used as a scheduler and gcc 9.1.0 to compile.
I don't have much experience with mpi - any help would be much appreciated! Maybe someone is familiar with this error and could point me towards what the problem might be.
Thanks for your help,
Johanna

ROBOCOPY summary says Dirs FAILED but shows no error messages

I copied directories with ROBOCOPY, from C: to D: (so disks on the same VM, no network issues). I used options
*.* /V /X /TS /FP /S /E /COPYALL /PURGE /MIR /ZB /NP /R:3 /W:3
Shortly afterwards, I did a comparison with the same options plus /L:
/V /X /TS /FP /L /S /E /COPYALL /PURGE /MIR /ZB /NP /R:3 /W:3
The summary starts by saying that 12 directories FAILED:
Total Copied Skipped Mismatch FAILED Extras
Dirs : (many) 30 0 0 12 0
Files : (more) 958 (more-958) 0 0 0
By Google(R)-brand Web searches, I see that "FAILED" should have lines above with the word "ERROR". But I can find no such lines. If I do a comparison without listing files or directories,
*.* /X /NDL /NFL /L /S /E /COPYALL /PURGE /MIR /ZB /NP /R:3 /W:3
there are no output rows at all other than the header and summary.
Am I missing some error messages in the megalines of verbose output? Does anyone have any idea how to find the problem, if any? I'm thinking of a recursive dir + a script to do my own diff, to at least check names and sizes.
(updated a couple of hours later:)
I've got this as well. Posting in case it helps anyone get closer to an answer.
There were 126 failed Dirs, but that doesn't match the number of "ERROR 3" messages about directories not found / not created (108, which after a lot of effort I cranked out by importing the log file into Excel).
So what happened to the other 18 failed dirs?
Turns out there are 18 error messages about retries exceeded for the directories mentioned in the ERROR 3 messages.
I therefore conclude that the "Failed" count in the RC summary includes every ERROR 3 "directory not found" log item (even when it is the same directory reported on each of its failed attempts), plus the error reported when the directory finally exceeds its allowed retry count. So in my case I have 18 failed directories, each reported on the first attempt, on each of the 5 retries I allowed, and once more when the retries-exceeded message is given. That is: (18 problem directories) * (1 try + 5 retries + 1 exceeded message) = 18 * 7 = 126 fails. Whether or not you sulk about the "fails" not being unique is up to you, but that seems to be how they get counted.
Hope that helps.

Arduino OpenOCD command works in IDE but not from CMD prompt. What am I missing? (NRF)

Arduino does the following successfully. But when I try it from the command line it fails. Why is that?
C:\Users\???\AppData\Local\Arduino15\packages\sandeepmistry\tools\openocd\0.10.0-dev.nrf5/bin/openocd.exe -d2
-f interface/jlink.cfg
-c transport select swd;
-f target/nrf52.cfg
-c program {{C:\???\EddystoneURL.ino.hex}} verify reset; shutdown;
Result:
Open On-Chip Debugger 0.10.0-dev-00254-g696fc0a (2016-04-10-10:13)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
debug_level: 2
swd
adapter speed: 10000 kHz
cortex_m reset_config sysresetreq
jaylink: Failed to open device: LIBUSB_ERROR_NOT_SUPPORTED.
Info : No device selected, using first device.
Info : J-Link OB-SAM3U128-V2-NordicSemi compiled Jan 21 2016 17:58:20
Info : Hardware version: 1.00
Info : VTarget = 3.300 V
Info : Reduced speed from 10000 kHz to 1000 kHz (maximum).
Info : Reduced speed from 10000 kHz to 1000 kHz (maximum).
Info : clock speed 10000 kHz
Info : SWD IDCODE 0x2ba01477
Info : nrf52.cpu: hardware has 6 breakpoints, 4 watchpoints
nrf52.cpu: target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0x000008e4 msp: 0x20000400
** Programming Started **
auto erase enabled
Info : nRF51822-QFN48(build code: B00) 512kB Flash
Warn : using fast async flash loader. This is currently supported
Warn : only with ST-Link and CMSIS-DAP. If you have issues, add
Warn : "set WORKAREASIZE 0" before sourcing nrf51.cfg to disable it
wrote 28672 bytes from file C:\???\EddystoneURL.ino.hex in 0.835260s (33.522 KiB/s)
** Programming Finished **
** Verify Started **
verified 26768 bytes in 0.144835s (180.486 KiB/s)
** Verified OK **
** Resetting Target **
shutdown command invoked
When I try the above from the command line I get the following:
C:\WINDOWS\system32>C:\Users\???\AppData\Local\Arduino15\packages\sandeepmistry\tools\openocd\0.10.0-dev.nrf5/bin/openocd.exe -d2 -f interface/jlink.cfg -c transport select swd; -f target/nrf52.cfg -c program {{C:\???\EddystoneURL.ino.hex}} verify reset; shutdown;
Open On-Chip Debugger 0.10.0-dev-00254-g696fc0a (2016-04-10-10:13)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
debug_level: 2
interface_transports transport ...
transport
transport init
transport list
transport select [transport_name]
transport : command requires more arguments
in procedure 'transport'
I have replaced the full paths to the hex files to make it easier to read.
I am trying to use Arduino as my tool-chain to upload pre-built binaries. From the IDE I can do it, but only with Arduino-built code.
What am I missing?
I figured it out!!!
The command parameters need to be in quotes, otherwise the Windows shell splits them at the spaces and treats the pieces as separate parameters.
I get the feeling folder/file names with spaces will have the same issue.
C:\Users\???\AppData\Local\Arduino15\packages\sandeepmistry\tools\openocd\0.10.0-dev.nrf5/bin/openocd.exe -d2
-f interface/jlink.cfg
-c "transport select swd;"
-f target/nrf52.cfg
-c "program {{C:\???\EddystoneURL.ino.hex}} verify reset; shutdown;"

How to solve " DeprecationWarning: 'GLOBAL' is deprecated, use "when deploy node application with passenger+nginx

My node application runs normally in development mode: I simply use "npm start" to run it and can visit my app at "http://localhost:3000".
But when I deploy my node application with nginx + Passenger, I get this error; the detailed information is listed below.
I think the key information must be:
"DeprecationWarning: 'GLOBAL' is deprecated, use 'global'"
but I don't know what it refers to, because I can't find 'GLOBAL' in my code files.
I am sure Passenger was installed successfully and works normally, because I can run other simple node applications through it, just not my app, and I don't know why.
I have struggled with this problem for many hours and searched Google for answers; I hope you can give me some useful information.
Information
Web application could not be started
An error occurred while starting the web application: it did not write a startup response in time. Please read this article for more information about this problem.
Raw process output:
(node:12671) DeprecationWarning: 'GLOBAL' is deprecated, use 'global'
Error ID
ef585c1b
Application root
/var/www/microblog/myblog
Environment (value of RAILS_ENV, RACK_ENV, WSGI_ENV, NODE_ENV and PASSENGER_APP_ENV)
development
User and groups
uid=0(root) gid=0(root) groups=0(root)
Environment variables
XDG_SESSION_ID=55
COMP_WORDBREAKS=
"'><;|&(:
TERM=xterm-256color
SHELL=/bin/bash
SSH_CLIENT=27.46.7.216 5816 22
SSH_TTY=/dev/pts/3
USER=root
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:.tar=01;31:.tgz=01;31:.arj=01;31:.taz=01;31:.lzh=01;31:.lzma=01;31:.tlz=01;31:.txz=01;31:.zip=01;31:.z=01;31:.Z=01;31:.dz=01;31:.gz=01;31:.lz=01;31:.xz=01;31:.bz2=01;31:.bz=01;31:.tbz=01;31:.tbz2=01;31:.tz=01;31:.deb=01;31:.rpm=01;31:.jar=01;31:.war=01;31:.ear=01;31:.sar=01;31:.rar=01;31:.ace=01;31:.zoo=01;31:.cpio=01;31:.7z=01;31:.rz=01;31:.jpg=01;35:.jpeg=01;35:.gif=01;35:.bmp=01;35:.pbm=01;35:.pgm=01;35:.ppm=01;35:.tga=01;35:.xbm=01;35:.xpm=01;35:.tif=01;35:.tiff=01;35:.png=01;35:.svg=01;35:.svgz=01;35:.mng=01;35:.pcx=01;35:.mov=01;35:.mpg=01;35:.mpeg=01;35:.m2v=01;35:.mkv=01;35:.webm=01;35:.ogm=01;35:.mp4=01;35:.m4v=01;35:.mp4v=01;35:.vob=01;35:.qt=01;35:.nuv=01;35:.wmv=01;35:.asf=01;35:.rm=01;35:.rmvb=01;35:.flc=01;35:.avi=01;35:.fli=01;35:.flv=01;35:.gl=01;35:.dl=01;35:.xcf=01;35:.xwd=01;35:.yuv=01;35:.cgm=01;35:.emf=01;35:.axv=01;35:.anx=01;35:.ogv=01;35:.ogx=01;35:.aac=00;36:.au=00;36:.flac=00;36:.mid=00;36:.midi=00;36:.mka=00;36:.mp3=00;36:.mpc=00;36:.ogg=00;36:.ra=00;36:.wav=00;36:.axa=00;36:.oga=00;36:.spx=00;36:.xspf=00;36:
MAIL=/var/mail/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
PWD=/var/www/microblog/myblog
LANG=en_US.UTF-8
NODE_PATH=/usr/share/passenger/node
SHLVL=1
HOME=/root
LANGUAGE=en_US:
LOGNAME=root
SSH_CONNECTION=27.46.7.216 5816 123.57.243.29 22
LESSOPEN=| /usr/bin/lesspipe %s
XDG_RUNTIME_DIR=/run/user/0
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/passenger
OLDPWD=/var/www/microblog
PASSENGER_LOCATION_CONFIGURATION_FILE=/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini
PASSENGER_USE_FEEDBACK_FD=true
UID=0
SERVER_SOFTWARE=nginx/1.10.1 Phusion_Passenger/5.0.30
PASSENGER_DEBUG_DIR=/tmp/passenger.spawn-debug.XXXXu3bFF4
IN_PASSENGER=1
PYTHONUNBUFFERED=1
RAILS_ENV=development
RACK_ENV=development
WSGI_ENV=development
NODE_ENV=development
PASSENGER_APP_ENV=development
Ulimits
Unknown
System metrics
------------- General -------------
Kernel version : 3.13.0-86-generic
Uptime : 1d 13h 59m 14s
Load averages : 0.07%, 0.07%, 0.06%
Fork rate : unknown
------------- CPU -------------
Number of CPUs : 1
Average CPU usage : 0% -- 0% user, 0% nice, 0% system, 100% idle
CPU 1 : 0% -- 0% user, 0% nice, 0% system, 100% idle
I/O pressure : 0%
CPU 1 : 0%
Interference from other VMs: 0%
CPU 1 : 0%
------------- Memory -------------
RAM total : 992 MB
RAM used : 224 MB (23%)
RAM free : 767 MB
Swap total : 0 MB
Swap used : 0 MB (-nan%)
Swap free : 0 MB
Swap in : unknown
Swap out : unknown
------------------------------------------------------------------------
Ah, the topic I posted was a mistake: "GLOBAL" is just a warning! What really matters is this:
" An error occurred while starting the web application: it did not
write a startup response in time."
which means my application didn't start up fast enough, or didn't start up at all. So the problem is in my /etc/nginx/sites-enabled .conf file, because I use the Express + node setup.
The mistake I made is the startup file, which only works for a plain node application: passenger_startup_file app.js;
and it should be: passenger_startup_file bin/www;
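For an Express-generated app like this one, the relevant part of the server block in /etc/nginx/sites-enabled then looks roughly as follows; this is a sketch assuming the standard Passenger + nginx integration, where listen and server_name are placeholders and root points at the app's public/ directory as Passenger expects:
server {
    listen 80;
    server_name example.com;
    root /var/www/microblog/myblog/public;
    passenger_enabled on;
    passenger_app_type node;
    passenger_startup_file bin/www;
}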

MPI + InfiniBand: too many connections

I am running an MPI application on a cluster, using 4 nodes with 64 cores each.
The application performs an all-to-all communication pattern.
Executing the application as follows runs fine:
$: mpirun -npernode 36 ./Application
Adding one more process per node makes the application crash:
$: mpirun -npernode 37 ./Application
--------------------------------------------------------------------------
A process failed to create a queue pair. This usually means either
the device has run out of queue pairs (too many connections) or
there are insufficient resources available to allocate a queue pair
(out of memory). The latter can happen if either 1) insufficient
memory is available, or 2) no more physical memory can be registered
with the device.
For more information on memory registration see the Open MPI FAQs at:
http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages
Local host: laser045
Local device: qib0
Queue pair type: Reliable connected (RC)
--------------------------------------------------------------------------
[laser045:15359] *** An error occurred in MPI_Issend
[laser045:15359] *** on communicator MPI_COMM_WORLD
[laser045:15359] *** MPI_ERR_OTHER: known error not in list
[laser045:15359] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
[laser040:49950] [[53382,0],0]->[[53382,1],30] mca_oob_tcp_msg_send_handler: writev failed: Connection reset by peer (104) [sd = 163]
[laser040:49950] [[53382,0],0]->[[53382,1],21] mca_oob_tcp_msg_send_handler: writev failed: Connection reset by peer (104) [sd = 154]
--------------------------------------------------------------------------
mpirun has exited due to process rank 128 with PID 15358 on
node laser045 exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[laser040:49950] 4 more processes have sent help message help-mpi-btl-openib-cpc-base.txt / ibv_create_qp failed
[laser040:49950] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[laser040:49950] 4 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal
EDIT: added some source code for the all-to-all communication pattern:
// Send data to all other ranks
for(unsigned i = 0; i < (unsigned)size; ++i){
    if((unsigned)rank == i){
        continue;
    }
    MPI_Request request;
    MPI_Issend(&data, dataSize, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &request);
    requests.push_back(request);
}
// Recv data from all other ranks
for(unsigned i = 0; i < (unsigned)size; ++i){
    if((unsigned)rank == i){
        continue;
    }
    MPI_Status status;
    MPI_Recv(&recvData, recvDataSize, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &status);
}
// Finish communication operations
for(MPI_Request &r: requests){
    MPI_Status status;
    MPI_Wait(&r, &status);
}
Is there something I can do as a cluster user, or some advice I can give the cluster admin?
The mca_oob_tcp_msg_send_handler error line may indicate that the node corresponding to a receiving rank died (ran out of memory or received a SIGSEGV):
http://www.open-mpi.org/faq/?category=tcp#tcp-connection-errors
The OOB (out-of-band) framework in Open MPI is used for control messages, not for the messages of your application. Indeed, application messages typically go through byte transfer layers (BTLs) such as self, sm, vader, openib (InfiniBand), and so on.
The output of 'ompi_info -a' is useful in that regard.
Finally, the question does not specify whether the InfiniBand hardware vendor is Mellanox, so the XRC option may not work (for instance, Intel/QLogic InfiniBand does not support this option).
The error is connected to the buffer sizes of the MPI message queues, as discussed here:
http://www.open-mpi.org/faq/?category=openfabrics#ib-xrc
The following environment setting solved my problem:
$ export OMPI_MCA_btl_openib_receive_queues="P,128,256,192,128:S,65536,256,192,128"
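Equivalently, the same MCA parameter can be passed on the mpirun command line instead of through the environment:
$ mpirun --mca btl_openib_receive_queues "P,128,256,192,128:S,65536,256,192,128" -npernode 37 ./Application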
