Does ns2 have the capability to implement a store-carry-forward mechanism? If it does, how? - networking

I don't know whether ns2 has the capability to do this, but I want to implement a store-carry-forward mechanism in ns2. However, I don't know where to start, or what the steps are. What protocols does this mechanism use? Can anybody help me?

ns2 + DTN
If you have other ns2 builds / installs: rename their ns executables to new names (ns-orig, ns-app-name, etc.) and remove any ns2 *PATH entries from .bashrc.
Build ns2 + DTN
Download link: https://drive.google.com/file/d/0B7S255p3kFXNVVlxR0ZNRGVORjQ/view?usp=sharing
tar xvf ns-allinone-2.35_gcc5.tar.gz
cd ns-allinone-2.35/
zcat dtn_ns235.patch.gz | patch -p0
./install
cd ns-2.35/
sudo make install
Simulation: copy an example (from ns2dtn_campaign/) to ns-allinone-2.35/ and run ./simulate_dtn.sh (a consolidated sketch follows below). The location matters because this relative path is used: ../ns-allinone-2.35/dei80211mr-1.1.4/src/.libs/libdei80211mr.so
Example simulation files (and one empty folder) to be copied: { bundle-test-large-scen.tcl, create-traffic-file.tcl, scen_n40_pt2_ms20_t5000_x2000_y2000, simulate_dtn.sh, Run1/ }.
Please note that the simulation time is an hour (or more).
Watch the trace file qtrace.tr: it will very slowly grow to ~9 MB.
Result: ns-allinone-2.35/Run1/{ bundle_delays.tr, qtrace.tr, receipt_delays.tr }. These files can be used with Xgraph.
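A consolidated sketch of the simulation step, assuming the example files were unpacked to ~/ns2dtn_campaign/ and the patched tree is ~/ns-allinone-2.35/ (adjust both paths to your layout):
cd ~/ns-allinone-2.35/
cp ~/ns2dtn_campaign/bundle-test-large-scen.tcl \
   ~/ns2dtn_campaign/create-traffic-file.tcl \
   ~/ns2dtn_campaign/scen_n40_pt2_ms20_t5000_x2000_y2000 \
   ~/ns2dtn_campaign/simulate_dtn.sh .
mkdir -p Run1        # the empty results folder from the file list
./simulate_dtn.sh    # runs for an hour or more; watch Run1/qtrace.tr grow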

Related

Informatica IPC - UNIX script fail

I have created a UNIX script to be executed after the session finishes.
The script basically counts the lines of a specific file and then creates a trailer with this specific structure:
T000014800000000000000000000000000000
T - for trailer
0000148 - number of lines
00000000000000000000000000000 - filler
I have tested the script on a Mac. I know the environments are totally different, but I want to know what needs to change in order to execute this script successfully in IPC.
After execution I get the following error message:
The shell command failed with exit code 126.
I invoke the script as follows:
sh -c "$PMRootDir/scripts/exec_trailer_unix.sh $PMRootDir/TgtFiles"
#! /bin/sh
TgtFiles=$1
TgtFilesBody=$TgtFiles/body.txt
TgtFilesTrailer=$TgtFiles/trailer.txt
string1=$(sed -n '$=' $TgtFilesBody)
pad=$(printf '%0.1s' "0"{1..8})
padlength=8
string2='T'
string3=$(printf '%s%*.*s%s\n' "$string2" 0 $((padlength - ${#string1} - ${#string2} )) "$pad" "$string1")
string4='00000000000000000000000000000'
string5=$(printf '%s%*.*s%s\n' "$string3" 0 $((${#string3} - ${#string4} )) "$string4")
echo $string5 > $TgtFilesTrailer
Any idea would be great.
Thanks in advance.
Please check the points below.
It looks like a permission issue. Please log in as the Informatica user (the user that runs the Informatica daemon) and run this command; you should then be able to see the errors (see the sketch after these points).
sh -c "$PMRootDir/scripts/exec_trailer_unix.sh $PMRootDir/TgtFiles"
Sometimes the server variable $PMRootDir doesn't get interpreted in UNIX and can resolve to a null value. Run echo $PMRootDir to check that it works after logging into UNIX as the above user.
Alternatively, you can create the trailer file easily within Informatica itself.
Just add an Aggregator transformation right before the actual target (group by a dummy field to calculate count(*)), then an Expression transformation to create those strings, and then a trailer-file target. Just three more transformations:
            |--> AGG --> EXP --> Trailer Target file
Final Tr ---|
            |--> Final Target
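Coming back to the first point: exit code 126 usually means the shell found the script but could not execute it, typically because the execute bit is missing. A minimal check and fix, run as that same user (replace $PMRootDir with the actual path if the variable is not set in your login shell):
ls -l $PMRootDir/scripts/exec_trailer_unix.sh
chmod +x $PMRootDir/scripts/exec_trailer_unix.sh
sh -c "$PMRootDir/scripts/exec_trailer_unix.sh $PMRootDir/TgtFiles"
If that runs cleanly from the shell, it should also run from the session's post-session command.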

Failed to create wallet for TON with Fift?

Right now I'm trying to create wallet for TON.
I downloaded and built the Fift interpreter and was trying to create a new wallet with: ./crypto/fift new-wallet.fif
[ 1][t 0][1559491459.312618017][fift-main.cpp:147] Error interpreting standard preamble file `Fift.fif`: cannot locate file `Fift.fif`
Check that correct include path is set by -I or by FIFTPATH environment variable, or disable standard preamble by -n.
My path variable is set, though. Could anyone please help me with this?
First, locate {{lite-client-source-directory}}/crypto/fift
This is not the build directory; it's the directory with the source files (the lite-client you downloaded). Verify that it contains the Fift.fif file.
If you installed it in the user working directory, it should be:
~/lite-client/crypto/fift/
Now, either set the FIFTPATH variable to point to this directory or run fift with the -I option:
export FIFTPATH=~/lite-client/crypto/fift/
./crypto/fift new-wallet.fif
Or
./crypto/fift -I~/lite-client/crypto/fift/ new-walelt.fif
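Either way, first make sure the directory actually holds the library files. A quick check, assuming the sources are in ~/lite-client (note that in some checkouts Fift.fif and Asm.fif sit in crypto/fift/lib/ instead, as the commands further below use):
ls ~/lite-client/crypto/fift/Fift.fif ~/lite-client/crypto/fift/Asm.fif
ls ~/lite-client/crypto/fift/lib/Fift.fif ~/lite-client/crypto/fift/lib/Asm.fif
Whichever directory contains the files is the one FIFTPATH or -I should point to.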
Have you tried ./crypto/fift -I<source-directory>/crypto/fift new-wallet.fif instead of setting the environment variable? Are the Fift.fif and Asm.fif library files inside FIFTPATH?
Make sure you have followed all the instructions written here:
https://test.ton.org/HOWTO.txt
It should work if you follow all of the above instructions correctly. If not, it might be a bug. Remember that TON is in a very early beta stage. You can submit the issue here:
https://github.com/copperbits/TON/issues
You can also use this:
cd ~/liteclient-build
crypto/fift -I/root/lite-client/crypto/fift/lib -s /root/lite-client/crypto/smartcont/new-wallet.fif -1 wallet_name
Try this (it worked for me):
export FIFTPATH=~/lite-client/crypto/fift/lib
./crypto/fift new-wallet.fif

Wrong directory for sipcfg.sip_module_dir

While trying to compile QGIS from sources on Ubuntu, there is the file /opt/QGIS/cmake/FindSIP.py, whose content is:
import sipconfig
sipcfg = sipconfig.Configuration()
print("sip_version:%06.0x" % sipcfg.sip_version)
print("sip_version_num:%d" % sipcfg.sip_version)
print("sip_version_str:%s" % sipcfg.sip_version_str)
print("sip_bin:%s" % sipcfg.sip_bin)
print("default_sip_dir:%s" % sipcfg.default_sip_dir)
print("sip_inc_dir:%s" % sipcfg.sip_inc_dir)
# SIP 4.19.10+ has new sipcfg.sip_module_dir
if hasattr(sipcfg, "sip_module_dir"):
    print("sip_module_dir:%s" % sipcfg.sip_module_dir)
else:
    print("sip_module_dir:%s" % sipcfg.sip_mod_dir)
In Python 3.6, the last if/else statement prints:
sip_module_dir:/usr/lib/python3.6/dist-packages
But the string /usr/lib/python3.6/dist-packages doesn't match an existing directory (which is probably part of why I hit the error python/CMakeFiles/python_module_qgis__core.dir/build.make:537: recipe for target 'python/core/sip_corepart0.cpp' failed during the build process).
I either have:
/usr/lib/python3/dist-packages
or
/usr/lib/python3.6/site-packages
And it's only in /usr/lib/python3/dist-packages that I have some sip-related files (the other directory gives no results):
$ find . -iname "*sip*"
./twisted/protocols/__pycache__/sip.cpython-36.pyc
./twisted/protocols/sip.py
./twisted/test/test_sip.py
./twisted/test/__pycache__/test_sip.cpython-36.pyc
./sipconfig.py
./sip.pyi
./sipconfig_nd6.py
./sipdistutils.py
./__pycache__/sipconfig_nd6.cpython-36.pyc
./__pycache__/sipdistutils.cpython-36.pyc
./__pycache__/sipconfig.cpython-36.pyc
./sip.cpython-36m-x86_64-linux-gnu.so
I guess there is something to fix within sip itself, but I don't know where exactly, nor how to do it.
More information:
OS: Ubuntu 16.04 64 bits
Python: 3.6.7
Sip: 4.19.7
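A quick check to compare what sipconfig reports with where sip is actually installed (assuming sipconfig and sip both import into the same Python 3.6 interpreter used for the build; the getattr mirrors the fallback in FindSIP.py above):
python3 -c "import sipconfig; c = sipconfig.Configuration(); print(getattr(c, 'sip_module_dir', c.sip_mod_dir))"
python3 -c "import sip; print(sip.__file__)"
If the second command prints a file outside the directory reported by the first, that mismatch is most likely what the QGIS build trips over.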

rpmbuild: how to skip generation of "debuginfo" packages (without changing the SPEC file or .rpmmacros)

We need to (re)generate third-party packages on EL7, but we don't want to change the SPEC file as suggested (%define debug_package %{nil}, https://www.redhat.com/archives/shrike-list/2003-April/msg00069.html), nor change the ~/.rpmmacros file, as it sits on a shared RPM build box.
Is there any way to solve this on the command line (an additional parameter) with rpmbuild?
After many tests I found the solution. It is possible to define debug_package outside of the SPEC file using --define, which gives:
rpmbuild --define "debug_package %{nil}" -ba SPECS/original.spec
Result: the third-party SPEC file is not modified and no -debuginfo RPM is generated.
rpmbuild --rebuild --nodebuginfo file.src.rpm -- this still generates debuginfo rpms
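That said, the --define approach from the answer above should also work when rebuilding straight from a .src.rpm (a sketch, not verified on every EL7 rpm version):
rpmbuild --rebuild --define "debug_package %{nil}" file.src.rpm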
Another solution is to put the override in the system-wide macros file:
cat /etc/rpm/macros
%debug_package %{nil}

cmake: Working with multiple output configurations

I'm busy porting my build process from msbuild to cmake, to better be able to deal with the gcc toolchain (which generates much faster code for some of the numeric stuff I'm doing).
Now, I'd like cmake to generate several versions of the output: say, one version with SSE2, another for x64, and so on. However, cmake seems to work most naturally if you simply have a bunch of flags (say, "sse2_enable" and "platform") and then generate one output based on those flags.
What's the best way to work with multiple output configurations like this? Intuitively, I'd like to iterate over a large number of flag combinations and rerun the same CMakeLists.txt files for each combination - but of course, you can't express that within the CMakeLists.txt files (AFAIK).
The recommended way to do this is simply to have multiple build directories. From each one you call cmake with the required settings.
For example you could do, starting in the base source directory (using Linux shell syntax but the idea is the same):
mkdir build-sse2 && cd build-sse2
cmake .. -DENABLE_SSE2=ON # or whatever enables it in your CMakeLists.txt
make
cd ..
mkdir build-x64 && cd build-x64
cmake .. -DENABLE_X64=ON # or whatever again...
make
This way, each build directory is completely separate from the others.
This allows you to have one directory for Debug, another for Release and another for cross-compiling.
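For example, the Debug/Release case from the last sentence could look like this (CMAKE_BUILD_TYPE is the standard variable for single-configuration generators such as Makefiles):
mkdir build-debug && cd build-debug
cmake .. -DCMAKE_BUILD_TYPE=Debug
make
cd ..
mkdir build-release && cd build-release
cmake .. -DCMAKE_BUILD_TYPE=Release
make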
There hasn't been much activity here, so I've come up with a workable solution myself. It's probably not ideal, so if you have a better idea, please do add it!
It's hard to iterate over build configs in cmake because cmake's crucial variables don't live in function scope: for instance, if you call include_directories(X), the X directory will remain in the include list even after the function exits.
Directories, however, do have scope, and while normally each input directory corresponds to one output directory, you can have multiple output directories.
So, my solution looks like this:
project(FooAllConfigs)
set(FooVar 2)
set(FooAnotherVar b)
add_subdirectory("project_dir" "out-2b")
set(FooVar 5)
set(FooAnotherVar c)
add_subdirectory("project_dir" "out-5c")
set(FooVar 3)
set(FooAnotherVar b)
add_subdirectory("project_dir" "out-3b")
set(FooVar 3)
set(FooAnotherVar c)
add_subdirectory("project_dir" "out-3c")
The normal project dir (project_dir) then contains a CMakeLists.txt file that sets up the appropriate includes and compiler options from the global variables set in the FooAllConfigs project. It also determines a build suffix that is appended to all build outputs, since every output, even an indirectly included one (e.g. as generated by add_executable), must have a unique name.
This works fine for me.
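For completeness, driving the whole thing is then a single configure-and-build pass from one build tree; each add_subdirectory() call above gets its own binary directory (out-2b, out-5c, out-3b, out-3c) inside it. A sketch, assuming an out-of-source build:
mkdir build && cd build
cmake ..   # configures project_dir once per out-* directory
make       # builds all four configurations in one go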
