I have a TCP application module built as "ClientSideApplication". I need to install this application on an AODVRouter host ("host"), but I cannot figure out how to do that.
In my omnetpp.ini file I have:
**.host[*].numTcpApps = 1
**.host[*].tcpApp[*].typename = "ClientSideApplication"
**.host[*].tcpApp[*].active = true
**.host[*].tcpApp[*].localAddress = ""
**.host[*].tcpApp[*].localPort = -1
**.host[*].tcpApp[*].connectAddress = "ideHost"
**.host[*].tcpApp[*].connectPort = 1000
**.host[*].tcpApp[*].tOpen = 0.5s
But I am not sure whether this is enough, because when I watch the simulation, the text "waiting" is displayed on top of the application module. Can you tell me how to install the TCP app and make it work with the AODVRouter host ("host")?
Note: I implemented the TCP application module as an extension of TCPAppBase, following the TCPSessionApp example from the INET source folder.
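The "waiting" text may just mean the app has not yet established its connection; the peer must also run a TCP app listening on the port the client connects to. A minimal sketch of the server side, assuming "ideHost" is the name of the peer module and that TCPEchoApp is available in your INET version:
**.ideHost.numTcpApps = 1
**.ideHost.tcpApp[0].typename = "TCPEchoApp"   # hypothetical choice; any listening TCP app works
**.ideHost.tcpApp[0].localPort = 1000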
I am wondering how to configure vector.dev to receive syslog from other LXC containers. I have docker-compose running and Vector installed on one container. The other containers host a PBX, and I'm wondering how to configure this so that one container acts as a central syslog server using Vector.
I believe I have to create a socket, but my current configuration in the vector.toml file is just this:
[sources.syslog]
type = "syslog"
address = "0.0.0.0:514"
max_length = 102_400
mode = "udp"
path = "/vector.socket"
[sources.in]
type = "stdin"
[sinks.out]
inputs = ["in"]
type = "console"
encoding.codec = "text"
This is on the host currently. I believe I'm supposed to install Vector on the instances I want to get logs from too?
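One common pattern is to run a small Vector agent on each PBX container that forwards to the central syslog source above. (Note that in that source, path applies only when mode = "unix"; with "udp", only the address is used.) A minimal agent sketch, assuming a hypothetical central address of 10.0.0.5 and PBX logs written to files:
# hypothetical agent config on a PBX container
[sources.pbx_logs]
type = "file"
include = ["/var/log/asterisk/*.log"]   # adjust to wherever your PBX logs land

[sinks.central]
type = "socket"
inputs = ["pbx_logs"]
address = "10.0.0.5:514"   # the container running the central Vector
mode = "udp"
encoding.codec = "text"
Alternatively, skip the agent entirely and point each container's rsyslog at the central server with a line like *.* @10.0.0.5:514.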
I have a Raspberry Pi with gitlab-runner installed (Linux version) and a Git repository on gitlab.com (not self-hosted).
At the beginning of the pipeline, the gitlab-runner on the Raspberry Pi tries to fetch the Git repo, but I get:
Could not resolve host: gitlab.com
I tried:
ping gitlab.com works on the Raspberry Pi
Adding extra_host = ['localhost:my.ip.ad.ress'] --> no change
Adding network_mode = "gitlab_default" like this, which gives this error:
Error response from daemon: network gitlab_default not found (exec.go:57:1s)
I am in the simplest configuration, with the repo on gitlab.com and a gitlab-runner on the Raspberry Pi. How can I deal with this?
Here is the config.toml:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "gitlab runner on raspberryPi"
  url = "https://gitlab.com/"
  token = "XXXX"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "node:latest"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
I had the same issue; my gitlab-runner was running on my local machine. I restarted Docker:
systemctl restart docker
and the error went away.
Not being able to resolve the host name can have multiple root causes:
IP forwarding disabled?
Routing might be disabled on your system. Check whether IP forwarding is enabled (== 1):
cat /proc/sys/net/ipv4/ip_forward
1
If it is disabled, the command returns 0. Enable it by editing a sysctl file, for example by adding to /etc/sysctl.d/99-sysctl.conf:
net.ipv4.conf.all.forwarding = 1
net.ipv4.ip_forward = 1
Apply the settings without rebooting: sudo sysctl --system
Important note: even if the system reports that IP forwarding is currently enabled, you should still set it explicitly in your sysctl configs. Docker runs sysctl -w net.ipv4.ip_forward=1 when the daemon starts up, but that is not a persistent setting and can cause very random issues like this one.
DNS missing / invalid?
You can check whether setting a DNS server of 8.8.8.8 fixes the problem:
[runners.docker]
dns = ["8.8.8.8"]
Add extra_hosts?
You can also try adding an extra host, which is mainly relevant within a local network (so not with the gitlab.com domain):
[runners.docker]
extra_hosts = ["gitlab.yourdomain.com:192.168.xxx.xxx"]
Using host network
I really do not advise this, but you could configure the Docker container to run with network_mode set to "host". Again, only do this for debugging:
[runners.docker]
network_mode = "host"
When I analyze a file using Cuckoo, I get this error:
File "c:\python27\lib\site-packages\cuckoo\auxiliary\sniffer.py", line 157, in stop
(out, err, faq("permission-denied-for-tcpdump"))
CuckooOperationalError: Error running tcpdump to sniff the network traffic during the analysis; stdout = '' and stderr = 'tcpdump.exe: listening on VirtualBox Host-Only Ethernet Adapter\r\ntcpdump.exe: Error opening adapter: \xbd\xc3\xbd\xba\xc5\xdb\xc0\xcc \xc1\xf6\xc1\xa4\xb5\xc8 \xc0\xe5\xc4\xa1\xb8\xa6 \xc3\xa3\xc0\xbb \xbc\xf6 \xbe\xf8\xbd\xc0\xb4\xcf\xb4\xd9. (20)\r\n'. Did you enable the extra capabilities to allow running tcpdump as non-root user and disable AppArmor properly (the latter only applies to Ubuntu-based distributions with AppArmor, see also https://cuckoo.sh/docs/faq/index.html#permission-denied-for-tcpdump)?
My VirtualBox (guest) network name is "VirtualBox Host-Only Ethernet Adapter". (The escaped bytes in the error are CP949-encoded Korean for "The system cannot find the device specified.")
On my Windows 10 host, WinDump is installed (renamed to tcpdump.exe) at C:\tools\tcpdump.exe.
I also set the path in the auxiliary.conf file:
# Specify the path to your local installation of tcpdump. Make sure this
# path is correct.
tcpdump = C:/tools/tcpdump.exe
My question is: why am I getting the "Error opening adapter" error even though the tcpdump.exe path is set correctly?
I found the answer.
I changed this whitelist entry in sniffer.py.
From
err_whitelist_start = (
"tcpdump: listening on ",
"C:/tools/tcpdump.exe: listening on",
)
To
err_whitelist_start = (
"tcpdump: listening on ",
"C:\\tools\\tcpdump.exe: listening on",
)
My VirtualBox interface was also wrong, so I changed this in virtualbox.conf:
From
interface = virtualBox Host-Only Ethernet Adapter
To
interface = \Device\NPF_{ED29CFE9-25EB-4AD9-B2EA-C09A93D465BF}
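If you need to find the correct \Device\NPF_{...} name for your adapter, WinDump (like tcpdump) can list the available interfaces with -D, assuming the renamed binary from above:
C:\tools\tcpdump.exe -D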
I have put together a small test cloud on 3 pieces of hardware. It works fine in EDGE mode, but when I try to configure it for VPCMIDO, new instances begin to launch but then time out and move to a terminated state. I can also see the instances' initial volume and config data appear in the NC and CC data directories. Below are my system layout and network.json.
HOST 1 : CLC/UFS/WALRUS/MIDO CLUSTER/MIDO GATEWAY/MIDOLMAN AGENT:
em1 (All Services including Mido Cluster): 10.0.0.21
em3 (Target VPCMIDO Adapter): 10.0.0.22
HOST 2 : CC/SC
em1 : 10.0.0.23
HOST 3 : NC/MIDOLMAN AGENT
em1 : 10.0.0.24
{
  "Mido": {
    "Gateways": [
      {
        "Ip": "10.0.0.22",
        "ExternalDevice": "em3",
        "ExternalCidr": "192.168.0.0/16",
        "ExternalIp": "192.168.0.2",
        "ExternalRouterIp": "192.168.0.1"
      }
    ]
  },
  "Mode": "VPCMIDO",
  "PublicIps": [
    "10.0.100.1-10.0.100.254"
  ]
}
I may be misunderstanding the intent of reserving an interface just for the MidoNet gateway. All of my eucalyptus/zookeeper/cassandra/midonet configs use the 10.0.0.21 interface and seem to communicate fine. The MidoNet tunnel zone reports both my CLC host and my NC host as members. The only part of my config that references the interface I intend to use for the MidoNet gateway is network.json. No errors were returned at any point during my configuration, so I think I may be missing something conceptual.
You may need to start eucanetd, as described here:
https://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#install-guide/starting_euca_clc.html
In VPCMIDO mode, the eucanetd component runs on the cloud controller and is responsible for controlling MidoNet.
When eucanetd is not running, instances will fail to start because the required network resources are not created.
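A minimal sketch of starting it on the CLC, assuming a systemd-based install where the service is named eucanetd:
sudo systemctl enable eucanetd
sudo systemctl start eucanetd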
I configured a bridge on the NC, and instances were able to launch; I no longer get an error in my nc.log. The docs and the comments in eucalyptus.conf tell me I shouldn't need to do this in VPCMIDO networking mode: https://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#install-guide/configuring_bridge.html
Despite all that, adding the bridge fixed this issue.
I am using plone.recipe.varnish 1.2.2 in my Plone application.
Below is a section of my buildout:
parts =
...
instance
paster
varnish-build
varnish
plonesite
...
[varnish-build]
recipe = zc.recipe.cmmi
url = http://downloads.sourceforge.net/project/varnish/varnish/2.1.3/varnish-2.1.3.tar.gz
[varnish]
recipe = plone.recipe.varnish
daemon = ${buildout:parts-directory}/varnish-build/sbin/varnishd
bind = 127.0.0.1:8000
backends = 127.0.0.1:9000
cache-size = 1G
I cannot conclusively determine whether it works. My Plone application serves on port 9000, so I want to test whether Varnish really works by going to http://localhost:8000, but I get nothing. The browser says "Firefox can't establish a connection to the server at 127.0.0.1:8000."
Am I doing this wrong? I have followed the instructions provided here but made no headway.
How does one really configure plone.recipe.varnish in Plone, and how do you actually test that it works on a local development machine?
The recipe does not start your varnish server. It only configures it for you.
Use something like supervisord to manage the process, or start it by hand with bin/varnish.
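As a quick check, assuming the buildout above generated bin/varnish, start it by hand and inspect the response headers:
bin/varnish
curl -I http://127.0.0.1:8000/
A working proxy typically adds a Via: 1.1 varnish (or X-Varnish) header to the response; if the connection is still refused, varnishd simply is not running.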