An unexpected error occurred with tcp plugin - tcp

I made a simple logstash configuration:
tcp.conf
input {
  tcp {
    port => 22
    type => syslog
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  stdout { codec => rubydebug }
}
Running the configuration:
bin/logstash -f tcp.conf
and then executing this command:
telnet localhost 22
I get this error:
Using milestone 2 input plugin 'tcp'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
Using milestone 1 filter plugin 'syslog_pri'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin. For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
+---------------------------------------------------------+
| An unexpected error occurred. This is probably a bug. |
| You can find help with this problem in a few places: |
| |
| * chat: #logstash IRC channel on freenode irc. |
| IRC via the web: http://goo.gl/TI4Ro |
| * email: logstash-users@googlegroups.com |
| * bug system: https://logstash.jira.com/ |
| |
+---------------------------------------------------------+
The error reported is:
Permission denied - bind(2)
I made this configuration following the Syslog example.

"Permission denied - bind" means that logstash can't attach itself to the listed port.
Often, this is because you're running logstash as a non-privileged user who cannot access ports numbered below 1024.
In your case, you're trying to connect to port 22. As the ssh/scp/sftp port, this seems like an odd place to look for log files.
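As a minimal sketch, assuming an unprivileged port such as 5000 is acceptable for your syslog sender (the port number is only an example; any free port above 1024 works), the input block becomes:
input {
  tcp {
    port => 5000    # example unprivileged port; anything above 1024 avoids the root requirement
    type => syslog
  }
}
Then point your test at the new port, e.g. telnet localhost 5000.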

Related

Simple zeek script

I am totally new to zeek scripting, and I am trying to write a very basic DNS tunnel detector.
Here is my code so far:
export {
    const conn_packets_limit = 10;
    const conn_time_limit = 30secs;
}

event dns_message(c: connection, is_orig: bool, msg: dns_msg, len: count) {
    if (c$duration > conn_time_limit) {
        print fmt("Long DNS connexion for %s by %s/%s", c$id$resp_h, c$id$resp_h, c$id$orig_p);
    }
}
When I try to run it with zeek -C -r ../capture.pcap ../zeek_scripts/dns/dns.zeek I get the following error: error in ./../zeek_scripts/dns/dns.zeek, line 11: syntax error, at or near "}"
I do not know what I am doing wrong with the print line, could you help me?
Thank you!

Gatsby build webpack fail with stylis

When I run gatsby build I get this error:
failed Building static HTML for pages - 10.179s
ERROR #95313
Building static HTML failed
See our docs page for more info on this error: https://gatsby.dev/debug-html
343 | for (c = []; b < a; ++b) {
344 | for (var n = 0; n < m; ++n) {
> 345 | c[v++] = Z(d[n] + ' ', h[b], e).trim();
| ^
346 | }
347 | }
348 |
WebpackError: The module '/node_modules/canvas/build/Release/canvas.node'
- stylis.esm.js:345
node_modules/@emotion/stylis/dist/stylis.esm.js:345:1
- stylis.esm.js:151
node_modules/@emotion/stylis/dist/stylis.esm.js:151:1
- stylis.esm.js:175
node_modules/@emotion/stylis/dist/stylis.esm.js:175:1
- stylis.esm.js:286
node_modules/@emotion/stylis/dist/stylis.esm.js:286:1
- stylis.esm.js:151
node_modules/@emotion/stylis/dist/stylis.esm.js:151:1
- stylis.esm.js:175
node_modules/@emotion/stylis/dist/stylis.esm.js:175:1
- stylis.esm.js:286
node_modules/@emotion/stylis/dist/stylis.esm.js:286:1
- stylis.esm.js:151
How can I solve this? When I run gatsby develop there is no error.
Update gatsby-config.js to contain the plugin gatsby-plugin-emotion:
module.exports = {
  plugins: [
    `gatsby-plugin-emotion`,
  ],
}
This needs a restart of the gatsby development process.
Then add this snippet to gatsby-node.js:
exports.onCreateWebpackConfig = ({ stage, loaders, actions }) => {
  if (stage === "build-html") {
    actions.setWebpackConfig({
      module: {
        rules: [
          {
            test: /canvas/,
            use: loaders.null(),
          },
        ],
      },
    })
  }
}
Some package is trying to access global objects such as window or document during SSR (Server-Side Rendering), where they are not defined (they don't even exist), because gatsby build runs in the Node server while gatsby develop runs in the browser, where window exists at compile time. With the snippet above, you are adding a null loader to the offending module when webpack transpiles the code.
The rule's test is a regular expression (hence the slashes, /) that matches the folder name inside node_modules. The output error shows a canvas issue, but you may need to change it to /stylis/.
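For example, here is a sketch of the same rule pointed at the stylis folder instead; this is an assumption based on the stack trace above, so keep whichever test matches the module that actually fails in your build:
exports.onCreateWebpackConfig = ({ stage, loaders, actions }) => {
  if (stage === "build-html") {
    actions.setWebpackConfig({
      module: {
        rules: [
          {
            // swap the folder name if a different module breaks the SSR build
            test: /stylis/,
            use: loaders.null(),
          },
        ],
      },
    })
  }
}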

ansible meta: refresh_inventory does not include previously absent hosts in task execution

Some time ago, somebody suggested using dynamic inventories to generate a different hosts file from a template depending on location and other variables, but I ran into a pretty big issue:
After I create the inventory from a template, I need to refresh it (I do it using meta: refresh_inventory) so that Ansible executes tasks on the newly added hosts. However, if a host was not in the initial hosts file, Ansible does not execute tasks on it. On the other hand, if a host is absent from the newly formed file, Ansible omits it as it should, so refresh_inventory does half of the work. Is there any way to get around this issue?
E.g. I have one task to generate the hosts file from a template, then refresh the inventory, then run a simple task on all hosts, like showing a message:
tasks:
  - name: Creating inventory template
    local_action:
      module: template
      src: hosts.j2
      dest: "/opt/ansible/inventories/{{location}}/hosts"
      mode: 0777
      force: yes
      backup: yes
    ignore_errors: yes
    run_once: true

  - name: "Refreshing hosts file for {{location}} location"
    meta: refresh_inventory

  - name: Force refresh of host errors
    meta: clear_host_errors

  - name: Show message
    debug: msg="This works for this host"
If initial hosts file has hosts A, B, C, D, and the newly created inventory has B, C, D, then all is good:
ok: [B] => {
"msg": "This works for this host"
}
ok: [C] => {
"msg": "This works for this host"
}
ok: [D] => {
"msg": "This works for this host"
}
However, if the newly formed hosts file has hosts B, C, D, E (E not being present in the initial hosts file), then the result is again:
ok: [B] => {
"msg": "This works for this host"
}
ok: [C] => {
"msg": "This works for this host"
}
ok: [D] => {
"msg": "This works for this host"
}
With task for E missing. Now if I replay the playbook, only to add another host, say F, then the result looks like:
ok: [B] => {
"msg": "This works for this host"
}
ok: [C] => {
"msg": "This works for this host"
}
ok: [D] => {
"msg": "This works for this host"
}
ok: [E] => {
"msg": "This works for this host"
}
But there is no F, even though it was added to the inventory file before the refresh.
So, any ideas?
Quoting from Basics
For each play in a playbook, you get to choose which machines in your infrastructure to target ... The hosts line is a list of one or more groups or host patterns ...
For example, it is possible to create the inventory in the 1st play and use it in the 2nd play. The playbook below
- hosts: localhost
  tasks:
    - template:
        src: hosts.j2
        dest: "{{ playbook_dir }}/hosts"
    - meta: refresh_inventory

- hosts: test
  tasks:
    - debug:
        var: inventory_hostname
with the template (fit it to your needs)
$ cat hosts.j2
[test]
test_01
test_02
test_03
[test:vars]
ansible_connection=ssh
ansible_user=admin
ansible_become=yes
ansible_become_user=root
ansible_become_method=sudo
ansible_python_interpreter=/usr/local/bin/python3.6
ansible_perl_interpreter=/usr/local/bin/perl
gives
PLAY [localhost] ****************************************************************************
TASK [Gathering Facts] **********************************************************************
ok: [localhost]
TASK [template] *****************************************************************************
changed: [localhost]
PLAY [test] *********************************************************************************
TASK [Gathering Facts] **********************************************************************
ok: [test_02]
ok: [test_01]
ok: [test_03]
TASK [debug] ********************************************************************************
ok: [test_01] => {
"inventory_hostname": "test_01"
}
ok: [test_02] => {
"inventory_hostname": "test_02"
}
ok: [test_03] => {
"inventory_hostname": "test_03"
}
PLAY RECAP **********************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
test_01 : ok=2 changed=0 unreachable=0 failed=0
test_02 : ok=2 changed=0 unreachable=0 failed=0
test_03 : ok=2 changed=0 unreachable=0 failed=0
Even though the answer provided above is correct, I think this deserves an explanation of how refresh_inventory and also add_host behave, as I've seen a few other questions regarding this topic.
It does not matter whether you use a static or a dynamic inventory, the behavior is the same. The only thing specific to dynamic inventories that can change the behavior is caching. The following applies with caching disabled, or with the cache refreshed after adding the new host.
Both refresh_inventory and add_host allow you to execute tasks on the added hosts only in subsequent plays. However, they do allow you to access the hostvars of the added hosts in the current play. This behavior is partially and very briefly mentioned in the add_host documentation and is easy to miss.
Use variables to create new hosts and groups in inventory for use in later plays of the same playbook.
Consider the following inventory, called hosts_ini-main.ini:
localhost testvar='testmain'
Now you can write a playbook that observes and tests the behavior of refresh_inventory. It overwrites the hosts_ini-main.ini inventory file (used by the playbook) with the contents of a second file, hosts_ini-second.ini:
localhost testvar='testmain'
127.0.0.2 testvar='test2'
The playbook prints hostvars before the inventory is changed, then changes the inventory, refreshes it, prints hostvars again, and finally tries to execute a task only on the newly added host.
The second play also tries to execute a task only on the added host.
---
- hosts: all
  connection: local
  become: false
  gather_facts: false
  tasks:
    - name: Print hostvars
      debug:
        var: hostvars

    - name: Print var for first host
      debug:
        var: testvar
      when: hostvars[inventory_hostname]['testvar'] == "testmain"

    - name: Copy alternate hosts file to main hosts file
      copy:
        src: "hosts_ini-second.ini"
        dest: "hosts_ini-main.ini"

    - name: Refresh inventory using meta module
      meta: refresh_inventory

    - name: Print hostvars for the second time in the first play
      debug:
        var: hostvars

    - name: Print var for added host
      debug:
        var: testvar # This will not execute
      when: hostvars[inventory_hostname]['testvar'] == "test2"

# New play
- hosts: all
  connection: local
  become: false
  gather_facts: false
  tasks:
    - name: Print hostvars in a different play
      debug:
        var: testvar
      when: hostvars[inventory_hostname]['testvar'] == "test2"
Here is the execution (I've truncated parts of the output to make it more readable).
PLAY [all] *******************************************************************************
TASK [Print hostvars] ********************************************************************
ok: [localhost] => {
"hostvars": {
"localhost": {
"ansible_check_mode": false,
"ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
"ansible_diff_mode": false,
"ansible_facts": {},
...
"testvar": "testmain"
}
}
}
TASK [Print var for first host] ***********************************************************
ok: [localhost] => {
"testvar": "testmain"
}
TASK [Copy alternate hosts file to main hosts file] ***************************************
changed: [localhost]
TASK [Refresh inventory using meta module] ************************************************
TASK [Print hostvars for the second time in the first play] *******************************
ok: [localhost] => {
"hostvars": {
"127.0.0.2": {
"ansible_check_mode": false,
"ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
"ansible_diff_mode": false,
"ansible_facts": {},
...
"testvar": "test2"
},
"localhost": {
"ansible_check_mode": false,
"ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
"ansible_diff_mode": false,
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
...
"testvar": "testmain"
}
}
}
TASK [Print var for added host] ***********************************************************
skipping: [localhost]
PLAY [all] ********************************************************************************
TASK [Print hostvars in a different play] *************************************************
skipping: [localhost]
ok: [127.0.0.2] => {
"testvar": "test2"
}
PLAY RECAP *******************************************************************************
127.0.0.2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=4 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
As can be seen, hostvars contain information about the newly added host even in the first play, but Ansible is not able to execute a task on that host. Once a new play starts, the task is executed on the new host without problems.
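The same two-play pattern applies to add_host. Below is a minimal sketch, with an illustrative address and group name that are not from the original question; the host registered in the first play only receives tasks in the second play:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Register a new host in the in-memory inventory
      add_host:
        name: 127.0.0.3            # illustrative address
        groups: added_hosts
        testvar: test3

    - name: Hostvars of the new host are already visible in this play
      debug:
        msg: "{{ hostvars['127.0.0.3']['testvar'] }}"

- hosts: added_hosts
  gather_facts: false
  tasks:
    - name: Tasks here do run on the host added above
      debug:
        var: testvar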

openstack sahara error on dashboard

I'm trying to introduce Sahara into my cloud to use Hadoop, and it's not going well. I tried to follow the OpenStack documentation, but it didn't really help me. Now I'm trying to add Sahara to my dashboard with the command "pip install sahara-dashboard".
The Sahara dashboard is located at: /usr/local/lib/python2.7/dist-packages/saharadashboard
The original dashboard is located at: /usr/share/openstack-dashboard/openstack-dashboard, and I added
INSTALLED_APPS = [
    'openstack_dashboard',
    'saharadashboard',
    'django.contrib.contenttypes',
    'django.contrib.auth',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.humanize',
    'django_pyscss',
    'openstack_dashboard.django_pyscss_fix',
    'compressor',
    'horizon',
    'openstack_auth',
]
this to /usr/share/openstack-dashboard/openstack-dashboard/setting.py.
and in /usr/share/openstack-dashboard/openstack-dashboard/local/local_settings.py I added
SAHARA_URL = 'http://localhost:8386/v1.1'
OPENSTACK_API_VERSIONS = {
    "data-processing": 1.1,
    "identity": 3,
    "volume": 2,
    "image": 2,
}
"data-processing": 1.1
SAHARA_USE_NEUTRON = True
I can see the Sahara management interface on the dashboard, but I'm getting this error when I try to register an image in the Image Registry tab of the dashboard. Hope you don't mind the Korean in the image. I should mention that everything else in my cloud is working fine. I searched through all logs related to Sahara, and nothing comes up.
I suspect these parts of the code are where the error is coming from, but I don't know how to fix this issue. Please help!
/usr/local/lib/python2.7/dist-packages/saharadashboard/image_registry/forms.py
glance = importutils.import_any('openstack_dashboard.api.glance',
                                'horizon.api.glance')

def _get_images(self, request, filter):
    try:
        images, _more = glance.image_list_detailed(request, filters=filter)
    except Exception:
        images = []
        exceptions.handle(request,
                          _("Unable to retrieve images with filter %s.") %
                          filter)
    return images

def _get_public_images(self, request):
    filter = {"is_public": True,
              "status": "active"}
    return self._get_images(request, filter)

def _get_tenant_images(self, request):
    filter = {"owner": request.user.tenant_id,
              "status": "active"}
    return self._get_images(request, filter)
UPDATE
glance image-list on controller
+--------------------------------------+------------------------------+
| ID | Name |
+--------------------------------------+------------------------------+
| 28747d2b-c113-4dd3-ad44-908141461e6d | cirros |
| ecb9ac84-7459-4b3b-a832-59329ae1e0ea | github-enterprise-2.6.5 |
| 39ce8087-f95b-4204-bcee-0f084735cba9 | manila-service-image |
| f9a678a8-492f-481e-8c82-5d0c84f69675 | mysqlTest |
| 5ae10b0d-c732-481a-944f-ca3a5a5f4915 | sahara-vanilla-latest-ubuntu |
| f9ea4193-1a92-434d-b247-27b748feb4a1 | Ubuntu Server 14.04 LTS |
+--------------------------------------+------------------------------+
You may have a problem with keystone. Did you try restarting keystone?
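If it helps, here is a rough sketch of what that restart could look like; the exact service names are assumptions and depend on your distribution and on whether keystone runs under Apache:
# Controller where keystone runs as its own service (older Ubuntu/Debian releases)
sudo service keystone restart
# Deployments where keystone is served through Apache/WSGI
sudo service apache2 restart        # Ubuntu/Debian
sudo systemctl restart httpd        # RHEL/CentOS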

How to detect when networking initialized in /etc/init.d script on redhat 6

I have an init.d script that starts my process at boot and requires networking to be initialized. I can use the nm-online utility that comes with the NetworkManager package, but the problem is that at deployment NetworkManager will not be installed, so I need some other reliable way to tell that the network is up and that I can connect to another server over the network. I could keep retrying until networking is up or the connection succeeds, but that would cause other problems related to error reporting.
Here is a similar question asked by someone else:
How to detect when networking initialized in /etc/init.d script?
wait_for_network()
{
    [ -z "${LINKDELAY}" ] && LINKDELAY=10
    $INFO "Waiting for network..."
    if [ -f /usr/sbin/nm-online ]; then
        nm-online -q --timeout=$LINKDELAY || nm-online -q -x --timeout=30
    else
        check_for_network_up $LINKDELAY || check_for_network_up 30
    fi
    [ "$?" = "0" ] && success "network startup" || failure "network startup"
    echo
}
I was also trying another approach where I check the routing table. If the network is not up, the route command returns zero entries, but the problem is that I don't know the real number of route entries; it could be two on one machine and 10 on another.
check_for_network_up_old3() {
    let no_of_routes=`/bin/netstat -rn | wc -l`
    $INFO "netstat result $?"
    timeout=$1
    while [ "$timeout" != "0" ]; do
        let routes=`/sbin/ip route show | wc -l`
        $INFO "$routes"
        if [ $routes -gt 1 ]; then
            return 0
        fi
        timeout=$((timeout-1))
        sleep 1
        $INFO "check_for_network_up $timeout"
    done
    return 1
}
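One possible refinement, not from the original post: instead of counting routing-table lines (which varies from machine to machine), wait specifically for a default route to appear. A hedged sketch:
# Wait up to $1 seconds for a default route to show up; return 0 once it exists.
check_for_default_route() {
    timeout=$1
    while [ "$timeout" != "0" ]; do
        if /sbin/ip route show | grep -q '^default'; then
            return 0
        fi
        timeout=$((timeout-1))
        sleep 1
    done
    return 1
}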
