Logstash output not recognising columns in Kibana

I'm trying to get my .csv file into Kibana for visualisation. It feels like I'm close to getting it to work, but I can't figure out how to get my output right.
In Kibana I see my .csv file as:
message: News,test#email.com,10.10.10.10
It looks like my CSV output ends up in one field called message. I would like to get three separate fields: Name, Email, IP. I have tried a lot of CSV files and different configurations, but no success yet.
CSV FILE:
Name,Email,IP
Auto,auto#newsuk,10.0.0.196
News,test#email.com,10.10.10.10
nieuwsbrieven,nieuwsbrieven#nl,10.10.10.10
CONF file:
input {
  file {
    path => "C:\Users\JOEY2\Desktop\Deelproblemen\Applicatie\Output\test.csv"
    start_position => beginning
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["Date","Open","High"]
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "csv_index"
  }
  stdout {}
}
filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - C:\Users\JOEY2\Desktop\Deelproblemen\Applicatie\Output\test.csv
output.elasticsearch:
  hosts: ["localhost:9200"]
  template.name: "testttt"
  template.overwrite: true
output.logstash:
  hosts: ["localhost:5044"]
Logstash CMD output:
[2017-10-12T13:53:52,682][INFO ][logstash.pipeline ] Pipeline main started
[2017-10-12T13:53:52,690][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2017-10-12T13:53:53,003][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
{
"#timestamp" => 2017-10-12T11:53:53.659Z,
"offset" => 15,
"#version" => "1",
"input_type" => "log",
"beat" => {
"name" => "DESKTOP-VEQHHVT",
"hostname" => "DESKTOP-VEQHHVT",
"version" => "5.6.2"
},
"host" => "DESKTOP-VEQHHVT",
"source" => "C:\\Users\\JOEY2\\Desktop\\Deelproblemen\\Applicatie\\Output\\test.csv",
"message" => "Name,Email,IP",
"type" => "log",
"tags" => [
[0] "beats_input_codec_plain_applied"
]
}
{
"#timestamp" => 2017-10-12T11:53:53.659Z,
"offset" => 44,
"#version" => "1",
"input_type" => "log",
"beat" => {
"name" => "DESKTOP-VEQHHVT",
"hostname" => "DESKTOP-VEQHHVT",
"version" => "5.6.2"
},
"host" => "DESKTOP-VEQHHVT",
"source" => "C:\\Users\\JOEY2\\Desktop\\Deelproblemen\\Applicatie\\Output\\test.csv",
"message" => "Auto,auto#newsuk,10.0.0.196",
"type" => "log",
"tags" => [
[0] "beats_input_codec_plain_applied"
]
}
{
"#timestamp" => 2017-10-12T11:53:53.659Z,
"offset" => 77,
"#version" => "1",
"beat" => {
"name" => "DESKTOP-VEQHHVT",
"hostname" => "DESKTOP-VEQHHVT",
"version" => "5.6.2"
},
"input_type" => "log",
"host" => "DESKTOP-VEQHHVT",
"source" => "C:\\Users\\JOEY2\\Desktop\\Deelproblemen\\Applicatie\\Output\\test.csv",
"message" => "News,test#email.com,10.10.10.10",
"type" => "log",
"tags" => [
[0] "beats_input_codec_plain_applied"
]
}
All my CSV columns/rows end up in the message field.
Curl command output: (curl -s localhost:9200/_cat/indices?v)
yellow open filebeat-2017.10.12 ux6-ByOERj-2XEBojkxhXg 5 1 3 0 13.3kb 13.3kb
Elasticsearch terminal output:
[2017-10-12T13:53:11,763][INFO ][o.e.n.Node ] [] initializing ...
[2017-10-12T13:53:11,919][INFO ][o.e.e.NodeEnvironment ] [Zs6ZAuy] using [1] data paths, mounts [[(C:)]], net usable_space [1.9tb], net total_space [1.9tb], spins? [unknown], types [NTFS]
[2017-10-12T13:53:11,920][INFO ][o.e.e.NodeEnvironment ] [Zs6ZAuy] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-10-12T13:53:12,126][INFO ][o.e.n.Node ] node name [Zs6ZAuy] derived from node ID [Zs6ZAuyyR2auGVnPoD9gRw]; set [node.name] to override
[2017-10-12T13:53:12,128][INFO ][o.e.n.Node ] version[5.6.2], pid[3384], build[57e20f3/2017-09-23T13:16:45.703Z], OS[Windows 10/10.0/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_144/25.144-b01]
[2017-10-12T13:53:12,128][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Delasticsearch, -Des.path.home=C:\ELK-Stack\elasticsearch\elasticsearch-5.6.2]
[2017-10-12T13:53:13,550][INFO ][o.e.p.PluginsService ] [Zs6ZAuy] loaded module [aggs-matrix-stats]
[2017-10-12T13:53:13,616][INFO ][o.e.p.PluginsService ] [Zs6ZAuy] loaded module [ingest-common]
[2017-10-12T13:53:13,722][INFO ][o.e.p.PluginsService ] [Zs6ZAuy] loaded module [lang-expression]
[2017-10-12T13:53:13,798][INFO ][o.e.p.PluginsService ] [Zs6ZAuy] loaded module [lang-groovy]
[2017-10-12T13:53:13,886][INFO ][o.e.p.PluginsService ] [Zs6ZAuy] loaded module [lang-mustache]
[2017-10-12T13:53:13,988][INFO ][o.e.p.PluginsService ] [Zs6ZAuy] loaded module [lang-painless]
[2017-10-12T13:53:14,059][INFO ][o.e.p.PluginsService ] [Zs6ZAuy] loaded module [parent-join]
[2017-10-12T13:53:14,154][INFO ][o.e.p.PluginsService ] [Zs6ZAuy] loaded module [percolator]
[2017-10-12T13:53:14,223][INFO ][o.e.p.PluginsService ] [Zs6ZAuy] loaded module [reindex]
[2017-10-12T13:53:14,289][INFO ][o.e.p.PluginsService ] [Zs6ZAuy] loaded module [transport-netty3]
[2017-10-12T13:53:14,360][INFO ][o.e.p.PluginsService ] [Zs6ZAuy] loaded module [transport-netty4]
[2017-10-12T13:53:14,448][INFO ][o.e.p.PluginsService ] [Zs6ZAuy] no plugins loaded
[2017-10-12T13:53:18,328][INFO ][o.e.d.DiscoveryModule ] [Zs6ZAuy] using discovery type [zen]
[2017-10-12T13:53:19,204][INFO ][o.e.n.Node ] initialized
[2017-10-12T13:53:19,204][INFO ][o.e.n.Node ] [Zs6ZAuy] starting ...
[2017-10-12T13:53:20,071][INFO ][o.e.t.TransportService ] [Zs6ZAuy] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2017-10-12T13:53:23,130][INFO ][o.e.c.s.ClusterService ] [Zs6ZAuy] new_master {Zs6ZAuy}{Zs6ZAuyyR2auGVnPoD9gRw}{jBwTE7rUS4i_Ugh6k6DAMg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-10-12T13:53:23,883][INFO ][o.e.g.GatewayService ] [Zs6ZAuy] recovered [5] indices into cluster_state
[2017-10-12T13:53:25,962][INFO ][o.e.c.r.a.AllocationService] [Zs6ZAuy] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
[2017-10-12T13:53:25,981][INFO ][o.e.h.n.Netty4HttpServerTransport] [Zs6ZAuy] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2017-10-12T13:53:25,986][INFO ][o.e.n.Node ] [Zs6ZAuy] started
[2017-10-12T13:53:59,245][INFO ][o.e.c.m.MetaDataCreateIndexService] [Zs6ZAuy] [filebeat-2017.10.12] creating index, cause [auto(bulk api)], templates [filebeat, testttt], shards [5]/[1], mappings [_default_]
[2017-10-12T13:53:59,721][INFO ][o.e.c.m.MetaDataMappingService] [Zs6ZAuy] [filebeat-2017.10.12/ux6-ByOERj-2XEBojkxhXg] create_mapping [doc]
Filebeat output:
C:\ELK-Stack\filebeat>filebeat -e -c filebeat.yml -d "publish"
2017/10/12 11:53:53.632142 beat.go:297: INFO Home path: [C:\ELK-Stack\filebeat] Config path: [C:\ELK-Stack\filebeat] Data path: [C:\ELK-Stack\filebeat\data] Logs path: [C:\ELK-Stack\filebeat\logs]
2017/10/12 11:53:53.632142 beat.go:192: INFO Setup Beat: filebeat; Version: 5.6.2
2017/10/12 11:53:53.634143 publish.go:228: WARN Support for loading more than one output is deprecated and will not be supported in version 6.0.
2017/10/12 11:53:53.635144 output.go:258: INFO Loading template enabled. Reading template file: C:\ELK-Stack\filebeat\filebeat.template.json
2017/10/12 11:53:53.636144 output.go:269: INFO Loading template enabled for Elasticsearch 2.x. Reading template file: C:\ELK-Stack\filebeat\filebeat.template-es2x.json
2017/10/12 11:53:53.637143 output.go:281: INFO Loading template enabled for Elasticsearch 6.x. Reading template file: C:\ELK-Stack\filebeat\filebeat.template-es6x.json
2017/10/12 11:53:53.638144 client.go:128: INFO Elasticsearch url: http://localhost:9200
2017/10/12 11:53:53.639143 outputs.go:108: INFO Activated elasticsearch as output plugin.
2017/10/12 11:53:53.639143 logstash.go:90: INFO Max Retries set to: 3
2017/10/12 11:53:53.640143 outputs.go:108: INFO Activated logstash as output plugin.
2017/10/12 11:53:53.640143 publish.go:243: DBG Create output worker
2017/10/12 11:53:53.641143 publish.go:243: DBG Create output worker
2017/10/12 11:53:53.641143 publish.go:285: DBG No output is defined to store the topology. The server fields might not be filled.
2017/10/12 11:53:53.642144 publish.go:300: INFO Publisher name: DESKTOP-VEQHHVT
2017/10/12 11:53:53.634143 metrics.go:23: INFO Metrics logging every 30s
2017/10/12 11:53:53.646143 async.go:63: INFO Flush Interval set to: 1s
2017/10/12 11:53:53.647142 async.go:64: INFO Max Bulk Size set to: 50
2017/10/12 11:53:53.647142 async.go:72: DBG create bulk processing worker (interval=1s, bulk size=50)
2017/10/12 11:53:53.648144 async.go:63: INFO Flush Interval set to: 1s
2017/10/12 11:53:53.648144 async.go:64: INFO Max Bulk Size set to: 2048
2017/10/12 11:53:53.649144 async.go:72: DBG create bulk processing worker (interval=1s, bulk size=2048)
2017/10/12 11:53:53.649144 beat.go:233: INFO filebeat start running.
2017/10/12 11:53:53.650144 registrar.go:68: INFO No registry file found under: C:\ELK-Stack\filebeat\data\registry. Creating a new registry file.
2017/10/12 11:53:53.652144 registrar.go:106: INFO Loading registrar data from C:\ELK-Stack\filebeat\data\registry
2017/10/12 11:53:53.654145 registrar.go:123: INFO States Loaded from registrar: 0
2017/10/12 11:53:53.655145 crawler.go:38: INFO Loading Prospectors: 1
2017/10/12 11:53:53.655145 prospector_log.go:65: INFO Prospector with previous states loaded: 0
2017/10/12 11:53:53.656144 prospector.go:124: INFO Starting prospector of type: log; id: 11034545279404679229
2017/10/12 11:53:53.656144 crawler.go:58: INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2017/10/12 11:53:53.655145 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017/10/12 11:53:53.655145 registrar.go:236: INFO Starting Registrar
2017/10/12 11:53:53.657144 log.go:91: INFO Harvester started for file: C:\Users\JOEY2\Desktop\Deelproblemen\Applicatie\Output\test.csv
2017/10/12 11:53:53.655145 sync.go:41: INFO Start sending events to output
2017/10/12 11:53:58.682432 client.go:214: DBG Publish: {
"#timestamp": "2017-10-12T11:53:53.659Z",
"beat": {
"hostname": "DESKTOP-VEQHHVT",
"name": "DESKTOP-VEQHHVT",
"version": "5.6.2"
},
"input_type": "log",
"message": "Name,Email,IP",
"offset": 15,
"source": "C:\\Users\\JOEY2\\Desktop\\Deelproblemen\\Applicatie\\Output\\test.csv",
"type": "log"
}
2017/10/12 11:53:58.685434 client.go:214: DBG Publish: {
"#timestamp": "2017-10-12T11:53:53.659Z",
"beat": {
"hostname": "DESKTOP-VEQHHVT",
"name": "DESKTOP-VEQHHVT",
"version": "5.6.2"
},
"input_type": "log",
"message": "Auto,auto#newsuk,10.0.0.196",
"offset": 44,
"source": "C:\\Users\\JOEY2\\Desktop\\Deelproblemen\\Applicatie\\Output\\test.csv",
"type": "log"
}
2017/10/12 11:53:58.685434 client.go:214: DBG Publish: {
"#timestamp": "2017-10-12T11:53:53.659Z",
"beat": {
"hostname": "DESKTOP-VEQHHVT",
"name": "DESKTOP-VEQHHVT",
"version": "5.6.2"
},
"input_type": "log",
"message": "News,test#email.com,10.10.10.10",
"offset": 77,
"source": "C:\\Users\\JOEY2\\Desktop\\Deelproblemen\\Applicatie\\Output\\test.csv",
"type": "log"
}
2017/10/12 11:53:58.686434 output.go:109: DBG output worker: publish 3 events
2017/10/12 11:53:58.686434 output.go:109: DBG output worker: publish 3 events
2017/10/12 11:53:58.738437 client.go:667: INFO Connected to Elasticsearch version 5.6.2
2017/10/12 11:53:58.748436 output.go:317: INFO Trying to load template for client: http://localhost:9200
2017/10/12 11:53:58.890446 output.go:324: INFO Existing template will be overwritten, as overwrite is enabled.
2017/10/12 11:53:59.154461 client.go:592: INFO Elasticsearch template with name 'testttt' loaded
2017/10/12 11:54:00.020510 sync.go:70: DBG Events sent: 4
Kibana output:
#timestamp:October 12th 2017, 13:53:53.659 beat.hostname:DESKTOP-VEQHHVT beat.name:DESKTOP-VEQHHVT beat.version:5.6.2 input_type:log message:Auto,auto#newsuk,10.0.0.196 offset:44 source:C:\Users\JOEY2\Desktop\Deelproblemen\Applicatie\Output\test.csv type:log _id:AV8QbyIcTtSiVplm9CwA _type:doc _index:filebeat-2017.10.12 _score:1
#timestamp:October 12th 2017, 13:53:53.659 beat.hostname:DESKTOP-VEQHHVT beat.name:DESKTOP-VEQHHVT beat.version:5.6.2 input_type:log message:News,test#email.com,10.10.10.10 offset:77 source:C:\Users\JOEY2\Desktop\Deelproblemen\Applicatie\Output\test.csv type:log _id:AV8QbyIcTtSiVplm9CwB _type:doc _index:filebeat-2017.10.12 _score:1
#timestamp:October 12th 2017, 13:53:53.659 beat.hostname:DESKTOP-VEQHHVT beat.name:DESKTOP-VEQHHVT beat.version:5.6.2 input_type:log message:Name,Email,IP offset:15 source:C:\Users\JOEY2\Desktop\Deelproblemen\Applicatie\Output\test.csv type:log _id:AV8QbyIcTtSiVplm9Cv_ _type:doc _index:filebeat-2017.10.12 _score:1

You are giving the wrong column names in the csv filter, and the column names should be given without double quotes (").
I have tried this and it works for me. Check whether it works for you. My Logstash config file:
input {
  file {
    path => "/home/quality/Desktop/work/csv.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => [Name,Email,IP]
  }
}
output {
  elasticsearch {
    hosts => "localhost"
    index => "csv"
    document_type => "csv"
  }
  stdout { codec => rubydebug }
}

Related

Add SDK C++ headers into a Swift Package Manager project

I have a C++ project that I want to add to a Swift Package Manager project. The C++ project references headers such as #include <string>; this header resides in
iossdk/usr/include/c++/v1
How do I get the Swift Package Manager to include those headers?
let package = Package(
    name: "LibProject",
    platforms: [.iOS(.v13)],
    products: [
        .library(
            name: "LibProject",
            targets: ["LibModule1", "LibModule2Framework"]),
    ],
    dependencies: [
    ],
    targets: [
        .target(
            name: "LibModule1",
            path: "Sources/LibModule1"),
        .target(
            name: "LibModule2Framework",
            path: "Sources/LibModule2Framework",
            publicHeadersPath: ".",
            cxxSettings: [
                .headerSearchPath("usr/include/c++/v1"),
            ]
        ),
        .testTarget(
            name: "LibModuleTests",
            dependencies: ["LibModuleTests"]),
    ],
    cLanguageStandard: .c17,
    cxxLanguageStandard: .gnucxx17
)

SwiftPM binaryTarget gives 'no such module' error when archiving

I'm trying to refactor the project using SwiftPM and everything works fine, both in the simulator and on my iPhone. But when I archive the project, I get the error 'no such module 'SFS2XAPIIOS''.
Here's the code of my Package.swift:
// swift-tools-version:5.3
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription

let package = Package(
    name: "BaseIM",
    platforms: [
        .iOS(.v11)
    ],
    products: [
        .library(name: "BaseIM", targets: ["BaseIM"]),
        .library(name: "SFS2XAPIIOSX", targets: ["SFS2XAPIIOS"])
    ],
    dependencies: [
        .package(name: "BaseTools", url: "http://192.168.1.28:8888/kevin/basetools.git", .branch("master")),
        .package(name: "BaseClass", url: "http://192.168.1.28:8888/kevin/baseclass.git", .branch("master")),
        .package(name: "MediaKit", url: "http://192.168.1.28:8888/kevin/mediakit.git", .branch("master")),
        .package(name: "Realm", url: "https://github.com/realm/realm-cocoa", .upToNextMajor(from: "10.1.4"))
    ],
    targets: [
        .target(
            name: "BaseIM",
            dependencies: [
                "SFS2XAPIIOSX", "BaseTools", "BaseClass", "MediaKit",
                .product(name: "RealmSwift", package: "Realm")
            ]
        ),
        .target(
            name: "SFS2XAPIIOSX",
            dependencies: [
                "SFS2XAPIIOS"
            ],
            path: "SFS2XAPIIOS",
            cSettings: [
                .headerSearchPath("Header.h")
            ]
        ),
        .binaryTarget(name: "SFS2XAPIIOS", path: "SFS2XAPIIOS/SFS2XAPIIOS.xcframework"),
        .testTarget(
            name: "BaseIMTests",
            dependencies: ["BaseIM"]),
    ]
)

Fresh install of Trellis by Roots on Ubuntu & VirtualBox is missing composer.json under /srv/www/website.com/current

This is the error message I got when I first ran 'vagrant provision' (after 'vagrant up' got stuck at 'Mounting NFS shared folders...') under the Trellis directory:
TASK [wordpress-install : Install Dependencies with Composer] ******************
System info:
Ansible 2.9.11; Vagrant 2.2.9; Linux
Trellis version (per changelog): "Removes ID from Lets Encrypt bundled certificate and make filename stable"
---------------------------------------------------
Composer could not find a composer.json file in /srv/www/example.com/current
To initialize a project, please create a composer.json file
as described in the https://getcomposer.org/ "Getting Started"
section failed: [default] (item=example.com) =>
{
"ansible_loop_var": "item",
"changed": false,
"item": {
"key": "example.com",
"value": {
"admin_email": "admin#example.test",
"cache": {
"enabled": false
},
"local_path": "../site",
"multisite": {
"enabled": false
},
"site_hosts": [
{
"canonical": "example.test",
"redirects": [
"www.example.test"
]
}
],
"ssl": {
"enabled": false,
"provider": "self-signed"
}
}
},
"stdout": "Composer could not find a composer.json file in /srv/www/example.com/current\nTo initialize a project, please create a composer.json file as described in the https://getcomposer.org/ \"Getting Started\" section\n",
"stdout_lines": [
"Composer could not find a composer.json file in /srv/www/example.com/current",
"To initialize a project, please create a composer.json file as described in the https://getcomposer.org/ \"Getting Started\" section"
]
}
PLAY RECAP *********************************************************************
default : ok=125 changed=83 unreachable=0 failed=1 skipped=34 rescued=0 ignored=0
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
How do I fix this? Where can I find the right composer.json for Trellis, Bedrock, and Sage for local development on Linux?
Ubuntu 19.10

ansible meta: refresh_inventory does not include previously absent hosts in task execution

Some time ago, somebody suggested using dynamic inventories to generate a different hosts file from a template depending on the location and other variables, but I ran into a pretty big issue:
After I create the inventory from a template, I need to refresh it (I do this using meta: refresh_inventory) so that Ansible executes tasks on the newly added hosts. However, if a host was not initially in the hosts file, Ansible does not execute tasks on it. On the other hand, if after changing the hosts file a host is absent from the newly formed file, Ansible omits that host like it should, so refresh_inventory does half of the work. Is there any way to get around this issue?
For example, I have one task to generate the hosts file from a template, then refresh the inventory, then run a simple task on all hosts, like showing a message:
tasks:
  - name: Creating inventory template
    local_action:
      module: template
      src: hosts.j2
      dest: "/opt/ansible/inventories/{{location}}/hosts"
      mode: 0777
      force: yes
      backup: yes
    ignore_errors: yes
    run_once: true
  - name: "Refreshing hosts file for {{location}} location"
    meta: refresh_inventory
  - name: Force refresh of host errors
    meta: clear_host_errors
  - name: Show message
    debug: msg="This works for this host"
If the initial hosts file has hosts A, B, C, D, and the newly created inventory has B, C, D, then all is good:
ok: [B] => {
"msg": "This works for this host"
}
ok: [C] => {
"msg": "This works for this host"
}
ok: [D] => {
"msg": "This works for this host"
}
However, if the newly formed hosts file has hosts B, C, D, E (E not being present in the initial hosts file), then again the result is:
ok: [B] => {
"msg": "This works for this host"
}
ok: [C] => {
"msg": "This works for this host"
}
ok: [D] => {
"msg": "This works for this host"
}
With the task for E missing. Now if I rerun the playbook, only adding another host, say F, the result looks like:
ok: [B] => {
"msg": "This works for this host"
}
ok: [C] => {
"msg": "This works for this host"
}
ok: [D] => {
"msg": "This works for this host"
}
ok: [E] => {
"msg": "This works for this host"
}
But no F, which was already added to the inventory file before the refresh.
So, any ideas?
Quoting from Basics
For each play in a playbook, you get to choose which machines in your infrastructure to target ... The hosts line is a list of one or more groups or host patterns ...
For example, it is possible to create the inventory in the 1st play and use it in the 2nd play. The playbook below
- hosts: localhost
  tasks:
    - template:
        src: hosts.j2
        dest: "{{ playbook_dir }}/hosts"
    - meta: refresh_inventory

- hosts: test
  tasks:
    - debug:
        var: inventory_hostname
with the template (fit it to your needs)
$ cat hosts.j2
[test]
test_01
test_02
test_03
[test:vars]
ansible_connection=ssh
ansible_user=admin
ansible_become=yes
ansible_become_user=root
ansible_become_method=sudo
ansible_python_interpreter=/usr/local/bin/python3.6
ansible_perl_interpreter=/usr/local/bin/perl
gives
PLAY [localhost] ****************************************************************************
TASK [Gathering Facts] **********************************************************************
ok: [localhost]
TASK [template] *****************************************************************************
changed: [localhost]
PLAY [test] *********************************************************************************
TASK [Gathering Facts] **********************************************************************
ok: [test_02]
ok: [test_01]
ok: [test_03]
TASK [debug] ********************************************************************************
ok: [test_01] => {
"inventory_hostname": "test_01"
}
ok: [test_02] => {
"inventory_hostname": "test_02"
}
ok: [test_03] => {
"inventory_hostname": "test_03"
}
PLAY RECAP **********************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
test_01 : ok=2 changed=0 unreachable=0 failed=0
test_02 : ok=2 changed=0 unreachable=0 failed=0
test_03 : ok=2 changed=0 unreachable=0 failed=0
Even though the first answer provided here is correct, I think this deserves an explanation of how refresh_inventory and add_host behave, as I've seen a few other questions regarding this topic.
It does not matter whether you use a static or a dynamic inventory; the behavior is the same. The only thing specific to dynamic inventories that can change the behavior is caching. The following applies with caching disabled, or with the cache refreshed after adding the new host.
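For reference, inventory caching can be switched off in ansible.cfg; a minimal sketch (these [inventory] cache settings apply to inventory plugins, and your setup may differ):
[inventory]
cache = False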
Both refresh_inventory and add_host allow you to execute tasks on the added hosts only in subsequent plays. However, they allow you to access the hostvars of the added hosts already in the current play. This behavior is only partially and very briefly mentioned in the add_host documentation and is easy to miss.
Use variables to create new hosts and groups in inventory for use in later plays of the same playbook.
Consider the following inventory called hosts_ini-main.ini:
localhost testvar='testmain'
Now you can write a playbook that observes and tests the behavior of refresh_inventory. It overwrites the hosts_ini-main.ini inventory file (used by the playbook) with the following contents from a second file, hosts_ini-second.ini:
localhost testvar='testmain'
127.0.0.2 testvar='test2'
The playbook prints hostvars before the inventory is changed, then changes the inventory, refreshes the inventory, prints hostvars again, and then tries to execute a task only on the newly added host.
The second play also tries to execute a task only on the added host.
---
- hosts: all
  connection: local
  become: false
  gather_facts: false
  tasks:
    - name: Print hostvars
      debug:
        var: hostvars
    - name: Print var for first host
      debug:
        var: testvar
      when: hostvars[inventory_hostname]['testvar'] == "testmain"
    - name: Copy alternate hosts file to main hosts file
      copy:
        src: "hosts_ini-second.ini"
        dest: "hosts_ini-main.ini"
    - name: Refresh inventory using meta module
      meta: refresh_inventory
    - name: Print hostvars for the second time in the first play
      debug:
        var: hostvars
    - name: Print var for added host
      debug:
        var: testvar # This will not execute
      when: hostvars[inventory_hostname]['testvar'] == "test2"

# New play
- hosts: all
  connection: local
  become: false
  gather_facts: false
  tasks:
    - name: Print hostvars in a different play
      debug:
        var: testvar
      when: hostvars[inventory_hostname]['testvar'] == "test2"
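Assuming the playbook above is saved as refresh_test.yml (the filename is not given in the original), it can be run against the main inventory with:
ansible-playbook -i hosts_ini-main.ini refresh_test.yml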
Here is the execution (I've truncated parts of the output to make it more readable).
PLAY [all] *******************************************************************************
TASK [Print hostvars] ********************************************************************
ok: [localhost] => {
"hostvars": {
"localhost": {
"ansible_check_mode": false,
"ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
"ansible_diff_mode": false,
"ansible_facts": {},
...
"testvar": "testmain"
}
}
}
TASK [Print var for first host] ***********************************************************
ok: [localhost] => {
"testvar": "testmain"
}
TASK [Copy alternate hosts file to main hosts file] ***************************************
changed: [localhost]
TASK [Refresh inventory using meta module] ************************************************
TASK [Print hostvars for the second time in the first play] *******************************
ok: [localhost] => {
"hostvars": {
"127.0.0.2": {
"ansible_check_mode": false,
"ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
"ansible_diff_mode": false,
"ansible_facts": {},
...
"testvar": "test2"
},
"localhost": {
"ansible_check_mode": false,
"ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
"ansible_diff_mode": false,
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
...
"testvar": "testmain"
}
}
}
TASK [Print var for added host] ***********************************************************
skipping: [localhost]
PLAY [all] ********************************************************************************
TASK [Print hostvars in a different play] *************************************************
skipping: [localhost]
ok: [127.0.0.2] => {
"testvar": "test2"
}
PLAY RECAP *******************************************************************************
127.0.0.2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=4 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
As can be seen, hostvars contains information about the newly added host even in the first play, but Ansible is not able to execute a task on that host. When a new play starts, the task is executed on the new host without problems.

Custom Script Extension not working on Red Hat 7.2

I am unable to get the Custom Script Extension working on Red Hat 7.2. I tried the latest extension and have the following in my ARM template:
{
  "name": "[concat(parameters('VMNamePrefix'), parameters('startingNumeral')[copyindex()],'/',parameters('VMNamePrefix'), parameters('startingNumeral')[copyindex()],'-CUSTOMSCRIPT')]",
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "location": "[parameters('region')]",
  "apiVersion": "[variables('apiVersionVirtualMachines')]",
  "tags": {
    "ApmID": "[parameters('apmID')]",
    "ApplicationName": "[parameters('applicationName')]",
    "SharedService": "[parameters('sharedService')]",
    "PaaSOnly": "[parameters('paasOnly')]"
  },
  "copy": {
    "name": "customScriptLoop",
    "count": "[parameters('vmInstanceCount')]"
  },
  "dependsOn": [
    "[concat(parameters('VMNamePrefix'), parameters('startingNumeral')[copyindex()])]"
  ],
  "properties": {
    "publisher": "Microsoft.Azure.Extensions",
    "type": "CustomScript",
    "typeHandlerVersion": "2.0",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [
        "[variables('customScriptUri')]"
      ]
    },
    "protectedSettings": {
      "commandToExecute": "[parameters('customScriptCommand')]"
    }
  }
}
The command to execute is pwd, but after about 90 minutes the extension gives up and I see the following in the waagent.log file on Red Hat 7.2:
2018/09/10 13:30:49.361162 INFO [Microsoft.Compute.CustomScriptExtension-1.9.1] Target handler state: enabled
2018/09/10 13:30:49.390061 INFO [Microsoft.Compute.CustomScriptExtension-1.9.1] [Enable] current handler state is: notinstalled
2018/09/10 13:30:49.585331 INFO [Microsoft.Compute.CustomScriptExtension-1.9.1] Initialize extension directory
2018/09/10 13:30:49.615784 INFO [Microsoft.Compute.CustomScriptExtension-1.9.1] Update settings file: 0.settings
2018/09/10 13:30:49.644631 INFO [Microsoft.Compute.CustomScriptExtension-1.9.1] Install extension [install.cmd]
2018/09/10 13:30:50.678474 WARNING [Microsoft.Compute.CustomScriptExtension-1.9.1] [ExtensionError] Non-zero exit code: 127, install.cmd
2018/09/10 13:30:50.713928 INFO [Microsoft.Compute.CustomScriptExtension-1.9.1] Remove extension handler directory: /var/lib/waagent/Microsoft.Compute.CustomScriptExtension-1.9.1
2018/09/10 13:30:50.723392 INFO ExtHandler ProcessGoalState completed [incarnation 4; 1534 ms]
I am not seeing any other logs either. Any idea what could be going wrong? When I manually install the Custom Script Extension from the portal, it works fine.
Thanks,
Pranav
