Re-create OpenStack artifacts from previous command output?

Is there an easy way to convert OpenStack show command output into openstack commands?
The goal is to rebuild an OpenStack environment after a complete wipe.
(For example: openstack network show myNet > out.txt,
then somehow generate the OpenStack CLI command, with the appropriate fields, to re-create the exact same network based on out.txt?)
Thanks!

You can write the output of the show commands as a JSON-formatted string into a file, so you can easily read the information with a Python script and use it to build and execute the commands you need.
To print the output of an openstack command as JSON, add -f json at the end of the command.
Example:
openstack server show cirros -f json
{
"OS-DCF:diskConfig": "MANUAL",
"OS-EXT-AZ:availability_zone": "nova",
"OS-EXT-SRV-ATTR:host": "test-system",
"OS-EXT-SRV-ATTR:hypervisor_hostname": "test-system",
"OS-EXT-SRV-ATTR:instance_name": "instance-00000001",
"OS-EXT-STS:power_state": "Shutdown",
"OS-EXT-STS:task_state": null,
"OS-EXT-STS:vm_state": "stopped",
"OS-SRV-USG:launched_at": "2020-07-22T08:41:06.000000",
"OS-SRV-USG:terminated_at": null,
"accessIPv4": "",
"accessIPv6": "",
"addresses": "test-network=192.168.62.207",
"config_drive": "",
"created": "2020-07-22T08:40:46Z",
"flavor": "f1 (273a2179-ac85-4c54-a40a-2c0121b338ff)",
"id": "6d302fcf-4de3-45a5-93c0-eb95650e5952",
"image": "cirros (86dded1f-8e0f-4342-906e-8ff9fbd854e2)",
"name": "cirros",
"project_id": "cbba4b1f3cb4460ca63e8ddb87c9b5fb",
"properties": "",
"security_groups": "name='default'",
"status": "SHUTOFF",
"updated": "2020-08-17T13:26:55Z",
"user_id": "b6505d6801e84fb98d77d2461f9719c2",
"volumes_attached": ""
}
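Building on that, here is a minimal sketch of turning such a JSON file back into a create command. It assumes the output above was saved with openstack server show cirros -f json > server.json, and that the field formats ("name (uuid)" for image/flavor, "network=ip" for addresses) match your OpenStack release:
import json
import shlex

# Load the saved "openstack server show cirros -f json" output (assumed file name)
with open("server.json") as f:
    info = json.load(f)

# The image/flavor fields look like "name (uuid)"; keep only the name part
image = info["image"].split(" (")[0]
flavor = info["flavor"].split(" (")[0]
# "addresses" looks like "network=ip"; keep only the network name
network = info["addresses"].split("=")[0]

cmd = "openstack server create --image {} --flavor {} --network {} {}".format(
    shlex.quote(image), shlex.quote(flavor), shlex.quote(network), shlex.quote(info["name"]))
print(cmd)  # review the command, then run it (e.g. via subprocess)
The same idea works for networks, subnets, routers and so on; in practice you only map the fields that the corresponding create command accepts, since many show fields (id, created, status, ...) are read-only.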

Related

Loading CLI output to Cisco Genie/pyats parser?

I would like to get some help here with using the Cisco Genie parser. Is it possible to load the output of a CLI command (e.g. "show version") into the Genie parser?
My customer passes me the output of "show version" for each of their devices. I have no SSH access to their devices for security reasons. I am able to extract the output from a Python script.
But how do I load the CLI output into the Genie parser? Usually I do the following, but this is only applicable if I have an SSH connection to the device:
output = device.parse("show version")
So how do I load an output string into the parser and tell it which parser to use? I'm puzzled...
You can take the following example; here the CLI output is for the "show interfaces" command:
from genie.libs.parser.ios.show_interface import ShowInterfaces
parser = ShowInterfaces(device='', context='cli')
parsed_dict = parser.cli(output=str_op)
Here, str_op is the output of the CLI command as a string.
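For completeness, here is a minimal sketch of feeding a saved output file into that parser. The file name show_interfaces.txt is just an assumed example, and you would pick the parser class that matches the command and platform:
from genie.libs.parser.ios.show_interface import ShowInterfaces

# Read the raw CLI output the customer provided (assumed file name)
with open("show_interfaces.txt") as f:
    str_op = f.read()

parser = ShowInterfaces(device='', context='cli')
parsed_dict = parser.cli(output=str_op)
print(parsed_dict)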
If you don't have SSH access, I can recommend the TTP module. After saving the CLI output to a text file, you can write your own template and easily parse the data you want. I have given an example below (for "show users").
Example Code:
from pprint import pprint
from ttp import ttp
import json
import time

with open("showUsers.txt") as f:
    data_to_parse = f.read()

ttp_template = """
<group name="showUsers" method="table">
{{User|re(".?")|re(".*")}} {{Type}} {{Login_Date}} {{Login_Time}} {{Idle_day}} {{Idle_time}} --
{{Session_ID}} {{From}}
</group>
"""

parser = ttp(data=data_to_parse, template=ttp_template)
parser.parse()

# print result in JSON format
results = parser.result(format='json')[0]
print(results)
Example Run:
[
{
"showUsers": [
{
"From": "--",
"Session_ID": "6"
},
{
"Idle_day": "0d",
"Idle_time": "00:00:00",
"Login_Date": "08FEB2022",
"Login_Time": "10:53:29",
"Type": "SSHv2",
"User": "admin"
},
{
"From": "135.244.199.185",
"Session_ID": "132"
},
{
"Idle_day": "0d",
"Idle_time": "00:03:35",
"Login_Date": "09FEB2022",
"Login_Time": "11:32:50",
"Type": "SSHv2",
"User": "admin"
},
{
"From": "10.144.208.82",
"Session_ID": "143"
}
]
}
]
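Note that results is a JSON-formatted string. If you want to work with the data as Python objects rather than just printing it, something like the following should work, assuming the structure shown in the example run above:
import json

# Convert the JSON string returned by parser.result(format='json')[0]
# into ordinary Python lists/dicts
parsed = json.loads(results)
for entry in parsed[0]["showUsers"]:
    print(entry.get("User"), entry.get("Type"), entry.get("From"))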

AWS Serverless how to use "sam local start-api" to debug .net core 3.1 applications

I would like to start a serverless application locally and then debug it using Visual Studio. I see the command line arguments --debug-port, --debugger-path, --debug-args and --debug-function, but no example of how these can be used for .NET Core.
This is what I'm using for Visual Studio Code. I'm on Windows using .NET Core 3.1.
Firstly, I had to download the Linux vsdbg debug files (yes, Linux, as these files will be mounted in the SAM docker container)
https://vsdebugger.azureedge.net/vsdbg-17-0-10712-2/vsdbg-linux-x64.tar.gz
Unzip them into a folder, e.g. C:\vsdbg
I have a task to launch SAM. My tasks.json looks like:
{
  "version": "2.0.0",
  "tasks": [{
    "label": "sam local api",
    "type": "shell",
    "command": "sam",
    "args": [
      "local",
      "start-api",
      "-d", "5858",
      "--template", "${workspaceFolder}/template.yaml",
      "--debugger-path", "C:\\vsdbg",
      "--warm-containers", "EAGER"
    ],
  }]
}
IMPORTANT:
** --debugger-path points to the Linux debug files folder; the SAM CLI will mount the files for you.
** I had to use --warm-containers EAGER to keep the container from closing after every request.
launch.json looks like this:
{
  "name": "sam local api attach",
  "type": "coreclr",
  "processName": "dotnet",
  "request": "attach",
  "pipeTransport": {
    "pipeCwd": "${workspaceFolder}",
    "pipeProgram": "powershell",
    "pipeArgs": [
      "-c",
      "docker exec -i $(docker ps -q -f publish=5858) ${debuggerCommand}"
    ],
    "debuggerPath": "/tmp/lambci_debug_files/vsdbg",
    "quoteArgs": false
  },
  "sourceFileMap": {
    "/var/task": "${workspaceFolder}"
  }
},
This bit, $(docker ps -q -f publish=5858), gets the ID of your Docker container by filtering on the port you're publishing.
This took quite a bit of fiddling to get working; I'm surprised it isn't easier, or at least better documented.

Rsyslog lognormalizer date field parse failure

I am trying to use lognormalizer (from liblognorm) to test my rulebase (.rb) file for use with the rsyslog mmnormalize module. My log file looks like this:
2017-08-19T17:00:12.52Z,john,26,engineer
2017-08-19T17:00:12.59Z,susan,28,doctor
My rb file is as follows:
version=2
rule=:%date:date-rfc3164%,%name:word%,%age:number%,%job:word%
When running lognormalizer:
head -2 /home/debian/olas/test.log | /usr/lib/x86_64-linux-gnu/lognorm/lognormalizer -r /home/debian/olas/rule.rb -e json
I get:
{ "originalmsg": "2017-08-19T17:00:12.52Z,john,26,engineer", "unparsed-data": "2017-08-19T17:00:12.52Z,john,26,engineer" }
{ "originalmsg": "2017-08-19T17:00:13.56Z,susan,28,doctor", "unparsed-data": "2017-08-19T17:00:13.56Z,susan,28,doctor" }
This means the rulebase is not correct. Does anyone know what I am doing wrong? I guess the date field is not correctly configured; should I use some other module? I can't find anything on the web. Thank you.
You can use this rule instead. The date-rfc3164 parser expects a traditional syslog timestamp (e.g. "Aug 19 17:00:12"), which is why it never matches the ISO-style dates in your log; capturing the date up to the first comma works:
version=2
rule=:%date:char-to{"extradata":","}%,%name:char-to{"extradata":","}%,%age:number{"format":"number"}%,%job:rest%
which produces the following output using Lognormalizer (pretty printed):
{
"job": "engineer",
"age": 26,
"name": "john",
"date": "2017-08-19T17:00:12.52Z"
},
{
"job": "doctor",
"age": 28,
"name": "susan",
"date": "2017-08-19T17:00:12.59Z"
}
Test command:
lognormalizer -P -H -r my.rule < mylog.log

Upload Image on TryStack Server using Packer tool

I am trying to create and upload an Ubuntu-based image to the TryStack server using the Packer tool. I am doing this on Windows. I have created a sample template and a script file that sets the environment variables, and the provisioning is done with Chef. But when I run the packer build command I get:
1 error(s) occurred:
* Get /: unsupported protocol scheme ""
What am I missing here?
Here are the template and script files.
template.json
{
  "builders": [
    {
      "type": "openstack",
      "ssh_username": "root",
      "image_name": "sensor-cloud",
      "source_image": "66a14661-2dfb-4370-b6d4-87aaefcffdce",
      "flavor": "3",
      "availability_zone": "nova",
      "security_groups": ["mySecurityGroup"]
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "sensorCloudCookbook.zip",
      "destination": "/tmp/sensorCloudCookbook.zip"
    },
    {
      "type": "shell",
      "inline": [
        "curl -L https://www.opscode.com/chef/install.sh | bash"
      ],
      "execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
    },
    {
      "type": "shell",
      "inline": [
        "unzip /tmp/sensorCloudCookbook.zip -d /tmp/sensorCloudCookbook"
      ],
      "execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
    },
    {
      "type": "shell",
      "inline": [
        "chef-solo -c /tmp/sensorCloudCookbook/solo.rb -l info -L /tmp/sensorCloudLogs.txt"
      ],
      "execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
    }
  ]
}
openstack-config.sh
#!/bin/bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using the 2.0 *Identity API* does not necessarily mean any other
# OpenStack API is version 2.0. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.
export OS_AUTH_URL=http://128.136.179.2:5000/v2.0
# With the addition of Keystone we have standardized on the term **tenant**
# as the entity that owns the resources.
export OS_TENANT_ID=trystack_tenant_id
export OS_TENANT_NAME="trystack_tenant_name"
export OS_PROJECT_NAME="trystack_project_name"
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="same_as_trystack_tenant_name"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
You need to source openstack-config.sh before running packer build so that the OS_* variables (in particular OS_AUTH_URL) are exported in your environment; with OS_AUTH_URL empty, Packer has no endpoint URL to connect to, which is what produces the "unsupported protocol scheme" error.

SublimeREPL Unable to Find R

Okay, this is driving me crazy. I had set this up before, deleted Sublime Text, and now I can't remember what the right configuration was.
Very simple: I'm running R through SublimeREPL and need to point the REPL to where R is installed.
I followed the directions at http://sublimerepl.readthedocs.org/en/latest/, which say to go into the user-defined REPL settings and add this:
{
...
"default_extend_env": {"PATH": "{PATH}:/home/myusername/bin"}
...
}
where the path points to the right directory. I tried replacing it with
{
...
"default_extend_env": {"PATH": "C:/Program Files/R/R-3.0.2/bin"}
...
}
and it's still unable to find R, plus now it's giving me the error:
Error trying to parse settings: Expected value in Packages\User\SublimeREPL.sublime-settings:2:2
I know this is an easy fix. Can anybody point out what I'm doing wrong here?
*I'm using Sublime Text 3. I previously had this working, but on Sublime Text 2.
I've been to http://tomschenkjr.net/using-sublime-text-2-for-r/, and in the piece where he mentions "pointing SublimeREPL at R" he doesn't include the actual code, as far as I can see.
I've also seen the thread "Error 2 The system cannot find the file specified" in Sublime Text 2, Windows 8, but I had it working before and didn't have to do anything along those lines.
Go to Preferences -> Browse Packages... and create a directory tree User/SublimeREPL/config/R. In that directory, create a new file named Main.sublime-menu with the following contents:
[
  {
    "id": "tools",
    "children":
    [{
      "caption": "SublimeREPL",
      "mnemonic": "r",
      "id": "SublimeREPL",
      "children":
      [
        {
          "command": "repl_open",
          "caption": "Rterm",
          "id": "repl_r",
          "mnemonic": "r",
          "args": {
            "type": "subprocess",
            "external_id": "r",
            "additional_scopes": ["tex.latex.knitr"],
            "encoding": {"windows": "$win_cmd_encoding"},
            "soft_quit": "\nquit(save=\"no\")\n",
            "cmd": {"windows": ["C:/Program Files/R/R-3.0.2/bin/x64/Rterm.exe", "--ess", "--encoding=$win_cmd_encoding"]},
            "cwd": "$file_path",
            "extend_env": {"windows": {"PATH": "{PATH}:/C/Program Files/R/R-3.0.2/bin"}},
            "cmd_postfix": "\n",
            "suppress_echo": {"windows": false},
            "syntax": "Packages/R/R.tmLanguage"
          }
        }
      ]
    }]
  }
]
Save the file, and you should now have a Tools -> SublimeREPL -> Rterm menu option. Double-check that the path is the correct one to the Rterm.exe file. On my computer (32-bit XP) it's in the i386 subfolder of bin, so yours may be in bin/x64 or something like that.
I hope this helps, let me know if you still have issues.
I resolved this by adding the location of Rterm.exe to my PATH.
