Upload Image on TryStack Server using Packer tool - openstack

I am trying to create and upload an Ubuntu-based image to the TryStack server using the Packer tool. I am using Windows to do it. I have created a sample template plus a script file that sets environment variables, and the image is provisioned using Chef. But when I run the packer build command I get:
1 error(s) occurred:
* Get /: unsupported protocol scheme ""
What am I missing here?
Here are the template and script files.
template.json
{
  "builders": [
    {
      "type": "openstack",
      "ssh_username": "root",
      "image_name": "sensor-cloud",
      "source_image": "66a14661-2dfb-4370-b6d4-87aaefcffdce",
      "flavor": "3",
      "availability_zone": "nova",
      "security_groups": ["mySecurityGroup"]
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "sensorCloudCookbook.zip",
      "destination": "/tmp/sensorCloudCookbook.zip"
    },
    {
      "type": "shell",
      "inline": [
        "curl -L https://www.opscode.com/chef/install.sh | bash"
      ],
      "execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
    },
    {
      "type": "shell",
      "inline": [
        "unzip /tmp/sensorCloudCookbook.zip -d /tmp/sensorCloudCookbook"
      ],
      "execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
    },
    {
      "type": "shell",
      "inline": [
        "chef-solo -c /tmp/sensorCloudCookbook/solo.rb -l info -L /tmp/sensorCloudLogs.txt"
      ],
      "execute_command": "chmod +x {{ .Path }}; sudo -E {{ .Path }}"
    }
  ]
}
openstack-config.sh
#!/bin/bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using the 2.0 *Identity API* does not necessarily mean any other
# OpenStack API is version 2.0. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.
export OS_AUTH_URL=http://128.136.179.2:5000/v2.0
# With the addition of Keystone we have standardized on the term **tenant**
# as the entity that owns the resources.
export OS_TENANT_ID=trystack_tenant_id
export OS_TENANT_NAME="trystack_tenant_name"
export OS_PROJECT_NAME="trystack_project_name"
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="same_as_trystack_tenant_name"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi

You need to source openstack-config.sh before running packer build: the OpenStack builder reads OS_AUTH_URL and the other OS_* variables from the environment, and an unset OS_AUTH_URL is exactly what produces the Get /: unsupported protocol scheme "" error.
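A minimal sketch of the full sequence (on Windows, run both steps from a bash-capable shell such as Git Bash so the exported OS_* variables are visible to Packer; packer validate is optional but catches template errors early):

source openstack-config.sh
packer validate template.json
packer build template.json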


How do I get just the STDOUT of a salt state?

I'm learning SaltStack right now and I was wondering if there is a way to get the stdout of a salt state, put it into a document, and then send it to the master. Or is there a better way to do this?
To achieve this, we'll have to save the result of the script execution in a variable. It will contain a hash whose keys are the ones showing up under changes:. The stdout entry of that variable can then be written to a file.
{% set script_res = salt['cmd.script']('salt://test.sh') %}
create-stdout-file:
  file.managed:
    - name: /tmp/script-stdout.txt
    - contents: {{ script_res.stdout }}
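Applying that state and reading the file back might look like this (a sketch; the sls name stdout-capture and the minion target are assumptions):

# Apply the state that runs the script and writes its stdout to a file
sudo salt 'salt00*' state.apply stdout-capture
# Read the captured stdout back from the minion
sudo salt 'salt00*' cmd.run 'cat /tmp/script-stdout.txt'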
The output is already going to the master. It would be better to actually output JSON and query down to the data you want in your document on the master, such as the following.
Normal output
$ sudo salt salt00\* state.apply tests.test3
salt00.wolfnet.bad4.us:
----------
          ID: test_run
    Function: cmd.run
        Name: echo test
      Result: True
     Comment: Command "echo test" run
     Started: 10:39:51.103057
    Duration: 18.281 ms
     Changes:
              ----------
              pid:
                  8661
              retcode:
                  0
              stderr:
              stdout:
                  test

Summary for salt00.wolfnet.bad4.us
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:  18.281 ms
JSON output
$ sudo salt salt00\* state.apply tests.test3 --out json
{
    "salt00.wolfnet.bad4.us": {
        "cmd_|-test_run_|-echo test_|-run": {
            "name": "echo test",
            "changes": {
                "pid": 9057,
                "retcode": 0,
                "stdout": "test",
                "stderr": ""
            },
            "result": true,
            "comment": "Command \"echo test\" run",
            "__sls__": "tests.test3",
            "__run_num__": 0,
            "start_time": "10:40:55.582273",
            "duration": 19.374,
            "__id__": "test_run"
        }
    }
}
JSON parsed down with jq to just the stdout
$ sudo salt salt00\* state.apply tests.test3 --out=json | jq '.|.[]|."cmd_|-test_run_|-echo test_|-run"|.changes.stdout'
"test"
Also, for the record, it is considered bad practice to put code that changes the system into Jinja. Jinja always runs when a template is rendered, and there is no way to control whether that happens, so even a test=True run will still execute the Jinja code that makes changes, which could be very harmful to your systems.

AWS Serverless how to use "sam local start-api" to debug .net core 3.1 applications

I would like to start a serverless application locally and then debug it using Visual Studio. I see the command line arguments --debug-port, --debugger-path, --debug-args and --debug-function, but no example of how these can be used for .NET Core.
This is what I'm using for Visual Studio Code. I'm on Windows using .NET Core 3.1.
Firstly, I had to download the Linux vsdbg debug files (yes, Linux, as these files will be mounted in the SAM docker container)
https://vsdebugger.azureedge.net/vsdbg-17-0-10712-2/vsdbg-linux-x64.tar.gz
Extract them into a folder, e.g. C:\vsdbg
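For reference, from a bash-capable shell the download and extraction might look like this (a sketch; the target folder is an assumption and should match whatever you pass to --debugger-path):

mkdir -p /c/vsdbg   # /c/vsdbg is C:\vsdbg under Git Bash; adjust to taste
curl -L https://vsdebugger.azureedge.net/vsdbg-17-0-10712-2/vsdbg-linux-x64.tar.gz | tar -xz -C /c/vsdbg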
I have a task to launch SAM. My tasks.json looks like:
{
    "version": "2.0.0",
    "tasks": [{
        "label": "sam local api",
        "type": "shell",
        "command": "sam",
        "args": [
            "local",
            "start-api",
            "-d", "5858",
            "--template", "${workspaceFolder}/template.yaml",
            "--debugger-path", "C:\\vsdbg",
            "--warm-containers", "EAGER"
        ]
    }]
}
IMPORTANT:
** --debugger-path points to the Linux debug files folder; the SAM CLI will mount the files into the container for you.
** I had to use --warm-containers EAGER to keep the container from closing after every request.
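For reference, the task above is equivalent to running the following from a terminal in the project root (a sketch; flags mirror the task definition):

sam local start-api -d 5858 --template template.yaml --debugger-path C:\vsdbg --warm-containers EAGER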
launch.json looks like this:
{
    "name": "sam local api attach",
    "type": "coreclr",
    "processName": "dotnet",
    "request": "attach",
    "pipeTransport": {
        "pipeCwd": "${workspaceFolder}",
        "pipeProgram": "powershell",
        "pipeArgs": [
            "-c",
            "docker exec -i $(docker ps -q -f publish=5858) ${debuggerCommand}"
        ],
        "debuggerPath": "/tmp/lambci_debug_files/vsdbg",
        "quoteArgs": false
    },
    "sourceFileMap": {
        "/var/task": "${workspaceFolder}"
    }
},
This bit, $(docker ps -q -f publish=5858), gets the id of your docker container by filtering on the port that you're using.
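While sam is running, you can sanity-check that this filter resolves to exactly one container id:

docker ps -q -f publish=5858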
This took quite a bit of fiddling to get working; I'm surprised it isn't easier, or at least better documented.

Testing Cypress with Browserstack does not work: "Malformed Archive"

I am trying to get a Cypress example test running in BrowserStack. I am following this tutorial: Run your Cypress tests
However, when it comes to running browserstack-cypress run, I'm getting the following output:
[2020-12-4 17:00:12] - info: Reading config from /home/dennis/Repos/CMS/browserstack.json
[2020-12-4 17:00:12] - info: Reading username from the environment variable BROWSERSTACK_USERNAME
[2020-12-4 17:00:12] - info: Reading access key from the environment variable BROWSERSTACK_ACCESS_KEY
[2020-12-4 17:00:12] - info: browserstack.json file is validated
[2020-12-4 17:00:46] - error: Malformed archive
[2020-12-4 17:00:46] - error: Zip Upload failed.
[2020-12-4 17:00:46] - info: Zip file deleted successfully.
This is what my browserstack.json looks like:
{
  "auth": {
    "username": "<user name>",
    "access_key": "<access key>"
  },
  "browsers": [
    {
      "browser": "chrome",
      "os": "Windows 10",
      "versions": [
        "latest",
        "latest-1"
      ]
    }
  ],
  "run_settings": {
    "cypress_config_file": "./cypress.json",
    "project_name": "<project name>",
    "build_name": "",
    "parallels": "10",
    "npm_dependencies": {},
    "package_config_options": {}
  },
  "connection_settings": {
    "local": false,
    "local_identifier": null
  },
  "disable_usage_reporting": false
}
The cypress.json file is empty:
{}
What I'm also not getting is where I define which tests I want to run and where they are located.
I appreciate any help! Thanks!
I've come across the "Malformed archive" error when the runner compresses the entire project and uploads it, instead of just the Cypress test files.
You should be able to fix this by moving the Cypress test files into a subfolder:
test
├── cypress.json
├── browserstack.json
└── cypress
    ├── fixtures
    ├── integration
    ├── support
    └── plugins
Then set the path to cypress.json in browserstack.json (the cypress_config_file setting).
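For example, if cypress.json sits next to browserstack.json as in the tree above, the relevant setting and the run might look like this (a sketch):

# in browserstack.json: "cypress_config_file": "./cypress.json"
cd test
browserstack-cypress run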
Refer: https://www.browserstack.com/docs/automate/cypress/sample-tutorial

Running composer dump-env prod through ansible composer module

I am unable to run the Symfony Flex command composer dump-env prod using the ansible composer module. I wonder if it's even possible? My task looks something like this:
- name: Composer dump env for production
  composer:
    command: dump-env
    working_dir: "{{ app_composer_package_dir }}"
    arguments: prod
  become_user: "{{app_apache_user}}"
  become: yes
The error I get is:
"stderr": "\n
\n [Symfony\Component\Console\Exception\CommandNotFoundException]
\n There are no commands defined \"dump-env\".
\n
ansible verbose logs:
fatal: [testhost.com]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "apcu_autoloader": false,
            "arguments": "prod",
            "classmap_authoritative": false,
            "command": "dump-env",
            "executable": null,
            "global_command": false,
            "ignore_platform_reqs": false,
            "no_dev": true,
            "no_plugins": false,
            "no_scripts": false,
            "optimize_autoloader": true,
            "prefer_dist": false,
            "prefer_source": false,
            "working_dir": "/var/www/source"
        }
    },
    "msg": "[Symfony\\Component\\Console\\Exception\\CommandNotFoundException] Command \"dump-env\" is not defined. help [--xml] [--format FORMAT] [--raw] [--] [<command_name>]"
}
I tried the ansible command module to run the command directly, but I got the same error.
However, I am able to run the command by sshing to the remote (CentOS) instance:
sudo -u apache composer dump-env prod
Restricting packages listed in "symfony/symfony" to "4.3.*"
Successfully dumped .env files in .env.local.php
So far I am unable to run the composer dump-env prod command using the ansible composer module. However, the following task using the ansible command module runs successfully:
- name: Composer dump env for production
  command: "{{composer_install_path}} --working-dir={{ app_composer_package_dir }} dump-env prod"
  become_user: "{{app_apache_user}}"
  become: yes
which translates to something like:
sudo -u apache /usr/local/bin/composer --working-dir=/var/www/source dump-env prod
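For a one-off check of the same invocation outside the playbook, an ad-hoc call might look like this (a sketch; the host pattern and paths are taken from the logs above):

ansible testhost.com -b --become-user=apache -m command -a "/usr/local/bin/composer --working-dir=/var/www/source dump-env prod"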

JFrog Artifactory CLI

I am trying to pass two ${key} values in a file spec. Is there any way I can pass these two ${key} values through a JFrog CLI command?
E.g., I have tried the following command
sh "./jfrog rt s --spec compare.spec --spec-vars currentBuild=${currentBuild.number};previousBuild=${currentBuild.previousBuild.number}"
But it is displaying output only for one value.
The command is missing quotes surrounding the spec-vars value. So, for example, with a spec file like:
{
  "files": [
    {
      "pattern": "${pat}/",
      "target": "${tgt}/"
    }
  ]
}
I need to run the command as
jfrog rt dl --spec otherspec --spec-vars "pat=generic-local;tgt=local"
This makes sure that I download the files from the "generic-local" repository to a folder called "local".
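Applied to the original pipeline step, the same fix would be to escape quotes around the whole spec-vars value, so the shell doesn't treat the semicolon as a command separator (a sketch):

sh "./jfrog rt s --spec compare.spec --spec-vars \"currentBuild=${currentBuild.number};previousBuild=${currentBuild.previousBuild.number}\""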
If you execute the command with JFROG_CLI_LOG_LEVEL=DEBUG, the output will display both the spec file you provided and the resolved spec:
$ JFROG_CLI_LOG_LEVEL=DEBUG jfrog rt dl --spec otherspec --spec-vars "pat=generic-local;tgt=local"
[Debug] Replacing variables in the provided File Spec:
{
  "files": [
    {
      "pattern": "${pat}/",
      "target": "${tgt}/"
    }
  ]
}
[Debug] Replacing '${pat}' with 'generic-local'
[Debug] Replacing '${tgt}' with 'local'
[Debug] The reformatted File Spec is:
{
  "files": [
    {
      "pattern": "generic-local/",
      "target": "local/"
    }
  ]
}
[Info] Searching items to download...
