I'm trying to use the Horizon web UI to build an instance from an ISO image, but the CentOS installer shows no disk in its install configuration, even though I chose to create a new volume.
The general flow is as follows:
The test OpenStack environment is built with DevStack on the latest branch. I uploaded a CentOS ISO downloaded from http://mirrors.163.com/centos/7.6.1810/isos/x86_64/ and followed the original workflow provided by Horizon, but it seems to have failed.
The web page sends the following data to the Horizon server:
{
    "availability_zone": "nova",
    "config_drive": false,
    "user_data": "",
    "disk_config": "AUTO",
    "instance_count": 1,
    "name": "vm_boot_from_iso",
    "scheduler_hints": {},
    "security_groups": [
        "09f6bebe-3015-438d-8b4c-2d10b5d5998b"
    ],
    "create_volume_default": true,
    "hide_create_volume": false,
    "source_id": null,
    "block_device_mapping_v2": [
        {
            "source_type": "image",
            "destination_type": "volume",
            "delete_on_termination": true,
            "uuid": "0e49cc7a-593c-4a62-a05f-bdef63157d22",
            "boot_index": "0",
            "volume_size": 10
        }
    ],
    "flavor_id": "d2",
    "nics": [
        {
            "net-id": "f9796914-00b5-4dfe-a7fa-4b2c1641d037",
            "v4-fixed-ip": ""
        }
    ],
    "key_name": null
}
I was expecting that during the installation I would find the disk provided by the workflow.
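For comparison, here is a rough sketch of an equivalent nova CLI call that boots from the ISO image and also attaches a blank volume to act as the installation target (the flag syntax is from the nova CLI block-device-mapping-v2 support; the second --block-device line is my assumption about what this workflow is missing, so verify against your client version):

nova boot vm_boot_from_iso \
  --flavor d2 \
  --nic net-id=f9796914-00b5-4dfe-a7fa-4b2c1641d037 \
  --block-device source=image,id=0e49cc7a-593c-4a62-a05f-bdef63157d22,dest=volume,size=10,bootindex=0,shutdown=remove \
  --block-device source=blank,dest=volume,size=10,shutdown=remove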
I am trying to download grafana packages from GitHub releases through our JFrog Artifactory instance.
The GitHub URL is https://github.com/grafana/k6/releases/download/v0.42.0/k6-v0.42.0-linux-amd64.tar.gz
In order to achieve this, I created a generic-type remote repository (named grafana-generic) in JFrog Artifactory, pointing the remote URL to https://github.com.
I referred to this solution from Stack Overflow, but it didn't help.
The URL I used to try to download the package is below:
https://myrepo/artifactory/grafana-generic/grafana/k6/releases/download/v0.42.0/k6-v0.42.0-linux-amd64.tar.gz
The error I get is below:
{
    "errors": [
        {
            "status": 404,
            "message": "Item grafana-generic-cache:grafana/k6/releases/download/v0.42.0/k6-v0.42.0-linux-amd64.tar.gz does not exist"
        }
    ]
}
If you want to fetch a VCS release, you can use the Download a VCS Release REST API command.
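A sketch of what that call could look like, assuming a VCS-type remote repository named grafana-vcs that points at https://github.com (the repository name is hypothetical, and the endpoint path is recalled from the Artifactory REST API documentation, so verify it against your Artifactory version):

curl -u myuser:mypassword \
  "https://myrepo/artifactory/api/vcs/downloadRelease/grafana-vcs/grafana/k6/v0.42.0" \
  -o k6-v0.42.0.tar.gz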
I was able to build a CorDapp using Accounts by following the steps at https://github.com/corda/accounts.
This CorDapp was building and running until 03/16/2020, but since 03/20/2020 I am seeing errors in my CI builds because https://ci-artifactory.corda.r3cev.com/artifactory/corda-lib-dev/com/r3/corda/lib/accounts/accounts-contracts/1.0-RC04/accounts-contracts-1.0-RC04.jar cannot be accessed. I now get a 409 response; how can I resolve this?
{
    "errors": [
        {
            "status": 409,
            "message": "The repository 'corda-lib-dev' rejected the resolution of an artifact 'corda-lib-dev:com/r3/corda/lib/accounts/accounts-contracts/1.0-RC04/accounts-contracts-1.0-RC04.jar' due to conflict in the snapshot release handling policy."
        }
    ]
}
My build.gradle has
accounts_release_version = '1.0-RC04'
accounts_release_group = 'com.r3.corda.lib.accounts'
confidential_id_release_group = "com.r3.corda.lib.ci"
confidential_id_release_version = "1.0-RC03"
repositories {
    maven { url 'http://ci-artifactory.corda.r3cev.com/artifactory/corda-lib-dev' }
    maven { url 'http://ci-artifactory.corda.r3cev.com/artifactory/corda-lib' }
}
My local builds on my development environment work fine, I assume because I already have the jars in my .m2 cache.
The Artifactory configuration has been changed to enforce separation between release and snapshot repositories. corda-lib-dev is a snapshot repository, and CorDapp developers should not be developing against it.
Releases and release candidates will be available in corda-lib going forward.
Kindly use corda-lib, and develop against release 1.0.
The POM file for RC04 is set to return 409, which I assume is Corda's way of disallowing the use of RC04. Maybe RC03 is the same. I just tried this today and saw the repository POM files.
Use "1.0" instead of "1.0-RC03".
In our organization we are running Artifactory Pro edition with daily exports of data to a NAS drive (full system export). Every night the export runs for around 4 hours and reports that the "system export was successful". The time has come to migrate our instance to a PostgreSQL-backed setup (it is running on Derby now). I have read that you need to do this with a full system import.
A few numbers:
Artifacts: almost 1 million
Data size: over 2TB of data
Export data volume: over 5TB of data
If you were also pondering why the export data volume is more than twice the disk space usage: our guess is that Docker images are deduplicated (per layer) when stored in the Docker registry, but on export that deduplication is lost.
Also, I previously had success migrating the instance by rsync'ing the data over to another server and then starting exactly the same setup there; that worked just fine.
But when starting exactly the same setup on another machine (clean install) and running a system import, it fails with the following log:
[/data/artifactory/logs/artifactory.log] - "errors" : [ {
[/data/artifactory/logs/artifactory.log] - "code" : "INTERNAL_SERVER_ERROR",
[/data/artifactory/logs/artifactory.log] - "message" : "Unable to import access server",
[/data/artifactory/logs/artifactory.log] - "detail" : "File '/root/.jfrog-access/etc/access.bootstrap.json' does not exist"
[/data/artifactory/logs/artifactory.log] - } ]
[/data/artifactory/logs/artifactory.log] - }
Full log is here: https://pastebin.com/ANZBiwHC
The /root/.jfrog-access directory is the Access home directory (Access uses Derby as well).
What am I missing here?
There are a couple of things we were doing wrong, according to the Artifactory documentation:
Export is not a proper way to back up a big instance. When running Artifactory with Derby, it is sufficient to rsync the filestore and derby directories to the NAS (see the sketch below).
Incremental export across several versions of Artifactory is NOT supported. That is, if you took a full export on version 4.x.x, then upgraded to version 5.x.x and later to version 6.x.x with incremental exports along the way, that export will NOT import into version 6.x.x. After each version upgrade you must create a new full export of the instance.
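A minimal sketch of that rsync-based backup, assuming the data layout implied by the log above (/data/artifactory); adjust the paths to your installation and stop Artifactory first for a consistent copy:

systemctl stop artifactory
rsync -aH --delete /data/artifactory/data/filestore/ /mnt/nas/artifactory-backup/filestore/
rsync -aH --delete /data/artifactory/data/derby/ /mnt/nas/artifactory-backup/derby/
systemctl start artifactory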
I resolved the situation by removing the incremental exports and doing a new full system export (around 30 hours). That full system export was then successfully imported on another instance (around 12 hours).
P.S. The error itself is still cryptic to me.
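For reference, a hedged sketch of triggering the full system import through the REST API (the endpoint is recalled from the Artifactory REST documentation; verify the exact payload fields for your version):

curl -u admin:password -X POST "http://localhost:8081/artifactory/api/import/system" \
  -H "Content-Type: application/json" \
  -d '{"importPath": "/mnt/nas/full-export", "includeMetadata": true, "verbose": true}'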
Can someone please provide me with a sample Packer template for creating an OpenStack image? I have this one:
{
"builders": [{
"type": "openstack",
"ssh_username": "ubuntu",
"tenant_name": "mytenant",
"flavor": "m1.tiny",
"identity_endpoint": "http://1.2.3.4:5000/",
"availability_zone": "az1",
"source_image": "Ubuntu 16.04 With Proxy",
"image_name": "Ubuntu 16.04 With Proxy and Python"
}],
"provisioners": [
{
"type": "shell",
"script": "python.sh"
}
]
}
but OpenStack always returns:
==> openstack: Error launching source server: Invalid request due to incorrect syntax or missing required parameters.
I have no idea what I am missing.
Of course, I have the correct OS_* environment variables preset for my Nova API.
You have to use source_image_name, or use the ID to reference the image.
From the docs:
source_image (string) - The ID or full URL to the base image to use. This is the image that will be used to launch a new server and provision it. Unless you specify completely custom SSH settings, the source image must have cloud-init installed so that the keypair gets assigned properly.
source_image_name (string) - The name of the base image to use. This is an alternative way of providing source_image and only either of them can be specified.
See source_image
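Applying that to the template above, the builder block would look roughly like this (same values as in the question; only the image key changes):

{
  "builders": [{
    "type": "openstack",
    "ssh_username": "ubuntu",
    "tenant_name": "mytenant",
    "flavor": "m1.tiny",
    "identity_endpoint": "http://1.2.3.4:5000/",
    "availability_zone": "az1",
    "source_image_name": "Ubuntu 16.04 With Proxy",
    "image_name": "Ubuntu 16.04 With Proxy and Python"
  }],
  "provisioners": [
    {
      "type": "shell",
      "script": "python.sh"
    }
  ]
}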
As part of our Chef infrastructure I'm trying to set up and configure a berks-api server. I have created an Ubuntu server in Azure, bootstrapped it, and it appears as a node in my Chef server.
I have followed the instructions at github - berkshelf-api installation to install the berks-api via a cookbook. I have run
sudo chef-client
on my node and the cookbook appears to have run successfully.
The problem is that the berks-api doesn't appear to run. My Linux terminology isn't great, so sorry if I'm making mistakes in what I say, but it appears the berks-api service isn't able to run. If I navigate to /etc/service/berks-api and run this command
sudo berks-api
I get this error
I, [2015-07-23T11:56:37.490075 #16643] INFO -- : Cache manager starting...
I, [2015-07-23T11:56:37.491006 #16643] INFO -- : Cache Builder starting...
E, [2015-07-23T11:56:37.493137 #16643] ERROR -- : Actor crashed!
Errno::EACCES: Permission denied @ rb_sysopen - /etc/chef/client.pem
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `read'
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `initialize'
If anyone could help me figure out what is going on, I'd really appreciate it. If you need to explain the setup any more let me know.
It turns out I misunderstood the configuration of the berks-api. I needed to get a new private key for my client (berkshelf) from manage.chef.io for our organization. I then needed to upload the new key (berkshelf.pem) to /etc/berkshelf/api-server and reconfigure the berks-api to use the new key. So my config for the berks-api now looks like the below:
{
    "home_path": "/etc/berkshelf/api-server",
    "endpoints": [
        {
            "type": "chef_server",
            "options": {
                "url": "https://api.opscode.com/organizations/my-organization",
                "client_key": "/etc/berkshelf/api-server/berkshelf.pem",
                "client_name": "berkshelf"
            }
        }
    ],
    "build_interval": 5.0
}
I couldn't upload berkshelf.pem directly to the target location; I had to upload it to my home directory first, then copy it into place from within Linux.
Having done this, the service starts and works perfectly.
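For anyone following along, that two-step copy looked roughly like this (the hostname and user are illustrative, and the chown target depends on which user your berks-api service actually runs as):

scp berkshelf.pem azureuser@my-berks-api-server:~/
ssh azureuser@my-berks-api-server
sudo cp ~/berkshelf.pem /etc/berkshelf/api-server/berkshelf.pem
sudo chown berks-api: /etc/berkshelf/api-server/berkshelf.pem   # assumption: the service runs as a 'berks-api' user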