AWS Amplify API does not exist - aws-amplify

I am following the Automated setup guide at https://docs.amplify.aws/lib/restapi/getting-started/q/platform/js
I have created the Data model using the Amplify UI and ran:
amplify pull --appId XXXX --envName staging
and I got
Successfully generated models. Generated models can be found in /dev/extension/vue-extension/src
Post-pull status:
Current Environment: staging
| Category | Resource name | Operation | Provider plugin |
| -------- | ------------- | --------- | ----------------- |
| Api | my_custom_name | No Change | awscloudformation |
But when I run this code:
import Amplify, { API } from 'aws-amplify';
import awsconfig from '@/aws-exports';
Amplify.configure(awsconfig);
API.get('my_custom_name', '/rankings').then(items => console.log(items)).catch(e => console.log(e));
I get
API my_custom_name does not exist

Your import awsconfig from '@/aws-exports'; looks incorrect.
Try changing it to import awsconfig from './aws-exports';

Related

Can't validate keystone endpoint when trying to define an OpenStack cloud for juju

I am trying to define an OpenStack cloud for juju. To do this, I have first deployed Devstack using the following configuration in the local.conf file:
$ cat local.conf | grep -v "#" | grep -v "^$"
[[local|localrc]]
ADMIN_PASSWORD=admin
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=172.29.21.181
FLOATING_RANGE=172.29.20.1/22
Q_FLOATING_ALLOCATION_POOL=start=172.29.21.182,end=172.29.21.184
PUBLIC_NETWORK_GATEWAY=172.29.21.181
ENABLED_SERVICES+=,tls-proxy
ENABLED_SERVICES+=,g-api,g-reg
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data
After a successful deployment, these are the endpoints:
$ openstack endpoint list
+----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+
| 0b489b8a683d4be489448230437e39ca | RegionOne | cinder | block-storage | True | public | https://172.29.21.181/volume/v3/$(project_id)s |
| 0b9e96cfe0b440b781171ac0b082de3a | RegionOne | keystone | identity | True | admin | https://172.29.21.181/identity |
| 29ce5b2061dd474492f3aebda164acd0 | RegionOne | cinderv2 | volumev2 | True | public | https://172.29.21.181/volume/v2/$(project_id)s |
| 45e10e75eb6848f5a934674373962e11 | RegionOne | glance | image | True | public | https://172.29.21.181/image |
| 8c35460b8c0d4c21ac9b7dd27bc92c48 | RegionOne | keystone | identity | True | public | https://172.29.21.181/identity |
| af451150c3094497936fd6877380d877 | RegionOne | placement | placement | True | public | https://172.29.21.181/placement |
| b3907f627f684ada8526b89c2c9683f9 | RegionOne | neutron | network | True | public | https://172.29.21.181:9696/ |
| c642b07700b54be39e1dd537e8c0f8be | RegionOne | nova | compute | True | public | https://172.29.21.181/compute/v2.1 |
| dbb94215bc89457383a390a0490a89f6 | RegionOne | nova_legacy | compute_legacy | True | public | https://172.29.21.181/compute/v2/$(project_id)s |
| e1037ed336d541b080e365caa0020e78 | RegionOne | cinderv3 | volumev3 | True | public | https://172.29.21.181/volume/v3/$(project_id)s |
+----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+
But when I try to add the cloud to juju using the "juju add-cloud" command (I am following the instructions at this link: https://juju.is/docs/olm/openstack), I get the following error:
$ juju add-cloud openstack
This operation can be applied to both a copy on this client and to the one on a controller.
No current controller was detected and there are no registered controllers on this client: either bootstrap one or register one.
Cloud Types
lxd
maas
manual
openstack
vsphere
Select cloud type: openstack
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: https://172.29.21.181/identity
Can't validate endpoint: No Openstack server running at https://172.29.21.181/identity
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: https://172.29.21.181/identity/v3
Can't validate endpoint: No Openstack server running at https://172.29.21.181/identity/v3
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: http://172.29.21.181/identity
Can't validate endpoint: No Openstack server running at http://172.29.21.181/identity
Enter the API endpoint url for the cloud [https://172.29.21.181/identity]: https://172.29.21.181:5000/v3
Can't validate endpoint: No Openstack server running at https://172.29.21.181:5000/v3
I can curl the url:
$ curl https://172.29.21.181/identity
{"versions": {"values": [{"id": "v3.14", "status": "stable", "updated": "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": "https://172.29.21.181/identity/v3/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}]}}
And I can connect to the port where Keystone is listening:
$ nc -vz 172.29.21.181 5000
Connection to 172.29.21.181 5000 port [tcp/*] succeeded!
I set no_proxy=127.0.0.1,localhost,172.29.21.181 and NO_PROXY=127.0.0.1,localhost,172.29.21.181 as environment variables, because while searching for solutions on the Internet I understood that this might solve my problem. But it didn't work.
Apart from this cloud, I have another one deployed through OpenStack-Ansible. With that cloud I have not encountered this error; the only difference I see is that its URL is https://{HOST_IP}:5000/v3.
If anyone has any ideas, it would be very helpful. Thank you.
I have found a way to bypass this error, though I don't know exactly why it works. I have modified the OS_AUTH_URL environment variable to end in “/v3”:
$ unset OS_AUTH_URL
$ export OS_AUTH_URL=https://172.29.21.181/identity/v3
Now, after accepting it as the suggested value when running “juju add-cloud”, I no longer get the error, and I can run “juju bootstrap”. I guess that when you enter the URL manually, juju validates it and that check fails for some reason in the code. Having skipped that check, the “juju bootstrap” command directly uses the URL ending in “/v3”, which is correct and works.
Now I get the following error:
$ juju bootstrap openstack --verbose
Adding contents of "/opt/stack/.local/share/juju/ssh/juju_id_rsa.pub" to authorized-keys
Creating Juju controller "openstack-regionone" on openstack/RegionOne
Loading image metadata
ERROR failed to bootstrap model: no image metadata found
But I guess I just have to add Swift to my deployment and follow the instructions in this link: https://juju.is/docs/olm/cloud-image-metadata

OpenStack Mistral workflow error while executing using GUI

I am getting an error while executing a simple Mistral workflow in an OpenStack (Wallaby) DevStack environment. I can execute the workflow successfully from the CLI, but it fails if I try the same thing from the GUI.
root@openstack:~# openstack workflow definition show test_get
---
version: '2.0'

test_get:
  description: Test Get.

  tasks:
    my_task:
      action: std.http
      input:
        url: http://www.google.com
root@openstack:~# openstack workflow execution create test_get
+--------------------+--------------------------------------+
| Field | Value |
+--------------------+--------------------------------------+
| ID | 482e3803-45ef-411e-a0f4-1427abfc8649 |
| Workflow ID | 9dc0d4a4-8c5b-4288-8126-e1147da3bd02 |
| Workflow name | test_get |
| Workflow namespace | |
| Description | |
| Task Execution ID | <none> |
| Root Execution ID | <none> |
| State | RUNNING |
| State info | None |
| Created at | 2021-06-21 16:58:54 |
| Updated at | 2021-06-21 16:58:54 |
| Duration | ... |
+--------------------+--------------------------------------+
But when executing it from the GUI I get:
Execution is missing field "workflow_identifier"
I faced the same issue in the Yoga release. I spent a few hours investigating it and found an interesting thing:
/usr/local/lib/python3.8/dist-packages/mistralclient/api/v2/executions.py
class ExecutionManager(base.ResourceManager):
    resource_class = Execution

    def create(self, wf_identifier='', namespace='',
               workflow_input=None, description='', source_execution_id=None,
               **params):
        self._ensure_not_empty(
            workflow_identifier=wf_identifier or source_execution_id
        )
But! In the web form we are passing workflow_identifier instead of wf_identifier:
/usr/local/lib/python3.8/dist-packages/mistraldashboard/workflows/forms.py
def handle(self, request, data):
    try:
        data['workflow_identifier'] = data.pop('workflow_name')
        data['workflow_input'] = {}
        for param in self.workflow_parameters:
            value = data.pop(param)
            if value == "":
                value = None
            data['workflow_input'][param] = value

        ex = api.execution_create(request, **data)
The fix is to rename workflow_identifier to wf_identifier in the form, like this:
data['wf_identifier'] = data.pop('workflow_name')
After that, mistral-dashboard works fine when creating executions.
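For what it's worth, the same mismatch can be reproduced straight from Python with python-mistralclient; a minimal sketch, assuming a reachable Mistral API and Keystone credentials (the connection values below are placeholders, not taken from the posts above):

from mistralclient.api import client as mistral_client

# Placeholder credentials/endpoint -- substitute your own environment's values
mistral = mistral_client.client(username='admin',
                                api_key='secret',
                                project_name='admin',
                                auth_url='http://127.0.0.1:5000/v3')

# Matches ExecutionManager.create(): wf_identifier is filled in, so this works
execution = mistral.executions.create(wf_identifier='test_get')
print(execution.id, execution.state)

# Passing workflow_identifier instead only lands in **params; wf_identifier
# stays empty and _ensure_not_empty() raises -- the same "missing field"
# error the dashboard surfaces
try:
    mistral.executions.create(workflow_identifier='test_get')
except Exception as exc:
    print(exc)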

Kusto function to parse JSON which is a number

I am not able to parse the below JSON value. I tried with parse_json() and todynamic(), but I am getting empty values in the result column.
[screenshot of the JSON payload]
The issue is that your payload includes an internal invalid JSON payload.
It is possible to "fix" it using the query language (see the usages of replace() in the example below); however, it would be best if you could write a valid JSON payload to begin with.
Try running this:
print s = #'{"pipelineId":"63dfc1f6-5a43-5bca-bffe-6a36a435e19d","vmId":"9252382a-814f-4d02-9b1b-305db4caa208/usl-exepipe-dev/westus/usl-exepipe-lab-dev/asuvp306563","artifactResult":{"Id":"execution-job-2","SourceName":"USL Repository","ArtifactName":"install-lcu","Status":"Succeeded","Parameters":null,"Log":"[{\"code\":\"ComponentStatus/StdOut/succeeded\",\"level\":\"Info\",\"displayStatus\":\"Provisioning succeeded\",\"message\":\"2020-06-02T14:33:04.711Z | I | Starting artifact ''install-lcu''\r\n2020-06-02T14:33:04.867Z | I | Starting Installation\r\n2020-06-02T14:33:04.899Z | I | C:\\USL\\LCU\\4556803.msu Exists.\r\n2020-06-02T14:33:04.914Z | I | Starting installation process ''C:\\USL\\LCU\\4556803.msu /quiet /norestart''\r\n2020-06-02T14:43:14.169Z | I | Process completed with exit code ''3010''\r\n2020-06-02T14:43:14.200Z | I | Need to restart computer after hotfix 4556803 installation\r\n2020-06-02T14:43:14.200Z | I | Finished Installation\r\n2020-06-02T14:43:14.200Z | I | Artifact ''install-lcu'' succeeded\r\n\",\"time\":null},{\"code\":\"ComponentStatus/StdErr/succeeded\",\"level\":\"Info\",\"displayStatus\":\"Provisioning succeeded\",\"message\":\"\",\"time\":null}]","DeploymentLog":null,"StartTime":"2020-06-02T14:32:40.9882134Z","ExecutionTime":"00:11:21.2468597","BSODCount":0},"attempt":1,"instanceId":"a301aaa0c2394e76832867bfeec04b5d:0","parentInstanceId":"78d0b036a5c548ecaafc5e47dcc76ee4:2","eventName":"Artifact Result"}'
| mv-expand log = parse_json(replace("\r\n", " ", replace(#"\\", #"\\\\", tostring(parse_json(tostring(parse_json(s).artifactResult)).Log))))
| project log.code, log.level, log.displayStatus, log.message

Route edit, show and delete not working using Laravel 5.7

"Page not found" is all I get when trying to open these pages.
web.php looks like this:
Route::resource('admin/roles', 'RoleController');
route:list looks like this:
| GET|HEAD | admin/roles | index | App\Http\Controllers\RoleController@index
| GET|HEAD | admin/roles/create | create | App\Http\Controllers\RoleController@create
| PUT|PATCH | admin/roles/{} | update | App\Http\Controllers\RoleController@update
| GET|HEAD | admin/roles/{} | show | App\Http\Controllers\RoleController@show
| DELETE | admin/roles/{} | destroy | App\Http\Controllers\RoleController@destroy
| GET|HEAD | admin/roles/{}/edit | edit | App\Http\Controllers\RoleController@edit
The controller looks like this:
public function show($id)
{
    $role = Role::find($id);
    return view('admin.roles/show')->with('role', $role);
}

public function edit($id)
{
    $role = Role::find($id);
    return view('admin.roles.edit')->with('role', $role);
}
You should try navigating your browser to admin/roles/1 instead of admin/roles/show/1. The route you've tried doesn't exist so you're correctly getting a 404 error.
Since the create and index pages work fine, but not show, I think there is something up with the route bindings.
Since the routes in the route:list output show admin/roles/{}, it makes me think Laravel couldn't figure out the bindings.
My best guess is to check the HTTP kernel. You should have the \Illuminate\Routing\Middleware\SubstituteBindings::class middleware either in the $middleware array or inside the web group under the $middlewareGroups array. I suggest putting it in the web middleware group.
eg:
protected $middlewareGroups = [
    //
    'web' => [
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
    ],
    //
];
Another suggestion:
Try defining the routes individually instead of using Route::resource()
Route::get('admin/roles', 'RoleController@index');
Route::get('admin/roles/create', 'RoleController@create');
Route::patch('admin/roles/{role}', 'RoleController@update');
Route::get('admin/roles/{role}', 'RoleController@show');
Route::delete('admin/roles/{role}', 'RoleController@destroy');
Route::get('admin/roles/{role}/edit', 'RoleController@edit');
Note: you may need to add ->name('some-name') to each route to keep the route names that Route::resource() would have generated.

Keystone client project list does not display all the projects

I am trying to list all the Keystone projects present in my setup. The snippet I am using displays only a few of them.
CODE-1:
from keystoneclient.auth.identity import v3
from keystoneclient import session
from keystoneclient.v3 import client as ksclient3
auth_url = "http://192.16.66.10:5000/v3"
token = '0112efcb75e9411b965b423edb321827'
auth = v3.Token(auth_url=auth_url, token=token, unscoped=True)
sess = session.Session(auth=auth)
ks = ksclient3.Client(session=sess);
project_list = [t.name for t in ks.projects.list(user=sess.get_user_id())]
print project_list
OUTPUT
['A', 'B', 'C']
CODE-2:
from keystoneclient import session
from keystoneclient.v3 import client
from keystoneclient.auth.identity import v3
auth = v3.Password(auth_url='http://127.0.0.1:5000/v3',user_id='idm',password='idm',project_id='2545070293684905b9623095768b019d')
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)
keystone.users.list()
OUTPUT
keystoneclient.exceptions.Unauthorized: The request you have made requires authentication. (HTTP 401)
EXPECTED OUTPUT
openstack project list
+----------------------------------+----------------+
| ID | Name |
+----------------------------------+----------------+
| 3efabc809570458180b2e20ce099ef1a | A |
| 546636e4532246f9a440e44deaad82d6 | B |
| 63494b0b0e164e7e82281c94efc709e4 | C |
| 71dbcec67a3e49979a9a9f519409785d | D |
| 8699a715c6834ac1a42350e593879695 | E |
| af88b7d76ab44e13ba73b80b39d2644b | F |
| b431f905a52448298980a0fe0b7751be | G |
| ba3053eb5c534052914f133aa065865d | H |
+----------------------------------+----------------+
Things I want to understand:
Why CODE-1 displays only a few of the projects in the list
Why CODE-2 fails
How to get the Keystone project IDs from the keystone client
Why CODE-1 displays only a few of the projects in the list
Your code filters the tenants. If you would like to see the full tenant list, do not filter them; call it like this:
ks.projects.list()
Your filter user=sess.get_user_id() returns only the tenants associated with the current user.
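As a minimal sketch of the unfiltered call, assuming credentials that are allowed to list every project (typically an admin-scoped account; the values below are placeholders rather than the ones from the question):

from keystoneclient.auth.identity import v3
from keystoneclient import session
from keystoneclient.v3 import client as ksclient3

# Placeholder admin credentials -- substitute your own values
auth = v3.Password(auth_url='http://192.16.66.10:5000/v3',
                   username='admin',
                   password='ADMIN_PASS',
                   user_domain_name='Default',
                   project_name='admin',
                   project_domain_name='Default')
sess = session.Session(auth=auth)
ks = ksclient3.Client(session=sess)

# No user= filter, so every project visible to these credentials is returned
for project in ks.projects.list():
    print(project.id, project.name)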
Why CODE-2 fails
I suppose the error is in the arguments: you pass user_id='idm'. If you are using a user name, the argument should be username='idm'; if you pass user_id, then you need to pass an actual user ID, e.g. user_id='56d88dd0a3ab4c4c8d1d15534352d7de'.
You can take the ID from Horizon:
http://localhost/horizon/identity/users/
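As a sketch of the corrected auth for CODE-2, assuming 'idm' is a user name living in the Default domain (the user_domain_name value is an assumption; adjust it to your deployment), with the rest of CODE-2 unchanged:

from keystoneclient.auth.identity import v3

# username= (plus its domain) instead of user_id=; 'Default' is an assumed domain
auth = v3.Password(auth_url='http://127.0.0.1:5000/v3',
                   username='idm',
                   password='idm',
                   user_domain_name='Default',
                   project_id='2545070293684905b9623095768b019d')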
In the keystoneclient source code there is an example of client creation:
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client
auth = v3.Password(user_domain_name=DOMAIN_NAME,
                   username=USER,
                   password=PASS,
                   project_domain_name=PROJECT_DOMAIN_NAME,
                   project_name=PROJECT_NAME,
                   auth_url=KEYSTONE_URL)
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)
keystone.projects.list()
user = keystone.users.get(USER_ID)
user.delete()
How to get the Keystone project IDs from the keystone client
If you would like to see all tenant IDs (assuming admin credentials):
project_list = [proj.id for proj in ks.projects.list(all_tenants=True)]
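And if you want the same view as the openstack project list table (ID and Name together), a small sketch reusing the same ks client from above:

# Map project names to IDs, mirroring the `openstack project list` output
projects = {proj.name: proj.id for proj in ks.projects.list()}
print(projects)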
