I created a Fedora instance in Horizon and supplied a public key, but I did not get any user name or password to SSH into the instance. I also tried to create an instance from the shell by running this:
nova boot --config-drive=true --flavor 3 --key-name testkey --image be1437b9-b7b4-4e56-a2c3-f92cdd0848ce --user-data cloud-config.txt test
The instance launched successfully in both cases, but when I try to log in as root it asks me for a password.
So please tell me the exact way to create a Fedora instance in OpenStack, and what the user name and password for SSH would be.
Just to confirm: I suppose that you have the corresponding .pem file for the key name that you created (testkey) and that this file has the appropriate permissions to be used for SSH access, i.e. chmod 600 on the .pem file.
If this is the case, you should be able to get into the instance just by executing the following:
ssh -i testkey.pem root@<IP address>
Have you installed the cloud-init package from the EPEL repository?
If so, you can get into the server using the 'fedora' or 'cloud-user' user account.
http://docs.openstack.org/image-guide/content/ch_obtaining_images.html
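For reference, a minimal cloud-config file passed via --user-data could look like the sketch below; the contents (key material and layout) are an illustrative assumption, not the cloud-config.txt from the question:
#cloud-config
# Hypothetical example: authorize a public key for the image's default user
# (e.g. 'fedora' or 'cloud-user'), so no password login is needed.
ssh_authorized_keys:
  - ssh-rsa AAAA...your-public-key... you@workstation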
Leaving the cloud-init option out of nova boot, I have also tried this one:
nova boot --flavor 3 --key-name testkey --image be1437b9-b7b4-4e56-a2c3-f92cdd0848ce test
With this command the instance launches successfully, but I still can't SSH into the instance.
Whereas when I create an instance from Horizon, I can SSH into it easily.
For the first login it is recommended that you generate a key pair (in Ubuntu: https://help.ubuntu.com/community/SSH/OpenSSH/Keys), inject it into the image (http://docs.openstack.org/grizzly/basic-install/yum/content/basic-install_operate.html), and SSH into the instance using that key pair. Once you are logged in, you can create a user and then log in through the VNC console with that user.
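A minimal sketch of those steps from the client side, using the legacy nova client syntax already used in this thread (the image ID and instance IP are placeholders):
ssh-keygen -t rsa -f testkey                      # creates testkey (private) and testkey.pub (public)
nova keypair-add --pub-key testkey.pub testkey    # register the public key with Nova
nova boot --flavor 3 --key-name testkey --image <image-id> test
ssh -i testkey fedora@<instance-IP>               # or cloud-user, depending on the image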
I installed and configured Octavia for OpenStack load balancing, but when I try to create a new load balancer using openstack loadbalancer create --name lb1 --vip-subnet-id subnet-pub, the Octavia worker log says: ERROR octavia.controller.worker.v1.controller_worker octavia.common.exceptions.ComputeBuildException: Failed to build compute instance due to: Failed to retrieve image with amphora tag.
Why? (I am using Ubuntu.)
Another question: I installed Octavia on the controller node. Must I also install anything on the compute node(s)?
I had a similar problem, and adding --project service when uploading the image solved it:
$ openstack image create amphora-x64-haproxy.qcow2 --container-format bare --disk-format qcow2 --private --tag amphora --file amphora-x64-haproxy.qcow2 --property hw_architecture='x86_64' --property hw_rng_model=virtio --project service
About the second question: nothing needs to be installed on the compute nodes, only network access to lb-mgmt-net from the controllers.
This link helped me.
Set the image's tag to "amphora":
openstack image set --tag "amphora" image_name
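To verify the tag afterwards, a hedged check (recent python-openstackclient versions support showing tags and filtering image lists by tag):
openstack image show image_name -c tags
openstack image list --tag amphora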
I'm working with the docker image "mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019". I've noticed that the default user for windowsservercore is ContainerAdministrator. If I try to run the image with the user ContainerUser (docker run -u ContainerUser mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019) I get the following error: ERROR: Failed to stop or query status of service 'w3svc' error [80070005].
I think the error is related to the permissions the user needs to run ServiceMonitor. So, first of all, is it correct to assume that windowsservercore images must run as ContainerAdministrator and cannot run as ContainerUser?
If that assumption is correct, I would like to confirm whether running the container as ContainerAdministrator exposes it to a security issue. As far as I understand, even if ServiceMonitor.exe is started as ContainerAdministrator, the external-facing process is the IIS Windows service, which runs under a local account in the IIS_IUSRS group. So even if an attacker compromised the application, they would not have administrator access to the container. Can anyone confirm whether this is correct?
ContainerAdministrator is a special virtual account. It is the default account when you run a container, so if your CMD instruction starts a console app, that app will run as ContainerAdministrator. If your app runs in the background as a Windows service, then the account will be the service account; ASP.NET apps, for example, run under application pool accounts.
You could refer to the link below:
Accessing the Docker host pipe inside windows container with non-admin user
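As a quick way to see which account a command runs under in that image (a simple check of my own, not part of the linked answer; the entrypoint is overridden so it works regardless of the image's default command):
docker run --rm --entrypoint cmd mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019 /c whoami
docker run --rm -u ContainerUser --entrypoint cmd mcr.microsoft.com/dotnet/framework/runtime:4.8-windowsservercore-ltsc2019 /c whoami
The first should report ContainerAdministrator and the second ContainerUser, which suggests the failure above comes from ServiceMonitor needing permissions on w3svc rather than from -u itself.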
I was in the same position you are. I can't confirm your assumption (though I assume the same), but I can provide our Dockerfile, which enabled us to run as non-root (to comply with an AKS policy).
dockerfile
FROM mcr.microsoft.com/dotnet/framework/wcf:4.8-windowsservercore-ltsc2019
SHELL ["cmd", "/S", "/C"]
# username = '1000' so the k8s policy can verify it's a non-root user
RUN net user /add 1000
# We copy some config files in the actual startup.ps1 so we need write access here
RUN icacls C:\inetpub\wwwroot\ /grant 1000:(OI)(CI)F /t
# ServiceMonitor.exe puts some environment variables in the applicationHost.config
RUN icacls C:\Windows\System32\inetsrv\Config\ /grant 1000:(OI)(CI)F /t /c
# S-1-5-32-545 is group Builtin\Users which contains user 1000. Allows user to restart the w3svc service
RUN sc.exe sdset w3svc D:(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCLCSWLOCRRC;;;IU)(A;;CCLCSWLOCRRC;;;SU)(A;;RPWPDTLO;;;S-1-5-32-545)
COPY startup.ps1 /
WORKDIR /inetpub/wwwroot
ARG source=obj/Docker/publish
COPY ${source} .
USER 1000
ENTRYPOINT ["powershell", "/startup.ps1"]
startup.ps1
# ContainerAdministrator doesn't have these variables, but the
# custom account does. If they get put into the applicationHost.config,
# IIS will try to write to the user-specific temp directory and fail (with an unrelated error).
Remove-Item env:TMP
Remove-Item env:TEMP
C:/ServiceMonitor.exe w3svc
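A possible way to build and sanity-check this (the image name and the whoami check are illustrative, not part of the original answer):
docker build -t wcf-nonroot .
docker run -d --name wcf-test wcf-nonroot
docker exec wcf-test whoami    # expect the custom '1000' account rather than ContainerAdministrator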
I created a Kaa Sandbox instance on an AWS Linux host and am running into a few issues:
I am still not able to see the management button on the Kaa Sandbox console.
I am not able to connect to the AWS host using SSH. I followed all the required steps to connect to the AWS Linux host, but had no luck connecting.
My problem is that I would like to change the host IP in the Sandbox settings to my AWS Linux host IP, so that my endpoint devices can connect to the host.
I am still struggling with the above points. Please advise.
Regards,
Prasad
That seems to be an issue with the Kaa 0.10.0 Sandbox for AWS. We created a bug to track this.
For now, you can use the following workaround:
echo "sudo sed -Ei 's/(gui_change_host_enabled=).*$/\1true/'" \
"/usr/lib/kaa-sandbox/conf/sandbox-server.properties;" \
"sudo service kaa-sandbox restart" | \
ssh -i <your-private-aws-instance-key.pem> ubuntu@<your-aws-instance-host>
Note: this is a multi-line single command that works correctly in bash (it should also work in sh and other shells, but that has not been tested).
Note 2: don't forget to replace <your-private-aws-instance-key.pem> and <your-aws-instance-host> with the respective key file name and host name/IP address.
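To confirm the property was actually changed, a simple follow-up check (same placeholders as above) should now print gui_change_host_enabled=true:
ssh -i <your-private-aws-instance-key.pem> ubuntu@<your-aws-instance-host> \
    "grep gui_change_host_enabled /usr/lib/kaa-sandbox/conf/sandbox-server.properties"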
I am new to salt-ssh and have gotten it to work successfully for setting up a remote system. However, I have a login issue that I don't know how to address: when I run salt-ssh commands I have to fight with the initial login process before it eventually just works. I am looking to narrow down what is causing this.
I am using OS X to run my salt-ssh commands against an Ubuntu Vagrant VM.
I have added my root user's SSH key to the root user's authorized_keys on the Vagrant VM and have verified that I can log into the system via SSH without any issues:
sudo ssh root@192.168.33.10
Here is what my config files look like:
roster
managed:
  host: 192.168.33.10
  user: root
  sudo: true
Saltfile
salt-ssh:
  config_dir: /users/vmcilwain/projects/salt-ssh-rails
  roster_file: /users/vmcilwain/projects/salt-ssh-rails/roster
  log_file: /users/vmcilwain/projects/salt-ssh-rails/saltlog.txt
master
file_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/states
pillar_roots:
  base:
    - /users/vmcilwain/projects/salt-ssh-rails/pillars
I run this command:
sudo salt-ssh -i '*' test.ping
I enter my local user's password and get this output:
Permission denied for host 192.168.33.10, do you want to deploy the salt-ssh key? (password required):
[Y/n]
This is where my fight is. If the Vagrant VM has the SSH key for the user I am executing salt-ssh as, why am I being told that permission is denied? Especially since I verified that I could SSH into the system without using salt-ssh.
Answering yes prompts me for the remote root user's password, which I didn't set and don't necessarily want to set, since an SSH key should have worked.
I'm hoping someone can tell me the best way to set up the connection between both systems so that I don't have to have this fight every time.
I needed to set priv in my roster to the RSA key that I am using to connect to the remote host:
priv: /Users/vmcilwain/.ssh/id_rsa
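For clarity, the roster from the question then becomes:
managed:
  host: 192.168.33.10
  user: root
  sudo: true
  priv: /Users/vmcilwain/.ssh/id_rsa
After that, sudo salt-ssh -i '*' test.ping should authenticate with the key directly instead of offering to deploy the salt-ssh key.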
I have a standalone Fuse ESB server running on a RHEL box. I want to connect to the Karaf console remotely to manage the bundles.
If I close my current session, how do I get back to my Karaf console?
My Fuse ESB is configured to use port 8101 for SSH. Will I be able to connect to it directly through my SSH client (PuTTY)?
Or do I need another Fuse ESB instance locally to access the remote Fuse instance?
Either way I am not able to connect; it says access denied. Is there any other, easier way to connect to a remote Fuse/Karaf instance?
I even tried using Client.sh from the bin directory, but it says authentication failure, even though I have created a JAAS user with the Admin role.
By the way, is just a user enough to do this, or does it also need a public/private key configuration?
What is the usual approach for managing a remote Fuse/Karaf instance?
You can find many details in the JBoss Fuse documentation (JBoss Fuse is the successor to Fuse ESB) at
https://access.redhat.com/site/documentation/en-US/JBoss_Fuse/
and there is a chapter on connecting remotely to containers here:
https://access.redhat.com/site/documentation/en-US/JBoss_Fuse/6.0/html-single/Configuring_and_Running_JBoss_Fuse/index.html#ESBRuntimeRemote
You need to pass in credentials for a valid user on the container who has the admin role.
The Karaf shell also has a jaas command group, which lets you list the users and their roles, add new users, and so on. You can also do some user management from the FMC web console that is part of Fuse ESB.
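As a sketch of that workflow from the Karaf/Fuse shell (command names below follow Karaf 2.x, which Fuse ESB is based on; newer Karaf releases rename them to jaas:realm-manage, jaas:user-add, etc., so check your version):
jaas:realms                      # list the available realms (typically "karaf")
jaas:manage --realm karaf        # start editing the karaf realm
jaas:users                       # list existing users and roles
jaas:useradd myadmin secret      # add a user (name and password are placeholders)
jaas:roleadd myadmin admin       # grant the role required for remote console access
jaas:update                      # apply the pending changes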
You might also want to check your iptables rules:
http://ask.xmodulo.com/open-port-firewall-centos-rhel.html.
- $ sudo iptables -I INPUT -p tcp -m tcp --dport 8101 -j ACCEPT
- $ sudo service iptables save
- $ sudo service iptables restart
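To confirm the rule took effect and the port is reachable from your client machine (simple checks, not part of the original answer; nc may need to be installed):
sudo iptables -L INPUT -n | grep 8101     # on the RHEL box: the ACCEPT rule should be listed
nc -zv <fuse-host> 8101                   # from the client: the port should report as open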
From another Karaf instance you can run this command:
JBossFuse:karaf@root> ssh -l username -P password -p port hostname
e.g.
- JBossFuse:karaf@root> ssh -l smx -P smx -p 8101 10.234.12.12
You have to make sure that the ssh role name that is defined in etc/org.apache.karaf.shell.cfg
# sshRole defines the role required to access the console through ssh
#
sshRole = ssh
matches the one in etc/users.properties
#
# This file contains the users, groups, and roles.
# Each line has to be of the format:
#
# USER=PASSWORD,ROLE1,ROLE2,...
# USER=PASSWORD,_g_:GROUP,...
# _g_\:GROUP=ROLE1,ROLE2,...
#
# All users, groups, and roles entered in this file are available after Karaf startup
# and modifiable via the JAAS command group. These users reside in a JAAS domain
# with the name "karaf".
#
karaf = karaf,_g_:admingroup
_g_\:admingroup = group,admin,manager,viewer,ssh
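Following that format, a dedicated console user can be added on its own line (the name and password below are placeholders), after which a direct PuTTY/ssh connection to port 8101, as asked in the question, should work:
myadmin = secret,admin,ssh
and then, from PuTTY or any ssh client:
ssh -p 8101 myadmin@<fuse-host>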