During VM creation in OpenStack, one can specify a keypair name so that the specified public key gets injected into the newly created VM.
I would like to know at which machine state the key injection is fully completed. Given that the machine is in the ACTIVE state, does that guarantee that the key injection has finished?
Details:
I have a limited quota for keypairs, and I would like to delete each keypair from OpenStack immediately after it has been injected into the target machine. I only have access to the OpenStack REST API and NOT to the target VM.
UPDATE
Looking at the nova instances table, I can see that "key_name" and "key_data" exist there too. I think the key is copied into this table and the original keypair is no longer referenced, so deleting the keypair shouldn't cause any issue. Am I wrong?
What you can do is try an SSH connection, and once that succeeds, proceed to delete the keypair.
To answer your question directly: the key is added via cloud-init. You can grep for ssh in /var/log/cloud-init.log to see exactly when it happens. (It happens pretty early in the cloud-init process.)
I don't think there is any API way of figuring out exactly when the key injection happens. A machine in the ACTIVE state is not a guarantee that the cloud-init part of the key injection is done (though for practical purposes it does happen pretty early).
You could try checking it via nova console-log. However, the output of console-log has a limited buffer, so it may overshoot the key-addition part, and hence you may not see it in the console log.
So I think checking via an actual SSH connection is the only sure-shot way.
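As a rough sketch of that approach (the instance IP, login user, key name, and key path below are all placeholders; this uses the openstack CLI, but the equivalent calls exist in the REST API):

# keep trying a non-interactive SSH login with the injected key
until ssh -o BatchMode=yes -o ConnectTimeout=5 -i ~/.ssh/mykey ubuntu@$VM_IP true
do
    sleep 10
done
# the key is demonstrably in place, so the keypair record can go
openstack keypair delete mykey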
Related
I have 15 instances running with the same security group; however, I can SSH into some of them but not the others. I received a "Permission denied (publickey)" message for those instances. I have also confirmed that all instances are using the same public key, and I try to SSH into all of them with the same private key.
What am I missing?
Thank you for helping out!
If you are getting "Permission denied (publickey)", it is not a security group issue. It is most likely one of the following:
You didn't specify the public key to use when launching some of the instances.
There was a problem with the metadata service on some of the instances which meant that cloud-init was unable to retrieve the public key.
You are using the wrong credentials; e.g. the admin account name is different on the different instances. (The default is OS dependent.)
You have multiple keys in your ~/.ssh directory and they are being tried in the wrong order. If you have fail2ban set up on the server side, each time the client supplies a key counts as a login attempt. You can hit the limit before you have tried the key that is going to work.
If you look at each instance's console log from its first boot, you can see which public keys were actually used. This can be used to diagnose 1 and 2.
For 3, check the OS documentation.
For 4, try using the ssh command's -i option to specify the path to the private key file.
There are other possibilities; e.g. if you launched instances from a non-pristine image / snapshot.
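For illustration, the checks for 1, 2 and 4 might look like this (the instance name, login user and key path are placeholders):

# 1 and 2: look for the injected public keys in the first-boot console output
openstack console log show my-instance | grep -i ssh

# 4: test with exactly one private key
ssh -i ~/.ssh/mykey.pem -o IdentitiesOnly=yes ubuntu@<instance-ip>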
Reference:
Troubleshooting SSH access to a NeCTAR instance
My place of business currently uses Reflection for Unix and OpenVMS to handle a database of customers. I access this database directly through the Reflection emulator. The only way to get data out of Reflection is to navigate to a single customer via keyboard input and print the information to a .txt file.
Is there any way I can access the VM other than through Reflection, with the end goal of automating retrieval of customer information from a Java script executed outside of the Reflection environment? This is the information I can gather via the Reflection interface about what I am connecting to:
At the bottom of the Reflection interface - VT500-7 -- HOST_NAME via SECURE SHELL
Via the Connection Setup drop-down:
Host name: HOST_NAME
SSH config scheme: AutoKeyLogin
User name: username
Via the Security... button:
General tab:
Port number: 22
User Authentication: [x] Public Key
[x] Password
User Keys tab:
Use Name Type Location
[x] username1user DSA C:\Documents\PathToSSHKey\.ssh
Host Keys tab:
Host Type Fingerprint
HOST_NAME, 111.1.111.11, 22 DSA 39:14:f3:123:fds:restOfFingerprint
There is more information available; if a solution is possible but I have not provided enough detail to find it, please ask.
Given that I have the host name, port, SSH key, and host key, is it possible to connect to and read from the VM that I otherwise connect to via the Reflection emulator?
NO. Reflection (another example is PuTTY) is just a dumb-terminal emulator, here using the (secure) SSH protocol to connect to some operating system. From the information provided we cannot even tell which OS: maybe OpenVMS, maybe some Unix. Most certainly it is not a 'VM' but a physical box: perhaps an Alpha, Integrity, Sun, IBM or Intel server.
IF, and it is a big if, it is OpenVMS, you would possibly see something like this flash by on entry:
XXX - HP rx2600 (1.50GHz/6.0MB) OpenVMS IA64 V8.3-1H1
Last interactive login on Thursday, 7-DEC-2017 13:23:19.83
Last non-interactive login on Wednesday, 6-DEC-2017 12:35:45.80
Most likely the username used is set up to always start a (shell) script which presents a menu from which a program is activated, and that program knows how to access the data records. IF it is OpenVMS, then the actual data is likely stored in RMS (indexed) files, but it could be in a proper database (Oracle Rdb or another RDBMS).
If bulk access to the data is needed then you need to talk to the system/application manager for the system 'HOST_NAME' and ask them about the application and its database.
You may find that there is FTP, ODBC, JDBC or native DB (OCI?) access to the data available already, or that this can be requested. Likely tools in this space are ConnX, Attunity Connect, and such.
First you'll need to find out which OS/platform/version, which application (3rd party? home-grown? 4GL? Cobol? Basic?), and ultimately which database/storage method.
That's not to say that some terminal emulator cannot be 'tricked' (google "screen scraping") into being programmed to fetch a series of data items on command, but that will always be error-prone and laborious, and only workable for limited volumes.
You are better off trying to get proper data access.
Good luck! You'll need some.
Hein
I support an application that calls a CMD line to decrypt a file.
The application is a .exe file that is called by the Windows Task Scheduler and is executed as the same user, who has all the necessary rights.
The application runs every weekday evening at 6:30 pm, and sometimes the CMD line returns the message: no secret key.
The application then fails because the file was not decrypted. But it doesn't fail every evening, just on random evenings. It looks totally random.
And if I run the application myself after it has failed, with the same user, it works.
The secret key is imported in Kleopatra, and it works fine with other applications that run in the morning. It also works fine when I use it myself.
What could cause this?
Thank you
We fixed the problem: we must not log off the application user.
If we log off the user, one key stops working, but the others keep working.
Some ideas to help you run down the problem:
Check the private keys available on the machine where the application fails:
gpg --list-secret-keys
(IIRC Kleopatra runs on top of GnuPG, so I assume your application does as well. I've been wrong before.) You might notice something out of place with your private (decryption) keys. For example, if the key is listed as either
sec#
ssb>
then it's a key (primary or sub, respectively) whose secret material is stored on a smart card. If the card, for whatever reason, isn't in the machine when the app runs, it'll fail to decrypt (a sample listing is sketched after these suggestions).
Check that the disk containing the private keyring was attached/inserted/mounted at the time the application ran and failed to decrypt. If the keys are stored on removable (or unreliable) media, that could also result in a failure to decrypt.
Check that the item failing to decrypt was encrypted properly. If there is some secondary recipient necessary for the app to run, there may be a required key that you don't know about (I gather from your post that you didn't create this app, just maintain it). It may even be that the app is erroneously trying to decrypt a different file, but that kind of thing can only be found out by stepping through your source code and resident files.
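As a sample of what the first check might show (the key sizes and user ID are made up, and the exact layout varies by GnuPG version):

gpg --list-secret-keys
sec#  rsa2048 2017-01-01 [SC]    <-- '#': the primary secret key is not on this machine
uid         Example App <app@example.com>
ssb>  rsa2048 2017-01-01 [E]     <-- '>': the decryption subkey is a stub pointing at a smart card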
Failing those, pray for @Jens Erat to notice your question.
On a RedHat 6 server, a third-party application must run as root and needs access to sqlplus. I have a running database, and I can run sqlplus as the 'oracle' user. When logged in as root, 'sqlplus usr/pwd@dbname' works as expected. The trouble is that this agent needs to run sqlplus with no parameters, and it always returns ORA-12546: TNS:permission denied.
I've read a dozen times that enabling root to launch Oracle is a security issue but I really have no other choice.
Running Oracle 11.2.0.1.0.
Any help will be much appreciated as I've googled for 2 days with no success.
From the documentation, ORA-12546 is:
ORA-12546: TNS:permission denied
Cause: User has insufficient privileges to perform the requested operation.
Action: Acquire necessary privileges and try again.
Which isn't entirely helpful, but various forum and blog posts (way too many to link to, Googling for the error shows a lot of similar advice) mention permissions on a particular part of the installation, $ORACLE_HOME/bin/oracle, which is a crucial and central part of most of the services.
Normally the permissions on that file would be -rwsr-s--x, with the file owned by oracle:dba, and this error can occur when the world-execute flag - the final x in that pattern - is not set. Anyone in the dba group will still be able to execute it, but those outside will not.
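A quick way to confirm this on the affected server (size and date will of course differ):

ls -l $ORACLE_HOME/bin/oracle
# healthy output looks something like:
# -rwsr-s--x 1 oracle dba <size> <date> oracle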
Your listener seems to be fine, as you can connect remotely by specifying @dbname in the connect string. The listener runs as oracle (usually; it could be grid with HA, RAC or ASM), so it is in the dba group and can happily hand off connections to an instance of the oracle executable.
When you connect without going via the listener, you have to be able to execute that file yourself. It appears that root cannot execute it (or possibly some other file, but this is usually the culprit, apparently), which implies the world-execute bit is indeed not set.
As far as I can see you have three options:
set the world-execute bit, with chmod o+x $ORACLE_HOME/bin/oracle; but that opens up the permissions for everyone, and presumably they've been restricted for a reason;
add root to the dba group, via usermod or by editing /etc/group; which potentially weakens security as well;
use SQL*Net even when you don't specify @dbname in the connect string, by adding export TWO_TASK=dbname to the root environment.
You said you don't have this problem on another server, and that the file permissions are the same; in which case root might be in the dba group on that box. But I think the third option seems the simplest and safest. There is a fourth option I suppose, to install a separate instant client, but you'd have to set TWO_TASK anyway and go over SQL*Net, and you've already ruled that out.
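A minimal sketch of that third option, assuming dbname resolves via tnsnames.ora and using an illustrative ORACLE_HOME path (add this to root's profile, e.g. /root/.bash_profile):

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1   # adjust to your installation
export PATH=$ORACLE_HOME/bin:$PATH
export TWO_TASK=dbname
# sqlplus now connects over SQL*Net even when no @dbname is given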
I won't dwell on whether it's a good idea to run sqlplus (or indeed the application that needs it) as root, but will just mention that you could potentially have a script or function called sqlplus that switches to a less privileged account via su to run the real executable, and that might be transparent to the application. Unless you switch to the oracle account though, which is also not a good idea, you'd have the same permission issue and options.
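If you did want to experiment with that, such a wrapper might look roughly like this (the account name appuser is made up, the ORACLE_HOME path is illustrative, and per the caveat above it would still need TWO_TASK or one of the other fixes to avoid the same ORA-12546):

#!/bin/sh
# hypothetical /usr/local/bin/sqlplus, found ahead of the real binary in root's PATH;
# runs the real sqlplus as a less privileged user, going over SQL*Net via TWO_TASK
# (argument quoting here is deliberately naive - it is only a sketch)
exec su - appuser -c "TWO_TASK=dbname /u01/app/oracle/product/11.2.0/dbhome_1/bin/sqlplus $*"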
I am getting this error (2035) while connecting to IBM MQ. I know that this is because of privileges, but is there any way just to check the connection with IBM MQ?
Please suggest.
The 2035 suggests that your connection is getting to the QMgr. If you had the wrong channel name, host or port you would get back a 2059. The 2035 means that the connection made it to the listener, found a channel of the name that was requested and attempted a connection.
If you want to test past this point it will be necessary to either authorize the ID that you are using to connect or to put an authorized ID in the MCAUSER attribute of the channel.
For a detailed explanation of how the WMQ security works on client channels, see the WMQ Base Hardening presentation at http://t-rob.net/links.
If you enable authorization messages then the 2035 will show up in the event queue. Then you can look at the message and see what ID was used to connect and what options were used too. The 2035 might be because you asked for set authority on the queue manager or something else you aren't supposed to have. The authorization messages will show you that.
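For reference, authority events are switched on in runmqsc, and the resulting event messages then arrive on SYSTEM.ADMIN.QMGR.EVENT (the queue manager name QM1 is just an example):

runmqsc QM1
ALTER QMGR AUTHOREV(ENABLED)
END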
You can also resolve this by setting MCAUSER('mqm'); I was able to overcome the 2035 error this way.
DEFINE CHANNEL(CHANNEL1) CHLTYPE(SVRCONN) TRPTYPE(TCP) MCAUSER('mqm')
Special thanks to my senior Bilal Ahmad (PSE).
You have to check the privileges with an MQ administrator.
You can use dspmqaut to check the grant.
Below is a sample giving user poc access to queue manager QM1 and queue LQ1:
# check the access rights of user poc on queue LQ1 in queue manager QM1
dspmqaut -m QM1 -n LQ1 -t q -p poc
# if you want to give access, you should use
setmqaut -m QM1 -n LQ1 -t q -p poc <access Types>
# e.g. grant everything - in a real-life scenario, choose only what you want to grant:
setmqaut -m QM1 -n LQ1 -t q -p poc +put +get +browse +inq +set +crt +dlt +chg +dsp +passid +setid +setall +clr
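Note that, in addition to the queue authorities above, the user generally also needs connect authority on the queue manager object itself, otherwise the 2035 occurs at connect time; for example:

# allow user poc to connect to and inquire on the queue manager
setmqaut -m QM1 -t qmgr -p poc +connect +inq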
Then don't forget to restart QM1 with:
endmqm -i QM1
strmqm QM1
Finally, you should be able to proceed without error 2035.
I have been struggling with this for ages too. Eventually I found this solution. (If you can call turning off authentication a solution.)
I am using version IBM WebSphere MQ 9.1.0.201807091223.
From IBM's website they advise turning connection authentication off!!!
Resolving the problem: Disable channel authentication
You will need to disable connection authentication, at least temporarily. There are known issues in FTM for Check with regard to using MQ connection authorization. These problems are actively being addressed and fixes will appear in a future fix pack. The target is fix pack 3.0.0.8.
Steps to disable connection authentication: open the MQ command console, type runmqsc, and issue:
ALTER AUTHINFO(SYSTEM.DEFAULT.AUTHINFO.IDPWOS) AUTHTYPE(IDPWOS) CHCKCLNT(NONE) CHCKLOCL(NONE)
Restart the queue manager for this change to take effect.
Source http://www-01.ibm.com/support/docview.wss?uid=swg21962081
On this topic: if you are using MQSeries 9.1 in a test or development environment, you can disable channel authentication with the following approach.
Launch the MQ command-line utility with the following:
runmqsc (for example runmqsc QM1)
Disable authentication for all channels with the following command:
ALTER QMGR CHLAUTH (DISABLED)
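You can verify the change from the same runmqsc session, for example:

DISPLAY QMGR CHLAUTH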
For a Q/Q-manager running on Windows, you may have to create the user on the Q/Q-manager machine [i.e. create a user on the Q-machine to match the user on the Q-client machine], and then add that user to the 'mqm' group on that machine.
Steps:
Ensure that the domain user that is being used to create the Q CLIENT [i.e. the user that the Q-client app is running under] also exists on the box with the Q/Q-manager. You may be able to just create a local user on the Q/Q-manager box [or you may have to do some more complicated creation of an Active Directory user - I can't help you there].
On the Q/Q-manager box, add the user you have just created [or the existing one, if it already exists] to the mqm group. [On a Windows server box you will need to use the Microsoft Management Console (1. 'mmc' from the command line, 2. File > Add/Remove Snap-in > Local Users & Groups, 3. add user to group)]. The 'mqm' group should already exist on the Q/Q-manager machine.
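If you prefer the command line, the same can usually be done from an elevated prompt on the Q-manager box (the user name and password here are placeholders):

net user appclientuser SomePassw0rd! /add
net localgroup mqm appclientuser /add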
Error MQRC 2035 basically means that your application has been able to reach the queue manager; however, due to the absence of certain permissions/authorizations, it was unable to put/get/publish/subscribe messages.
To resolve this, first try these steps to disable the authorization checks on the queue manager and channel. Use this only if it isn't a production queue manager.
Always check the queue manager error logs. They tell you exactly what to look at in order to resolve the issue.
In this case, you can generally issue the following commands after starting runmqsc on the queue manager:
ALTER QMGR CHLAUTH(DISABLED)
Then set the CHCKCLNT attribute (on the AUTHINFO object) to OPTIONAL:
DISPLAY QMGR CONNAUTH
DISPLAY AUTHINFO(name-from-above) ALL //name from the first commands
ALTER AUTHINFO(name-from-above) AUTHTYPE(IDPWOS) ADOPTCTX(YES)
ALTER AUTHINFO(name-from-above) AUTHTYPE(IDPWOS) CHCKCLNT(OPTIONAL)
REFRESH SECURITY TYPE(CONNAUTH)
SET CHLAUTH('*') TYPE(BLOCKUSER) ACTION(REMOVEALL)
This removes any blocking rules that channel authentication has in place against any user.
SET CHLAUTH('your.channel.name') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL)
This should resolve your issue, since we have disabled every authorization check that an application has to pass in order to do anything on a queue manager.
Now, if you are using a production queue manager, NEVER remove authorizations.
Instead, right-click any queue manager that you have configured in MQ Explorer, go to its object authorities and authority records, click on create new user, and give it the same name as the username your application is using. Select all the checkboxes, then copy the commands shown in the space below (namely setmqaut), edit them with your queue manager name, and issue them!
----Never give up, the answer is where you have not looked yet--------