We have several servers in our infrastructure for which we are unable to trace the OpenStack project details.
Is there any way to fetch the associated project ID/name from the VM?
On my cloud:
$ openstack server show ab852bda-978e-4fd0-ba60-f4eebab327d3 -c project_id
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| project_id | dfe697576058427d96d59bf45433636d |
+------------+----------------------------------+
In the VM, run
ip a
to learn the VM's IP address (although you should already know it, if you are able to connect to it). Then, in the OpenStack CLI, filter servers by that IP address. This way, you can learn which project the VM belongs to.
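A sketch of that lookup, assuming admin credentials; the IP address is a placeholder, and the server ID is the one from the example above:

```shell
# List servers across all projects that match the given IP:
openstack server list --all-projects --ip 10.0.0.5

# Then read the owning project off the matching server, as above:
openstack server show ab852bda-978e-4fd0-ba60-f4eebab327d3 -c project_id
```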
Related
I am trying to get the databases backed up from a local machine to a datastore. The local machine is a VM within vCenter, and I need the DB to go onto one of its datastores.
I know the command for MySQLDump is:
mysqldump -u (username) -p --all-databases > (backupfilename).sql.
What do I put in the second section to get it to connect and push the backup to the datastore as a file?
I tried the typical approaches you'd use scp and rsync for, but I'm not that well versed in MariaDB, especially the version we have.
If you know the login credentials and hostname (and whatever extra arguments you need) to connect to the remote instance, you can replace > backupfilename.sql with | mysql -u user -p -h host ... to pipe mysqldump directly, or even use | tee somefile.sql | mysql ... to also keep a local dump. (Note that the mysql client's options are lowercase: -u for user, -p to prompt for a password, -h for host; uppercase -P sets the port.)
MariaDB and MySQL are almost interchangeable, but don't bet your life on that assumption - check the outcome.
NB: if the remote server is only listening locally and is not reachable over the internet, you can use SSH port/socket forwarding to connect.
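Putting those pieces together, a sketch with placeholder hosts and users (note that two bare -p flags in one pipeline mean two password prompts; a ~/.my.cnf or --defaults-extra-file avoids that in practice):

```shell
# Dump everything, keep a local copy, and load it into the remote server:
mysqldump -u localuser -p --all-databases \
  | tee backupfilename.sql \
  | mysql -u remoteuser -p -h db.example.com

# If the remote server only listens on localhost, tunnel through SSH first:
ssh -N -L 3307:127.0.0.1:3306 user@db.example.com &
mysqldump -u localuser -p --all-databases \
  | mysql -u remoteuser -p -h 127.0.0.1 -P 3307
```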
We need to identify all network traffic that a specific Android/iOS app induces. The app is using Firestore in the backend. By default, connections to Firestore always use the domain firestore.googleapis.com instead of a project-specific subdomain (like Cloud Functions do, for example). This way those connections can't be related to a specific app by only examining the outgoing or incoming network traffic of the device.
Is it possible to route the traffic through a proxy or similar to be able to identify connections uniquely?
+-----+        +----------------+        +----------------------------+
| App | -----> | Reverse Proxy  | -----> | Firestore                  |
|     | <----- | (mydomain.com) | <----- | (firestore.googleapis.com) |
+-----+    ^   +----------------+        +----------------------------+
           |
           |
 Connections that must be
 uniquely identifiable
 for a specific app
Is this possible with Firestore (at least, there's a function setHost() in the client SDK) and if so, what drawbacks would it have?
You can try to create a reverse proxy server and install the Firebase Emulator on it. You can then connect to this server from your app; the emulator will receive your requests and redirect them to Firestore. This gives you some flexibility and achieves your use case to some extent. You can read more about the Firebase Emulator in the Firebase documentation.
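If you want to try the reverse-proxy route from the question instead, the proxy has to be gRPC-aware, since the mobile SDKs talk gRPC over HTTP/2. The nginx sketch below is an untested outline, not a verified setup: the domain and certificate paths are placeholders, and Google's TLS and authentication behaviour may still break this in practice.

```nginx
server {
    listen 443 ssl http2;
    server_name firestore.mydomain.com;          # placeholder domain

    ssl_certificate     /etc/ssl/mydomain.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/mydomain.key;

    location / {
        # Forward the SDK's gRPC traffic to the real Firestore endpoint:
        grpc_pass grpcs://firestore.googleapis.com:443;
        grpc_set_header Host firestore.googleapis.com;
        # Send the proper SNI upstream:
        grpc_ssl_server_name on;
        grpc_ssl_name firestore.googleapis.com;
    }
}
```

The app would then be pointed at firestore.mydomain.com via the SDK's setHost()-style setting, making every connection to that domain attributable to this app.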
I would like to communicate with AWS Batch jobs from a local R process in the same way that Davis Vaughn demonstrated for EC2 at https://gist.github.com/DavisVaughan/865d95cf0101c24df27b37f4047dd2e5. The AWS Batch documentation describes how to set up a key pair and security group for batch jobs. However, I could not find detailed instructions about how to find the IP address of a job's instance or what user name I need. The IP address in particular is not available in the console when I run the job, and aws batch describe-jobs --jobs prints out an empty "jobs": [] JSON string. Where do I find the information I need to ssh into a job's instance? (In my use case, I would prefer the IP address instead of the host name.)
Posting here in case this helps someone. The instance should show up under "Running instances" in your EC2 console, and you should be able to use the public IP address listed there. Make sure you configured your Batch compute environment to use your EC2 key pair, and use the correct user name (ec2-user for Amazon Linux 2), e.g. ssh -i "your_keypair.pem" ec2-user@XX.XXX.XX.XXX.
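If the job's instance is not obvious in the console, the chain from Batch job to IP address can also be followed with the AWS CLI while the job is running. The IDs, ARN, and cluster name below are placeholders, and this assumes an EC2 rather than Fargate compute environment:

```shell
JOB_ID=00000000-aaaa-bbbb-cccc-dddddddddddd   # placeholder job ID

# 1. The ECS container instance behind the job:
aws batch describe-jobs --jobs "$JOB_ID" \
  --query 'jobs[0].container.containerInstanceArn' --output text

# 2. Resolve it to an EC2 instance ID (cluster from your compute environment):
aws ecs describe-container-instances --cluster my-batch-cluster \
  --container-instances "arn:aws:ecs:region:account:container-instance/id" \
  --query 'containerInstances[0].ec2InstanceId' --output text

# 3. The instance's public IP:
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[0].Instances[0].PublicIpAddress' --output text
```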
I am a newbie to OpenStack. I am creating a stack using a HEAT template. In the YAML file I mentioned the key name as
parameters: # Common parameters
  key_name: my-key-pair
After the stack is created, I am able to ssh to all VMs from my control node without a password, like this:
ssh -i /root/my-key-pair.pem user@instanceip
My requirement here is, similarly I need to do ssh between the VMs also. Just like I did between ControlNode and VMs, I wanted to do ssh without password from VM1 to VM2.
If I copy the pem file to VM1, then I can ssh without a password from VM1 to the other VMs like
ssh -i /VM1-home/my-key-pair.pem user@otherinstanceip
But, is there a way that this can be accomplished during stack creation itself? So that, immediately after stack creation via heat template, I can ssh from any instance to other instances?
Can someone help please.
Thank You,
Subeesh
You can do this without HEAT.
You should be able to make use of ssh agent forwarding.
steps:
Start the ssh-agent in the background.
eval "$(ssh-agent -s)"
Add your key to the agent
ssh-add /root/my-key-pair.pem
Then ssh into the first host with agent forwarding enabled (ssh -A user@vm1); you should be able to jump between servers now.
The way to do it with HEAT would be to place the pem file in the correct location on the created instances; this should be possible with the personality property:
personality: {"/root/my-key-pair.pem": {get_file: "pathtopemfilelocaly"}}
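As a rough illustration, this is how the personality entry might sit inside a server resource in the template. The resource, image, and flavor names are made up; note also that baking a private key into every instance has obvious security implications (agent forwarding avoids this), and that Nova file injection via personality is deprecated in newer releases, where user_data with cloud-init is the usual substitute:

```yaml
resources:
  vm1:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      image: ubuntu-20.04          # illustrative
      flavor: m1.small             # illustrative
      # Inject the key pair so this VM can ssh to the others:
      personality:
        "/root/my-key-pair.pem": { get_file: my-key-pair.pem }
```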
I am planning to ship a "home server" type device to customers, that communicates with their (Android or iPhone) smart phone. The problem is that, depending on their internet service provider, the customer has no outside-reachable IPv4 address (DS-lite tunneling), so the smart phone can't just use an IPv4 DNS record to find the server.
Alternatives I can think of:
Make the server use an IPv6 DynDNS service, and make IPv6 take precedence over IPv4 on the smart phone. However, the solution should work without the customer having to sign up for a DynDNS service themselves, and I have not found any service that would allow me to do that.
Set up my own "directory server", such that the home server registers its serial number at intervals - similar to DynDNS, but on the application layer via HTTPS. A client could then simply enter the serial number into the app to find the server. Due to authentication/encryption requirements, this solution is harder to implement than I would like.
Any other ideas on how to make a home server reachable? I would really like to avoid running my own "cloud service". Some type of peer to peer network discovery, perhaps?
[UPDATE:] This is what I am essentially looking for:
Home server                          Relay        DynDNS                      Client
|                                      |             |                           |
|-------- open tunnel to port 80 ----->|             |                           |
|<-success, listening on 192.0.2.1:80 -|             |                           |
|                                      |             |                           |
|----- Register "my.ddns.net" ---------------------->|                           |
|<----------- "my.ddns.net" is now 192.0.2.1 --------|                           |
|                                      |             |                           |
|                                      |<--------- GET http://my.ddns.net ------|
|<------- GET http://my.ddns.net -----|              |                           |
|--- HTTP response ------------------>|              |                           |
|                                      |----- HTTP response ------------------->|
Making a connection from the internet to a server in a home network is difficult. IPv6 is not available everywhere yet, and with IPv4 you don't always have a public address available (because of multiple NAT layers or DS-Lite).
The only reliable solution today is to have a publicly reachable server as a rendezvous point and let the home box maintain a permanent connection to that server. Mobile devices (which might be behind NAT as well) can then reach the home box through the server, or set up STUN/TURN-style connectivity.
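The "open tunnel" leg of the diagram above can be prototyped with plain SSH reverse forwarding, assuming you control a relay host with a public address (relay.example.com and the ports are placeholders):

```shell
# On the home server: expose local port 80 on the relay's port 8080.
# -N runs no remote command; -R sets up the reverse forward.
ssh -N -R 8080:localhost:80 user@relay.example.com

# A client can then reach the home server through the relay:
curl http://relay.example.com:8080/
```

For the relay to bind port 8080 on all interfaces rather than only loopback, GatewayPorts yes must be set in the relay's sshd_config.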
Thanks to the other responses, I had the starting points to find some existing solutions: ngrok and localtunnel solve the problem by mapping a dedicated subdomain to each Home Server, and dispatching requests based on HTTP(S) GET requests.
The latter is an open-source project; both the server and a JavaScript client are on GitHub.