Why am I failing to connect to my Linux instance? - WordPress

I have just launched an instance on AWS EC2 (free tier - t2.micro) so I can start a WordPress blog. I have tried connecting to this instance using the three basic methods offered, so that I can install WordPress and get started. The problem is that I am not able to connect with any of them.
I am running Linux 18.04 on my laptop, so for my AWS instance I also opted for Linux. When trying to connect with a standalone SSH client, I get this response: ssh: connect to host ec2-198-51-100-1.compute-1.amazonaws.com port 22: Connection refused.
When using EC2 Instance Connect, I get this response: There was a problem setting up the instance connection. An error occurred and we were unable to connect or stay connected to your instance. If this instance has just started up, try again in a minute or two.
With the last option, the Java SSH Client directly from my browser (Java required), nothing happens when I click the blue launch SSH client button (it's as if it freezes). Has anyone else experienced this? How did you get through it?

To answer myself - here is the solution that worked:
Since I hadn't specified a key pair when launching the EC2 Linux instance, I terminated it and launched a new one, making sure a key pair was specified this time, and replaced the old .pem file with the new key pair's .pem file.
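For anyone else hitting this, the standalone SSH connection then looks roughly like the following (a sketch; the key file name is hypothetical and the login user depends on the AMI, e.g. ec2-user for Amazon Linux or ubuntu for Ubuntu):
chmod 400 my-new-key.pem
ssh -i my-new-key.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com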

Related

gRPC in an ASP Core host: context deadline exceeded

I am trying to connect to a gRPC service in an ASP Core application running on a Windows 10 computer.
I want to connect with grpcui. If I run grpcui on the same computer, without TLS, I can connect this way:
grpcui -plaintext localhost:5110
Then I would like to connect from another computer (a VirtualBox Windows 10 VM), so I use this command:
grpcui -plaintext 192.168.1.2:5110
But I get an error that says "context deadline exceeded".
If I disable the firewall on the service computer, I get a different error: "No connection could be made because the target machine actively refused it.". So it seems the problem is the firewall on the server computer.
NOTE: I am not going to pay attention to this second error; I would like to solve the first one first. Later, if I need to, I will open another question for it, to avoid mixing two different problems in one question.
So I added two outbound rules, one for the .exe file of the ASP application and another for conhost.exe in windows\system32, because in Task Manager these seem to be the two processes running when I start the ASP application. I did the same for the inbound rules.
But the problem is the same.
So which rules do I have to set in the firewall to allow connections to the service?
Thanks.
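For what it's worth, an inbound rule for the listening port (rather than outbound rules for the executables) is usually what's needed; a minimal sketch, assuming the service really listens on TCP 5110, run from an elevated prompt:
netsh advfirewall firewall add rule name="ASP Core gRPC 5110" dir=in action=allow protocol=TCP localport=5110
It is also worth checking that Kestrel is bound to 0.0.0.0 (or the machine's address) rather than only to localhost, otherwise remote connections will still be refused even with the firewall open.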

Unable to access Kafka Broker from separate LAN machine

EDIT: OBE - figured it out. Answer provided below for anyone else who has this issue.
I am working in an offline environment and am unable to connect to a Kafka broker on machine 1 from a separate machine, machine 2, over a LAN connection through a single switch.
Machine 1 (where Kafka and ZK are running):
server.properties
listeners=PLAINTEXT://<ethernet_IPv4_m1>:9092
advertised.listeners=PLAINTEXT://<ethernet_IPv4_m1>:9092
zookeeper.connect=localhost:2181
I am starting Kafka/ZK from the config files located in kafka_2.12-2.8.0/config and then running the appropriate .bat from kafka_2.12-2.8.0/bin/windows.
On machine 2 I am able to ping <ethernet_IPv4_m1> and get results; however, I fail to get a TCP connection if I run Test-NetConnection <ethernet_IPv4_m1> -Port 9092 while Kafka is running. In Python 3.8.11, using KafkaConsumer from kafka-python, I receive the NoBrokersAvailable error when using <ethernet_IPv4_m1>:9092 as the bootstrap_server.
Additionally, if I run a python:3.8.12-buster docker container with a '/bin/bash' entrypoint and follow along with the kafka-listener walkthrough, I am unable to connect to the broker. I'm in the exact situation as Scenario 1 provided in the link, but the walkthrough assumes you can connect to the broker.
I have also tried opening port 9092 in Windows Defender for in/outbound traffic (on both machines) and still have no luck. Neither Kafka nor networking are my strong suits, and every tutorial/answer I find refers to changing the listeners and advertised.listeners in the Kafka server.properties file - I think I did this correctly, but am unsure. This is everything I have tried so far; any recommendations would be greatly appreciated. Thank you.
For M1, the private network was the active network.
Go to Control Panel -> Firewall & network protection -> Advanced settings (must be admin) -> set up inbound/outbound rules for port 9092 for the active network.
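The same rule can be added from an elevated PowerShell prompt (a sketch; the display name is arbitrary and -Profile should match whichever network profile is active):
New-NetFirewallRule -DisplayName "Kafka 9092" -Direction Inbound -Protocol TCP -LocalPort 9092 -Action Allow -Profile Private
After adding it, Test-NetConnection <ethernet_IPv4_m1> -Port 9092 from machine 2 should succeed before retrying the Kafka clients.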

Corda CENM network map server starts failing to connect to the database after running for a few weeks

We operate CENM (1.2, deployed with the Helm templates on a k8s cluster) to run our own private network. After the CENM network map server has been running for a few weeks, launching new nodes starts failing.
On further investigation, it appears that a request timeout on http://nmap:10000/network-map causes the problem.
In the nmap server's log, we found the following output when accessing the above URL with curl:
[NMServer] - Error while handling socket client message com.r3.enm.servicesapi.networkmap.handlers.LatestUnsignedNetworkParametersRetrievalMessage#760c53ea: HikariPool-1 - Connection is not available, request timed out after 30000ms.
netstat shows there are at least 3 established connections to the database from the container the network map server runs in, and I can also connect to the database directly using the CLI.
So I don't think it is either database saturation or a network configuration problem.
Does anyone have an idea why this happens? I think a restart would probably solve the problem, but I want to know the root cause...
regards,
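For context, the kind of check described above can be reproduced from inside the network map container roughly like this (assuming the PostgreSQL backend implied by the driver in the answer below, on its default port 5432; host, user, and database name are placeholders):
netstat -an | grep 5432 | grep ESTABLISHED
psql -h <db-host> -p 5432 -U <db-user> <db-name>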
Please test the following options.
Since it is the HikariCP (connection pool) component that is throwing the error, it would be worth seeing whether increasing the pool size in the network map configuration helps (see below).
Corda uses HikariCP to create the connection pool. To configure it, any custom properties can be set in the dataSourceProperties section.
dataSourceProperties = {
    dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
    ...
    maximumPoolSize = 10
    connectionTimeout = 50000
}
Has a health check been conducted to verify there are sufficient resources on that Postgres database, i.e. basic diagnostic checks?
Another option, to get more information logged from the network map service, is to also run with TRACE logging:
From https://docs.corda.net/docs/cenm/1.2/troubleshooting-common-issues.html
Enabling debug/trace logging
Each service can be configured to run with a deeper log level via command line flags passed at startup:
java -DdefaultLogLevel=TRACE -DconsoleLogLevel=TRACE -jar <enm-service-jar>.jar --config-fi

AWS CodeDeploy vs Windows 2016 in ASG

I use AWS CodeDeploy to deploy builds from GitHub to EC2 instances in an Auto Scaling group.
It works fine for Windows 2012 R2 with all deployment configurations.
But for Windows 2016 it totally fails on a "OneAtATime" deployment.
During an "AllAtOnce" deployment only one or two instances deploy successfully; all the others fail.
In the agent's log file this suspicious message is present:
ERROR [codedeploy-agent(1104)]: CodeDeploy Instance Agent Service: CodeDeploy Instance Agent Service: error during start or run: Errno::ETIMEDOUT
- A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. - connect(2)
All policies, roles, software, builds and other settings are the same; I even tested this on a brand new AWS account.
Has anybody faced this behaviour?
I ran into the same problem. During my investigation I found out that the server's route table had a wrong route for the 169.254.169.254 network (it still specified the gateway from the network where my template was captured), so the instance couldn't read its metadata.
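A rough sketch of checking and correcting that route from an elevated prompt (the gateway value is a placeholder for whatever the instance's current subnet actually uses):
route print -4 | findstr 169.254.169.254
route delete 169.254.169.254
route -p add 169.254.169.254 mask 255.255.255.255 <current-subnet-gateway>
On stock Windows AMIs the EC2Config/EC2Launch service normally restores this route at boot, so a rebuilt image may also just need that service re-enabled.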
From the above error it looks like the agent isn't able to talk to the CodeDeploy endpoint after the instance starts up. Please check whether the routing tables and other proxy-related settings are set up correctly. Also, if you haven't already, you can turn on debug logging by setting :verbose to true in the agent config and restarting the agent. This would help debug the issue better.
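For reference, that flag is a line in the agent's YAML configuration file (file name and location vary by OS and agent version; codedeployagent.yml on Linux is one example):
:verbose: true
followed by a restart of the agent service, e.g. Restart-Service codedeployagent from PowerShell on Windows.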

How to get past the MongoDB port error to launch the examples?

I'm getting started with Meteor, using the examples:
https://www.meteor.com/examples/parties
If I deploy and load the deployment URL (http://radically-finished-parties-app.meteor.com/), the app runs... nothing magical there; it was an easy example.
My issue occurs when I want to run it locally; I get the following message:
"You are trying to access MongoDB on the native driver port. For http diagnostic access, add 1000 to the port number"
I got Meteor running through the terminal command:
meteor --port 3004
Setup:
- Mac OS 10.9
- Chrome 31
This is happening because you are accessing the MongoDB port in your web browser.
When you run a Meteor app, e.g. on port 3004:
- Port 3004 would be a web proxy to port 3005
- Port 3005 would be the Meteor app in a 'raw' sort of sense (without the WebSockets part, I think)
- Port 3006 would be MongoDB (which you are accessing)
Try using a different port, or use a simpler setup: just run meteor and access port 3000 in your web browser.
If the reason you moved the port number up is that it said the port was in use, the Meteor app may not have exited properly on your computer. Restart your machine, or have a look at Activity Monitor to kill the rogue node process.
I think what might have happened is that you ran it on 3000, then moved the ports up, and the previous instance may not have exited correctly, so what you're seeing is the MongoDB instance of a previous Meteor instance.
This happens when you run another Meteor app on port 2999, forget about it, and try to start a second instance on the usual port.
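One way to check for such a leftover process is to list whatever is still bound to the relevant ports and kill it (port numbers here are just the example ones from above):
lsof -nP -iTCP:3000-3006 -sTCP:LISTEN
kill <pid-of-the-stale-node-or-mongod-process>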
Try making sure Meteor is using the local embedded MongoDB, which it will manage on its own:
export MONGO_URL=''
Something had changed in my bash settings that I didn't copy over to zsh. I uninstalled zsh and Meteor can now find and access Mongo.
