How to point custom domain to VM instance - nginx

These are my DNS records:
Name | Type | TTL | Target
@ | A | 3600 | 185.199.108.153
@ | A | 3600 | 185.199.109.153
@ | A | 3600 | 185.199.110.153
@ | A | 3600 | 185.199.111.153
www | CNAME | 3600 | pushp1997.github.io
@ | A | 3600 | 34.71.130.252
The first five records serve my static GitHub Pages site.
The last record is the IP address of my VM instance on GCloud.
These are my nginx server settings:
server {
    listen 80;
    server_name pushp.ml;

    location /linkedin {
        return 302 https://in.linkedin.com/in/pushp-vashisht;
    }
}
Now, if I try 34.71.130.252/linkedin, it redirects me to https://in.linkedin.com/in/pushp-vashisht.
But when I try pushp.ml/linkedin, it shows a GitHub Pages 404 page.
How do I make pushp.ml/linkedin redirect to https://in.linkedin.com/in/pushp-vashisht?
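A quick way to confirm the nginx side independently of DNS (a diagnostic sketch; the IP is the VM address from the records above, and the Host header checks the server_name match):

curl -I -H 'Host: pushp.ml' http://34.71.130.252/linkedin
# expected: HTTP/1.1 302 Moved Temporarily, Location: https://in.linkedin.com/in/pushp-vashisht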
Edit:
Running the dig command:
$ dig pushp.ml
; <<>> DiG 9.11.3-1ubuntu1.12-Ubuntu <<>> pushp.ml
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36898
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;pushp.ml. IN A
;; ANSWER SECTION:
pushp.ml. 3600 IN A 185.199.109.153
pushp.ml. 3600 IN A 185.199.110.153
pushp.ml. 3600 IN A 34.71.130.252
pushp.ml. 3600 IN A 185.199.108.153
pushp.ml. 3600 IN A 185.199.111.153
;; Query time: 431 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Fri Jul 17 17:13:57 EDT 2020
;; MSG SIZE rcvd: 117
In the answer section we can see the IP addresses of both the GitHub Pages servers and my VM instance.
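When DNS hands out one of the GitHub Pages addresses, GitHub serves the request, and /linkedin doesn't exist on the Pages site, hence the 404. One way to make routing deterministic (a sketch, not from the original setup; the vm subdomain name is hypothetical) is to give the VM its own subdomain and drop its A record from the apex:

Name | Type | TTL | Target
vm | A | 3600 | 34.71.130.252

server {
    listen 80;
    server_name vm.pushp.ml;  # hypothetical subdomain instead of the apex

    location /linkedin {
        return 302 https://in.linkedin.com/in/pushp-vashisht;
    }
}

The apex then resolves only to the four GitHub Pages addresses, and vm.pushp.ml/linkedin always reaches nginx.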

Related

VNF does not forward packets sent from Client in OpenStack using VNFF Graph

I'm trying to ping 8.8.8.8 from a Client via VNF1, so I use a VNFFG to force the Client's ICMP traffic through VNF1 before it goes out to the internet.
After applying the VNFFG rule in OpenStack, VNF1 can see (via tcpdump) the MPLS packet that OpenStack builds by encapsulating the Client's ICMP packet, but VNF1's forwarding table never receives the packet, so it is not forwarded.
This is the packet seen on VNF1:
09:15:12.161830 MPLS (label 13311, exp 0, [S], ttl 255) IP 12.0.0.58 > 8.8.8.8: ICMP echo request, id 10531, seq 15, length 64
Capturing that packet shows the content is readable (no encryption), and the source and destination MAC addresses belong to the Client and VNF1 respectively.
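If VNF1 is a plain Linux VM (an assumption; the post doesn't say what image VNF1 runs), it's worth checking that the kernel is actually willing to forward IP and accept MPLS; a diagnostic sketch, with eth0 standing in for the actual ingress interface:

sysctl net.ipv4.ip_forward                   # must be 1 for plain IP forwarding
modprobe mpls_router                         # load MPLS forwarding support
sysctl -w net.mpls.conf.eth0.input=1         # accept MPLS on the ingress interface
sysctl -w net.mpls.platform_labels=1048575   # allow entries in the MPLS label table

If MPLS input is disabled, the kernel silently drops labeled packets, which would match seeing them in tcpdump but never in the forwarding table.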
This is my VNFFG template:
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Sample VNFFG template

topology_template:
  node_templates:
    Forwarding_path1:
      type: tosca.nodes.nfv.FP.TackerV2
      description: demo chain
      properties:
        id: 51
        policy:
          type: ACL
          criteria:
            - name: block_icmp
              classifier:
                network_src_port_id: 0304e8b5-6c37-4634-bde2-1351cdee5134 # CLIENT PORT ID
                ip_proto: 1
            - name: block_udp
              classifier:
                network_src_port_id: 0304e8b5-6c37-4634-bde2-1351cdee5134 # CLIENT PORT ID
                ip_proto: 17
        path:
          - forwarder: VNF1
            capability: CP1

  groups:
    VNFFG1:
      type: tosca.groups.nfv.VNFFG
      description: Traffic to server
      properties:
        vendor: tacker
        version: 1.0
        number_of_endpoints: 1
        dependent_virtual_link: [VL1]
        connection_point: [CP1]
        constituent_vnfs: [VNF1]
      members: [Forwarding_path1]
This is my VNF Descriptor:
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Demo example

metadata:
  template_name: sample-tosca-vnfd

topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      capabilities:
        nfv_compute:
          properties:
            num_cpus: 1
            mem_size: 2 GB
            disk_size: 20 GB
      properties:
        image: VNF1
        availability_zone: nova
        mgmt_driver: noop
        key_name: my-key-pair
        config: |
          param0: key1
          param1: key2

    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        order: 0
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1

    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: my-private-network
        vendor: Tacker

    FIP1:
      type: tosca.nodes.network.FloatingIP
      properties:
        floating_network: public
      requirements:
        - link:
            node: CP1
I used this command to deploy the VNFFG rule:
tacker vnffg-create --vnffgd-template vnffg_test.yaml forward_traffic
I do not know if the problem comes from the key I defined for VNF1, because I do not know what param0: key1 and param1: key2 are used for or where they end up.
How can I make the VNF forward these packets?
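Not an answer to the param question, but as a sanity check (a sketch, assuming the usual python-tackerclient subcommands) one can confirm the chain and classifier were actually instantiated:

tacker vnffg-list                    # the new VNFFG should be listed
tacker vnffg-show forward_traffic    # inspect the chain and classifier status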

How to make Grafana on NixOS available on the local network

My laptop and my NixOS server (hostname=nixos) are both connected to my router (fritz.box). I can reach the server via ping (ping nixos.fritz.box) and ssh (ssh username@nixos.fritz.box).
I want to follow the first part of this guide to set up Grafana on NixOS, and then be able to access Grafana from my laptop.
On the server I have configured NixOS to run both Grafana and an nginx reverse proxy:
services.grafana = {
  enable = true;
  domain = "grafana.nixos.fritz.box";
  port = 2342;
  addr = "127.0.0.1";
};

# nginx reverse proxy for grafana
services.nginx.virtualHosts.${config.services.grafana.domain} = {
  locations."/" = {
    proxyPass = "http://127.0.0.1:${toString config.services.grafana.port}";
    proxyWebsockets = true;
  };
};

# Open ports for http and https
networking.firewall.allowedTCPPorts = [ 80 443 ];

system.stateVersion = "21.03";
Unfortunately I can't access the Grafana web interface from my laptop.
I tried varying the value of services.grafana.domain and the URL I request (with Firefox and curl); here is what I got:
services.grafana.domain | argument of curl | output of curl
grafana.nixos.fritz.box | http://grafana.nixos.fritz.box/ | curl: (6) Could not resolve host: grafana.nixos.fritz.box
grafana.nixos.fritz.box | https://grafana.nixos.fritz.box/ | curl: (6) Could not resolve host: grafana.nixos.fritz.box
grafana.nixos.fritz.box | http://nixos.fritz.box/ | curl: (52) Empty reply from server
grafana.nixos.fritz.box | https://nixos.fritz.box/ | curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to nixos.fritz.box:443
nixos.fritz.box | http://nixos.fritz.box/ | curl: (52) Empty reply from server
nixos.fritz.box | https://nixos.fritz.box/ | curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to nixos.fritz.box:443
grafana.localhost | (on the server) http://grafana.localhost | curl: (7) Failed to connect to grafana.localhost port 80: Connection refused
grafana.localhost | (on the server) https://grafana.localhost | curl: (7) Failed to connect to grafana.localhost port 443: Connection refused
The last two rows especially leave me perplexed.
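The "Could not resolve host" rows suggest the Fritz!Box DNS only answers for the bare hostname, not for extra subdomains. To test the nginx virtual host independently of DNS, one could pin the name to the server's LAN address with curl (a sketch; 192.168.178.20 is a placeholder for the server's actual IP):

# replace 192.168.178.20 with the server's actual LAN IP
curl --resolve grafana.nixos.fritz.box:80:192.168.178.20 http://grafana.nixos.fritz.box/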
netstat -an | grep LISTEN on the server gives me this:
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:2342 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN
unix 2 [ ACC ] STREAM LISTENING 1837 /run/systemd/private
unix 2 [ ACC ] STREAM LISTENING 1841 /run/systemd/userdb/io.systemd.DynamicUser
unix 2 [ ACC ] SEQPACKET LISTENING 1853 /run/systemd/coredump
unix 2 [ ACC ] STREAM LISTENING 1862 /run/systemd/journal/stdout
unix 2 [ ACC ] SEQPACKET LISTENING 1868 /run/udev/control
unix 2 [ ACC ] STREAM LISTENING 26958 /var/run/nscd/socket
unix 2 [ ACC ] STREAM LISTENING 1905 /run/systemd/journal/io.systemd.journal
unix 2 [ ACC ] STREAM LISTENING 12193659 /run/user/1001/bus
unix 2 [ ACC ] STREAM LISTENING 12205464 /run/user/1001/systemd/private
unix 2 [ ACC ] STREAM LISTENING 13312 /nix/var/nix/daemon-socket/socket
unix 2 [ ACC ] STREAM LISTENING 18416 /var/run/dhcpcd.sock
unix 2 [ ACC ] STREAM LISTENING 18418 /var/run/dhcpcd.unpriv.sock
unix 2 [ ACC ] STREAM LISTENING 13308 /run/dbus/system_bus_socket
I don't know how to make Grafana available on the local network. Can someone help me with that, please?
(I know this question is somewhat similar to this one, but the solution there doesn't help me)
Adding the following line solved my problem (thanks to @Tch):
services.nginx.enable = true;
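Without services.nginx.enable, the virtualHosts entry is generated but no nginx service actually runs, which matches the netstat output above (no listener on 80/443). A quick verification sketch after adding the line:

sudo nixos-rebuild switch                  # apply the configuration
netstat -an | grep LISTEN | grep ':80'     # nginx should now be listening on 0.0.0.0:80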

Nginx failing to resolve upstream with custom DNS resolver

docker run --rm --net=host -v $PWD/default.conf:/etc/nginx/conf.d/default.conf nginx
2019/05/12 17:02:49 [emerg] 1#1: host not found in upstream "tickethub.service.consul" in /etc/nginx/conf.d/default.conf:10
nginx: [emerg] host not found in upstream "tickethub.service.consul" in /etc/nginx/conf.d/default.conf:10
While dig shows the DNS record correctly:
dig @127.0.0.1 -p 8600 tickethub.service.consul
; <<>> DiG 9.12.3-P1 <<>> @127.0.0.1 -p 8600 tickethub.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57394
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 4
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;tickethub.service.consul. IN A
;; ANSWER SECTION:
tickethub.service.consul. 0 IN A 172.23.0.6
tickethub.service.consul. 0 IN A 172.23.0.5
tickethub.service.consul. 0 IN A 172.23.0.7
;; ADDITIONAL SECTION:
tickethub.service.consul. 0 IN TXT "consul-network-segment="
tickethub.service.consul. 0 IN TXT "consul-network-segment="
tickethub.service.consul. 0 IN TXT "consul-network-segment="
;; Query time: 0 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Sun May 12 16:58:54 GMT 2019
;; MSG SIZE rcvd: 209
And my nginx config:
server {
    listen 80;
    server_name localhost;

    location / {
        resolver 127.0.0.1:8600;
        proxy_pass http://tickethub.service.consul;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
What may be the issue?
Update: it worked when I explicitly set the Docker container's DNS to 127.0.0.1, which means nginx was probably trying to resolve the upstream without using the specified resolver. I think I also had to serve DNS on port 53 instead of the explicit 8600.
It worked when I set the proxy_pass using a variable:
location / {
    resolver consul;
    set $endpoint tickethub.service.consul;
    proxy_pass http://$endpoint/;
}

Asterisk ARI / phpari - Bridge recording: "Recording not found"

I'm using phpari with Asterisk 13 and trying to record a bridge (mixing type).
In my code:
$this->phpariObject->bridges()->bridge_start_recording($bridgeID, "debug", "wav");
It returns:
array(4) {
["name"]=>
string(5) "debug"
["format"]=>
string(3) "wav"
["state"]=>
string(6) "queued"
["target_uri"]=>
string(15) "bridge:5:1:503"
}
When I stop and save the recording with
$this->phpariObject->recordings()->recordings_live_stop_n_store("debug");
It returns FALSE.
I debugged it with
curl -v -u xxxx:xxxx -X POST "http://localhost:8088/ari/recordings/live/debug/stop"
Result:
* About to connect() to localhost port 8088 (#0)
* Trying ::1... Connection refused
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8088 (#0)
* Server auth using Basic with user 'xxxxx'
> POST /ari/recordings/live/debug/stop HTTP/1.1
> Authorization: Basic xxxxxxx
> User-Agent: curl/7.19.7 (xxxxx) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:8088
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: Asterisk/13.2.0
< Date: Thu, 19 Feb 2015 11:58:18 GMT
< Cache-Control: no-cache, no-store
< Content-type: application/json
< Content-Length: 38
<
{
"message": "Recording not found"
}
* Connection #0 to host localhost left intact
* Closing connection #0
Asterisk CLI verbose 5 trace: http://pastebin.com/QZXnpXVA
So, I've solved the problem: it was a simple write-permission issue.
The Asterisk user couldn't write to /var/spool/asterisk/recording because the directory was owned by root.
Changing the ownership to the asterisk user solved it.
I detected this problem by looking at the Asterisk CLI trace again:
-- x=0, open writing: /var/spool/asterisk/recording/debug format: sln, (nil)
This (nil) indicates that the file could not be written, so I checked the folder and saw where the problem was.
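For reference, the fix itself (a sketch of the commands implied above; the asterisk user/group name is the standard one and may differ per install):

chown -R asterisk:asterisk /var/spool/asterisk/recording
ls -ld /var/spool/asterisk/recording   # verify the new ownership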

Async responses with Aleph aren't being received over IPv4 but are with IPv6

I'm trying to get server-sent events set up in Clojure with Aleph, but it just doesn't work over IPv4. Everything is fine if I connect over IPv6. This happens on both Linux and macOS. I've put a full example of what I'm talking about on GitHub.
I don't think I'm doing anything particularly fancy; essentially my program is:
(def my-channel (permanent-channel))

(defroutes app-routes
  (GET "/events" []
    {:headers {"Content-Type" "text/event-stream"}
     :body my-channel}))

(def app
  (handler/site app-routes))

(start-server (wrap-ring-handler app) {:port 3000})
However, when I connect to 127.0.0.1:3000, I can see curl sending the request headers, but it just hangs, never printing the response headers:
$ curl -vvv http://127.0.0.1:3000/events
* About to connect() to 127.0.0.1 port 3000 (#0)
* Trying 127.0.0.1...
* Adding handle: conn: 0x7f920a004400
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7f920a004400) send_pipe: 1, recv_pipe: 0
* Connected to 127.0.0.1 (127.0.0.1) port 3000 (#0)
> GET /events HTTP/1.1
> User-Agent: curl/7.30.0
> Host: 127.0.0.1:3000
> Accept: */*
If I connect over IPv6 the response comes right away, and events that I enqueue in the channel get sent correctly:
$ curl -vvv http://localhost:3000/events
* Adding handle: conn: 0x7f943c001a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7f943c001a00) send_pipe: 1, recv_pipe: 0
* About to connect() to localhost port 3000 (#0)
* Trying ::1...
* Connected to localhost (::1) port 3000 (#0)
> GET /events HTTP/1.1
> User-Agent: curl/7.30.0
> Host: localhost:3000
> Accept: */*
>
< HTTP/1.1 200 OK
* Server aleph/0.3.0 is not blacklisted
< Server: aleph/0.3.0
< Date: Tue, 15 Apr 2014 12:27:05 GMT
< Connection: keep-alive
< Content-Type: text/event-stream
< Transfer-Encoding: chunked
I have also reproduced this behaviour in Chrome. In both the IPv4 and IPv6 cases, tcpdump shows that the response headers are going over the wire.
This behaviour occurs both with lein run and an uberjar. It also occurs if I execute the uberjar with -Djava.net.preferIPv4Stack=true.
How do I get my application to behave the same over IPv4 as over IPv6?
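A first thing worth checking (a diagnostic sketch; the post doesn't show this) is which address the JVM actually bound the listener to:

netstat -an | grep LISTEN | grep 3000   # bound to ::1, 127.0.0.1, or a wildcard?
lsof -nP -iTCP:3000 -sTCP:LISTEN        # shows the owning process and bind address

If the socket shows up only under tcp6, IPv4 clients depend on the platform mapping IPv4 connections onto the v6 socket, which would fit the connect-but-hang behaviour described above.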
