I'm using Docker (from the Symfony docs - https://github.com/dunglas/symfony-docker) and in Symfony 6 I've received:
Failed to connect to localhost port 443 after 0 ms: Connection refused for "https://localhost/products".
This address returns JSON with products when opened in the browser.
This is the controller:
<?php

namespace App\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Annotation\Route;
use Symfony\Component\HttpClient\HttpClient;
use Symfony\Component\HttpFoundation\JsonResponse;

class ApiController extends AbstractController
{
    #[Route('/apire', name: 'api')]
    public function fetchGitHubInformation()
    {
        $client = HttpClient::create();
        $response = $client->request('GET', 'https://localhost/products');
        $content = $response->getContent();
        $content = $response->toArray();

        return new JsonResponse($content);
    }
}
And this is the netstat output from the Caddy Docker container:
/srv/app # netstat -tulpn | grep LISTEN
tcp 0 0 127.0.0.11:45991 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:2019 0.0.0.0:* LISTEN 1/caddy
tcp 0 0 :::443 :::* LISTEN 1/caddy
tcp 0 0 :::80 :::* LISTEN 1/caddy
Where is the problem?
The solution comes from here - https://www.youtube.com/watch?v=1cDXJq_RyNc
When I change the URL to http://caddy/products, everything works.
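For reference, a minimal sketch of the working controller, assuming the Caddy service is named caddy in the Compose setup (as in the symfony-docker repo): inside the PHP container, localhost resolves to the container itself, which is why port 443 is refused, so the request has to target the Caddy container by its service name instead.

    #[Route('/apire', name: 'api')]
    public function fetchGitHubInformation(): JsonResponse
    {
        $client = HttpClient::create();

        // Use the Compose service name so the request resolves to the
        // Caddy container on the Docker network, not to the PHP container.
        $response = $client->request('GET', 'http://caddy/products');

        return new JsonResponse($response->toArray());
    }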
I configured the server as below:
Coturn-4.5.1.1 'dan Eider'
tls-listening-port=5349
fingerprint
use-auth-secret
server-name=turn.***.com
realm=turn.****.com
verbose
cert=/etc/coturn/certs/turn.***.com.fullchain.pem
pkey=/etc/coturn/certs/turn.***.com.privkey.pem
dh-file=/etc/coturn/certs/ssl-dhparams.pem
mobility
min-port=49152
max-port=65535
Nginx (the problem is not Nginx, because it persists even when I don't use Nginx):
stream {
    ...
    ...
    error_log /var/log/nginx/str.error.log;

    upstream turnTls {
        server turn_tls_IP:5349;
    }

    map $ssl_preread_server_name $upstream {
        ....
        ....
        ...
        turn.****.com turnTls;
    }

    server {
        error_log /var/log/nginx/xxx.err.log;
        listen 443;
        listen [::]:443;
        proxy_pass $upstream;
        ssl_preread on;
        proxy_buffer_size 10m;
    }
}
When I access the server from Android phones using the turns protocol, like
{
'urls': ['turns:turn.***.com:443?transport=tcp'],
'username': $username,
'credential': $password,
}
the server cannot get the user credentials, and the server log is as follows:
7: session 002000000000000001: closed (2nd stage), user <> realm <turn.****.com> origin <>, local ****:5349, remote ***:53712, reason: TLS/TCP socket buffer operation error (callback)
As you can see, the user information (user <>) is empty, and I got
reason: TLS/TCP socket buffer operation error (callback)
With the Trickle ICE tool it sometimes works:
0.783 Done
0.782 relay 2831610 udp ***** 65082 0 | 31519 | 255 turns:turn.***.com:443?transport=tcp tls
Coturn log
session 000000000000000025: new, realm=<turn.****.com>, username=<1674486335:user_80_156>, lifetime=600, cipher=ECDHE-RSA-AES256-GCM-SHA384, method=TLSv1.2
I did the following, but the problem was not solved:
Disabled some TLS protocols:
no-tlsv1
no-tlsv1_1
no-tlsv1_2
no-tlsv3
...
I copied the Let's Encrypt keys to /etc/coturn, which is chmodded to 600 and owned by turnserver:turnserver.
I stopped Nginx and contacted the TURN server directly over TLS on port 443.
With Nginx, I terminated TLS in the server block and then forwarded the traffic to the TURN server:
stream {
    server {
        listen 443 ssl;
        ssl_certificate ... fullchain.pem;
        ssl_certificate_key ... privkey.pem;
        ssl_dhparam ... dhparam.pem;
        proxy_ssl off;
        proxy_pass turn_Ip_NoTLS:3478;
    }
}
I tested on many Android devices with ISRG Root X1 and DST Root CA X3.
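One generic check (not from the original post, just a standard TLS debugging step) is to inspect the certificate chain that is actually presented on port 443, since older Android devices are picky about the ISRG Root X1 / DST Root CA X3 chain; the hostname below is a placeholder:

# Show the chain coturn (or nginx) presents on 443
openssl s_client -connect turn.example.com:443 -servername turn.example.com -showcerts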
I have a working OpenResty setup with lua-resty-openidc as an ingress controller.
Right now, the nginx.conf is hardcoded in my image, with something like this:
server {
    server_name _;
    listen 80;

    location /OAuth2Client {
        access_by_lua_block {
            local opts = {
                discovery = "/.well-known/openid-configuration",
                redirect_uri = "/authorization-code/callback",
                client_id = "clientID",
                client_secret = "clientSecret",
                scope = "openid profile somethingElse",
            }
            ...
        }
        proxy_pass http://clusterIp/OAuth2Client;
    }
}
As Nginx doesn't accept environment variables, is there a simple way to make my nginx.conf configurable, for example:
server {
    server_name ${myServerName};
    listen ${myServerPort};

    location /${specificProjectRoot} {
        access_by_lua_block {
            local opts = {
                discovery = "${oidc-provider-dev-url}/.well-known/openid-configuration",
                redirect_uri = "${specificProjectRoot}/authorization-code/callback",
                client_id = "${myClientId}",
                client_secret = "${myClientSecret}",
                scope = "${myScopes}",
            }
            ...
        }
        proxy_pass http://${myClusterIP}/${specificProjectRoot};
    }
}
so that any team in any namespace could reuse my image and just provide a Kubernetes secret containing the specific config for their project?
You would need to render the nginx.conf from a templated version at runtime (as Juliano's comment mentions). To do this, your Dockerfile could look something like this:
FROM nginx
COPY nginx.conf.template /etc/nginx/
CMD ["/bin/bash", "-c", "envsubst < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"]
Notice that it copies nginx.conf.template into your image. This would be your templated config, with variables in the form ${MY_SERVER_NAME}, where MY_SERVER_NAME is injected into your pod as an environment variable via your Kubernetes manifest, from your ConfigMap or Secret or however you prefer.
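As a minimal sketch (all names are illustrative), the Deployment's container section could pull those variables from a team-provided Secret like this:

containers:
  - name: openresty
    image: my-registry/openresty-oidc:latest
    env:
      - name: MY_SERVER_NAME
        valueFrom:
          secretKeyRef:
            name: oidc-client-config   # Secret supplied by the consuming team
            key: serverName
      - name: MY_CLIENT_SECRET
        valueFrom:
          secretKeyRef:
            name: oidc-client-config
            key: clientSecret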
While envsubst is a good workaround to connect Kubernetes objects with container files, Kubernetes native ConfigMaps are designed precisely for this purpose: passing non-sensitive key-value data to the container, including entire files like your nginx.conf.
Here's a working example (in the question AND answer) of a ConfigMap and Deployment pair specifically for NGINX:
Custom nginx.conf from ConfigMap in Kubernetes
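A minimal sketch of that approach, assuming you mount the rendered file over the default config (names and paths are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    server {
        listen 80;
        ...
    }
---
# In the Deployment's pod spec:
volumes:
  - name: nginx-conf
    configMap:
      name: nginx-conf
containers:
  - name: nginx
    image: nginx
    volumeMounts:
      - name: nginx-conf
        mountPath: /etc/nginx/conf.d/default.conf
        subPath: nginx.conf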
I have started OpenResty with one TCP server and two backends. The TCP server dispatches each request to a backend according to the content of the TCP stream. The following is an example of the OpenResty configuration:
stream {
    # define a TCP server listening on the port 1234:
    upstream backend1 {
        server 172.17.0.1:8081;
    }

    upstream backend2 {
        server 172.17.0.1:8082;
    }

    server {
        listen 1234;
        content_by_lua_block {
            local sock = ngx.req.socket(true)
            -- receive first byte
            local data, err = sock:receive(1)
            -- dispatch to backend1 if data is greater than 'a', otherwise dispatch to backend2
            -- (compare byte values so the comparison works in Lua)
            local a = string.byte(data, 1, 1)
            if a > string.byte('a') then
                -- how to send to backend1
            else
                -- how to send to backend2
            end
        }
    }
}
I don't know how to bridge the request to the right backend according to the first byte of the request with a Lua script.
Can anyone help with this?
The question is pretty old, but I hope that my answer is still relevant for you.
stream {
    lua_code_cache on;

    init_by_lua_block {
        -- cache package on startup
        require('ngx.balancer')

        -- share backend addresses via global table
        -- (not recommended, only for demo purposes)
        _G.BACKENDS = {
            {'172.17.0.1', 8081},
            {'172.17.0.1', 8082},
        }
    }

    upstream lua_dispatcher {
        # just an invalid address as a placeholder
        server 0.0.0.1:1234;

        balancer_by_lua_block {
            local balancer = require('ngx.balancer')

            local backend_index
            if ngx.ctx.request_first_byte > 'a' then
                backend_index = 1
            else
                backend_index = 2
            end

            local backend_table = _G.BACKENDS[backend_index]
            local ok, err = balancer.set_current_peer(table.unpack(backend_table))
            if not ok then
                ngx.log(ngx.ERR, err)
                ngx.exit(ngx.ERROR)
            end
        }
    }

    # proxy
    server {
        listen 9000;
        proxy_pass lua_dispatcher;

        # cosocket API not available in balancer_by_lua_block,
        # so we read the first byte here and keep it in ngx.ctx table
        preread_by_lua_block {
            local sock = ngx.req.socket()
            local data, err = sock:receive(1)
            if not data then
                ngx.log(ngx.ERR, err)
                ngx.exit(ngx.ERROR)
            end
            ngx.ctx.request_first_byte = data:sub(1, 1)
        }
    }

    # mock upstream 1
    server {
        listen 172.17.0.1:8081;
        content_by_lua_block {
            ngx.say('first')
        }
    }

    # mock upstream 2
    server {
        listen 172.17.0.1:8082;
        content_by_lua_block {
            ngx.say('second')
        }
    }
}
$ nc -C localhost 9000 <<< '123'
second
$ nc -C localhost 9000 <<< '223'
second
$ nc -C localhost 9000 <<< 'a23'
second
$ nc -C localhost 9000 <<< 'b23'
first
$ nc -C localhost 9000 <<< 'c23'
first
I have a problem with Symfony's URL generator.
When I run
dump($this->container->get('router'));exit;
in a controller, my router context is like this:
#context: RequestContext {#306 ▼
-baseUrl: "/my-project/web/app_dev.php"
-pathInfo: "/accueil"
-method: "GET"
-host: "localhost"
-scheme: "http"
-httpPort: 82
-httpsPort: 443
-queryString: ""
-parameters: array:1 [▶]
}
But with the same code in my mailer service I get this:
#context: Symfony\Component\Routing\RequestContext {#312
-baseUrl: ""
-pathInfo: "/accueil"
-method: "GET"
-host: "localhost"
-scheme: "http"
-httpPort: 80
-httpsPort: 443
-queryString: ""
-parameters: []
}
I found this problem after getting URLs like
"http://localhost/bundleRoute/myRoute/7" instead of
"http://localhost/my-project/web/app_dev.php/bundleRoute/myRoute/7"
Thanks.
You can configure the request context for your application when parts of it are executed from the command-line: http://symfony.com/doc/current/cookbook/console/sending_emails.html#configuring-the-request-context-globally
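For that Symfony version, the request context can be set globally through container parameters; a minimal sketch using the values from the dump above (adjust to your project):

# app/config/parameters.yml
parameters:
    router.request_context.host: localhost
    router.request_context.scheme: http
    router.request_context.base_url: /my-project/web/app_dev.php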
I use Tornado and have written some tests, and everything works fine.
Then I put Nginx in front as a proxy:
server {
    listen 80;
    server_name mine.local;

    location / {
        proxy_pass http://localhost:8000;
    }
}
It works nicely. But:
In the tests I use AsyncHTTPTestCase and its get_app method, which returns the Application.
The problem is that the tests "look at" the default 127.0.0.1:8000 - Tornado starts on port 8000, and every self.app.reverse_url('name') returns 127.0.0.1:8000/path.
But I need all requests from the tests to go to Nginx (the proxy):
mine.local/path
In my hosts file I have:
mine.local 127.0.0.1
In Nginx I use some Lua scripts that do all the dirty work. So I need the tests to make requests to mine.local, not to the default 127.0.0.1:8000.
How can I do this?
Thanks!
import socket

from tornado import netutil
from tornado.testing import AsyncHTTPTestCase


def bind_unused_port():
    """Binds a server socket to port 8000 on localhost.
    Returns a tuple (socket, port).
    """
    [sock] = netutil.bind_sockets(8000, 'localhost', family=socket.AF_INET)
    port = sock.getsockname()[1]
    return sock, port


class MineTestCase(AsyncHTTPTestCase):
    def setUp(self):
        # skip AsyncHTTPTestCase.setUp so we can bind the fixed port ourselves
        super(AsyncHTTPTestCase, self).setUp()
        sock, port = bind_unused_port()
        self.__port = port

        self.http_client = self.get_http_client()
        self._app = self.get_app()
        self.http_server = self.get_http_server()
        self.http_server.add_sockets([sock])

    def get_url(self, path):
        # build URLs pointing at the Nginx proxy instead of the test port
        url = '%s://%s:%s%s' % (self.get_protocol(), 'mine.local',
                                80, path)
        return url
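With that in place, a test can use the normal fetch helper and the request goes through Nginx, since fetch builds the URL via get_url; a minimal sketch (make_app and /path are placeholders for your own application and route):

class ExampleTest(MineTestCase):
    def get_app(self):
        # return your real tornado.web.Application here
        return make_app()

    def test_index_goes_through_nginx(self):
        # fetch('/path') -> get_url('/path') -> http://mine.local:80/path
        response = self.fetch('/path')
        self.assertEqual(response.code, 200)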