Dynamic host in axios - symfony

How can I create a dynamic host in axios?
Example:
const host = location.hostname;
// axios.defaults.baseURL = 'http://hastore.local';
axios.defaults.baseURL = host;
axios.defaults.port = 8080;
axios.get('api/categories')
  .then((res) => {
    this.categories = res.data;
    console.log(res);
  })
  .catch((err) => {
    console.warn('error during http call', err);
  });
The line axios.defaults.baseURL = 'http://hastore.local'; does not fit, because it won't work in production.
The line const host = location.hostname; is also not a solution, because I get the wrong port and a duplicated host.
The goal is to get the right host depending on the environment.
I read a lot of articles about this, but I did not find a solution.
Thanks for help!
axios version: e.g.: v0.16.2
Environment: e.g.: node v8.9.4, chrome 64.0.3282.119, Ubuntu 16.04
Symfony 4.0.4
Vue.js 2.4.2
vue-axios 2.0.2

You probably do not need to set baseURL. Have you tried to define baseURL each time you make requests? For example,
axios.get(`${host}:${port}/api/categories`)
Or, given your words "The goal is to get the right host depending on the environment.", you may define the proper host using environment variables, for example:
axios.get(`${process.env.HOST}:${process.env.PORT}/api/categories`)
If you use a bundler for your frontend code, this expression will be resolved to
axios.get('http://example.com:80/api/categories')
That works because your bundler should run with the environment variables HOST and PORT pre-defined, substituting their values at build time.
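For example, with webpack (an assumption; any bundler with an equivalent mechanism works) you would inject those values at build time with DefinePlugin, so the expression above is replaced with plain strings in the bundle:

// webpack.config.js (sketch; assumes HOST and PORT are set in the build environment)
const webpack = require('webpack');

module.exports = {
  // ...your existing entry/output/loaders...
  plugins: [
    new webpack.DefinePlugin({
      'process.env.HOST': JSON.stringify(process.env.HOST),
      'process.env.PORT': JSON.stringify(process.env.PORT),
    }),
  ],
};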

I found a solution. Add this code to your /src/main.js file.
Example:
const baseURL = 'http://localhost:8080';
if (typeof baseURL !== 'undefined') {
  Vue.axios.defaults.baseURL = baseURL;
}
The answer that helped me in this: Set baseURL from .env in VueAxios
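For reference, a hedged sketch of that approach, assuming your build injects a BASE_URL environment variable (e.g. via DefinePlugin or a .env file handled by your tooling; the variable name is just an example):

// /src/main.js (sketch; process.env.BASE_URL is an assumption about your build setup)
const baseURL = process.env.BASE_URL || 'http://localhost:8080';
Vue.axios.defaults.baseURL = baseURL;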
Thanks everyone for help!

Related

firebase remove function name from url

I have a simple express app like this.
const express = require("express");
const functions = require("firebase-functions"); // required for the export below

var app = express();

app.get("/", (req, res) => {
  // ...handle request...
});

// ...

exports.app = functions.https.onRequest(app);
I am deploying on Firebase, and when I deploy it, it creates a new route named after the function, like this: https://us-central1-[projectname].cloudfunctions.net/app
This is because I put exports.app = functions.https.onRequest(app);
So how can I deploy and make it work without the /app at the end? I need this because I'm having issues with the references on my front end, which use routes like "/login"; with the function name added, every route has to look like "/app/login".
I even tried export default app but no luck.
How can I deploy without the function name as a route?
Removing /[exportname] from the first part of a URL hosted on cloudfunctions.net is not possible as this is how the functions are triggered.
Ideally, rather than serving resources from the cloudfunctions.net domain, you would place your functions behind Firebase Hosting, where you can instead use a URL like https://yourapp.example.com/login, which plays nicely with Express.
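For example, a minimal firebase.json sketch for that setup (the public directory and the function name app are assumptions based on the question) rewrites every Hosting request to the function, so front-end routes like /login keep working:

{
  "hosting": {
    "public": "public",
    "rewrites": [
      { "source": "**", "function": "app" }
    ]
  }
}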
However, if you wish to call https://us-central1-[projectname].cloudfunctions.net/app/login and have it behave as if it was called from https://us-central1-[projectname].cloudfunctions.net/login, you can make use of a conditional URL rewrite. The example below strips /app from the URL if and only if the hostname ends in cloudfunctions.net and the URL also starts with "/app", then hands over to the other routes.
import express from "express";

function removePathForCloudFunctionsDomain(path) {
  return function (req, res, next) {
    const rawUrl = req.url; // stash original URL
    // do nothing if not on cloudfunctions.net or path doesn't match
    if (!req.hostname.endsWith("cloudfunctions.net") || !rawUrl.startsWith(path)) {
      return next();
    }
    // if here, trim path off of the request's URL
    req.url = req.originalUrl = rawUrl.slice(path.length);
    // hand over to other app.get(), app.use(), etc.
    next('route');
  };
}

const app = express();
app.use(removePathForCloudFunctionsDomain("/app"));
/* other routes */
export { app };

How to use multiple cookies in Firebase hosting + Cloud Run? [duplicate]

I followed the authorized-https-endpoint sample and only added a console.log to print req.cookies. The problem is that the cookies are always empty ({}). I set the cookies using client-side JS calls and they do save, but for some reason I can't get them on the server side.
Here is the full code of index.js; it's exactly the same as the sample:
'use strict';

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);
const express = require('express');
const cookieParser = require('cookie-parser')();
const cors = require('cors')({origin: true});
const app = express();

const validateFirebaseIdToken = (req, res, next) => {
  console.log(req.cookies); //// <----- issue this is empty {} why??
  next();
};

app.use(cors);
app.use(cookieParser);
app.use(validateFirebaseIdToken);

app.get('/hello', (req, res) => {
  res.send(`Hello!!`);
});

exports.app = functions.https.onRequest(app);
Stores the cookie:
curl http://FUNCTION_URL/hello --cookie "__session=bar"   // req.cookies = {__session: bar}
Doesn't store it:
curl http://FUNCTION_URL/hello --cookie "foo=bar"         // req.cookies = {}
If you are using Firebase Hosting + Cloud Functions, __session is the only cookie you can store, by design. This is necessary for us to be able to efficiently cache content on the CDN -- we strip all cookies from the request other than __session. This should be documented but doesn't appear to be (oops!). We'll update documentation to reflect this limitation.
Also, you need to set the Cache-Control header to private:
res.setHeader('Cache-Control', 'private');
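Building on the sample above, a sketch of the /hello route that reads the surviving __session cookie and marks the response private so the CDN does not cache it (the handler stands in for the one in the sample):

app.get('/hello', (req, res) => {
  // __session is the only cookie Firebase Hosting forwards to the function
  const session = req.cookies.__session;
  // 'private' keeps the Hosting CDN from caching this per-user response
  res.setHeader('Cache-Control', 'private');
  res.send(`Hello!! __session=${session || '(not set)'}`);
});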
Wow, this cost me 2 days of debugging. It is documented (under Hosting > Serve dynamic content and host microservices > Manage cache behavior), but not in a place I found useful -- it is at the very bottom, under "Using Cookies". The sample code they provide on Manage Session Cookies uses the cookie name session instead of __session, which, in my case, is what caused this problem.
Not sure if this is specific to Express.js served via Cloud Functions only, but that was my use case. The most frustrating part was that when testing locally with firebase serve, caching doesn't factor in, so it worked just fine.
Instead of trying req.cookies, use req.headers.cookie. You will have to handle the cookie string manually, but at least you don't need to bring in the Express cookie parser, if that is a problem for you.
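A minimal sketch of that approach; parseCookies is just an illustrative helper (not part of any library), and the /hello handler stands in for the one in the sample above:

// naive cookie-string parser; good enough for a handful of simple cookies
function parseCookies(cookieHeader) {
  const cookies = {};
  if (!cookieHeader) return cookies;
  cookieHeader.split(';').forEach((pair) => {
    const index = pair.indexOf('=');
    if (index === -1) return;
    const name = pair.slice(0, index).trim();
    const value = decodeURIComponent(pair.slice(index + 1).trim());
    cookies[name] = value;
  });
  return cookies;
}

app.get('/hello', (req, res) => {
  const cookies = parseCookies(req.headers.cookie);
  console.log(cookies.__session);
  res.send(`Hello!!`);
});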
Is the above answer and naming convention still valid? I can't seem to pass any cookie, including a session cookie named "__session", to a Cloud Function.
I set up a simple test function with the proper Firebase rewrite rules:
export const test = functions.https.onRequest((request, response) => {
  if (request.cookies) {
    response.status(200).send(`cookies: ${request.cookies}`);
  } else {
    response.status(200).send('no cookies');
  }
});
The function gets called every time I access https://www.xxxcustomdomainxxx.com/test, but request.cookies is always undefined and thus 'no cookies' is returned.
For example, the following always returns 'no cookies':
curl https://www.xxxcustomdomainxxx.com/test --cookie "__session=testing"
I get the same behavior using the browser, even after verifying a session cookie named __session was properly set via my authentication endpoint. Further, the link cited above (https://firebase.google.com/docs/hosting/functions#using_cookies) no longer specifies anything about cookies or naming conventions.

Unix Domain Sockets instead of host/port for TCP-type server

This is for all Node.js versions 6+
Say I currently have a TCP server with multiple clients:
const server = net.createServer(s => {
});
server.listen(6000);
and I connect to it with clients:
const s1 = net.createConnection({port:6000});
const s2 = net.createConnection({port:6000});
const s3 = net.createConnection({port:6000});
TCP can sometimes be a bit slow on a local machine. I hear there might be a way to substitute Unix domain sockets for the host/port combination while keeping the TCP-style server interface. Is this possible, and how?
The Node.js docs mention you can create a server that listens on a path:
https://nodejs.org/api/net.html#net_server_listen_path_backlog_callback
but they don't specify what kind of file that needs to be or how to create it.
It turns out that on macOS/Linux this is easy. You don't need to create the file yourself; you just need to make sure the file does not exist and then point the Node.js core libraries at that (not yet existing) path -- the socket file is created for you when the server starts listening.
For the server:
const net = require('net');
const path = require('path');

const udsPath = path.resolve('some-path.sock');
const wss = net.createServer(s => {
});
wss.listen(udsPath, () => {
});
For the clients:
const udsPath = path.resolve('some-path.sock'); // same file path as above
const ws = net.createConnection(udsPath, () => {
});

View random ngrok URL when run in background

When I start an ngrok client with ./ngrok tcp 22 it runs in the foreground and I can see the randomly generated forwarding URL, such as tcp://0.tcp.ngrok.io:12345 -> localhost:22.
If I run it in the background with ./ngrok tcp &, I can't find any way to see the forwarding URL. How can I run ngrok in the background and still see the URL?
There are a couple of ways.
You can either:
1) Visit localhost:4040/status in your browser to see a bunch of information, or
2) Use curl to hit the API: localhost:4040/api/tunnels
This little Python (2.7) script will call the ngrok API and print the current URLs:
import json
import os

os.system("curl http://localhost:4040/api/tunnels > tunnels.json")

with open('tunnels.json') as data_file:
    datajson = json.load(data_file)

msg = "ngrok URLs:\n"
for i in datajson['tunnels']:
    msg = msg + i['public_url'] + '\n'

print(msg)
If you want to get the first tunnel then jq will be your friend:
curl -s localhost:4040/api/tunnels | jq -r .tunnels[0].public_url
When running more than one ngrok tunnel, address it by name: /api/tunnels/:name.
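For example, a tunnel started from the command line is normally named command_line (an assumption based on ngrok's default naming; the Python answer further down filters on the same name), so:
curl -s localhost:4040/api/tunnels/command_line | jq -r .public_url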
Run ./ngrok http & to start the ngrok tunnel as a background process. ngrok usually takes over the terminal to show the assigned URL, but since the process is backgrounded (with & or nohup) that output is not visible.
Then run curl http://127.0.0.1:4040/api/tunnels to see the URL assigned by ngrok.
If it helps anyone, I wrote a quick script in Node to extract the randomly generated URL.
It assumes you're only interested in the secure (https) URL.
const fetch = require('node-fetch')

fetch('http://localhost:4040/api/tunnels')
  .then(res => res.json())
  .then(json => json.tunnels.find(tunnel => tunnel.proto === 'https'))
  .then(secureTunnel => console.log(secureTunnel.public_url))
  .catch(err => {
    if (err.code === 'ECONNREFUSED') {
      return console.error("Looks like you're not running ngrok.")
    }
    console.error(err)
  })
If you wanted all tunnels:
const fetch = require('node-fetch')

fetch('http://localhost:4040/api/tunnels')
  .then(res => res.json())
  .then(json => json.tunnels.map(tunnel => tunnel.public_url))
  .then(publicUrls => publicUrls.forEach(url => console.log(url)))
  .catch(err => {
    if (err.code === 'ECONNREFUSED') {
      return console.error("Looks like you're not running ngrok.")
    }
    console.error(err)
  })
import json
import requests

def get_ngrok_url():
    url = "http://localhost:4040/api/tunnels/"
    res = requests.get(url)
    res_unicode = res.content.decode("utf-8")
    res_json = json.loads(res_unicode)
    for i in res_json["tunnels"]:
        if i['name'] == 'command_line':
            return i['public_url']
This is an edit of JUN_NETWORKS's Python 3 code. It outputs the HTTPS URL only. I find that ngrok will sometimes change the order of the tunnels it returns, occasionally listing the HTTP one first, so the extra loop consistently looks for the tunnel named 'command_line', which is the HTTPS URL.
The easiest way for me to check the randomly generated URL is to go to the official ngrok site > dashboard > endpoints > status and check the URLs and status of my endpoints.
In Python 3:
import json
import requests

def get_ngrok_url():
    url = "http://localhost:4040/api/tunnels"
    res = requests.get(url)
    res_unicode = res.content.decode("utf-8")
    res_json = json.loads(res_unicode)
    return res_json["tunnels"][0]["public_url"]
The returned JSON has two URLs, one for http and one for https.
If you want only the https URL, check res_json["tunnels"][index]["proto"] for each tunnel.
If you love PowerShell, here it is in variables.
$ngrokOutput = ConvertFrom-Json (Invoke-WebRequest -Uri http://localhost:4040/api/tunnels).Content
$httpsUrl = $ngrokOutput.tunnels.public_url[0]
$httpUrl = $ngrokOutput.tunnels.public_url[1]
Use the ngrok API to get all active URLs
You will need to generate a token first (https://dashboard.ngrok.com/api)
Then fetch the active endpoints from the API
curl \
-H "Authorization: Bearer {API_KEY}" \
-H "Ngrok-Version: 2" \
https://api.ngrok.com/endpoints
See the documentation: https://ngrok.com/docs/api/resources/endpoints
In Ruby
require 'httparty'
require 'json'

# get ngrok public url
begin
  response = HTTParty.get 'http://localhost:4040/api/tunnels'
  json = JSON.parse response.body
  new_sms_url = json['tunnels'].first['public_url']
rescue Errno::ECONNREFUSED
  print 'no ngrok instance found. shutting down'
  exit
end
A Node.js solution.
Bonus: it copies the URL to the clipboard on Windows, Mac and Linux [1].
const http = require("http");
const { execSync } = require("child_process");

const callback = (res) => {
  let data = "";
  res.on("data", (chunk) => (data += chunk));
  res.on("end", () => {
    const resJSON = JSON.parse(data);
    const tunnels = resJSON.tunnels;
    const { public_url: url } = tunnels.find(({ proto }) => proto === "https");
    console.log(url);

    // Copy to clipboard
    switch (process.platform) {
      case "win32":
        execSync(`echo ${url} | clip`);
        break;
      case "darwin":
        execSync(`echo ${url} | pbcopy`);
        break;
      case "linux":
        // NOTE: this requires xclip to be installed
        execSync(`echo ${url} | xclip -selection clipboard`);
        break;
      default:
        break;
    }
  });
};

http.get("http://localhost:4040/api/tunnels", callback);
[1] You need to install xclip first:
sudo apt-get install xclip
If you're using Node.js, I did this:
const getURL = async () => {
  // inspect if the callback is working at: http://127.0.0.1:4040/inspect/http
  const ngrok = await import('ngrok')
  const api = ngrok.getApi();
  const { tunnels } = JSON.parse(await api?.get('api/tunnels') ?? '{}')
  // if we already have a tunnel open, disconnect. We're only allowed to have 4
  if (tunnels?.length > 0) await ngrok.disconnect()
  return await ngrok.connect(3000)
}
Maybe I'm a little late in answering, but I'd be glad if this helps anyone visiting the question.
The answers above are solutions for seeing/checking the redirection URL. However, to run ngrok in the background, you could use screen on Linux. In case you need help, here is a quick reference.
Steps:
1. Just run ngrok inside a screen session and then detach (see the sketch after these steps).
2. Use the Python script given by Gerard above to see the URL.
I have followed the same process and it works!
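A sketch of those commands, assuming the tcp 22 tunnel from the question and a session named ngrok (the session name is just an example):
screen -S ngrok      # start a named screen session
./ngrok tcp 22       # inside the session, start ngrok; detach with Ctrl-a d
screen -r ngrok      # reattach later to see the forwarding URL (or query the API as above)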
There is a better way to do that: just log in to your account on ngrok.com. Your URL will be in your dashboard.

Socket.io doesn't set CORS header(s)

I know this question has been asked a couple of times.
However, I can't get any of those solutions to work.
I'm running a standard install of node.js and socket.io. (From yum on Amazon EC2)
The problem is that Chrome is falling back to XHR polling, and those requests require a working CORS configuration. However, I can't seem to get it to work. My web server is running on port 80, and node.js (socket.io) is running on port 81. I have tried to get socket.io to use an origin policy, as you can see. I have also tried to use "* : *" as the origin, with no luck.
Here's my code:
var http = require('http');
var io = require('socket.io').listen(81, {origins: '*'});

io.configure(function() {
  io.set('origin', '*');
});

io.set("origins", "*");

var server = http.createServer(function(req, res) {
  io.sockets.emit("message", "test");
  res.writeHead(200);
  res.end('Hello Http');
  console.log("Message received!");
});

server.listen(82);

io.sockets.on('connection', function(client) {
  console.log("New Connection");
});
Thank you very much!
This is the syntax I had to use to get CORS working with socket.io:
io.set( 'origins', '*domain.com*:*' );
If it comes to it, use console.log to make sure you're entering this block of code in Manager.prototype.handleHandshake inside ./lib/manager.js:
if (origin) {
  // https://developer.mozilla.org/En/HTTP_Access_Control
  headers['Access-Control-Allow-Origin'] = '*';
  if (req.headers.cookie) {
    headers['Access-Control-Allow-Credentials'] = 'true';
  }
}
