I have my server with Baikal and nginx running, and now I want to add the InfCloud frontend (perhaps better known as CalDavZAP or CalDavMATE).
Baikal is running at https://mydomain:202/
I have calcard.php to access calendars and contacts. The file is accessible at https://mydomain:202/calcard.php/
Now I want InfCloud to run under https://mydomain/cal.
The username and password are both test.
I can already access the login page, but when logging in, the following errors occur:
jquery-2.1.4.min.js:4 Refused to connect to 'https://test:test@mydomain:202/calcard.php/principals/' because it violates the following Content Security Policy directive: "default-src 'self' 'unsafe-inline' 'unsafe-eval'". Note that 'connect-src' was not explicitly set, so 'default-src' is used as a fallback.
and
[netCheckAndCreateConfiguration: 'PROPFIND https://mydomain:202/calcard.php/principals/'] code: '0' status: 'error' - see https://www.inf-it.com/infcloud/readme.txt (cross-domain setup)
These are my configurations:
InfCloud config.js
var globalNetworkCheckSettings={
    href: 'https://mydomain:202/calcard.php/principals/',
    timeOut: 90000,
    lockTimeOut: 10000,
    //checkContentType: true,
    settingsAccount: true,
    delegation: true,
    additionalResources: [],
    hrefLabel: null,
    forceReadOnly: null,
    ignoreAlarms: false,
    backgroundCalendars: []
}
var globalUseJqueryAuth=true;
And the nginx configuration:
location /cal {
    root /path/to/infcloud;
    dav_methods PUT DELETE MKCOL COPY MOVE;
    dav_ext_methods PROPFIND OPTIONS;
    index index.html;
    access_log /var/log/nginx/infcloud.access.log;
    error_log /var/log/nginx/infcloud.error.log;
}
You are now running Baikal on https://mydomain:202 and InfCloud on https://mydomain:443. That is, you are running a cross-domain setup and are hitting the protection mechanisms associated with such a setup.
There are a number of ways to overcome the problem, and they are fairly well explained in the readme.txt referenced in the InfCloud error message shown in your original post.
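One common approach is to make the setup same-origin by proxying Baikal through the same vhost that serves InfCloud, so the browser only ever talks to https://mydomain. A minimal sketch, using an illustrative /baikal/ prefix (not from your config):

location /baikal/ {
    # The trailing slash on proxy_pass strips the /baikal/ prefix
    # before forwarding to the Baikal instance on port 202.
    proxy_pass https://mydomain:202/;
    proxy_set_header Host $host;
}

With that in place, href in config.js would point at https://mydomain/baikal/calcard.php/principals/, the request is no longer cross-origin, and neither the CSP connect-src fallback nor InfCloud's cross-domain check should trigger.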
I'm having issues understanding why my (session) cookie won't be set client-side. The error appearing in the devtools is the following:
This attempt to set a cookie via Set-Cookie header was blocked because its Domain attribute was invalid with regards to the current host url.
I did a bit of research; it turns out it's a domain issue, since the frontend (Firebase) and backend (Cloud Run) are on different domain names.
What puzzles me is that this issue doesn't arise when my frontend is running on localhost (even though the backend is still remote, on Cloud Run).
Here's the way I configured my session:
app.set('trust proxy', 1);
app.use(json());
app.use(
    session({
        name: '__session',
        store: new RedisStore({ client: redisClient }),
        secret: options.sessionSecret,
        resave: false,
        saveUninitialized: false,
        cookie: {
            secure: process.env.NODE_ENV === 'PROD' ? true : 'auto',
            httpOnly: true,
            maxAge: 1000 * 60 * 60 * 24 * 7,
            sameSite: process.env.NODE_ENV === 'PROD' ? 'none' : 'lax',
            domain: '<FRONTEND_URL>',
        },
    })
);
I feel like the domain property is incorrect, yet I have tried the frontend domain, the backend domain, and the backend's root domain (run.app).
Am I missing something here? Or maybe misunderstanding something?
EDIT:
As you can see, Secure; SameSite=None is provided in the cookie.
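For what it's worth: a Set-Cookie Domain attribute may only name the host that sends the response (or a registrable parent of it), so the backend can never scope a cookie to the frontend's domain, and run.app is on the Public Suffix List, so browsers reject Domain=run.app as well. A minimal sketch of the usual fix, which is to omit domain entirely so the cookie is host-only for the backend:

cookie: {
    secure: true,      // required with SameSite=None
    httpOnly: true,
    maxAge: 1000 * 60 * 60 * 24 * 7,
    sameSite: 'none',  // lets the cross-site XHR/fetch send it
    // no `domain` property: the cookie is host-only for the Cloud Run
    // host, which is what a cross-site setup needs
},

The frontend then has to send requests with credentials (e.g. fetch(..., { credentials: 'include' })), and the backend's CORS response must echo the exact calling origin together with Access-Control-Allow-Credentials: true.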
I created a simple Vue3 app, and I'm trying to call another local API (on a different port) on my machine. To better replicate the production server environment, I'm making a call to a relative API path. That means I need to use a proxy on the vite server to forward the API request to the correct localhost port for my local development. I defined my vite proxy like this in my vite.config.ts file:
import { fileURLToPath, URL } from "node:url";

import { defineConfig } from "vite";
import vue from "@vitejs/plugin-vue";
import basicSsl from '@vitejs/plugin-basic-ssl'

// https://vitejs.dev/config/
export default defineConfig({
    plugins: [
        basicSsl(),
        vue()
    ],
    resolve: {
        alias: {
            "@": fileURLToPath(new URL("./src", import.meta.url)),
        },
    },
    server: {
        https: true,
        proxy: {
            '/api': {
                target: 'https://localhost:44326', // The API is running locally via IIS on this port
                changeOrigin: true,
                rewrite: (path) => path.replace(/^\/api/, '') // The local API has a slightly different path
            }
        }
    }
});
I'm successfully calling my API from the Vue app, but I get this error in the command line where I'm running the vite server:
5:15:14 PM [vite] http proxy error:
Error: self signed certificate
at TLSSocket.onConnectSecure (node:_tls_wrap:1530:34)
at TLSSocket.emit (node:events:526:28)
at TLSSocket._finishInit (node:_tls_wrap:944:8)
at TLSWrap.ssl.onhandshakedone (node:_tls_wrap:725:12)
I already added the basic ssl package, and I don't particularly want to install the other NPM package that is in the top-voted answer. Why does the vite server complain about a self-signed certificate when I'm trying to call another API on my local machine? What can I do to fix this?
You could try secure: false, which tells the proxy to skip TLS certificate verification for the upstream target (acceptable for a local, self-signed IIS certificate):
server: {
    https: true,
    proxy: {
        '/api': {
            target: 'https://localhost:44326', // The API is running locally via IIS on this port
            changeOrigin: true,
            secure: false,
            rewrite: (path) => path.replace(/^\/api/, '') // The local API has a slightly different path
        }
    }
}
The full set of options is available at https://github.com/http-party/node-http-proxy#options
Options
httpProxy.createProxyServer supports the following options:
target: url string to be parsed with the url module
forward: url string to be parsed with the url module
agent: object to be passed to http(s).request (see Node's https agent and http agent objects)
ssl: object to be passed to https.createServer()
ws: true/false, if you want to proxy websockets
xfwd: true/false, adds x-forward headers
secure: true/false, if you want to verify the SSL Certs
toProxy: true/false, passes the absolute URL as the path (useful for proxying to proxies)
prependPath: true/false, Default: true - specify whether you want to prepend the target's path to the proxy path
ignorePath: true/false, Default: false - specify whether you want to ignore the proxy path of the incoming request (note: you will have to append / manually if required).
localAddress: Local interface string to bind for outgoing connections
changeOrigin: true/false, Default: false - changes the origin of the host header to the target URL
preserveHeaderKeyCase: true/false, Default: false - specify whether you want to keep letter case of response header key
auth: Basic authentication i.e. 'user:password' to compute an Authorization header.
hostRewrite: rewrites the location hostname on (201/301/302/307/308) redirects.
autoRewrite: rewrites the location host/port on (201/301/302/307/308) redirects based on requested host/port. Default: false.
protocolRewrite: rewrites the location protocol on (201/301/302/307/308) redirects to 'http' or 'https'. Default: null.
cookieDomainRewrite: rewrites domain of set-cookie headers. Possible values:
false (default): disable cookie rewriting
String: new domain, for example cookieDomainRewrite: "new.domain". To remove the domain, use cookieDomainRewrite: "".
Object: mapping of domains to new domains, use "*" to match all domains.
For example keep one domain unchanged, rewrite one domain and remove other domains:
cookieDomainRewrite: {
    "unchanged.domain": "unchanged.domain",
    "old.domain": "new.domain",
    "*": ""
}
cookiePathRewrite: rewrites path of set-cookie headers. Possible values:
false (default): disable cookie rewriting
String: new path, for example cookiePathRewrite: "/newPath/". To remove the path, use cookiePathRewrite: "". To set path to root use cookiePathRewrite: "/".
Object: mapping of paths to new paths, use "*" to match all paths.
For example, to keep one path unchanged, rewrite one path and remove other paths:
cookiePathRewrite: {
    "/unchanged.path/": "/unchanged.path/",
    "/old.path/": "/new.path/",
    "*": ""
}
headers: object with extra headers to be added to target requests.
proxyTimeout: timeout (in millis) for outgoing proxy requests
timeout: timeout (in millis) for incoming requests
followRedirects: true/false, Default: false - specify whether you want to follow redirects
selfHandleResponse: true/false, if set to true, none of the webOutgoing passes are called and it's your responsibility to appropriately return the response by listening and acting on the proxyRes event
buffer: stream of data to send as the request body. Maybe you have some middleware that consumes the request stream before proxying it on, e.g. if you read the body of a request into a field called 'req.rawBody', you could re-stream this field in the buffer option:
'use strict';

const streamify = require('stream-array');
const HttpProxy = require('http-proxy');
const proxy = new HttpProxy();

module.exports = (req, res, next) => {
    proxy.web(req, res, {
        target: 'http://localhost:4003/',
        buffer: streamify(req.rawBody)
    }, next);
};
I am using OpenResty to generate SSL certificates dynamically.
I am trying to find out the request's user-agent before ssl_certificate_by_lua_block runs, and decide whether I want to continue with the request or not.
I found out that the ssl_client_hello_by_lua_block directive runs before ssl_certificate_by_lua_block, but if I try to execute ngx.req.get_headers()["user-agent"] inside ssl_client_hello_by_lua_block, I get the following error:
2022/06/13 09:20:58 [error] 31918#31918: *18 lua entry thread aborted: runtime error: ssl_client_hello_by_lua:6: API disabled in the current context
stack traceback:
coroutine 0:
[C]: in function 'error'
/usr/local/openresty/lualib/resty/core/request.lua:140: in function 'get_headers'
ssl_client_hello_by_lua:6: in main chunk, context: ssl_client_hello_by_lua*, client: 1.2.3.4, server: 0.0.0.0:443
I tried rewrite_by_lua_block, but it runs after ssl_certificate_by_lua_block.
Is there any directive that lets me access ngx.req.get_headers()["user-agent"] and runs before ssl_certificate_by_lua_block?
My nginx conf, for reference:
nginx.conf
# HTTPS server
server {
    listen 443 ssl;

    rewrite_by_lua_block {
        local user_agent = ngx.req.get_headers()["user-agent"]
        ngx.log(ngx.ERR, "rewrite_by_lua_block user_agent -- > ", user_agent)
    }

    ssl_client_hello_by_lua_block {
        ngx.log(ngx.ERR, "I am from ssl_client_hello_by_lua_block")
        local ssl_clt = require "ngx.ssl.clienthello"
        local host, err = ssl_clt.get_client_hello_server_name()
        ngx.log(ngx.ERR, "hosts -- > ", host)
        -- local user_agent = ngx.req.get_headers()["user-agent"]
        -- ngx.log(ngx.ERR, "user_agent -- > ", user_agent)
    }

    ssl_certificate_by_lua_block {
        auto_ssl:ssl_certificate()
    }

    ssl_certificate /etc/ssl/resty-auto-ssl-fallback.crt;
    ssl_certificate_key /etc/ssl/resty-auto-ssl-fallback.key;

    location / {
        proxy_pass http://backend_proxy$request_uri;
    }
}
In case someone is facing the same issue:
The OpenResty mailing list helped me out here.
I was not thinking correctly. The certificate negotiation happens before the client sends any HTTP data; the User-Agent header only arrives with the HTTP request, after the TLS handshake completes. So you can't skip issuing the certificate based on it. Hard luck.
Once the handshake (including the Client/Server Hello) is done and the request arrives, the server has the User-Agent, and you can do the blocking in access_by_lua_block, as sketched below.
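A minimal sketch of such a block; the "badbot" substring and the location are illustrative:

location / {
    access_by_lua_block {
        local ua = tostring(ngx.req.get_headers()["user-agent"] or "")
        -- block requests whose User-Agent contains the unwanted substring
        if ua:lower():find("badbot", 1, true) then
            return ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }
    proxy_pass http://backend_proxy$request_uri;
}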
I have two servers:
NGINX (it exchanges a file id for a file path)
Golang (it accepts a file id and returns its path)
For example: when the browser makes a request to https://example.com/file?id=123, NGINX should proxy this request to the Golang server https://go.example.com/getpath?file_id=123, which will return this response to NGINX:
{
    data: {
        filePath: "/static/..."
    },
    status: "ok"
}
Then NGINX should take the value of filePath and return the file from that location.
So the question is: how do I read the response (get filePath) in NGINX?
I assume you are a software developer and have full control over your application, so there is no need to force a square peg into a round hole here.
Various kinds of reverse proxies support ESI (Edge Side Includes), a technology that lets the developer replace parts of the response body with the content of static files or with response bodies from upstream servers.
Nginx has such a technology as well. It is called SSI (Server Side Includes).
location /file {
    ssi on;
    proxy_pass http://go.example.com;
}
Your upstream server can produce a body containing <!--# include file="/path-to-static-files/some-static-file.ext" --> and nginx will replace this in-body directive with the content of that file.
But you mentioned streaming...
That means files may be of arbitrary size, and building the response with SSI would eat precious RAM, so we need a Plan B.
There is a "good enough" method to feed big files to clients without exposing the static location of the file.
You can use nginx's error handler to serve static files based on information supplied by the upstream server.
The upstream server can, for example, send back a 302 redirect with a Location header containing the real path to the file.
This response never reaches the client; it is fed into the error handler instead.
Here is an example config:
location /file {
    error_page 302 = @service_static_file;
    proxy_intercept_errors on;
    proxy_set_header Host $host;
    proxy_pass http://go.example.com;
}

location @service_static_file {
    root /hidden-files;
    try_files $upstream_http_location 404.html;
}
With this method you will be able to serve files without overloading your system, while keeping control over whom you give the files to.
For this to work, your upstream server should respond with status 302 and a typical Location header, and nginx will use its content to find the file under the "new" root for static files.
The reason this method is only "good enough" (rather than perfect) is that it does not support partial requests (i.e. Range: bytes ...).
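For completeness, a minimal sketch of what the Golang side of this scheme might look like; lookupPath is a hypothetical stand-in for your real id-to-path lookup:

package main

import "net/http"

// lookupPath is hypothetical; wire it to your real storage lookup.
func lookupPath(id string) string {
    if id == "123" {
        return "/static/example.bin"
    }
    return ""
}

func main() {
    http.HandleFunc("/getpath", func(w http.ResponseWriter, r *http.Request) {
        path := lookupPath(r.URL.Query().Get("file_id"))
        if path == "" {
            http.NotFound(w, r)
            return
        }
        // nginx intercepts this 302 (proxy_intercept_errors) and resolves
        // the Location value against the hidden static root itself.
        w.Header().Set("Location", path)
        w.WriteHeader(http.StatusFound)
    })
    http.ListenAndServe(":8080", nil)
}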
It looks like you want to make an API call for data to run decisions and logic against. That's not quite what proxying is about.
The core proxying ability of nginx is not designed for what you are trying to do.
Possible workaround: extending nginx...
Nginx + PHP
Your PHP code would do the legwork: it acts as a client connecting to the Golang server and applies additional logic to the response.
<?php
$response = file_get_contents('https://go.example.com/getpath?file_id='.$_GET["id"]);
preg_match_all("/filePath: \"(.*?)\"/", $response, $filePath);
readfile($filePath[1][0]);
?>
location /getpath {
    try_files /getpath.php =404;
}
This is just a pseudo-code example to get it rolling.
Some miscellaneous observations / comments:
The Golang response shown above isn't valid JSON; if the real response is, replace preg_match_all with json_decode.
readfile is not super efficient. Consider being creative with a 302 response.
Nginx + Lua
sites-enabled:
lua_package_path "/etc/nginx/conf.d/lib/?.lua;;";

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location /getfile {
        root /var/www/html;
        resolver 8.8.8.8;
        set $filepath "/index.html";
        access_by_lua_file /etc/nginx/conf.d/getfile.lua;
        try_files $filepath =404;
    }
}
Test if lua is behaving as expected:
getfile.lua (v1)
ngx.var.filepath = "/static/...";
Simplify the Golang response body to just return a plain path, then use it to set filepath:
getfile.lua (v2)
local http = require "resty.http"
local httpc = http.new()
local query_string = ngx.req.get_uri_args()

local res, err = httpc:request_uri('https://go.example.com/getpath?file_id=' .. query_string["id"], {
    method = "GET",
    keepalive_timeout = 60,
    keepalive_pool = 10
})

if res and res.status == ngx.HTTP_OK then
    local body = string.gsub(res.body, '[\r\n%z]', '')
    ngx.var.filepath = body
    ngx.log(ngx.ERR, "[" .. body .. "]")
else
    ngx.log(ngx.ERR, "missing response")
    ngx.exit(504)
end
resty.http
mkdir -p /etc/nginx/conf.d/lib/resty
wget "https://raw.githubusercontent.com/ledgetech/lua-resty-http/master/lib/resty/http_headers.lua" -P /etc/nginx/conf.d/lib/resty
wget "https://raw.githubusercontent.com/ledgetech/lua-resty-http/master/lib/resty/http.lua" -P /etc/nginx/conf.d/lib/resty
I'm calling this function from my ASP.NET form and getting the following error in the Firebug console when the AJAX call runs:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://anotherdomain/test.json. (Reason: CORS header 'Access-Control-Allow-Origin' missing).
var url = 'http://anotherdomain/test.json';

$.ajax({
    url: url,
    crossOrigin: true,
    type: 'GET',
    xhrFields: { withCredentials: true },
    accept: 'application/json'
}).done(function (data) {
    alert(data);
}).fail(function (xhr, textStatus, error) {
    var title, message;
    switch (xhr.status) {
        case 403:
            title = xhr.responseJSON.errorSummary;
            message = 'Please login to your server before running the test.';
            break;
        default:
            title = 'Invalid URL or Cross-Origin Request Blocked';
            message = 'You must explicitly add this site (' + window.location.origin + ') to the list of allowed websites in your server.';
            break;
    }
});
I've tried alternate ways but am still unable to find a solution.
Note: I have no server rights to make server-side (API/URL) changes.
This generally happens when you try to access another domain's resources.
It is a security feature that prevents everyone from freely accessing any resource of that domain (which could be abused, for example, to publish an exact copy of your website on a pirate domain).
Even if the response is 200 OK, its headers do not allow other origins (domain, port) to access the resources.
You can fix this problem if you are the owner of both domains:
Solution 1: via .htaccess
To change that, you can write this in the .htaccess file of the requested domain:
<IfModule mod_headers.c>
    Header set Access-Control-Allow-Origin "*"
</IfModule>
If you only want to give access to one domain, the .htaccess should look like this:
<IfModule mod_headers.c>
    Header set Access-Control-Allow-Origin 'https://my-domain.example'
</IfModule>
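If the header still doesn't appear, the Apache headers module may simply not be enabled; on Debian/Ubuntu-style installs that is typically done with (commands are distro-specific):

sudo a2enmod headers
sudo systemctl reload apache2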
Solution 2: set headers the correct way
If you set this in the response header of the requested file, you will allow everyone to access the resources:
Access-Control-Allow-Origin: *
OR
Access-Control-Allow-Origin: http://www.my-domain.example
Server-side, put this at the top of your .php file:
header('Access-Control-Allow-Origin: *');
You can also restrict access to a specific domain:
header('Access-Control-Allow-Origin: https://www.example.com');
In your AJAX request, adding:
dataType: "jsonp",
after the line:
type: 'GET',
should solve this problem (note that JSONP requires the server to actually return a JSONP-wrapped response, and it works only for GET requests).
Hope this helps.
If you are using Express.js on the backend, you can install the cors package and then use it in your server like this:
const cors = require("cors");
app.use(cors());
This fixed my issue
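Since the original request sets withCredentials: true, note that cors() also accepts options for an explicit origin and credentials support. A sketch, with an illustrative origin:

app.use(cors({
    origin: 'http://your-frontend.example', // must match the calling origin exactly
    credentials: true // adds Access-Control-Allow-Credentials: true
}));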
This worked for me:
Create a PHP file that downloads the content of the other domain's page without using JS:
<?php
// file name: your_php_page.php
echo file_get_contents('http://anotherdomain/test.json');
?>
Then call it via AJAX (jQuery). Example:
$.ajax({
    url: "your_php_page.php",
    // optional data might be useful
    //type: 'GET',
    //dataType: "jsonp",
    //dataType: 'xml',
    context: document.body
}).done(function (data) {
    alert(data);
});
You have to modify your server-side code, as given below:
public class CorsResponseFilter implements ContainerResponseFilter {
    @Override
    public void filter(ContainerRequestContext requestContext, ContainerResponseContext responseContext)
            throws IOException {
        responseContext.getHeaders().add("Access-Control-Allow-Origin", "*");
        responseContext.getHeaders().add("Access-Control-Allow-Methods", "GET, POST, DELETE, PUT");
    }
}
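Depending on your JAX-RS setup, the filter may also need to be registered before it takes effect; a common way, assuming classpath scanning is enabled, is the @Provider annotation (a sketch with the imports spelled out):

import java.io.IOException;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;

@Provider // lets the JAX-RS runtime discover the filter automatically
public class CorsResponseFilter implements ContainerResponseFilter {
    @Override
    public void filter(ContainerRequestContext requestContext, ContainerResponseContext responseContext)
            throws IOException {
        responseContext.getHeaders().add("Access-Control-Allow-Origin", "*");
        responseContext.getHeaders().add("Access-Control-Allow-Methods", "GET, POST, DELETE, PUT");
    }
}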
By now, after going through the answers above, you should have an idea of why you are getting this problem.
self.send_header('Access-Control-Allow-Origin', '*')
You just have to add the above line on your server side.
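That line is for Python's built-in HTTP server; a minimal sketch of where it goes (handler name and payload are illustrative):

from http.server import BaseHTTPRequestHandler, HTTPServer

class CORSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # the header that unblocks cross-origin readers
        self.send_header('Access-Control-Allow-Origin', '*')
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(b'{"ok": true}')

HTTPServer(('', 8000), CORSHandler).serve_forever()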
In a pinch, you can use this Chrome extension to disable CORS in your local browser.
Allow CORS: Access-Control-Allow-Origin Chrome Extension