How to use TURN Server in Ant Media Server?

We have issues with using Ant Media Server due to firewall restrictions with many customers. If I use a TURN server, can I solve this problem (for publish/play)? We are using the Conference Room type in our code.
Thanks.

You can enable a TURN server on the publishing and playing pages. For instance, on your publishing page (/usr/local/antmedia/webapps/WebRTCAppEE/index.html), there is a pc_config JavaScript variable like this:
var pc_config = {
    'iceServers' : [ {
        'urls' : 'stun:stun.l.google.com:19302'
    } ]
};
You can change its value according to your TURN configuration, like below:
var pc_config = {
    iceServers: [ {
        urls: "turn:{TURN_SERVER_URL}",
        username: "{TURN_SERVER_USERNAME}",
        credential: "{TURN_SERVER_PASS}"
    } ]
};
UPDATE
Ant Media Server v2.4.4 and later versions support adding a TURN server on the server side. To do that, follow these instructions.
Edit your application's configuration file (/usr/local/antmedia/webapps/{YOUR_APP_FOLDER}/WEB-INF/red5-web.properties) with your favorite text editor (vi, nano, etc.).
Add the following properties:
settings.webrtc.stunServerURI=turn:WRITE_YOUR_TURN_SERVER_URL
settings.webrtc.turnServerUsername=WRITE_YOUR_TURN_SERVER_USERNAME
settings.webrtc.turnServerCredential=WRITE_YOUR_TURN_SERVER_PASSWORD
Save the file and restart the Ant Media Server
sudo service antmedia restart
You can set a custom STUN server via the same settings.webrtc.stunServerURI property. Just don't forget to start it with the stun: prefix. If you don't have a username or password, you can leave those fields blank.
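For example, a minimal STUN-only setup (no credentials) in red5-web.properties would look like this:
settings.webrtc.stunServerURI=stun:stun.l.google.com:19302
settings.webrtc.turnServerUsername=
settings.webrtc.turnServerCredential=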

Related

CORS issue when calling API via Office Scripts Fetch

I am trying to make an API call via Office Scripts (fetch) to a publicly available Azure Function-based API I created. By policy we need to have CORS on for our Azure Functions. I've tried every domain I could think of, but I can't get the call to work unless I allow all origins. I've tried:
https://ourcompanydoamin.sharepoint.com
https://usc-excel.officeapps.live.com
https://browser.pipe.aria.microsoft.com
https://browser.events.data.microsoft.com
The first is the Excel Online domain I'm trying to execute from, and the rest came up during the script run in Chrome's Network tab. The error message in Office Scripts doesn't tell me the domain the request is coming from like Chrome's console does. What host do I need to allow for Office Scripts to be able to make calls to my API?
The expected CORS setting for this is: https://*.officescripts.microsoftusercontent.com.
However, Azure Functions CORS doesn't support wildcard subdomains at the moment. If you try to set an origin with wildcard subdomains, you will get an error.
One possible workaround is to explicitly maintain an "allow-list" in your Azure Functions code. Here is a proof-of-concept implementation (assuming you use Node.js for your Azure Functions):
module.exports = async function (context, req) {
    // List your allowed hosts here. Anchor the patterns so that origins
    // which merely contain these strings don't slip through.
    const allowedHosts = [
        /^https:\/\/www\.myserver\.com$/,
        /^https:\/\/[^.]+\.officescripts\.microsoftusercontent\.com$/
    ];
    if (!allowedHosts.some(host => host.test(req.headers.origin))) {
        context.res = {
            status: 403, /* Forbidden */
            body: "Not allowed!"
        };
        return;
    }
    // Handle the normal request and generate the expected response.
    context.res = {
        status: 200,
        body: "Allowed!"
    };
};
Please note:
Regular expressions are needed to match the dynamic subdomains.
In order to do the origin check within the code, you'll need to set * as the Allowed Origins on your Functions CORS settings page.
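If you manage the Function App from the command line instead of the portal, the same wildcard origin can be set with the Azure CLI (the app and resource group names below are placeholders):
az functionapp cors add \
    --name my-function-app \
    --resource-group my-resource-group \
    --allowed-origins "*"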
Or if you want to build your service with ASP.NET Core, you can do something like this: https://stackoverflow.com/a/49943569/6656547.

Keycloak starts with a new realm and some client configurations

I'm trying to use Keycloak as the authentication service in my design. In my case, when Keycloak starts, I need one more realm besides the default master realm. Assume the new realm is called "demo".
So when Keycloak starts, it should have two realms (master and demo).
In addition, in the demo realm I need to configure the default client "admin-cli" to enable "Full Scope Allowed", and I also need to add some built-in mappers to this client.
In this case, I wonder whether I can use something like an initialization file which Keycloak can load when starting?
Or do I need to use the Keycloak client APIs to do these operations (e.g., the Java Keycloak admin client)?
Thanks in advance.
You can try the following:
Create the Realm;
Set all the options that you want;
Go to Manage > Export;
Switch Export groups and roles to ON;
Switch Export clients to ON;
Export.
That will export a .json file with the configurations.
Then you can test it by deleting your Demo Realm and:
Go to Add Realm;
Choose the .json file that was exported;
Click Create.
Check if the configurations that you changed are still present on the Demo Realm. If they are, it means you can use this file to import the realm from. Otherwise, for the options that were not persisted, you will have to create them via the Admin REST API.
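If you would rather have the demo realm created automatically on startup instead of importing it through the console, you can point Keycloak at the exported file when it boots. The exact mechanism depends on your distribution; the commands below are illustrative (adjust paths and image names to your setup):
# WildFly-based standalone server
bin/standalone.sh -Dkeycloak.import=/path/to/demo-realm.json

# jboss/keycloak Docker image
docker run -e KEYCLOAK_IMPORT=/tmp/demo-realm.json \
    -v /path/to/demo-realm.json:/tmp/demo-realm.json jboss/keycloak

# Newer Quarkus-based distributions instead use:
# bin/kc.sh start --import-realm   (with the .json placed in data/import)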

Is it possible to set up a custom hostname for AWS Transfer SFTP via Terraform?

I'm trying to set up an SFTP server with a custom hostname using AWS Transfer. I'm managing the resource using Terraform. I've currently got the resource up and running, and I've used Terraform to create a Route53 record to point to the SFTP server, but the custom hostname entry on the SFTP dashboard is reading as blank.
And of course, when I create the server manually through the AWS console and associate a Route53 record with it, it looks like what I would expect.
I've looked through the terraform resource documentation and I've tried to see how it might be done via aws cli or cloudformation, but I haven't had any luck.
My server resource looks like:
resource "aws_transfer_server" "sftp" {
identity_provider_type = "SERVICE_MANAGED"
logging_role = "${aws_iam_role.logging.arn}"
force_destroy = "false"
tags {
Name = ${local.product}-${terraform.workspace}"
}
}
and my Route53 record looks like:
resource "aws_route53_record" "dns_record_cname" {
zone_id = "${data.aws_route53_zone.sftp.zone_id}"
name = "${local.product}-${terraform.workspace}"
type = "CNAME"
records = ["${aws_transfer_server.sftp.endpoint}"]
ttl = "300"
}
Functionally, I can move forward with what I have, since I can connect to the server with my DNS, but I'm trying to understand the complete picture.
From the AWS documentation:
When you create a server using AWS Cloud Development Kit (AWS CDK) or through the CLI, you must add a tag if you want that server to have a custom hostname. When you create a Transfer Family server by using the console, the tagging is done automatically.
So you need to be able to add those tags using Terraform. In v4.35.0 of the AWS provider, support was added for a new resource: aws_transfer_tag.
An example supplied in the GitHub issue (I haven't tested it personally yet):
resource "aws_transfer_server" "with_custom_domain" {
# config here
}
resource "aws_transfer_tag" "with_custom_domain_route53_zone_id" {
resource_arn = aws_transfer_server.with_custom_domain.arn
key = "aws:transfer:route53HostedZoneId"
value = "/hostedzone/ABCDE1111222233334444"
}
resource "aws_transfer_tag" "with_custom_domain_name" {
resource_arn = aws_transfer_server.with_custom_domain.arn
key = "aws:transfer:customHostname"
value = "abc.example.com"
}

How to create an insecure Jupyter server

Jupyter only allows access from localhost unless I do a bunch of extra security stuff. I am running my server so that it is only accessible on a local network where anyone with access is equal in trustworthiness to localhost. How do I set up a Jupyter notebook server with no extra security features?
Based on your question, I expect you want this configuration (in ~/.jupyter/jupyter_notebook_config.py):
c.NotebookApp.ip = '0.0.0.0' # listen on all IPs
c.NotebookApp.token = '' # disable authentication
There are a few security features in Jupyter (as of 4.3.1). I'll go over how to disable each one, and whether/when it makes sense to disable it:
It listens only on localhost. This can be changed to all public IP addresses:
c.NotebookApp.ip = '0.0.0.0'
Listening on public IPs should generally come with enabling HTTPS and/or password or token authentication (docs). If it's all internal on a trusted network where nothing bad ever happens, you can proceed to disable other security features:
Token authentication is enabled by default. To disable it:
c.NotebookApp.token = ''
Disabling authentication means that anyone with access to the host can run code. It seems like this is what you want. You can also enable a password:
In [1]: from notebook.auth import passwd
In [2]: passwd()
Enter password:
Verify password:
Out[2]: 'sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed'
You can store this in c.NotebookApp.password.
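For example, in ~/.jupyter/jupyter_notebook_config.py:
c.NotebookApp.password = 'sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed'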
You can also store this password in ~/.jupyter/jupyter_notebook_config.json:
{
  "NotebookApp": {
    "password": "sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed"
  }
}
Jupyter also has CORS protections, to prevent other websites from being able to access this server. This means that when a user on your network visits example.com, JavaScript on that page cannot execute code on your notebook server. It sounds like you don't want to touch this, but if you are running a service that should be able to access the notebook server, you can add it to the allowed origins:
c.NotebookApp.allow_origin = 'https://your.other.host'
Finally, Jupyter 4.3.1 introduces an xsrf token, which is part of dealing with the same category of cross-site execution issues as above. You don't need to touch this if users are only accessing the server directly, rather than through JavaScript on additional websites.
c.NotebookApp.disable_check_xsrf = True
A completely insecure notebook server, that is, one where any website can run code on it as long as a browser can connect to its host (this would include localhost or the LAN if the browser is running from inside the LAN), looks like this:
c.NotebookApp.ip = '0.0.0.0' # listen on all IPs
c.NotebookApp.token = '' # disable authentication
c.NotebookApp.allow_origin = '*' # allow access from anywhere
c.NotebookApp.disable_check_xsrf = True # allow cross-site requests
This might be desirable if you are aiming to make compute resources free for the world to use however they want via the notebook API.
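If you would rather not edit the config file at all, the same fully open setup can be passed as command-line overrides (shown for the classic notebook server of that era; flag names may differ in newer releases):
jupyter notebook --ip=0.0.0.0 \
    --NotebookApp.token='' \
    --NotebookApp.allow_origin='*' \
    --NotebookApp.disable_check_xsrf=True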

Can I request scripts for use in a Spotify app?

I'm trying to use socket.io in my Spotify app and the GET request for [domain]/socket.io/socket.io.js keeps getting canceled. I've added the domain to the manifest and everything.
Thanks!
Try restarting Spotify. Your app's manifest.json file is loaded when you first view your app, and cached until you quit, even if you modify it.
Note: How external resource permissions work
In order to request external resources, your application needs to specify each domain it plans to connect to in its manifest.json file.
Add a line like this:
{
  // ...
  "RequiredPermissions": [ "http://*.spotify.com", "http://spotify.com", "http://test.example.com" ]
  // ...
}
For the full details check out the Permissions section of the Spotify Apps API Guide.
I can add that when you use socket.io, it will try to initialize Flash to check whether Flash is available. So if you find a white box in Spotify (only on Windows), remove the swfobject initialization in socket.io.js on the Node server.
