How to list all active rooms in Janus with the SFU plugin?

I'm trying to build a simple A-Frame chat hub, similar to Mozilla Hubs, using networked-aframe with Janus as the adapter. I've installed everything on one server and everything is working fine.
However, I want to set a limit on the maximum number of users connected to one 'room' (otherwise the browser might crash from rendering too many avatars) and then automatically redirect new users to a new, randomly generated room.
Is there a way to do this using the available Janus API? So far I've tried this Janus signalling API for the SFU plugin, because it's the only reference that mentions how to get the number of users in a room, although not directly.
The example request body:
{
  "kind": "join",
  "room_id": room ID,
  "user_id": user ID,
  "subscribe": [none|subscription object]
}
The example result - but I don't think this achieves what I want, because I need a list of ALL rooms, not just one:
{
  "success": true,
  "response": {
    "users": { "room_alpha": ["123", "789"] }
  }
}
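The closest workaround I can think of with this API: inspect the users map that comes back on join, and if the room is already at capacity, leave and retry with a randomly generated room ID. A minimal client-side sketch, where joinRoom/leaveRoom are hypothetical wrappers around the adapter's signalling and MAX_USERS is an assumed limit:
// Sketch only: joinRoom/leaveRoom stand in for the adapter's signalling calls,
// and the response shape matches the example above.
const MAX_USERS = 12; // assumed per-room capacity

async function joinWithCapacity(roomId) {
  const res = await joinRoom(roomId); // sends { kind: "join", room_id: roomId, ... }
  const occupants = res.response.users[roomId] || [];
  // If the room is over capacity (the list may or may not include ourselves),
  // back out and retry with a fresh random room id.
  if (occupants.length > MAX_USERS) {
    await leaveRoom(roomId);
    return joinWithCapacity("room-" + Math.random().toString(36).slice(2, 8));
  }
  return res;
}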

Related

Is this login form possible with WebAuthn?

I'm trying to plan a rewrite of my website, and I want to make it so that I can log in passwordless with just Windows Hello, Touch ID, or Face ID using WebAuthn. All the examples online have a whole pop-up situation, but I want it done like my mockup. I also want my website to detect the default biometric and have the biometric icon change to the icon representing the default one, for example, a face icon for Face ID. This website will be built using python-flask, ReactJS, MySQL, CSS, and HTML.
There are a few different points to hit on here:
Pop-up/Modal
We'll start with this one. Unfortunately, the pop-ups that appear during the WebAuthn ceremony are part of the browser's implementation. Every time the get()/create() methods are called, the pop-ups will be invoked. There is some work coming from Google/Apple in their passkey implementations where this will look more like an "autofill" experience, but you will still be required to use their pop-ups.
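For reference, the "autofill" experience mentioned above is exposed through conditional mediation on the same get() call. A minimal sketch, where challengeFromServer stands in for a fresh challenge fetched from your backend:
// Conditional mediation: the browser offers passkeys through the autofill UI
// instead of an immediate modal. Feature-detect before using it.
async function signInWithPasskeyAutofill(challengeFromServer) {
  if (window.PublicKeyCredential &&
      await PublicKeyCredential.isConditionalMediationAvailable()) {
    const credential = await navigator.credentials.get({
      mediation: "conditional",
      publicKey: {
        challenge: challengeFromServer, // an ArrayBuffer from your backend
        rpId: "example.com"
      }
    });
    // send `credential` to the server for verification
    return credential;
  }
}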
Defaulting to Windows Hello, Touch ID, etc.
I'll start by suggesting that you shouldn't constrain your users to only the platform authenticators. Security keys still play a big role in WebAuthn and work really well for signing in across devices. Relying on platform authenticators could limit your users to the device they initially registered with, or limit users who don't have a biometric sensor on their device.
With that being said, you can explicitly invoke the use of only platform authenticators using PublicKeyCredentialCreationOptions. In the property authenticatorSelection there is a field authenticatorAttachment. If you set this field to "platform", then your platform authenticator will be invoked (if one is available).
Here's an example of the request sent by the relying party (note the property authenticatorSelection towards the bottom):
{
  "publicKey": {
    "rp": {
      "name": "Example Inc",
      "id": "example.com"
    },
    "user": {
      "name": "user",
      "displayName": "user",
      "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    },
    "challenge": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
    "pubKeyCredParams": [***],
    "excludeCredentials": [***],
    "authenticatorSelection": {
      "authenticatorAttachment": "platform",
      "residentKey": "preferred",
      "userVerification": "preferred"
    },
    "attestation": "direct",
    "extensions": {}
  }
}
Detecting default biometric
I have a React example here. Some things to note on this approach:
There are more elegant and accurate ways of determining what platform the user is on. This snippet will work a majority of the time, but there is a lot of assumption happening based only on the detected OS.
There are no icons included; I would suggest adding an imgSrc field to the enums that includes a link to the source image, as roughly illustrated below.
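As a rough illustration of that approach (the labels, icon paths, and user-agent checks are all placeholder assumptions):
// Naive OS sniffing to pick a default biometric hint. As noted above, this is
// approximate: it guesses from the user agent and nothing more.
const BIOMETRIC_HINTS = {
  WINDOWS: { label: "Windows Hello", imgSrc: "/icons/windows-hello.svg" },
  MACOS: { label: "Touch ID", imgSrc: "/icons/touch-id.svg" },
  IOS: { label: "Face ID", imgSrc: "/icons/face-id.svg" },
  ANDROID: { label: "Fingerprint", imgSrc: "/icons/fingerprint.svg" },
  OTHER: { label: "Passkey", imgSrc: "/icons/passkey.svg" }
};

function detectBiometricHint() {
  const ua = navigator.userAgent;
  if (/Windows/.test(ua)) return BIOMETRIC_HINTS.WINDOWS;
  if (/iPhone|iPad/.test(ua)) return BIOMETRIC_HINTS.IOS;
  if (/Macintosh/.test(ua)) return BIOMETRIC_HINTS.MACOS;
  if (/Android/.test(ua)) return BIOMETRIC_HINTS.ANDROID;
  return BIOMETRIC_HINTS.OTHER;
}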
Hope this helps.

Cosmos Db library Microsoft.Azure.DocumentDB.Core (2.1.0) - Actual REST invocations

We are attempting to mock CosmosDb invocations with WireMock (https://github.com/WireMock-Net/WireMock.Net) so we can build integration tests in our .NET Core 2.1 microservice.
By looking at the WireMock instance Request/Response entries, we can observe the following:
1) GET towards "/"
We mock the returned metadata of databases
THIS IS OK
2) GET towards collection (in our case: "/dbs/Chunker/colls/RHTMLChunks")
Returns metadata about the collections
THIS IS OK
3) POST a Query that results in one document being returned towards the documents endpoint on the collection (in our case: "/dbs/Chunker/colls/RHTMLChunks/docs")
I have tried to emulate what we get when we do the exact same query towards the CosmosDb instance in Postman, including headers and response.
However I observe that the lib does the query again, and again, and again....
(I can see this by pausing in Visual Studio, then look at the RequestLog in WireMock)
Does anyone know what should be returned? I have set up WireMock to return the following JSON payload:
{
  "_rid": "q0dcAOelSAI=",
  "Documents": [
    {
      "id": "gL20020621z2D34-1",
      "ChunkSize": 658212,
      "TotalChunks": 2,
      "Metadata": {
        "Active": true,
        "PublishedDate": ""
      },
      "ChunkId": 1,
      "Markup": "<h1>hello</h1>",
      "MainDestination": "gL20020621z2D34",
      "_rid": "q0dcAOelSAIHAAAAAAAAAA==",
      "_self": "dbs/q0dcAA==/colls/q0dcAOelSAI=/docs/q0dcAOelSAIHAAAAAAAAAA==/",
      "_etag": "\"0100e92a-0000-0000-0000-5ba96cf70000\"",
      "_attachments": "attachments/",
      "_ts": 1537830135
    }
  ],
  "_count": 0
}
Problems:
1) Cannot find the .pdb belonging to Microsoft.Azure.DocumentDB.Core v2.1.0
2) What payload/headers should be returned so the library will NOT blow up and retry when we invoke:
var response = await documentQuery.ExecuteNextAsync<DocumentDto>(); // this hangs forever
Please help :)
We're working on open sourcing the C# code base and some other fun improvements to make this easier. In the meantime, I'd advocate for using the emulator for local testing etc., although I understand mocking is still a lot faster and nicer - it'll just be hard :)
My best pointer is actually our Node.js code base, since that's public already. The query code is relatively hard to follow, but basically: you create a query, we look up all the partitions we need to talk to, then we send a request for each partition and keep querying until we don't get back a continuation token anymore (or maxBufferedItemCount etc. goes over the limit, and we pause until it goes back down, etc.).
Effectively, we send out N requests for each partition, where N is the number of pages of results, which can vary per partition and query. You'd likely be able to mock a single-partition, single-page response relatively easily, but a full partition response isn't gonna be fun.
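Sketched in JavaScript (mirroring the behavior described above, not our actual SDK code), the per-partition loop is roughly this, where executeQueryPage is a hypothetical function that POSTs the query with an optional continuation token:
// Keep issuing the query until a page comes back without a continuation
// token (the x-ms-continuation response header).
async function drainQuery(executeQueryPage) {
  const results = [];
  let continuation;
  do {
    const page = await executeQueryPage(continuation); // POST to .../docs
    results.push(...page.body.Documents);
    continuation = page.headers["x-ms-continuation"]; // absent on the last page
  } while (continuation);
  return results;
}
So for the WireMock stub, one practical thing to check is that the mocked response does not carry an x-ms-continuation header, since that header is what tells the client another page exists.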
As I mentioned in the beginning, we've got some cool stuff coming, hopefully before the end of the year, which will make offline mocking easier, as well as open sourcing it finally. You might be better off with the emulator until then.

Using webhooks with Google Analytics

I'm trying to integrate my CRM with Google Analytics to monitor lead changes (from lead to sale) and so on. As I understand it, I need to use the Google Measurement Protocol: receive webhooks from the CRM and translate them into Analytics conversions.
But in fact, I don't really understand how to do it. I need to make some script to translate the webhook payload for Analytics, but where do I need to place that script? Are there some templates? And so on.
So, if you know some tutorials/courses/freelancers that could help me with integrating webhooks with Analytics, I need your advice.
Example of webhook from CRM:
{
  "leads": {
    "status": {
      "id": "25399013",
      "name": "Lead title",
      "old_status_id": "7039101",
      "status_id": "142",
      "price": "0",
      "responsible_user_id": "102525",
      "last_modified": "1413554372",
      "modified_user_id": "102525",
      "created_user_id": "102525",
      "date_create": "1413554349",
      "account_id": "7039099",
      "custom_fields": [
        {
          "id": "427183",
          "name": "Checkbox custom field",
          "values": ["1"]
        },
        {
          "id": "427271",
          "name": "Date custom field",
          "values": ["1412380800"]
        },
        {
          "id": "1069602",
          "name": "Checkbox custom field",
          "values": ["0"]
        },
        {
          "id": "427661",
          "name": "Text custom field",
          "values": ["Валера"]
        },
        {
          "id": "1075272",
          "name": "Date custom field",
          "values": ["1413331200"]
        }
      ]
    }
  }
}
"Webhook" is a fancy way of saying that your CRM can call a web based service whenever something interesting happens (i.e. the CRM can "hook" into a web based application). E.g. if a new lead is created you can call an url with the lead details as parameters.
Specifics depend on your CRM, but when you set up a webhook there should be a field to set a url; the script that evaluates the CRM data is located at the URL.
You have that big JSON thing as your example - No real way to tell without knowing your system, but I assume that is sent as request body. So in your script you evaluate the request body, extract the parameters you want to send to analytics (be mindful that you are not allowed to store personally identifiable information) and sent it via the measurement protocol as described in the documentation linked in the other answer.
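As an illustrative sketch of such a script (Express and Node 18+'s built-in fetch are arbitrary choices here; the tid, cid, and event fields are placeholders you'd map from your own data model):
// Minimal webhook receiver that turns a CRM status change into a
// Measurement Protocol hit.
const express = require("express");
const app = express();
app.use(express.json());

app.post("/crm-webhook", async (req, res) => {
  const lead = req.body.leads.status; // shape taken from the example payload above
  const params = new URLSearchParams({
    v: "1",
    tid: "UA-XXXXXX-1", // your GA property id (placeholder)
    cid: lead.id, // ideally a real GA client id linked to this lead
    t: "event",
    ec: "crm",
    ea: "status_change",
    el: lead.name
  });
  await fetch("https://www.google-analytics.com/collect", {
    method: "POST",
    body: params.toString()
  });
  res.sendStatus(200);
});

app.listen(3000);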
Depending on the system, you might even be able to call the Measurement Protocol without having a custom script in between (after all, a Measurement Protocol hit is just a URL with a few parameters).
This is an awfully generic answer, but then the question is really broad.
I've done just this in my line of work.
You need to first decide on your data model, i.e. how you would like the CRM data to look within Google Analytics. This could be just mapping Google Analytics' event category, event label, and event action to your data, or perhaps using custom dimensions and metrics.
Then, to make it most useful, you would like to be able to link the CRM activity of a customer to their online activity. You can do this if they log in online. In that case, you can set the cid and/or uid of the user to your CRM id.
Then, if you send in a GA hit with the same cid/uid in your Measurement Protocol hit, you will link the online sessions with your offline CRM activity.
To make the actual record hit Google Analytics, you will need to program something that takes the CRM data and turns it into a Measurement Protocol hit, which is essentially just a URL with the correct parameters. Look here for reference: https://developers.google.com/analytics/devguides/collection/protocol/v1/reference
An example could be: http://www.google-analytics.com/collect?v=1&tid=UA-123456-1&cid=5555&t=pageview&dp=%2FpageA
We usually have this as a separate process that fires when the CRM data is written to its database (the webhook in your example). If it's a lot of data, you should probably implement checks to see if the hit was successful, and caching in case the service is not online - the optional qt (queue time) parameter gives you 4 hours' leeway in sending data.
Hope this gets you at least started.

Meteor utilities:avatar data

I'd like to use the utilities:avatar package, but I'm having some major reservations.
The docs tell me that I should publish my user data, like this:
Meteor.publish("otherUserData", function() {
  var data = Meteor.users.find({}, {
    fields: {
      "services.twitter.profile_image_url_https": true,
      "services.facebook.id": true,
      "services.google.picture": true,
      "services.github.username": true,
      "services.instagram.profile_picture": true
    }
  });
  return data;
});
If I understand Meteor's publish/subscribe mechanism correctly, this would push these fields for the entire user database to every client! Clearly, this is not a sustainable solution. Equally clearly, however, either I am doing something wrong, or I am misunderstanding something.
Also: This unscalable solution works fine in a browser, but no avatar icons are visible when the app is deployed to a mobile device, for some reason.
Any ideas?
Separate the issue of which fields to publish from which users you want to publish data on.
Presumably you want to show avatars for other users that the current user is interacting with. You need to decide what query to use in
Meteor.users.find(query,{fields: {...}});
so that you narrow down the list from all users to just the pertinent ones.
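For example, a publication narrowed to an explicit set of user ids passed in by the client (a sketch; how you compute the pertinent ids depends on your app):
// Publish the avatar-related fields only for the users the client actually
// needs; `check` comes from Meteor's check package.
Meteor.publish("otherUserData", function(userIds) {
  check(userIds, [String]);
  return Meteor.users.find({
    _id: { $in: userIds }
  }, {
    fields: {
      "services.twitter.profile_image_url_https": true,
      "services.facebook.id": true,
      "services.google.picture": true,
      "services.github.username": true,
      "services.instagram.profile_picture": true
    }
  });
});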
In my app I end up using reywood:publish-composite to publish the users that are related to the current user via an intermediate collection.
The unscalability of utilities:avatar seems, as far as I can tell, to be a real issue, and there isn't much to be done about it except to remove utilities:avatar and rewrite the avatar URL-fetching code by hand.
As for the avatars not appearing on mobile devices, the answer was simply that we needed to grant permission to access remote URLs in mobile-config.js, like this:
App.accessRule("http://*");
App.accessRule("https://*");

Most efficient method of pulling in weather data for multiple locations

I'm working on a Meteor mobile app that displays information about local places of interest, and one of the things I want to show is the weather at each location. I currently have my locations stored with lat/lng coordinates, and they're searchable by radius. I'd like to use the OpenWeatherMap API to pull in some useful 'current conditions' information so that when a user looks at an entry they can see basic weather data. Ideally I'd like to limit the number of outgoing requests to keep the pages snappy (and API requests down).
I'm wondering if I can create a server collection of weather data that I update regularly, server-side (hourly?) that my clients then query (perhaps using a mongo $near lookup?) - that way all of my data is being handled within meteor, rather than each client going out to grab the latest data from the API. I don't want to have to iterate through all of the locations in my list and do a separate call out to the api for each as I have approx. 400 locations(!). I'm afraid I'm new to API requests (and meteor itself) so apologies if this is a poorly phrased question.
I'm not entirely sure if this is doable, or if it's even the best approach - any advice (and links to any useful code snippets!) would be greatly appreciated!
EDIT / UPDATE!
OK, I haven't managed to get this working yet, but I have some more useful details on the data!
If I make a request to the OpenWeather API, I can get data back for all of their locations (which I would like to add to / update in a collection). I could then do a regular lookup against that, instead of making a client request straight out to them every time a user looks at a location. The JSON data looks like this:
{
  "message": "accurate",
  "cod": "200",
  "count": 50,
  "list": [
    {
      "id": 2643076,
      "name": "Marazion",
      "coord": {
        "lon": -5.47505,
        "lat": 50.125561
      },
      "main": {
        "temp": 292.15,
        "pressure": 1016,
        "humidity": 68,
        "temp_min": 292.15,
        "temp_max": 292.15
      },
      "dt": 1403707800,
      "wind": {
        "speed": 8.7,
        "deg": 110,
        "gust": 13.9
      },
      "sys": {
        "country": ""
      },
      "clouds": {
        "all": 75
      },
      "weather": [
        {
          "id": 721,
          "main": "Haze",
          "description": "haze",
          "icon": "50d"
        }
      ]
    }, ...
Ideally I'd like to build my own local 'weather' collection that I can search using Mongo's $near (to keep outbound requests down and speed things up), but I don't know if this will be possible because of the format the data comes back in - I think I'd need to structure my location data like this in order to use a geo search:
"location": {
"type": "Point",
"coordinates": [-5.47505,50.125561]
}
My questions are:
How can I build that collection (I've seen this - could I do something similar and update existing entries in the collection on a regular basis?)
Does it just need to live on the server, or client too?
Do I need to manipulate the data in order to get a geo search to work?
Is this even the right way to approach it??
EDIT/UPDATE2
Is this question too long/much? It feels like it. Maybe I should split it out.
Yes, easily possible. Because your question is so large, I'll give you a high-level explanation of what I think you need to do.
You need to create a collection to save the weather data in.
You need a request worker that fetches new data and updates the collection on a set interval. Use something like cron-tick for scheduling the interval.
Requesting data should only happen server-side, and I can recommend the request npm package for that.
Meteor.publish the weather collection and have the client subscribe to that, optionally with a filter for its location.
You should now be getting the weather data on your client and should be able to get freaky with it.
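Pulling those steps together, a rough server-side sketch (the collection name, the OpenWeatherMap endpoint and params, and the hourly interval are all assumptions; Meteor.setInterval stands in for whatever scheduler you choose, and HTTP.get comes from Meteor's http package):
// Server-only worker: refresh a Weather collection on an interval and
// publish documents near a given point with a $near query.
Weather = new Mongo.Collection("weather");

if (Meteor.isServer) {
  Weather._ensureIndex({ location: "2dsphere" }); // required for $near

  function refreshWeather() {
    // Placeholder endpoint/params; adjust to whatever API call you settle on.
    var res = HTTP.get("https://api.openweathermap.org/data/2.5/box/city", {
      params: { appid: "YOUR_API_KEY", bbox: "..." }
    });
    res.data.list.forEach(function(w) {
      Weather.upsert({ owmId: w.id }, {
        $set: {
          name: w.name,
          // reshape coord into a GeoJSON Point so $near works
          location: { type: "Point", coordinates: [w.coord.lon, w.coord.lat] },
          main: w.main,
          weather: w.weather,
          updatedAt: new Date()
        }
      });
    });
  }

  refreshWeather();
  Meteor.setInterval(refreshWeather, 60 * 60 * 1000); // hourly

  Meteor.publish("weatherNear", function(lng, lat) {
    return Weather.find({
      location: {
        $near: {
          $geometry: { type: "Point", coordinates: [lng, lat] },
          $maxDistance: 50000 // metres; adjust to your search radius
        }
      }
    });
  });
}
This also answers the geo-format question above: the coord object from the API is reshaped into a GeoJSON Point on its way into the collection, so only the server ever touches the raw API format.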
