I use NebulaGraph Dashboard Enterprise Edition. How do I renew the license after it expires?
The system sends expiration notifications before the license expires. How far in advance the notifications start differs between the full version license and the trial version license.
For the full version license:
Within 30 days before the license expires, or on the day it expires, an expiration reminder is shown when NebulaGraph/Dashboard/Explorer is started.
There is a 14-day buffer period after expiration. During the buffer period, you will keep receiving expiration notifications and can continue using NebulaGraph/Dashboard/Explorer. After the buffer period ends, the corresponding service shuts down and cannot be started.
For the trial version license:
Within 7 days before the license expires, or on the day it expires, an expiration reminder is shown when NebulaGraph/Dashboard/Explorer is started.
There is no buffer period after expiration. Once the license expires, the corresponding service shuts down and cannot be started.
After your license expires, contact inquiry@vesoft.com to renew it.
The desktop application I am developing (C#/.NET, WPF) uses a feature that requires connecting to the user's IMAP and SMTP servers. I am using a package called MailKit for this. Some of our users are using Microsoft 365 and will require modern authentication in the future, as opposed to the basic authentication they are using right now. This is supported by MailKit, and I am able to authenticate using OAuth 2.0.
However, this requires a client secret, which expires a certain amount of time (e.g. two years) after its creation in Azure. This client secret is compiled into the application, after which the application is distributed. Does this mean the users need to update their installation at least every two years so I can supply a new client secret? This is undesirable for our users. The best solution for me would be if I could refresh expired client secrets without the user having to perform any action.
Perhaps it's a good idea to force the users to upgrade the software after two years? For example, by requiring them to buy an upgrade (a business opportunity), or as a way to distribute fixes and updates to the application?
Most applications today get updated at least once a year anyway, don't they?
I am currently working on a customized media center/box product for my employer. It's basically a Raspberry Pi 3B+ running Raspbian, configured to auto-update periodically via apt. The device accesses binaries for proprietary applications via a private, secured apt repo, using a pre-installed certificate on the device.
Right now, the certificate on the device is set to never expire, but going forward we'd like to configure the certificate to expire every 4 months. We also plan to deploy a unique certificate per device we ship, so the certs can be revoked (e.g. in case the customer reports the device as stolen).
Is there a way, via apt or OpenStack/Barbican KMS, to:
Update the certs for apt-repo access on the device periodically.
Set up key-encryption-keys (KEK) on the device, if we need the device to be able to download sensitive data, such as an in-memory cached copy of customer info.
Provide a mechanism for a new key to be deployed on the device if the currently-used key has expired (i.e. the user hasn't connected the device to the internet for more than 4 months). I'm trying to wrap my head around this one, since the device at that point has an expired certificate, and I can't work out how it can be trusted to pull a new one.
Allow keys to be tracked, revoked, and de-commissioned.
Thank you.
Is there a way I could use Barbican to:
* Update the certs for apt-repo access on the device periodically.
Barbican used to have an interface to issue certs, but this was removed. Therefore Barbican is now simply a service to generate and store secrets.
You could use something like certmonger. certmonger is a client-side daemon that generates cert requests and submits them to a CA. It then tracks those certs and requests new ones when the certs are going to expire.
Set up key-encryption-keys (KEK) on the device, if we need the device to be able to download sensitive data, such as an in-memory cached copy of customer info.
To use Barbican, you need to be able to authenticate and retrieve something like a keystone token. Once you have that, you can use Barbican to generate key encryption keys (which would be stored in the Barbican database) and download them to the device using the secret retrieval API.
Do you need/want the KEKs escrowed like this, though?
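If they are, a minimal sketch of generating and fetching a KEK with python-barbicanclient could look like the following. This is an illustration only, assuming keystone credentials are already provisioned on the device; the auth URL, project, credential values, and key name are all placeholders.

```python
# Sketch only: generate a KEK in Barbican and fetch it on the device.
# Assumes keystoneauth1 and python-barbicanclient are installed and that the
# device already holds keystone credentials; all endpoint/credential values
# below are placeholders.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from barbicanclient import client

auth = v3.Password(
    auth_url="https://keystone.example.com:5000/v3",  # hypothetical endpoint
    username="device-svc",
    password="device-password",
    project_name="media-devices",
    user_domain_name="Default",
    project_domain_name="Default",
)
barbican = client.Client(session=session.Session(auth=auth))

# Ask Barbican to generate and store a 256-bit AES key-encryption key;
# only references (order/secret hrefs) come back to the caller.
order_ref = barbican.orders.create_key(
    name="device-kek", algorithm="aes", bit_length=256, mode="cbc"
).submit()

# Once the order has completed, retrieve the KEK payload through the
# secret retrieval API.
secret_ref = barbican.orders.get(order_ref).secret_ref
kek_bytes = barbican.secrets.get(secret_ref).payload
```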
Provide a mechanism for a new key to be deployed on the device if the currently-used key has expired (i.e. the user hasn't connected the device to the internet for more than 4 months).
Barbican has no mechanism for this. This is client-side tooling that would need to be written, and you'd need to think about how the device authenticates in that state.
Allow keys to be tracked, revoked, and de-commissioned.
Same as above. Barbican has no mechanism for this.
My squad is building a Grafana dashboard that uses a custom datasource plugin (https://github.com/vistaprint/Application-Insights-Datasource-Plugin) which queries the Azure Application Insights APIs.
We have discovered that there is a limit on the maximum number of API requests (https://dev.applicationinsights.io/documentation/Authorization/Rate-limits):
* Throttling limit: no more than 15 requests can be made in a minute across all API paths (/metrics, /events and /query), and
* Daily cap: no more than 1500 requests per day (UTC day) per API key can be made across all API paths.
We're worried that 1500 requests per day may not be enough, especially if the dashboard is refreshed frequently and accessed by many users. We wonder if there is a way to increase that limit.
The next section of the article you linked states:
Limits when using Azure Active Directory authentication
If using the Azure API and Azure Active Directory for per user authentication, the throttling limit is 60 requests per minute per user and there is no daily cap. [emphasis added]
If you need higher limits than what API key auth allows, you'll need to look at using AAD auth instead.
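For illustration, here is a minimal sketch of querying the Application Insights REST API directly under both auth modes; the app ID and KQL query are placeholders, and the requests and azure-identity packages are assumed for the HTTP call and token acquisition.

```python
# Sketch of hitting the Application Insights REST API directly, to contrast
# the two auth modes. APP_ID and the KQL query are placeholders; assumes an
# identity with read access to the Application Insights resource.
import requests
from azure.identity import DefaultAzureCredential

APP_ID = "00000000-0000-0000-0000-000000000000"   # Application Insights app ID
URL = f"https://api.applicationinsights.io/v1/apps/{APP_ID}/query"
KQL = "requests | summarize count() by bin(timestamp, 1h)"

# API key auth: 15 requests/minute and 1500 requests/day per key.
r = requests.get(URL, params={"query": KQL},
                 headers={"x-api-key": "YOUR-API-KEY"})

# Azure AD auth: 60 requests/minute per user and no daily cap.
token = DefaultAzureCredential().get_token(
    "https://api.applicationinsights.io/.default")
r = requests.get(URL, params={"query": KQL},
                 headers={"Authorization": f"Bearer {token.token}"})
print(r.json())
```

Whether the vistaprint plugin can be switched to AAD tokens is a question about the plugin itself; the sketch only illustrates the two auth modes and their respective limits.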
We are implementing a chat infrastructure using ejabberd 16.08, and we've decided to use mod_interact (https://github.com/adamvduke/mod_interact) to send requests to our web servers while the recipient user is offline (so we can send them push notifications).
However, when I integrated mod_interact with ejabberd and sent a message to one of my offline friends in my roster, I saw that mod_interact sends a mod_unavailable message instead of a mod_offline message. (I want mod_interact to send the mod_offline message, because only mod_offline has the proper information to send push notifications.)
So I wonder what the difference is between being offline and being unavailable, and how we can set that.
P.S.: The user I'm trying to send a message to (who seems unavailable) was disconnected from the server (they did not specifically set their presence to unavailable).
Thanks
When a user goes offline, it means they are disconnected from the server, and the unavailable behaviour is the same as offline. If you want to customize the behaviour of presence unavailable, you can. You can visit here to know more.
In XMPP there's nothing called offline.
User status could be one of:
unavailable -- Signals that the entity is no longer available for communication.
subscribe -- The sender wishes to subscribe to the recipient's presence.
subscribed -- The sender has allowed the recipient to receive their presence.
unsubscribe -- The sender is unsubscribing from another entity's presence.
unsubscribed -- The subscription request has been denied or a previously-granted subscription has been cancelled.
probe -- A request for an entity's current presence; SHOULD be generated only by a server on behalf of a user.
error -- An error has occurred regarding processing or delivery of a previously-sent presence stanza.
unavailable means the user has gone offline. But if an online user sets a custom status of unavailable, you also receive an unavailable status, even though in this case the user is actually online.
Note: You can use probe to get the user's actual status.
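To make that concrete, here is a minimal sketch of sending such a probe from a Python client using slixmpp (an arbitrary library choice, not something ejabberd or mod_interact requires); the JIDs and password are placeholders.

```python
# Minimal sketch using slixmpp; JIDs and the password are placeholders.
# It sends a presence probe so the server reports the contact's current
# presence (disconnected/offline vs. a manually set unavailable).
import slixmpp

class ProbeClient(slixmpp.ClientXMPP):
    def __init__(self, jid, password, contact):
        super().__init__(jid, password)
        self.contact = contact
        self.add_event_handler("session_start", self.start)
        self.add_event_handler("presence_unavailable", self.on_unavailable)

    async def start(self, event):
        self.send_presence()          # announce our own availability
        await self.get_roster()
        # Ask the server for the contact's current presence.
        self.send_presence(pto=self.contact, ptype="probe")

    def on_unavailable(self, presence):
        print(f"{presence['from']} is unavailable (offline or set manually)")

xmpp = ProbeClient("me@example.com", "secret", "friend@example.com")
xmpp.connect()
xmpp.process(forever=False)
```

Note that servers normally only answer probes from entities that hold a presence subscription to the contact, so this works for roster contacts you are subscribed to.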
After reading this blog post on OpenCPU, I have questions about Sessions:
* when/how do sessions expire?
* can session expire time be configured on the server?
* can session expire time be changed at runtime?
* are sessions saved on-disk or in-memory?
* do sessions work with the nginx opencpu proxy?
Thanks in advance!
Probably better suited for the mailing list. Also have a look at the paper for some of these topics.
When/how do sessions expire?
The default expiration for temporary sessions in the server implementation is 24h.
Can session expire time be configured on the server?
You could edit the /usr/lib/opencpu/scripts/cleanocpu.sh script, which gets triggered through /etc/cron.d/opencpu. But if you want persistence, it is usually better to store things in a database (RMySQL, mongolite, etc.), in a package on the server, or in the client.
Can session expire time be changed at runtime?
No, expiration of resources is up to the server.
Are sessions saved on-disk or in-memory?
The current implementation saves on disk (with a bit of in-memory cache), but the API is agnostic.
Do sessions work with the nginx opencpu proxy?
Yes, they are no different than anything else on the server.
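To make the temporary-session lifecycle concrete, here is a small sketch of creating a session and reading its result back before it is cleaned up; it uses the public demo server as a placeholder base URL and assumes the requests package.

```python
# Sketch of the temporary-session lifecycle over the OpenCPU HTTP API, using
# the public demo server as a placeholder base URL.
import requests

BASE = "https://cloud.opencpu.org/ocpu"

# Calling an R function creates a temporary session holding the result.
r = requests.post(f"{BASE}/library/stats/R/rnorm", data={"n": 5})
r.raise_for_status()
session_id = r.headers["X-ocpu-session"]   # e.g. "x0123456789abcdef"

# The result stays retrievable under /tmp/{session} until the server's
# cleanup job removes it (about 24 hours by default, per the answer above).
val = requests.get(f"{BASE}/tmp/{session_id}/R/.val/json")
print(val.json())
```

Anything that must outlive that window is better stored in a database or a package on the server, as suggested above.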