I am firing Calibre (v2013.4_37.29) runs from a server.
But I see that there are multiple licenses checked out under my name, and a few runs are getting queued when I fire them.
On a closer look, I see that there are multiple mgls_async processes running, which might be the ones checking out the licenses. When I kill them, I am able to free up the licenses.
Any idea what's going on? Any help is greatly appreciated.
Thanks in advance!
'Calibre' is a commercial name. Within Calibre there are many functions or stages, and there may be no direct link between the commercial name and the resources needed to complete each stage.
'Calibre' uses a license service running either on your own computer or on a dedicated computer on your network. The license service uses a specific license file provided by Mentor.
The structure and contents of the license file have no direct relationship with the commercial name, which is all you see in your case. For each stage of your work with Calibre, you use (probably without knowing it) some specific 'features'.
Each stage has certain resources associated with it. In the license file, which you don't normally see, a resource can be an 'INCREMENT', a 'FEATURE' or a 'PACKAGE'. For example, if a resource is defined as a 'PACKAGE', using it checks out one or more tokens from the 'INCREMENT' resources that compose the 'PACKAGE': you simultaneously use one or more tokens from several 'INCREMENT' entries.
So at each stage there are many requests to the license server, and many processes may be launched to complete that stage of your use of 'Calibre'.
It is the software vendor who decides how each function or stage is split into 'INCREMENT', 'FEATURE' or 'PACKAGE' resources.
The license service is composed of two daemons: 'lmgrd' and what is called the 'vendor daemon'. When the software (in your case 'Calibre') needs a resource, it sends a request to 'lmgrd'; 'lmgrd' performs some checks and passes the request on to the vendor daemon. I can't verify it right now, but I know that the name of Mentor's vendor daemon is something like 'mgls...'.
This is a rough description of how software works with a license server of the 'FlexNet' type (formerly 'FLEXlm') from the Flexera company.
To complete a stage, depending on the software implementation, one or many processes may be launched, in your case owned by your login.
As you have seen, when you kill your processes, the token(s) associated with each process are freed.
But in a normal situation, when the stage is complete, the processes exit and the tokens are freed without any intervention.
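If it helps, here is a rough sketch (in Python, purely illustrative and not Mentor-supported tooling) of how you could list the stray 'mgls_async' processes owned by your login and free their tokens by terminating them, which is essentially what you did by hand. It assumes a Linux host with standard ps/kill; the process name 'mgls_async' comes from your question and may differ on your installation.

```python
#!/usr/bin/env python3
"""Rough sketch: list (and optionally kill) leftover mgls_async processes
owned by the current user that may still be holding Calibre license tokens.
Assumes a Linux host with standard ps/kill; the process name 'mgls_async'
is taken from the question and may differ on your installation."""

import getpass
import os
import signal
import subprocess

def find_mgls_processes(user: str) -> list[int]:
    """Return the PIDs of mgls_async processes owned by `user`."""
    out = subprocess.run(
        ["ps", "-u", user, "-o", "pid=,comm="],
        capture_output=True, text=True, check=True,
    ).stdout
    pids = []
    for line in out.splitlines():
        parts = line.split(None, 1)
        if len(parts) == 2 and "mgls_async" in parts[1]:
            pids.append(int(parts[0]))
    return pids

if __name__ == "__main__":
    me = getpass.getuser()
    stale = find_mgls_processes(me)
    print(f"Found {len(stale)} mgls_async process(es) for {me}: {stale}")
    # Kill them only once you are sure no Calibre run still needs them;
    # the tokens they hold are released as soon as the processes exit.
    for pid in stale:
        os.kill(pid, signal.SIGTERM)
```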
Does Asterisk have any built-in way to access recorded calls by itself? I mean without a tricky workaround like saving recorded calls to a DB, or using FTP or other third-party solutions.
I'm looking for a built-in solution.
No.
But you can build a macro that will play back the recorded calls.
Asterisk does not provide a built-in mechanism to access recorded calls (made via Monitor or MixMonitor). All calls are recorded to /var/spool/asterisk/monitor (unless specified otherwise).
You can check out a project called ARI (Asterisk Recording Interface), not to be confused with the Asterisk REST Interface. It should provide you with the solution you are seeking.
Pay attention to one thing: recording everything into a single directory ends up with thousands of files in that directory. Depending on your file system (ext3/4, for example), it may not cope well with 10,000 files in one directory.
My suggestion is this: if you are building a recording system for an enterprise, develop a catalog mechanism with a database, so that you don't have to run daily archiving and messy clean-up procedures to keep your system working.
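As a sketch of that catalog idea (the archive path, database path and table layout are just assumptions), something like the following could move recordings into per-day subdirectories and index them in SQLite:

```python
#!/usr/bin/env python3
"""Illustrative sketch of the 'catalog in a database' idea: move recordings
out of the flat /var/spool/asterisk/monitor directory into per-day
subdirectories and index them in SQLite. Paths, table name and columns are
assumptions, not anything Asterisk provides."""

import pathlib
import shutil
import sqlite3
from datetime import datetime, timezone

MONITOR_DIR = pathlib.Path("/var/spool/asterisk/monitor")
ARCHIVE_DIR = pathlib.Path("/var/spool/asterisk/archive")
CATALOG_DB = pathlib.Path("/var/lib/asterisk/recordings.db")

def catalog_recordings() -> None:
    conn = sqlite3.connect(CATALOG_DB)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS recordings (
               filename    TEXT PRIMARY KEY,
               path        TEXT NOT NULL,
               recorded_at TEXT NOT NULL
           )"""
    )
    for wav in MONITOR_DIR.glob("*.wav"):
        recorded_at = datetime.fromtimestamp(wav.stat().st_mtime, tz=timezone.utc)
        day_dir = ARCHIVE_DIR / recorded_at.strftime("%Y-%m-%d")
        day_dir.mkdir(parents=True, exist_ok=True)
        target = day_dir / wav.name
        shutil.move(str(wav), str(target))   # keeps the monitor directory small
        conn.execute(
            "INSERT OR REPLACE INTO recordings VALUES (?, ?, ?)",
            (wav.name, str(target), recorded_at.isoformat()),
        )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    catalog_recordings()
```

Run it from cron (or a similar scheduler) and the flat directory never grows beyond a day's worth of files, while the database gives you the lookup your application needs.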
I'm having difficulty turning a requirement I have into something that will fit into our nascent SOA/EDA.
We have a component I'll call the Data Downloader. This is a facade for an external data provider that has both high latency and a cost associated with every request. I want to take this component and create a re-usable service out of it with a clear contract definition. It is up to me to decide how that contract should work, however its responsibilities are two-fold:
Maintain the parameter list (called a Download Definition) for an upcoming scheduled download
Manage the technical details of the communication to the external service
Basically, it manages the 'how' of the communication. The 'what' and the 'when' are the responsibilities of two other components:
The 'what' is managed by 'Clients' who are responsible for
determining the parameters for the download.
The 'when' is managed by a dedicated scheduling component. Because of the cost associated with the downloads we'd like to batch the requests intraday.
Hopefully this sequence diagram explains the responsibilities of the services:
Because each of the responsibilities are split out in three different components, we get all sorts of potential race conditions with async messaging. For instance when the Scheduler tells the Downloader to do its work, because the 'Append to Download Definition' command is asynchronous, there is no guarantee that the pending requests from Client A have actually been serviced. But this all screams high-coupling to me; why should the Scheduler necessarily know about any 'prerequisite' client requests that need to have been actioned before it can invoke a download?
Some potential solutions we've toyed with:
Make the 'Append to Download Definition' command a blocking request/response operation. But this then breaks the performance and scalability benefits of having an EDA.
Build something in the Downloader to ensure that it only runs when there are no pending commands in its incoming request queue. But that then introduces a dependency on the underlying messaging infrastructure which I don't like either.
Makes me think I'm thinking about this problem in a completely backward way. Or is this just a classic case of someone trying to fit a synchronous RPC requirement into an async event-driven architecture?
The thing I like most about EDA and SOA is that they almost completely eliminate the notion of a race condition. As long as your events are associated with some association key (e.g. downloadId), the problem you describe can be addressed with several solutions of varying complexity, depending on your needs. I'm not sure I totally understand the described use case, but I will try my best.
Off the top of my head:
The DataDownloader maintains a list of received Download Definitions and a list of triggered downloads. When a definition is received, it is checked against the triggers list to see whether the associated download has already been triggered; if it has, the download is executed. When a TriggerDownloadCommand is received, the definitions list is checked for a definition with the associated downloadId.
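A minimal in-memory sketch of that first option might look like this (class and message names are illustrative, not an existing API); the download starts as soon as both a definition and its trigger have arrived for the same downloadId:

```python
"""Minimal sketch: correlate Download Definitions and trigger commands by
downloadId, and run the download only once both have arrived. Names are
illustrative placeholders, not an existing framework API."""

class DataDownloader:
    def __init__(self):
        self._definitions = {}   # downloadId -> list of parameter dicts
        self._triggered = set()  # downloadIds whose trigger already arrived

    def on_append_to_download_definition(self, download_id, params):
        self._definitions.setdefault(download_id, []).append(params)
        # If the trigger already came in, the download can run now.
        if download_id in self._triggered:
            self._execute(download_id)

    def on_trigger_download_command(self, download_id):
        self._triggered.add(download_id)
        # If a definition is already here, run immediately; otherwise wait
        # for the definition message, which will pick the trigger up above.
        if download_id in self._definitions:
            self._execute(download_id)

    def _execute(self, download_id):
        params = self._definitions.pop(download_id, [])
        self._triggered.discard(download_id)
        print(f"Downloading {download_id} with {len(params)} definition(s)")
```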
For more complex situations, consider using the Saga pattern, which is implemented by some third-party messaging infrastructures. With some simple configuration, it will handle both messages and initiate the actual download when the required condition is satisfied. This is more appropriate for distributed systems, where an in-memory collection is out of the question.
You can also configure your scheduler (or the trigger command handler) to retry when an error is signaled (e.g. by an exception), in order to avoid that race condition, and ultimately give up after a specified timeout.
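As a sketch of that retry idea (send_trigger is a placeholder for however you dispatch the command):

```python
"""Sketch of the retry idea: if the downloader signals that the definition is
not there yet, the scheduler retries the trigger until a timeout expires.
`send_trigger` is a placeholder for your actual dispatch mechanism."""

import time

def trigger_with_retry(send_trigger, download_id, timeout_s=300, interval_s=15):
    deadline = time.monotonic() + timeout_s
    while True:
        try:
            send_trigger(download_id)
            return True                   # accepted; the download will run
        except Exception:                 # e.g. "definition not received yet"
            if time.monotonic() >= deadline:
                return False              # give up after the timeout
            time.sleep(interval_s)        # wait and retry
```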
Does this help?
It seems that starting with kernel 2.2, Linux introduced the concept of capabilities. According to the Unix man page on capabilities, if you're not the root user you can grant yourself capabilities by calling cap_set_proc on a per-thread basis. So does this mean that if you're writing malware for Unix, you just grant yourself a bunch of capabilities and compromise the system? If not, how does one grant the capabilities required to run a program?
It seems that Unix's security model is quite primitive, even flawed. Am I getting this right?
I'll be more specific:
How do you (when running as a non-root user) send a signal to another process that is running under a different user? The signal man page says you need the CAP_KILL capability to do this. However, reading the capabilities man page, I'm not sure how I can grant a process that capability.
From man cap_set_proc:
Please note, by default, the only processes that have CAP_SETPCAP available to them are processes started as a kernel-thread. (Typically this includes init(8), kflushd and kswapd). You will need to recompile the kernel to modify this default.
Trust me, if it were that easy, someone would have exploited it by now. Unix's security model may be simple compared to other operating systems, but that doesn't mean it's "flawed".
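You can see the CAP_KILL restriction from the caller's side with a small experiment; signal 0 only probes permissions without delivering anything, and PID 1 is just an example of a process owned by another user. Actually granting CAP_KILL would require privileged action, e.g. file capabilities set with setcap by root, or the kernel change the man page quote above mentions.

```python
"""Quick demonstration of the CAP_KILL restriction: a non-root process that
tries to signal another user's process simply gets EPERM. PID 1 is used only
as an example of a process owned by root; signal 0 probes permissions
without actually delivering a signal."""

import os

OTHER_USERS_PID = 1   # init/systemd, owned by root

try:
    os.kill(OTHER_USERS_PID, 0)   # permission check only, no signal delivered
    print("Allowed: we are either root or hold CAP_KILL")
except PermissionError as exc:
    print(f"Denied as expected without CAP_KILL: {exc}")
```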
It's impossible. Use a socket or a file instead.
I’m a complete newbie at BizTalk and I need to create a BizTalk 2006 application which broadcasts messages in a specific way. I’m not asking for a complete solution, but for advise and guidelines, which capabilities of BizTalk I should use.
There's a message source; for simplicity, say, a directory where the user adds files to publish them. There are several subscribers, each having a directory in which to receive published files. The number of subscribers can vary over the lifetime of the program. There are also some rules which determine whether a particular subscriber needs to receive a particular file, based on the filename. For example, each subscriber has a pattern or mask which the names of the files they receive must match. Those rules (for example, the patterns) can change over time as well.
I don't know how to do this. Create a set of send ports at runtime, one for each destination? Is that possible? Use one port and change its binding? Would that work correctly with concurrent sends? Are there other ways?
EDIT
I realized my question may be too obscure and general to prefer one answer over another to accept, so I just upvoted them.
You could look at using dynamic send ports to achieve this - if your subscribers are truly dynamic. This introduces a bit of complexity since you'll need to use an orchestration to configure the send port's properties based on your rules.
If you can, try and remove the complexity. If you know that you don't need to be truly dynamic when adding subscribers (i.e. a subscriber and its rules can be configured one time only) and you have a manageable number of subscribers, then I would suggest configuring each subscriber with its own send port and using a filter to create subscriptions based on message context properties. The beauty of this approach is that you don't need to create and deploy an orchestration, and it becomes a highly performant and scalable solution.
If the changes to the destinations are going to be frequent, you are right in seeking a more dynamic solution. One nice option is using dynamic send ports and the Business Rules Engine. You create a rule set for the messages you are receiving. This could be based on a destination property or a customer ID in the message. Using these facts, the rules engine can return a bunch of information such as the file mask, server name, IP address of the delivery server, etc. You can then use this information to configure the dynamic send port in the orchestration. The really nice thing here is that you can update the rule set in the rules engine without redeploying the whole solution. As a newbie, these are somewhat advanced concepts, but not as difficult as you may think.
For a simpler solution, you might want to look at setting the FILE send adapter's properties via its property schema (i.e. file name, directory, etc.). You could pull these values from a database with a helper class inside an expression shape. On each message going out, use the property schema to set where the message will be sent and what it will be named. This way, you just update the database as things change.
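To make the helper-class idea concrete, here is a language-agnostic sketch (Python, with an assumed table layout) of the lookup it would perform: map a published filename to the subscriber directories whose masks match. In BizTalk you would do the equivalent in a .NET helper and assign the results to the FILE adapter's context properties inside the expression shape.

```python
"""Sketch of the routing lookup the helper class would do: find which
subscribers should receive a file, based on filename masks kept in a
database. The table layout and database file are assumptions."""

import fnmatch
import sqlite3

def destinations_for(filename: str, db_path: str = "subscribers.db") -> list[str]:
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT file_mask, out_directory FROM subscribers"
    ).fetchall()
    conn.close()
    # A subscriber receives the file when its mask matches the filename.
    return [out_dir for mask, out_dir in rows if fnmatch.fnmatch(filename, mask)]

# Example: list every matching subscriber directory for one published file.
# print(destinations_for("PRICES_20240101.csv"))
```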
Good Luck!
Recently, I've been reading up on the IRC protocol (RFCs 1459, 2810-2813), and I was thinking of implementing my own server.
I'm not necessarily looking into adhering religiously to the IRC protocol (I'm doing this for fun, after all), but one of the things I do like about it is that a network can consist of multiple servers transparently.
There are a number of things I don't like about the protocol or the IRC specification. The first is that nicknames aren't owned. While services like NickServ exist, they're not part of the official protocol. On the other hand, implementing something like NickServ properly kind of defeats the purpose of distribution (i.e. there'd be one place where NickServ is running, and one data store for it).
I was hoping there'd be a way to manage nicknames on a per-server basis. The problem with this is that if you have two servers that have some registered nicknames, and they then link up, you can have collisions.
Is there a way to avoid this, without using one central data store? That is: is it possible to keep the servers loosely connected (such that they each exist as an independent entity, but can also connect to one another) and maintain uniqueness amongst nicknames?
I realize this question is vague, but I can't think of a better way of wording it. I'm looking more for suggestions than I am for actual yes/no answers. So if anyone has any ideas as to how to accomplish nickname uniqueness in a network while still maintaining server independence, I'd be interested in hearing it. Note that adhering strictly to the IRC protocol isn't at all necessary; I've got no problem changing things to suit my purposes. :)
There's a simple solution if you don't care about strictly implementing an IRC server, but rather implementing a distributed message system that's like IRC, but not exactly IRC.
The simple solution is to use nicknames in the form "nick#host", much like email. So instead of merely being "mipadi", my nickname could be "mipadi#free-memorys-server.net". I register with just your server, but when your server links up with others to form a big ol' chat network, you can easily union all the usernames together. There might be a "mipadi" on otherserver.net, but then our nicknames become "mipadi#free-memorys-server.net" and "mipadi#otherserver.net", and everything is cool.
Of course, this deviates a good deal from IRC. :)
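A tiny sketch of that union, assuming each server just qualifies its nicks with its own hostname when the networks link:

```python
"""Sketch of the 'nick#host' idea: when two servers link, qualify every nick
with its home server and take the union, so collisions cannot happen by
construction. Server names here are the examples from the answer above."""

def qualify(nicks: set[str], server: str) -> set[str]:
    return {f"{nick}#{server}" for nick in nicks}

local = {"mipadi", "alice"}
remote = {"mipadi", "bob"}

network = qualify(local, "free-memorys-server.net") | qualify(remote, "otherserver.net")
print(sorted(network))
# ['alice#free-memorys-server.net', 'bob#otherserver.net',
#  'mipadi#free-memorys-server.net', 'mipadi#otherserver.net']
```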
They have to be aware of each other. If they aren't, you cannot prevent the sharing of nicknames. If they are, you simply need to propagate updates on the back end. To prevent simultaneous registrations, you need a transaction system that blocks, requests permission from all the other servers, and then responds.
To prevent simultaneous registrations during outages, you have no choice but to timestamp each registration and remove all but the last (or a random one, for truly simultaneous registrations) registered copy of the nick.
It's not very pretty, considering these servers aren't merged in the first place.
You could still implement nick ownership without a central instance, if your server instances trust each other.
When a user registers a nick, it is registered with the server he's currently connected to
When a server receives a registration that it didn't know of, it forwards that information to all other servers that don't know it yet (this might need a smart algorithm to avoid spamming the network)
When a server re-connects to another server, it tries to synchronize the list of registered nicks and which server handles which nick
If there is a collision during that sync, the older registration is used and the newer one is marked as invalid (see the sketch below)
If you can't trust your servers, it gets a lot harder, as a server could easily claim every username and even claim the oldest registration for each one.
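A small sketch of that re-connect sync, assuming each server keeps nick -> (home server, registration timestamp) and the older registration wins on a collision:

```python
"""Sketch of the re-connect sync described above: each server keeps
nick -> (home_server, registered_at); on relink the two registries are merged,
the older registration wins, and the newer one is reported as invalidated."""

from typing import Dict, List, Tuple

Registry = Dict[str, Tuple[str, float]]   # nick -> (home server, unix timestamp)

def merge_registries(ours: Registry, theirs: Registry) -> Tuple[Registry, List[Tuple[str, str]]]:
    merged: Registry = dict(ours)
    invalidated: List[Tuple[str, str]] = []   # (nick, server whose claim lost)
    for nick, (server, ts) in theirs.items():
        if nick not in merged:
            merged[nick] = (server, ts)
        else:
            kept_server, kept_ts = merged[nick]
            if ts < kept_ts:                   # their registration is older: it wins
                invalidated.append((nick, kept_server))
                merged[nick] = (server, ts)
            else:                              # ours is older: keep it
                invalidated.append((nick, server))
    return merged, invalidated

a = {"mipadi": ("srv-a", 1_600_000_000.0)}
b = {"mipadi": ("srv-b", 1_650_000_000.0), "bob": ("srv-b", 1_700_000_000.0)}
merged, invalid = merge_registries(a, b)
print(merged)    # mipadi kept on srv-a (older), bob added from srv-b
print(invalid)   # [('mipadi', 'srv-b')] -> srv-b must ask its user to re-register
```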
Since you are trying to come up with something new, the idea that springs to mind is simply including something unique about the server as part of the nickname when communicating outside of that server. So if you want to message a user on a different server, you might have something like user#server.
If you don't need the servers to be completely separate, you might want to consider creating some kind of multi-master replicated database of accounts, where each server stores a complete copy of the account database and each server can create new accounts, which are then replicated to the other servers as soon as possible. You'll probably still have to deal with collisions on occasion, though.
While services like NickServ exist, they're not part of the official protocol.
Services are not part of the official protocol because they've nothing to do with the protocol. They're bots with permissions. There's no reason why you couldn't have one running on each server but it does make them harder to maintain.
If you were to go down that path, I would probably suggest the commonly used "multiple master" database replication technique. If one node receives a write (in your case, a new user is created or updated, etc.), it sends the data to all the other nodes. You'll have to be careful, though: if one node is offline when the others get an update, it will need to know to resync on reconnection.
Another technique would be the above, but in reverse: data is only exchanged between nodes when it's needed. E.g. if a user tries to log in on a node that has no data for them, it queries the others and issues a move order to bring all of that user's data to that one node. This is potentially less painful than the replication version, but there could be severe problems during netsplits if somebody signs up for a duplicate nick on a node that is disconnected from the pack.
One technique to nullify the problems of netsplits would be to make the chat nodes and their bots netsplit-aware. When they're split, they probably shouldn't allow any write actions... but this could impact your network if it splits a lot.
You also have to ask how secure this might or might not be. IRC network nodes are distributed for performance, but they're not "secure". Because of this, service bots are usually run centrally to keep ultimate control over their operation. If you distributed the bots and a remote node got hacked, the attacker could potentially have access to the whole user database (depending on the model).