I have a PAM application that makes use of a particular PAM module for authentication chores. My question is: what is the best way for the module to share an arbitrary string of bytes with the application?
My understanding of the PAM API is that, in general, the application must use the pam_get_item() call to obtain data from the module. However, the item types that can be shared are very limited, and they do not seem to accommodate what I need, with the possible exception of the PAM_CONV item type. If anybody in this forum has experience with this kind of thing, their feedback would be much appreciated.
After some experimentation I think I found the answer. In essence, the application has to define a PAM conversation callback through which a pointer to a data buffer is provided. The PAM module has to invoke the callback at the appropriate time and copy the desired data into that buffer; the application then obtains the data simply by reading the contents of the buffer.
This will of course have to be carefully choreographed by the application, and the module may have to be modified to invoke the callback at the right time, but the idea is simple.
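To make this concrete, here is a minimal sketch of the application side, assuming the module and the application agree on a "MODULE-DATA:" prefix to mark the payload (that prefix, the buffer size, and all the names here are my own convention, nothing defined by PAM):

/* Application side: a conversation callback that captures bytes
 * pushed by the module through a PAM_TEXT_INFO message. */
#include <stdlib.h>
#include <string.h>
#include <security/pam_appl.h>

struct app_buf { char data[256]; };

static int conv_fn(int num_msg, const struct pam_message **msg,
                   struct pam_response **resp, void *appdata_ptr)
{
    struct app_buf *buf = appdata_ptr;
    int i;

    /* PAM requires the callback to allocate the response array. */
    *resp = calloc(num_msg, sizeof(struct pam_response));
    if (*resp == NULL)
        return PAM_BUF_ERR;

    for (i = 0; i < num_msg; i++) {
        if (msg[i]->msg_style == PAM_TEXT_INFO &&
            strncmp(msg[i]->msg, "MODULE-DATA:", 12) == 0)
            strncpy(buf->data, msg[i]->msg + 12, sizeof(buf->data) - 1);
    }
    return PAM_SUCCESS;
}

/* At startup, hand the callback and the buffer to PAM: */
struct app_buf buf = { "" };
struct pam_conv conv = { conv_fn, &buf };
pam_handle_t *pamh = NULL;
pam_start("myservice", "someuser", &conv, &pamh);

Once pam_authenticate() returns, the application just reads buf.data. A real callback would of course also have to handle the ordinary prompt message styles.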
I am learning a lot as a result of posting to this forum, even if I don't seem to be getting any feedback when it comes to PAM-related questions.
I think you can use PAM_CONV. As the pam_conv(3) man page says:
The PAM library uses an application-defined callback to allow a direct communication between a loaded module and the application. This callback is specified by the struct pam_conv passed to pam_start(3) at the start of the transaction.
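On the module side, the sketch would be to fetch that struct with pam_get_item() and invoke the callback directly. This uses the same hypothetical "MODULE-DATA:" prefix as above, which the module and application would have to agree on (error handling trimmed):

/* Inside the module, e.g. in pam_sm_authenticate(): fetch the
 * application's conversation function and invoke it directly. */
#include <stdlib.h>
#include <security/pam_appl.h>
#include <security/pam_modules.h>

static int push_data(pam_handle_t *pamh)
{
    const struct pam_conv *conv;
    struct pam_message m = { PAM_TEXT_INFO, "MODULE-DATA:hello from the module" };
    const struct pam_message *pm = &m;
    struct pam_response *r = NULL;

    if (pam_get_item(pamh, PAM_CONV, (const void **)&conv) != PAM_SUCCESS
        || conv == NULL || conv->conv == NULL)
        return PAM_SYSTEM_ERR;

    conv->conv(1, &pm, &r, conv->appdata_ptr);
    free(r);   /* no response is expected for an info message */
    return PAM_SUCCESS;
}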
I'm using sockets to allow users to send messages in real-time. I read the RTK Query documentation and saw an example for a query, where I would be fetching data, as opposed to mutating/sending data.
Is there any difference in implementation between fetching and posting when using streaming/sockets? I tried to understand how it works with a query and apply that to a mutation, but it didn't exactly work.
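For reference, the query-side pattern from the docs that I'm starting from looks roughly like this (adapted; the endpoint names and socket URL are placeholders):

import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react'

type Message = { id: number; text: string }

export const api = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  endpoints: (build) => ({
    getMessages: build.query<Message[], string>({
      query: (channel) => `messages/${channel}`,
      // Runs for as long as the cache entry lives; this is where the
      // docs open the socket and push incoming data into the cache.
      async onCacheEntryAdded(
        channel,
        { updateCachedData, cacheDataLoaded, cacheEntryRemoved }
      ) {
        const ws = new WebSocket('ws://localhost:8080')
        try {
          await cacheDataLoaded // initial HTTP fetch has resolved
          ws.addEventListener('message', (event) => {
            // push each incoming socket message into the cached list
            updateCachedData((draft) => {
              draft.push(JSON.parse(event.data) as Message)
            })
          })
        } catch {}
        await cacheEntryRemoved // subscriber count hit zero
        ws.close()
      },
    }),
  }),
})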
I am struggling to get this to work, so any help would be appreciated.
An interface we have to call has a requirement that we do not include an Expect: 100-continue header. (The documentation assumes I will be using C# or PHP code to talk to it, and has the code to not send the expect-100.)
I quickly googled around a bit and found many topics on how to disable this when not using BizTalk, and multiple posts that lead me to believe BizTalk sends an expect-100 by default as well (e.g. "BizTalk Data Services: Extended to bring management functions through IUpdatable" and "Adding Custom HTTP Headers to messages send via HTTP Adapter"). However, I have had trouble finding anyone trying to disable it.
Since I have found the C# code to disable it, would a solution be to create a custom pipeline component that disables this?
This is not something I would worry about. I don't recall ever seeing Expect: 100-continue in any trace to or from BizTalk using WCF.
I will say that it is very strange that they would have a dependency on not seeing this. Either way, if WCF is sending it, you should be able to remove it with a Behavior.
You'll have to set this all up to even see if it's a problem. Here's where I say just try it and see what happens.
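If it does show up, here's a minimal sketch of the Behavior route (the class name is illustrative; the key call is flipping Expect100Continue on the endpoint's ServicePoint):

using System.Net;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class SuppressExpect100Behavior : IEndpointBehavior
{
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        // Turn off Expect: 100-continue for the ServicePoint that
        // handles this endpoint's address.
        ServicePoint sp = ServicePointManager.FindServicePoint(endpoint.Address.Uri);
        sp.Expect100Continue = false;
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

Alternatively, setting ServicePointManager.Expect100Continue = false (or expect100Continue="false" in the system.net/settings/servicePointManager config element) turns it off process-wide, which may be simpler to drop into a BizTalk host.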
I want to execute a method on all connected clients from the server, except for the current client. Is there any way to do this without maintaining my own list of client IDs?
TIA!
You can use the Clients.OthersInGroup() method.
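A quick sketch of what that looks like in a hub, assuming a SignalR version where these helpers exist (the hub, group, and client method names are illustrative):

using Microsoft.AspNet.SignalR;

public class ChatHub : Hub
{
    public void Send(string message)
    {
        // Everyone in the group except the calling connection:
        Clients.OthersInGroup("chatroom").addMessage(message);

        // Or everyone connected except the caller:
        Clients.Others.addMessage(message);
    }
}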
This functionality has been proposed and discussed on the GitHub issues list for the project, but is not yet implemented (http://goo.gl/pyMwF).
For now, you will need to do your own filtering of your clients.
I saw this previous post but I have not been able to adapt the answer to get my code to work.
I am trying to filter on the term bruins, and I need to reference cacert.pem for authentication on my Windows machine. Lastly, I have written a function to parse each response (my.function) and need to include this as well.
postForm("https://stream.twitter.com/1/statuses/sample.json",
userpwd="user:pass",
cainfo = "cacert.pem",
a = "bruins",
write=my.function)
I am looking to stay completely within R and unfortunately need to use Windows.
Simply, how can I include the search term(s) that I want such that the response is filtered?
Thanks in advance.
Alright, so I've looked at what you're doing, and some of what you're working on may be helped by examining the Twitter API methods, although it can be difficult to figure out how to translate some of the examples into R (via the RCurl package).
What you're currently trying is very close to what you need to do; you just need to change a few things.
First of all, you're querying the URL for the random sample of statuses. That URL returns a random sample of roughly 1% of all tweets.
If you're interested in collecting only tweets about specific keywords, you want to use the filter API URL instead: "https://stream.twitter.com/1/statuses/filter.json"
After changing that, you simply need to change your parameter from "a" to "postfields", and the value you'd be passing would look like: "track=bruins"
Finally, you should use the getURL function to open a continuous stream, so that all tweets with your keywords can be collected, rather than using the postForm command (which I believe is intended for HTML forms).
So your final function call should look like the following:
getURL("https://stream.twitter.com/1/statuses/filter.json",
userpwd="Username:Password",
cainfo = "cacert.pem",
write=my.function,
postfields="track=bruins")
For manipulating Twitter, use the twitteR package.
library(twitteR)
searchTwitter("bruins")
You can include other parameters (like cainfo) in the call to searchTwitter, and they should get passed to getForm underneath.
I don't think the Streaming API is currently included in twitteR - the search API is different (it's backward-looking, whereas streaming is "current-looking").
From my understanding, streaming is quite different from how lots of APIs typically work; rather than pulling data from a web service and having a defined object returned, you're setting up a "pipe" for Twitter to push data to you, and you then listen for that response.
You also need to worry about OAuth I think (which twitteR does deal with).
Is there any reason that you want to stay in R? I've used Python successfully with the Streaming API and a package called tweepy to write data to a MySQL database, and then used R to query and analyse the data.
Last time I checked, twitteR did not talk to the streaming API. Moreover, as far as I know, very few publicly-available Twitter Streaming API connection libraries in any language honor Twitter's recommendations on reconnecting when Streaming disconnects / throws an error.
My recommendation is to access Streaming via a library that's actively maintained, write the re-connection protocol yourself if you have to, and persist the data into a database that handles JSON natively. I'm about to embark on a project of this nature and will be writing the collector in Perl, doing my own re-connect logic and persisting into either PostgreSQL or MongoDB. Most likely it will be MongoDB; PostgreSQL doesn't get native JSON till 9.2.
Late to the game, I know, but you'll want to use the "streamR" package for access to Twitter's streaming API.