Could someone please help me with how to post a new discussion to a LinkedIn group using PHP?
I would appreciate it if someone could provide an example.
Thanks for all replies.
Cute programmer :)
You can access the Groups API using PHP via the latest version of the Simple-LinkedIn library here:
http://code.google.com/p/simple-linkedinphp/
The release notes cover the addition of the Groups-specific methods. To answer your question using the library, you'd do something along the lines of the following:
// $OBJ_linkedin is an authenticated Simple-LinkedIn client instance;
// <groupid>, <title> and <summary> are placeholders for your own values.
$response = $OBJ_linkedin->createPost(<groupid>, <title>, <summary>);
if($response['success'] === TRUE) {
  // the post was created successfully
} else {
  // the request failed; inspect $response for error details
}
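For context, the $OBJ_linkedin client has to be constructed and authorised before that call. Here's a rough sketch of the setup, with the caveat that the file name, configuration keys and setTokenAccess() usage are assumptions based on my memory of the library's demo scripts, so double-check them against the library documentation:
<?php
// Sketch only: file name, config keys and setTokenAccess() are assumptions
// taken from memory of the library's demo scripts -- verify against the docs.
require_once('linkedin_3.x.class.php');

$API_CONFIG = array(
  'appKey'      => 'YOUR_API_KEY',
  'appSecret'   => 'YOUR_API_SECRET',
  'callbackUrl' => 'http://example.com/callback'
);

$OBJ_linkedin = new LinkedIn($API_CONFIG);

// Assumes the OAuth handshake has already been completed and the access
// token stored in the session by your callback handler.
$OBJ_linkedin->setTokenAccess($_SESSION['oauth']['linkedin']['access']);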
Short answer, you can't.
Long answer: even after two years of promising, LinkedIn still has not produced a suitable API for groups management, despite myself (I'm an LI group manager) and many others who own and/or manage groups on LI repeatedly asking for one.
Now, to look at it from the other point of view:
You don't really need an API to post; after all, it's just an HTML web server. However, even with LI you can't do anything without a user login, and that means OAuth code to log you in, creating an account, getting a login token and then providing that plus a ton more information, as well as the semantics of the discussion itself.
In short, it's not going to be a simple POST, even with groups that are open, and for such a simple task it's going to require a lot of work.
However, if you're adamant, then I would start by installing tools like Fiddler and Wireshark, then analysing a manual session on LI and observing the process of logging in, creating posts etc., end to end, so you understand what's sent where. Once you've done that, it's then just a question of reproducing it in PHP.
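To give a flavour of that last step, here's a minimal sketch of replaying a captured form POST with PHP's cURL extension. The URL, field names and token value are placeholders for whatever you actually observe in Fiddler/Wireshark:
<?php
// Sketch: replay a form POST captured from a manual browser session.
// The endpoint and field names are hypothetical -- substitute the exact
// values you observed while watching a real session.
$ch = curl_init('https://www.example.com/groups/observed-post-endpoint');

curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    'title'   => 'My discussion title',
    'summary' => 'Body of the discussion',
    'csrf'    => 'TOKEN_SCRAPED_FROM_THE_PREVIOUS_PAGE', // any hidden fields the real form sends
)));

// Reuse cookies from the earlier (scripted) login so the session is preserved.
curl_setopt($ch, CURLOPT_COOKIEJAR, '/tmp/li_cookies.txt');
curl_setopt($ch, CURLOPT_COOKIEFILE, '/tmp/li_cookies.txt');

$body   = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);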
If you're wanting this in order to write an automated spamming tool, by the way, I really wouldn't bother, because the second it gets seen it will get shut down and blocked by LI management.
UPDATE:
Looking at the links provided by the OP, it appears there is a Groups API now, and I have to say it's something LI has remained very quiet about when asked by group owners (hence the large amount of screen scraping I've done before now).
Moving on, and looking at the sample link you provided:
http://api.linkedin.com/v1/groups/12345/posts:(title,summary,creator)?order=recency
I don't know the API yet (some investigation is required), but one thing that sticks out is that it looks like you:
A) Need an account
B) Need an API key (presumably so LI can track your usage)
C) Need to have performed some kind of OAuth authentication and be logged in before you can use it.
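As a rough illustration of point C, this is what a call against that endpoint might look like once the OAuth dance is done. The Authorization header value is a placeholder; producing a correctly signed OAuth 1.0a header is a separate exercise:
<?php
// Sketch: fetch recent group posts, assuming you already hold a valid,
// signed OAuth Authorization header (the value below is a placeholder).
$url = 'http://api.linkedin.com/v1/groups/12345/posts:(title,summary,creator)?order=recency';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Authorization: OAuth oauth_consumer_key="...", oauth_token="...", oauth_signature="..."'
));

$xml = curl_exec($ch);
curl_close($ch);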
As things stand, I would recommend that you do what I'm about to and read through all the docs. :-)
We've both learned something new here.
I want to prevent or hamper the parsing of the classifieds website that I'm improving.
The website uses an API with JSON responses. As a solution, I want to add useless data between my real data, since scrapers will probably parse by ID, and give no clue about it in either the JSON response body or the headers, so they won't be able to distinguish it without close inspection.
To prevent users from seeing it, I won't serve that "useless data" to my users unless they request it explicitly by ID. From an SEO perspective, I know that Google won't crawl a page of useless data if there isn't any internal or external link to it.
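To make it concrete, this is roughly what I have in mind; the field names and decoy-generation logic are made up for illustration:
<?php
// Sketch: interleave decoy records into a listing response.
// Real data would come from the database; decoy IDs sit in a range
// that is never linked anywhere in the UI.
$realItems = array(
    array('id' => 101, 'title' => 'Bike', 'price' => 120),
    array('id' => 102, 'title' => 'Sofa', 'price' => 300),
);

function addDecoys(array $items, $every = 2) {
    $out = array();
    foreach ($items as $i => $item) {
        $out[] = $item;
        if (($i + 1) % $every === 0) {
            $out[] = array(
                'id'    => 900000000 + $i,          // never referenced by any real page
                'title' => 'Listing ' . (900000000 + $i),
                'price' => rand(100, 5000),
            );
        }
    }
    return $out;
}

header('Content-Type: application/json');
echo json_encode(addDecoys($realItems));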
How reliable would that technique be? And what problems/disadvantages/drawbacks do you think could occur in terms of user experience or SEO? Any ideas or suggestions will be very much appreciated.
P.S. I'm also rate-limiting large numbers of requests made in a short time, but it doesn't help, which is why I'm considering this technique.
I don't think banning parsers would work any better, because they can change IPs and so on.
Maybe I could get a better solution by requiring a login to access more than 50 item details, for example (and would that work against Selenium, maybe?). Registering would make it harder, and even if they do register, I can identify those users, slow their response times, and so on.
I'm new to the world of coding and Web dev and for my first real project, I've started building a quiz web app using Meteor.
Long story short, the app basically displays a random question to the user, then takes in their answer and gets feedback (it's a bit more complicated than that, but for the purposes of this question, that's the main functionality).
I've managed to get it working, but pretty much everything (apart from account creation and that kind of stuff) is done on the client side (such as getting the random question), which I'd imagine is not very secure.
I'd like to move most of the calculations and operations to the server, but I don't want to publish any of the Questions collection to the client, since that means the client could essentially change it and/or view the correct answer.
So, my question is, would it be considered 'bad practice' if I don't publish anything to the client (except their user document) and basically do everything through Meteor methods (called on the client, and executed server-side)?
I've already tried implementing it and so far everything's working fine, but was just wondering whether it's good practice. Would it hurt performance in any way?
I've searched online for a while, but couldn't really find a definitive answer, hence my post here... TIA
The example below, pulled right from the documentation, shows how to omit fields.
// Server: Publish the `Rooms` collection, minus secret info...
Meteor.publish('rooms', function () {
return Rooms.find({}, {
fields: { secretInfo: 0 }
});
});
I've been messing around with creating my own AspNet.Security.OAuthProviders implementation by copying the GitHub example. I have a few questions.
First, I successfully authenticate, but when I get back, my User.Identity.Name is empty. I don't see that information coming back from my provider. A noob question, I imagine, but do I have to explicitly request the information I want back? If so, how do I know what to ask for? I'm kind of working blindly.
Second, in the GitHub example of the Handler, CreateTicketAsync immediately makes a call to the UserInformationEndpoint. In my use case, after getting authorized I want to go to a page that has some links to some api requests that will use the acquired authorization, rather than do it right away. I'm not sure if there is an example for that or I'm making incorrect assumptions and going about this the wrong way.
This is entirely supposed to be for demo purposes as a "how to" for other developers so I want to make sure I do things the correct way.
Is there a way to determine if the request coming to a handler (let's assume the handler responds to GET and POST) is being performed by a real browser versus a programmatic client?
I already know that it is easy to spoof things like the User-Agent and the Referer, but are there other headers that are more difficult to spoof? Maybe headers that are not commonly available in classes like .NET's HttpWebRequest?
The other path I looked at is using the encrypted ViewState to send a value to the browser that gets validated on the server side, though couldn't that value simply be scraped from the previous response and added as a POST parameter to the next request?
Any help would be much appreciated,
Cheers,
There is no easy way to differentiate, because in the end a programmatic POST looks the same to the server as a POST made by a user from a browser.
As mentioned, CAPTCHAs can be used to control posting, but they are not perfect (it is very hard, but not impossible, for a computer to solve them). They can also annoy users.
Another route is only allowing authenticated users to post, but this can still be done programmatically.
If you want to get a good feel for how people are going to try to abuse your site, then you may want to look at http://seleniumhq.org/
This is very similar to the famous Halting Problem in computer science. See some more on the proof, and Alan Turing here: http://webcache.googleusercontent.com/search?q=cache:HZ7CMq6XAGwJ:www-inst.eecs.berkeley.edu/~cs70/fa06/lectures/computability/lec30.ps+alan+turing+infinite+loop+compiler&cd=1&hl=en&ct=clnk&gl=us
The most common way is using CAPTCHAs. Of course CAPTCHAs have their own issues (users don't really care for them), but they do make it much more difficult to programmatically post data. They don't really help with GETs, though you can force clients to solve a CAPTCHA before delivering content.
There are many ways to do this, such as dynamically generated XHR requests that can only be made after human tasks are completed.
Here's a great article on NP-Hard problems. I can see a huge possibility here:
http://www.i-programmer.info/news/112-theory/3896-classic-nintendo-games-are-np-hard.html
One way: you could use some tricky JS to handle tokens on click. Your server issues token IDs to elements on the page during the back-end render phase and logs them in a database or data file. Then, when users click around and submit, you compare the IDs sent via the onclick() handler against what you issued. There are plenty of ways around this, but you can also apply heuristics to determine whether posts are arriving too fast to be human; that is, even if someone scripts the hijacking of the token IDs and auto-submits, you can check whether the time between click events looks automated. Signed up for a Twitter account lately? They use passive human detection that, while not 100% foolproof, is slower and more difficult to break. Many, if not all, of the spam accounts there had to be opened by a human.
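A minimal sketch of the token-plus-timing idea in PHP, assuming a session store rather than a database; the three-second threshold and field names are purely illustrative:
<?php
// Sketch: issue a per-render token, then reject submissions that either
// lack a matching token or arrive faster than a human could plausibly act.
session_start();

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $expected = isset($_SESSION['form_token']) ? $_SESSION['form_token'] : null;
    $issuedAt = isset($_SESSION['form_issued_at']) ? $_SESSION['form_issued_at'] : 0;
    $supplied = isset($_POST['form_token']) ? $_POST['form_token'] : '';

    $badToken = ($expected === null) || !hash_equals($expected, $supplied);
    $tooFast  = (time() - $issuedAt) < 3;   // illustrative threshold, tune with real data

    if ($badToken || $tooFast) {
        http_response_code(403);
        exit('Looks automated.');
    }
    exit('Accepted.');
}

// Render phase: issue a fresh token and record when the page was served.
$_SESSION['form_token']     = bin2hex(openssl_random_pseudo_bytes(16));
$_SESSION['form_issued_at'] = time();
?>
<form method="post">
  <input type="hidden" name="form_token"
         value="<?php echo htmlspecialchars($_SESSION['form_token']); ?>">
  <button type="submit">Post</button>
</form>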
Another Way: http://areyouahuman.com/
As long as you are using encrypted methods, verifying humanity without a crappy CAPTCHA is possible. I mean, don't ignore your headers either; these are complementary approaches.
The key is to have enough complexity that it amounts to an NP-complete problem: the number of ways to solve the total set of problems is extraordinary. http://en.wikipedia.org/wiki/NP-complete
When the day comes that AI can solve multiple complex human problems on its own, we will have other things to worry about than request tampering.
http://louisville.academia.edu/RomanYampolskiy/Papers/1467394/AI-Complete_AI-Hard_or_AI-Easy_Classification_of_Problems_in_Artificial
Another company doing interesting research is http://www.vouchsafe.com/play-games: they actually use games designed so that attempts to trick the reverse Turing test (RTT) end up training it to be solvable only by humans!
Often when I post a comment or answer on a site I like to keep an eye out for additional responses from other people, possibly replying again if appropriate. Sometimes I'll bookmark a page for a while, other times I'll end up re-googling keywords to locate the post again. I've always thought there should be something better than my memory for keeping track of pages I care about for a few days to a week.
Does anyone have any clever ideas for this type of thing? Is there a micro-Delicious type of online app with a bookmarklet for very short-term follow-up?
Update I think I should clarify. I wasn't asking about Stack Overflow specifically - on the "read/write web" in general I add comments to blog posts, respond to google group threads, etc. It's that sort of mish-mash of individual pages on random sites that I would care to keep track of for seven-to-ten days.
For stackoverflow, I put together a little bookmarklet thing at http://stackoverflow.hewgill.com. I use it to keep track of posts that I might want to come back to later, for reference or to answer if nobody else did, or whatever. The backend automatically retrieves updates from the SO server and updates your list of bookmarklets.
In my head mostly. I occasionally forget things, but it works well enough.
That's a very interesting question you asked here.
I do the following:
temp bookmarks in the browser
just a tab in Firefox left open for weeks :)
subscription to email/RSS when possible; when an email notification comes, I often put it into a special folder in my email tree.
Different logins, notification types, etc. make it complicated to follow information on the web :(
Other interesting questions:
how to organize information storage (notes, saved web pages, forum threads, etc.) for current use and as a read-only library, sync it between different PCs and USB disks, and how to label (tag) it and search it
how to store old emails, conversations, chats, ...?
how to store digital photos for the future: make hard-copy printouts, or just regularly copy them from one CD to a new one
Click on your username, then Responses.