I would like to download tweets based on certain search terms. I'm aware of HTTP GET and such techniques, but I'm not sure the best way to create a simple executable that downloads the tweets and saves them for subsequent analysis.
Any ideas? I'm a basic programmer - if you say "use curl" I know roughly what you mean but not how to set up an application to run curl commands!
Hence my dilemma.
Thanks in advance!
You absolutely can do it in C# or any other language.
From a very rudimentary standpoint, the Twitter API wiki will tell you how, but I know that's not what you're really asking.
I would suggest getting familiar with a good API wrapper such as TweetSharp, which has methods not only for getting your typical timelines but also for running searches. The advantage of this (aside from not having to handle your own serialization, etc.) is that it unifies the timeline and search calls, as they are actually slightly different APIs.
The downside to this approach is that you're not going to be able to port it directly to a Mac, unless you write it using Silverlight.
The upside is that TweetSharp gives you a number of options for how it returns the data, which in turn gives you a number of options for how to save it.
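If a scripting language turns out to be easier for you than C#, the same search-and-save workflow is only a few lines in Python. The sketch below is purely illustrative, not the approach described above: it assumes the tweepy library and a Twitter API bearer token, and the query and output file names are placeholders.

```python
# Rough sketch: download tweets matching a search term and save them for analysis.
# Assumes `pip install tweepy` and a Twitter API bearer token (placeholder below).
import csv
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # placeholder credential

# Search recent tweets matching the terms (up to 100 per request).
response = client.search_recent_tweets(query="your search terms", max_results=100)

with open("tweets.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "text"])
    for tweet in response.data or []:
        writer.writerow([tweet.id, tweet.text])
```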
When using the Translation API, I get a different (and worse) translation than if I use translate.google.com.
I am working on a project for a client, and the client was dissatisfied with the translation and noticed the difference.
Do these two services use different engines? I read that the API now uses NMT mode, and that translate.google.com already uses the same engine.
Both set to translate from Norwegian to English.
Any more information that can clear this up?
Thanks!
The differences in results between translate.google.com and the Translation API calls are considered expected behavior; they can arise from maintenance tasks and the logic used by the internal processes. However, the engines used for each service seem to be private information.
Because of this, it is normal to see some variance when using the API. As a workaround, you can use the model parameter to specify which of the available models to use, and take a look at the official Specifying a model documentation for details on this option.
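For reference, here is a minimal sketch of what specifying the model looks like with the Python client library (google-cloud-translate, v2 client); it assumes your project and credentials are already set up, and the input text is just an example.

```python
# Minimal sketch: request the NMT model explicitly via the model parameter.
# Assumes `pip install google-cloud-translate` and configured credentials.
from google.cloud import translate_v2 as translate

client = translate.Client()

result = client.translate(
    "God morgen",             # example Norwegian input
    source_language="no",
    target_language="en",
    model="nmt",              # or "base" for the older phrase-based model
)
print(result["translatedText"])
```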
It's almost 3 years later and the problem still remains!
I was trying to translate a dataset with the Google Translate API, but it failed to translate some texts into the target language (in my case, Persian/Farsi). So I decided to check them to see if there's a pattern, and maybe translate them using the web version of Google Translate.
As I was doing so, I noticed that the web version actually could translate some of those untranslated texts, but not all. While trying to find a reason for this behaviour, I found that most of them were names rather than sentences. But as we know, names can easily be written in the target language's characters as the translation. So why doesn't the API transliterate those names while the web version does? This photo will explain everything, perhaps:
(screenshot: verified translation badge)
As can be seen, some translations have a badge indicating that the translation has been verified, while some others don't.
So to recap, my guess is that maybe the API is set to only use verified translations, but as for the web version, even unverified translations are allowed since you can edit or report them.
I was wondering if it is possible to use Julia to perform computations on a webpage in an automated way.
For example, suppose we have a 3x3 HTML form in which we input some numbers. These form a square matrix A, and we can find its eigenvalues in Julia quite straightforwardly. I would like to use Julia to do the computation and then return the results.
In my understanding (which is limited in this direction) I guess the process should be something like:
collect the data entered in the form
send the data to a machine which has Julia installed
run the Julia code with the given data and store the result
send the result back to the webpage and show it.
Do you think something like this is possible? (I've seen some stuff using HttpServer which allows computation with the browser, but I'm not sure that's the right thing to use.) If so, what are the things I need to look into? Do you have any examples of such implementations of web calculations?
If you are using or can use Node.js, you can use node-julia. It has some limitations, but should work fine for this.
Coincidentally, I was already mostly done with putting together an example that does this. A rough mockup is available here, which uses express to serve the pages and plotly to display results (among other node modules).
Another option would be to write the server itself in Julia using Mux.jl and skip server-side javascript entirely.
Yes, it can be done with HttpServer.jl
It's pretty simple - you make a small script that starts your HttpServer, which then listens on the designated port. Part of configuring the web server is defining handlers (functions) that are invoked when certain events take place in your app's life cycle (new request, error, etc.).
Here's a very simple official example:
https://github.com/JuliaWeb/HttpServer.jl/blob/master/examples/fibonacci.jl
However, things can get complex fast:
you already need to perform 2 actions:
a. render your HTML page where you take the user input (by default)
b. render the response page as a consequence of receiving a POST request
you'll need to extract the data payload coming through the form. Data sent via GET is easy to reach, data sent via POST not so much.
if you expose this to users you need to set up some failsafe measures to respawn your server script - otherwise it might just crash and exit.
if you open your script to the world you must make sure that it's not vulnerable to attacks - you don't want to empower a hacker to execute random Julia code on your server or access your DB.
So for basic usage on a small case, yes, HttpServer.jl should be enough.
If however you expect a bigger project, you can give Genie a try (https://github.com/essenciary/Genie.jl). It's still a work in progress, but it handles most of the low-level work, allowing developers to focus on the specific app logic rather than on the transport layer (Genie's author here, btw).
If you get stuck, there are GitHub issues and a Gitter channel.
Try Escher.jl.
This enables you to build up the web page in Julia.
I would like to use the twitteR package for R to get the list of follower IDs of a user with more than 150k followers. Naïvely calling the getFollowersIDs() method of the user class breaks because it exceeds the API request limit (Client Error (429)). Is there a way of circumventing this?
You can get the number of followers directly from the "followersCount" field of the user class. I'm not sure; just try.
I am not sure how to do it if you're bound to using R. I absolutely love the twitteR package, but recently I have been shifting some of my code to Python through the use of modules like tweepy. By no means do I intend to suggest that one language is better than the other, but when going beyond simple tasks, I have found Python to be a tad more robust.
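If switching to Python is an option, a rough sketch of the tweepy approach for the original follower-IDs problem might look like the following; tweepy can sleep through rate-limit windows instead of failing with a 429. The credentials and screen name are placeholders.

```python
# Rough sketch: collect follower IDs for a large account, letting tweepy
# wait out rate-limit windows instead of raising a 429 error.
# Assumes `pip install tweepy` and your own API credentials (placeholders below).
import tweepy

auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET",      # placeholders
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",  # placeholders
)
api = tweepy.API(auth, wait_on_rate_limit=True)  # sleep rather than error on the limit

# get_follower_ids returns up to 5000 IDs per call; Cursor handles the paging.
follower_ids = [
    fid for fid in tweepy.Cursor(api.get_follower_ids, screen_name="some_user").items()
]
print(len(follower_ids))
```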
I'm curious about website scraping (i.e. how it's done, etc.), specifically because I'd like to write a script to perform the task for the site Hype Machine.
I'm actually a Software Engineering undergraduate (4th year); however, we don't really cover any web programming, so my understanding of JavaScript/RESTful APIs/all things web is pretty limited, as we're mainly focused on theory and client-side applications.
Any help or directions greatly appreciated.
The first thing to look for is whether the site already offers some sort of structured data, or if you need to parse through the HTML yourself. Looks like there is an RSS feed of latest songs. If that's what you're looking for, it would be good to start there.
You can use a scripting language to download the feed and parse it. I use Python, but you could pick a different scripting language if you like. Here are some docs on how you might download a URL in Python and parse XML in Python.
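As a small illustration of that download-and-parse step using only the standard library (the feed URL below is a placeholder):

```python
# Small sketch: download an RSS feed and pull out the item titles and links.
# Uses only the standard library; the feed URL is a placeholder.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.com/latest.rss"  # placeholder feed URL

with urllib.request.urlopen(FEED_URL) as response:
    xml_data = response.read()

root = ET.fromstring(xml_data)
# Standard RSS 2.0 layout: <rss><channel><item>...</item></channel></rss>
for item in root.findall("./channel/item"):
    print(item.findtext("title"), item.findtext("link"))
```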
Another thing to be conscious of when you write a program that downloads a site or RSS feed is how often your scraping script runs. If you have it run constantly so that you'll get the new data the second it becomes available, you'll put a lot of load on the site, and there's a good chance they'll block you. Try not to run your script more often than you need to.
You may want to check the following books:
"Webbots, Spiders, and Screen Scrapers: A Guide to Developing Internet Agents with PHP/CURL"
http://www.amazon.com/Webbots-Spiders-Screen-Scrapers-Developing/dp/1593271204
"HTTP Programming Recipes for C# Bots"
http://www.amazon.com/HTTP-Programming-Recipes-C-Bots/dp/0977320677
"HTTP Programming Recipes for Java Bots"
http://www.amazon.com/HTTP-Programming-Recipes-Java-Bots/dp/0977320669
I believe the most important thing you must analyze is what kind of information you want to extract. If you want to crawl entire websites, as Google does, your best option is probably to look at tools like Nutch from Apache.org (nutch.apache.org) or the Flaptor solution (http://ww.hounder.org). If you need to extract particular areas from unstructured documents - websites, docs, PDFs - you can probably extend Nutch plugins to fit your particular needs.
On the other hand, if you need to extract particular text or clip areas of a website by setting rules against the page's DOM, then what you need is closer to tools like mozenda.com. With those tools you will be able to set up extraction rules in order to scrape particular information from a website. You must take into consideration that any change to a webpage can break your robot.
Finally, if you are planning to develop a website using outside information sources, you could purchase information from companies such as spinn3r.com, which sell particular niches of information ready to be consumed. You will be able to save lots of money on infrastructure.
Hope it helps!
Sebastian.
Python has the feedparser module, available at feedparser.org, which handles RSS and Atom in their various flavours. No reason to reinvent the wheel.
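A minimal sketch (the feed URL is a placeholder):

```python
# Minimal sketch: let feedparser do the RSS/Atom handling.
# Assumes `pip install feedparser`; the URL is a placeholder.
import feedparser

feed = feedparser.parse("http://example.com/latest.rss")  # placeholder URL
for entry in feed.entries:
    print(entry.title, entry.link)
```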
A client wants to "Web-enable" a spreadsheet calculation -- the user specifies the values of certain cells, and is then shown the resulting values in other cells.
(They do NOT want to show the user a "spreadsheet-like" interface. This is not a UI question.)
They have a huge spreadsheet with lots of calculations over many, many sheets. But, in the end, only two things matter -- (1) you put numbers in a couple cells on one sheet, and (2) you get corresponding numbers off a couple cells in another sheet. The rest of it is a black box.
I want to present a UI to the user to enter the numbers they want, then I'd like to programmatically open the Excel file, set the numbers, tell it to re-calc, and read the result out.
Is this possible/advisable? Is there a commercial component that makes this easier? Are there pitfalls I'm not considering?
(I know I can use Office Automation to do this, but I know it's not recommended to do that server-side, since it tries to run in the context of a user, etc.)
A lot of people are saying I need to recreate the formulas in code. However, this would be staggeringly complex.
It is possible, but not advisable (and officially unsupported).
You can interact with Excel through COM or the .NET Primary Interop Assemblies, but this is meant to be a client-side process.
On the server side, no display or desktop is available, and any unexpected dialog box (for example) will make your web app hang – your app will behave flakily.
Also, attaching an Excel process to each request isn't exactly a low-resource approach.
Working out the black box and re-implementing it in a proper programming language is clearly the better (as in "more reliable and faster") option.
Related reading: KB257757: Considerations for server-side Automation of Office
You definitely don't want to be using interop on the server side; it's bad enough using it as a kludge on the client side.
I can see two options:
Figure out the spreadsheet logic. This may benefit you in the long term by making the business logic a known quantity, and in the short term you may find that there are actually bugs in the spreadsheet (I have encountered tons of monster spreadsheets used for years that turn out to have simple bugs in them - everyone just assumed the answers must be right)
Evaluate SpreadSheetGear.NET, which is basically a replacement for interop that does it all without Excel (it replicates a huge chunk of Excel's non-visual logic and IO in .NET)
Although this is certainly possible using ASP.NET, it's very inadvisable. It's un-scalable and prone to concurrency errors.
Your best bet is to analyze the spreadsheet calculations and duplicate them. Now, granted, your business is not going to like the time it takes to do this, but it will (presumably) give them a more usable system.
Alternatively, you can simply serve up the spreadsheet to users from your website, in which case you do almost nothing.
Edit: If your stakeholders really insist on using Excel server-side, I suggest you take a good hard look at Excel Services, as John Saunders suggests. It may not get you everything you want, but it'll get you quite a bit, and should solve some of the issues you'll end up with trying to do it server-side with ASP.NET.
That's not to say that it's a panacea; your mileage will certainly vary. And SharePoint isn't exactly cheap to buy or maintain. In fact, short-term costs could easily be dwarfed by long-term costs if you go the SharePoint route--but it might be the best option to fit the requirement.
I still suggest you push back in favor of coding all of your logic in a separate .NET module. That way you can use it both server-side and client-side. Excel can easily pass calculations to a COM object, and you can very easily publish your .NET library as COM objects. In the end, you'd have a much more maintainable and usable architecture.
Setting aside the discussion of whether it makes sense to manipulate an Excel sheet on the server side, one way to do this would be to use the
Microsoft.Office.Interop.Excel.dll
Using this library, you can tell Excel to open a spreadsheet and change and read its contents from .NET. I have used the library in a WinForms application, and I guess it can also be used from ASP.NET.
Still, consider the concurrency problems already mentioned... However, if the sheet is accessed infrequently, why not...
The simplest way to do this might be to:
Upload the Excel workbook to Google Docs -- this is very clean, in my experience
Use the Google Spreadsheets Data API to update the data and return the numbers.
Here's a link to get you started on this, if you want to go that direction:
http://code.google.com/apis/spreadsheets/overview.html
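The Spreadsheets Data API linked above has since been replaced, but the idea carries over to the current Sheets API. Purely as an illustration, here is a rough Python sketch using the gspread client; the workbook name, sheet names, cell addresses, and credentials file are all placeholders, and the workbook's own formulas do the recalculation on Google's side.

```python
# Rough sketch: write inputs into an uploaded workbook and read the
# recalculated outputs back. All names and cell addresses are placeholders.
# Assumes `pip install gspread` and a Google service-account credentials file.
import gspread

gc = gspread.service_account(filename="service_account.json")  # placeholder credentials
workbook = gc.open("MyUploadedWorkbook")                       # placeholder workbook name

inputs = workbook.worksheet("Inputs")    # placeholder sheet names
outputs = workbook.worksheet("Outputs")

# Put the user's numbers into the input cells...
inputs.update_acell("B2", 42)
inputs.update_acell("B3", 7)

# ...then read the recalculated result out of the output sheet.
print(outputs.acell("D5").value)
```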
Let me be more adamant than others have been: do not use Excel server-side. It is intended to be used as a desktop application, meaning it is not intended to be used from random different threads, possibly multiple threads at a time. You're better off writing your own spreadsheet engine than trying to use Excel (or any other Office desktop product) from a server.
This is one of the reasons that Excel Services exists. A quick search on MSDN turned up this link: http://blogs.msdn.com/excel/archive/category/11361.aspx. That's a category list, so contains a list of blog posts on the subject. See also Microsoft.Office.Excel.Server.WebServices Namespace.
It sounds like you're saying that the user has the spreadsheet open on their local system, and you want a web site to manipulate that local spreadsheet?
If that's the case, you can't really do that. Even Office automation won't help, unless you want to require them to upload the sheet to the server and download a new altered version.
What you can do is create a web service to do the calculations and add some VBA or VSTO code to the Excel sheet to talk to that service.