Is there a way to play a sound on mobile when a task is finished? - r

Maybe I didn't search well enough, but I wonder whether there is a way to play a sound on my Apple mobile device when a task is finished, for example a call to apply?
Best Regards

(This is one of many possible answers, and happens to work very well for me.)
I use Pushbullet and RPushbullet. After the initial setup (a free account and free use), from any R instance with internet connectivity I can run pbNote('note', 'title', 'body of note'), and it "instantly" pops up on my computer and mobile.
Because it is an R package/function, it can easily be scripted to meet whatever static or dynamic needs arise. It can also send images (I'm told), files, addresses (think Google Maps), and lists.
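For anyone outside R, the same note can be pushed straight at Pushbullet's public REST API; here is a minimal sketch in Python with requests (the token is a placeholder for whatever your Pushbullet account page gives you):
import requests

# Push a note through Pushbullet's v2 REST endpoint; the Access-Token
# value is a placeholder for the token from your account settings page.
resp = requests.post(
    "https://api.pushbullet.com/v2/pushes",
    headers={"Access-Token": "o.YOUR_TOKEN"},
    json={"type": "note", "title": "title", "body": "body of note"},
)
resp.raise_for_status()  # fail loudly if the push was rejected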

I'm using the twitteR package and tweet something when a long-running task is done. You can then set up a second Twitter account that follows the account you tweet to from R, and set an alert for new tweets.
To be able to tweet from R, though, you have to go through all of Twitter's authentication steps.
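For reference, a rough Python equivalent of the same idea using tweepy, assuming you have already created a Twitter app and collected the four OAuth credentials (the placeholder strings below):
import tweepy

# OAuth credentials from your Twitter app settings (placeholders)
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# ... long-running task runs here ...

api.update_status("Long task finished!")  # the tweet your second account watches for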

I use my own GitHub package to send a text. It wraps Python code I didn't write and don't fully understand, so I maintain it for myself but haven't been able to address other people's problems:
https://github.com/trinker/gmailR
So the use may look something like:
gmail(to=cell2email(5555555555, "sprint"), password = "password")
Including this at the end of the script sends me a text when the long task is complete. It takes advantage of the fact that cell numbers can be turned into email addresses if the cell carrier is known.
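The underlying trick is plain email; here is a minimal sketch using Python's standard library, assuming Gmail as the outgoing server and Sprint's commonly cited gateway domain (verify both against your own carrier and account):
import smtplib
from email.mime.text import MIMEText

# Build the message; the gateway domain is an assumption to check
# against your carrier's documentation.
msg = MIMEText("Long task finished!")
msg["Subject"] = "R job done"
msg["From"] = "you@gmail.com"
msg["To"] = "5555555555@messaging.sprintpcs.com"

with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
    server.login("you@gmail.com", "password")  # an app password is safer
    server.send_message(msg)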


Use Julia to perform computations on a webpage

I was wondering if it is possible to use Julia to perform computations on a webpage in an automated way.
For example, suppose we have a 3x3 HTML form into which we input some numbers. These form a square matrix A, and we can find its eigenvalues in Julia pretty straightforwardly. I would like to use Julia to do the computation and then return the results.
In my understanding (which is limited in this area), the process should be something like:
collect the data entered in the form
send the data to a machine which has Julia installed
run the Julia code with the given data and store the result
send the result back to the webpage and show it.
Do you think something like this is possible? (I've seen some stuff using HttpServer which allows computation with the browser, but I'm not sure that's the right thing to use.) If yes, what are the things I need to look into? Do you have any examples of such web-calculation implementations?
If you are using or can use Node.js, you can use node-julia. It has some limitations, but should work fine for this.
Coincidentally, I was already mostly done with putting together an example that does this. A rough mockup is available here, which uses express to serve the pages and plotly to display results (among other node modules).
Another option would be to write the server itself in Julia using Mux.jl and skip server-side javascript entirely.
Yes, it can be done with HttpServer.jl.
It's pretty simple: you write a small script that starts your HttpServer, which then listens on the designated port. Part of configuring the web server is defining handlers (functions) that are invoked when certain events take place in your app's life cycle (new request, error, etc.).
Here's a very simple official example:
https://github.com/JuliaWeb/HttpServer.jl/blob/master/examples/fibonacci.jl
However, things can get complex fast:
you already need to perform two actions:
a. render the HTML page where you take the user input (the default GET request)
b. render the response page as a consequence of receiving a POST request
you'll need to extract the data payload coming through the form. Data sent via GET is easy to reach; data sent via POST, not so much.
if you expose this to users, you need to set up some failsafe measures to respawn your server script; otherwise it might just crash and exit.
if you open your script to the world, you must make sure it's not vulnerable to attacks: you don't want to empower a hacker to execute arbitrary Julia code on your server or access your DB.
So for basic usage in a small case, yes, HttpServer.jl should be enough.
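For readers who want the shape of steps (a) and (b) without the Julia specifics, here is a stand-in sketch in Python with Flask and NumPy (both my choices, not part of the Julia stack); the Julia version follows the same render-form / handle-POST / compute loop:
from flask import Flask, request
import numpy as np

app = Flask(__name__)

@app.route("/", methods=["GET", "POST"])
def eigen():
    if request.method == "POST":
        # b. handle the POST: rebuild the 3x3 matrix from the nine fields
        A = np.array([[float(request.form[f"a{i}{j}"]) for j in range(3)]
                      for i in range(3)])
        return f"Eigenvalues: {np.linalg.eigvals(A).tolist()}"
    # a. render the input form on a plain GET
    cells = "".join(f'<input name="a{i}{j}" size="4">' + ("<br>" if j == 2 else "")
                    for i in range(3) for j in range(3))
    return f'<form method="post">{cells}<input type="submit" value="Compute"></form>'

if __name__ == "__main__":
    app.run()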
If, however, you expect a bigger project, you can give Genie a try (https://github.com/essenciary/Genie.jl). It's still a work in progress, but it handles most of the low-level work, allowing developers to focus on the specific app logic rather than on the transport layer (Genie's author here, btw).
If you get stuck, there are GitHub issues and a Gitter channel.
Try Escher.jl.
This enables you to build up the web page in Julia.

How to deal with heavy traffic due to a continuous image stream?

I've got this app idea that involves a continuous stream of images sent by users. Every user is shown the current image, and as soon as a user sends in a new image, everyone should see the new image where the old one used to be. Images are sent in somewhat like Snapchat, or even faster and simpler than that. Now imagine 1,000+ or even 50,000+ people doing this at the same time: something like one new picture every second that has to be pushed to 50,000+ devices!
How on earth could I manage such traffic? It seems kind of impossible, and this is a very vague question, but I thought I would ask here before scrapping the idea or settling for a compromise.
Check out Build realtime Applications; this should help you.
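The core pattern behind such services is pub/sub fan-out. Here is a toy sketch in Python with the websockets package (my choice for illustration; a real deployment would shard this across many servers behind a message broker and push image URLs, not the image bytes themselves):
import asyncio
import websockets  # third-party package; v10+ handler signature

connected = set()  # every device currently online

async def handler(ws):
    connected.add(ws)
    try:
        async for message in ws:  # a client submitted a new image URL
            # fan the update out to everyone currently connected
            for peer in list(connected):
                try:
                    await peer.send(message)
                except websockets.ConnectionClosed:
                    connected.discard(peer)
    finally:
        connected.discard(ws)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())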

LinkedIn group API auto post via PHP

Can someone please help me with how to post a new discussion in a LinkedIn group using PHP?
I would appreciate it if someone could come up with an example.
Thanks for all replies.
You can access the Groups API using PHP via the latest version of the Simple-LinkedIn library here:
http://code.google.com/p/simple-linkedinphp/
The release notes cover the additions of the Groups-specific methods. To answer your question using the library, you'd do something along the lines of the following:
$response = $OBJ_linkedin->createPost(<groupid>, <title>, <summary>);
if ($response['success'] === TRUE) {
    // success
} else {
    // failure
}
Short answer: you can't.
Long answer: even after two years of promising, LinkedIn still has not produced a suitable API for groups management, despite myself (I'm an LI group manager) and many others who own and/or manage groups on LI repeatedly asking.
Now, to look at it from the other point of view:
You don't really need an API to post; after all, it is just an HTML web server. However, even with LI you can't do anything without a user login, and that means OAuth code to log you in, creating an account, getting a login token, and then providing that and a ton more information, as well as the semantics of the discussion.
In short, it's not going to be a simple POST, even with groups that are open, and for such a simple task it's going to require a lot of work.
However, if you're adamant, I would start by installing tools like Fiddler and Wireshark, then analysing a manual session on LI and observing the process of logging in, creating posts, etc. end to end, so you understand what's sent where. Once you've done that, it's then just a question of reproducing that in PHP.
If you're wanting this to write an automated spamming tool, by the way, I really wouldn't bother, because the second it gets seen it will get shut down and prevented from being used by LI management.
UPDATE:
Looking at the links provided by the OP, it appears there is a groups API now, and I have to say it's something LI has remained very quiet about when asked by group owners (hence the large amount of screen scraping I've done before now).
Moving on, and looking at the sample link you provided:
http://api.linkedin.com/v1/groups/12345/posts:(title,summary,creator)?order=recency
I don't know the API yet (some investigation is required), but one thing that sticks out is that it looks like you:
A) need an account
B) need an API key (presumably so LI can track your usage)
C) need to have performed some kind of OAuth authentication and logged in before you can use it
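For a feel of what such a call involves, here is a hypothetical Python sketch using requests_oauthlib against the URL pattern above; the credentials are placeholders, and the XML payload shape is my assumption, not taken from LI's docs:
from requests_oauthlib import OAuth1Session

# All four credentials are placeholders obtained by registering an app
linkedin = OAuth1Session(
    client_key="API_KEY",
    client_secret="API_SECRET",
    resource_owner_key="USER_TOKEN",
    resource_owner_secret="USER_SECRET",
)

# The payload shape below is an assumption for illustration only
xml_body = ("<post><title>My discussion</title>"
            "<summary>Body of the discussion</summary></post>")

resp = linkedin.post(
    "http://api.linkedin.com/v1/groups/12345/posts",
    data=xml_body,
    headers={"Content-Type": "application/xml"},
)
print(resp.status_code)  # 201 would suggest the post was created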
As things stand, I would recommend that you do what I'm about to do and read through all the docs. :-)
We've both learned something new here.

To Develop an LMS and SCORM Sequencing Engine

We want an LMS (coded in ASP.NET/VB.NET) which is able to import SCORM packages and display the content to learners. I am totally new to SCORM and have been shifted to this project. I want to know how I can access a SCORM assessment object's (test) results, like learner ID, passed/failed, and time.
Can you please guide me on what I will need to implement in ASP.NET code to accomplish my goal?
The task I have done so far is:
reading the manifest from the zip file: unzipping it and getting all the information from it (content name, description, items, and launch page); when the user clicks on a particular course, a pop-up window launches the page.
I eagerly want to know what I can do next to communicate with the LMS through the APIs. Do I need to develop my own LMS to get the results? If there is a quiz running, all I need to know is the number of questions the user attempted and whether the user passed or failed, and I need to store all of this information in the database per user so that I can review the results afterwards.
So the tasks remaining:
a tracking mechanism to deliver the content;
a SCORM/LMS sequencing engine that controls the navigation between parts of a SCORM-conformant course.
Please help.
SLK at CodePlex provides a good starting point. However, if you truly want to provide an in-house SCORM player that is fully compliant, you have a major task ahead of you. In essence there are three parts you need to fully develop:
CAM - the unzipping process, which it sounds like you have already achieved.
RTE - the JavaScript host for SCORM, providing the eight specified API methods (in SCORM 2004: Initialize, Terminate, GetValue, SetValue, Commit, GetLastError, GetErrorString, GetDiagnostic). Behind this you also need to implement the SCORM data model, which SLK does help with. If you have implemented all of this, then there should be entries in the data model that indicate completion etc.
SN - the sequencing and navigation processing. This is by far the most complex part; I am still in the process of trying to implement it, using SLK, and it is hard. It is the completion of this that will potentially give you more information about what has been done.
It is also worth looking at scorm.com; they are a consultancy, but they provide a lot of useful information about the SCORM standard.
That is true. SCORM is one of those standards where you can implement as little as possible. But you will need some JavaScript talking to a backend script (JSON to the rescue) so you can track the SCORM data and save it to your database; a rough sketch of that piece follows.
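A minimal sketch of that backend piece in Python, assuming the player-side JavaScript POSTs the collected cmi fields as JSON (the field names follow SCORM 1.2's cmi.core.* data model; the route and table are hypothetical):
import sqlite3
from flask import Flask, request

app = Flask(__name__)

def db():
    conn = sqlite3.connect("scorm.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS attempts (
                        student_id TEXT, lesson_status TEXT,
                        score_raw REAL, session_time TEXT)""")
    return conn

@app.route("/track", methods=["POST"])
def track():
    cmi = request.get_json()  # e.g. {"cmi.core.student_id": "42", ...}
    conn = db()
    conn.execute("INSERT INTO attempts VALUES (?, ?, ?, ?)",
                 (cmi.get("cmi.core.student_id"),
                  cmi.get("cmi.core.lesson_status"),  # "passed", "failed", ...
                  cmi.get("cmi.core.score.raw"),
                  cmi.get("cmi.core.session_time")))
    conn.commit()
    return {"ok": True}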
But let me tell you this: that is the easy part! Building your own course creator is a whole other beast.

How to scrape websites such as Hype Machine?

I'm curious about website scraping (i.e. how it's done, etc.), specifically because I'd like to write a script to perform the task for the site Hype Machine.
I'm actually a software engineering undergraduate (4th year), but we don't really cover any web programming, so my understanding of JavaScript, RESTful APIs, and all things web is pretty limited, as we're mainly focused on theory and client-side applications.
Any help or directions greatly appreciated.
The first thing to look for is whether the site already offers some sort of structured data, or if you need to parse through the HTML yourself. It looks like there is an RSS feed of the latest songs; if that's what you're looking for, it would be a good place to start.
You can use a scripting language to download the feed and parse it. I use Python, but you could pick a different scripting language if you like. There are docs on how you might download a URL in Python and parse XML in Python.
Another thing to be conscious of when you write a program that downloads a site or RSS feed is how often your scraping script runs. If it runs constantly, so that you get new data the second it becomes available, you'll put a lot of load on the site, and there's a good chance they'll block you. Try not to run your script more often than you need to.
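To make the download-and-parse step concrete, here is a small sketch using only Python's standard library (the feed URL is a placeholder for whatever the site publishes):
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/latest.rss"  # placeholder

with urllib.request.urlopen(FEED_URL) as resp:
    tree = ET.parse(resp)

# RSS 2.0 nests entries under channel/item
for item in tree.getroot().iterfind("./channel/item"):
    print(item.findtext("title"), item.findtext("link"))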
You may want to check the following books:
"Webbots, Spiders, and Screen Scrapers: A Guide to Developing Internet Agents with PHP/CURL"
http://www.amazon.com/Webbots-Spiders-Screen-Scrapers-Developing/dp/1593271204
"HTTP Programming Recipes for C# Bots"
http://www.amazon.com/HTTP-Programming-Recipes-C-Bots/dp/0977320677
"HTTP Programming Recipes for Java Bots"
http://www.amazon.com/HTTP-Programming-Recipes-Java-Bots/dp/0977320669
I believe that the most important thing you must analyze is what kind of information you want to extract. If you want to extract entire websites, as Google does, your best option is probably to analyze tools like Nutch from apache.org or the Flaptor solution (http://ww.hounder.org). If you need to extract particular areas of unstructured documents (websites, docs, PDFs), you can probably extend Nutch plugins to fit particular needs (nutch.apache.org).
On the other hand, if you need to extract particular text or clipped areas of a website, where you set rules using the DOM of the page, then what you should check out is more along the lines of tools like mozenda.com. With those tools you will be able to set up extraction rules in order to scrape particular information from a website. You must take into consideration that any change to a webpage may break your robot.
Finally, if you are planning to develop a website using external information sources, you could purchase information from companies such as spinn3r.com, where they sell particular niches of information ready to be consumed. You will be able to save a lot of money on infrastructure.
Hope it helps!
sebastian.
Python has the feedparser module, located at feedparser.org, which handles RSS in its various flavours and Atom in its various flavours. No reason to reinvent the wheel.
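The same task with feedparser, which copes with the dialect differences for you (the URL is again a placeholder):
import feedparser

feed = feedparser.parse("https://example.com/latest.rss")
for entry in feed.entries:
    print(entry.title, entry.link)  # most feeds list newest items first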
