Unique download link created after button click on website (not sure what language to search for on the internet)

I would like to learn what language the following is done in:
A user comes to the site and fills out a mailing list form (for example).
When the user submits the form, a unique download link is generated for the file (for example: www.myDomain.com/downloads/myFile.zip).
Bonus: have that link expire after 24 hours or however long seems necessary (I actually believe that is done with PHP, which I have been playing with, but I am not even a novice yet).
I am not looking for anyone to give me the answer, but maybe point me in the right direction as to where to learn. I have googled many different variations of "unique download link on button click".
My level of knowledge is XHTML/CSS, and I have played with JavaScript and PHP, but as I said, I am not even a novice. I am looking more for what language this is done in so I can work towards it.

You can do this in pretty much any server-side language, including PHP. How it works is that the user submits the HTML form and a PHP script processes it to see if it's filled in correctly and matches all the criteria (emails are actual emails, names aren't blank, etc.). Then it would use a database backend to insert the details of the form so you have them for your records.
For the download link, once the form details are entered, it would generate a token link just for that user along with a timestamp, both of which (token and timestamp) are stored in your database. This link points to your PHP script, which takes in the token (a URL GET variable) and checks whether the token is valid and whether the timestamp is less than 24 hours old. If both conditions are true, it serves the file to the user for download; otherwise it shows an error.
PHP and MySQL would be good enough tools, and should be fairly easy to get started with given the documentation available.
Steps:
Look into how to capture form input in PHP (validation and security too)
Store the input in a database (MySQL, for example)
Generate a token link, store it in the database with a timestamp, and serve it to the user
When the token link is accessed, check that the token is valid and the timestamp is not more than 24 hours old (a rough sketch of these last two steps follows below)
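Here is a minimal sketch of those last two steps in PHP with PDO and MySQL. The table name download_tokens, the column names, and the file path are made up for illustration, not a fixed recipe:

<?php
// subscribe.php - processes the form and generates the unique link (sketch only)
$pdo = new PDO('mysql:host=localhost;dbname=mysite', 'dbuser', 'dbpass');

$email = filter_input(INPUT_POST, 'email', FILTER_VALIDATE_EMAIL);
if ($email === false || $email === null) {
    die('Please enter a valid email address.');
}

// Generate an unguessable token and store it together with a creation timestamp.
$token = bin2hex(random_bytes(32)); // PHP 7+
$stmt  = $pdo->prepare('INSERT INTO download_tokens (email, token, created_at) VALUES (?, ?, NOW())');
$stmt->execute([$email, $token]);

// Email or display the unique link.
echo 'Your download link: https://www.myDomain.com/download.php?token=' . $token;

And the script behind the generated link:

<?php
// download.php - validates the token and serves the file only if it is under 24 hours old
$pdo = new PDO('mysql:host=localhost;dbname=mysite', 'dbuser', 'dbpass');

$token = $_GET['token'] ?? '';
$stmt  = $pdo->prepare('SELECT 1 FROM download_tokens WHERE token = ? AND created_at > NOW() - INTERVAL 24 HOUR');
$stmt->execute([$token]);

if ($stmt->fetchColumn() === false) {
    http_response_code(410);
    die('This download link is invalid or has expired.');
}

// Keep the real file outside the web root so it can only be reached through this script.
$file = '/var/files/myFile.zip';
header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="myFile.zip"');
header('Content-Length: ' . filesize($file));
readfile($file);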

Related

How to expire a Branch.io link within a specific time? (Deep linking via Branch Metrics)

I am using Branch.io to generate deep links. I am using their public API endpoint to generate links.
Here is their endpoint: https://api.branch.io/v1/url
I append my Branch key and the data I need to associate with this link. Everything is working fine, but I need this link to expire within one hour.
Reading up here: https://github.com/BranchMetrics/branch-deep-linking-public-api#creating-a-deep-linking-url
I added the "duration" key as well, but it didn't expire the link.
It would be great if anyone could help me figure out how to expire a Branch.io link.
Alex from Branch.io here: the duration parameter is used for something different, so it's not going to be able to do what you want. We don't have a built-in feature to expire links, but you could create something close to it yourself:
Add a custom link parameter containing a timestamp for when the link was created.
Check for that timestamp when handling the link at the destination, and do something different if it is more than an hour old. I'm guessing this would be inside your app, and also on whatever fallback URL you have specified for when the app isn't installed or the user is on desktop.
A mail from the Branch.io support team suggested the following:
If you found out about the $exp_date parameter from here, then the parameters in that list are only used for iOS Spotlight Indexing but will be used by Branch in the future. A better solution than utilizing $exp_date is to code logic into your client to determine what to do with link data based on date. This way, your deep links will always work and always carry data through, and you won't have to worry about users clicking empty links.
This way, you would include the date as an extra meta key/value pair, and examine this date in your client when receiving link params to determine whether you want to honor the link's contents or not.
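For the fallback-URL side of the checks described above, here is a minimal PHP sketch. It assumes you add the creation time yourself as a custom link parameter (called created_at here, a Unix timestamp; the name is hypothetical, not something Branch provides) and that your fallback page can read it from the query string:

<?php
// fallback.php - hypothetical desktop/fallback page for the link.
// "created_at" is a custom parameter we attached when creating the link, not a Branch built-in.
$createdAt = isset($_GET['created_at']) ? (int) $_GET['created_at'] : 0;

if ($createdAt === 0 || time() - $createdAt > 3600) {
    // The timestamp is missing or the link is more than an hour old.
    echo 'Sorry, this link has expired.';
    exit;
}

// Otherwise honor the link's contents as usual.
echo 'Link is still valid - continue with the deep link data.';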

Drupal 7 Content Deadline

I am using Drupal 7 and a custom CCK content type in order to allow users to submit information to our website. I'd like to be able to allow submissions only between a set of user-definable dates. Once the dates expire, I'd like the user to receive a message of some sort stating that the deadline for submissions has expired when they click the link to open the form.
I currently go in manually and turn off permissions for the content type once the deadline expires, but that is clunky and requires a little too much management (I have 15 forms I need to do this for). I've searched Stack Overflow and Google and have not come up with anything that fits my needs, most likely because I'm not using the right keywords.
Does anyone know an easy way to do this with a module, or do I need to write my own in order to accomplish this goal? Thanks in advance for any help.
I think you have to write a small custom module to achieve this. You would use hook_node_access() to control access to the node creation page and show your error message and/or redirect (a sketch follows below).
https://api.drupal.org/api/drupal/modules%21node%21node.api.php/function/hook_node_access/7
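A minimal sketch of what that hook might look like in a custom module. The content type name and the hard-coded deadline are assumptions; in practice you would load your user-definable dates from a variable or a field:

<?php
/**
 * Implements hook_node_access().
 *
 * Sketch only: denies creating "submission" nodes after a fixed deadline.
 */
function mymodule_node_access($node, $op, $account) {
  // For the "create" operation, $node is the content type's machine name.
  $type = is_string($node) ? $node : $node->type;

  if ($type == 'submission' && $op == 'create') {
    $deadline = strtotime('2024-06-30 23:59:59');
    if (REQUEST_TIME > $deadline) {
      drupal_set_message(t('The deadline for submissions has expired.'), 'error');
      return NODE_ACCESS_DENY;
    }
  }
  return NODE_ACCESS_IGNORE;
}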
Another solution is to use Webform module.
https://www.drupal.org/project/webform
Download version 7.x-4.x
Create a form; in the settings there is an option to limit total submissions and to set time frames for the limitation.
Hope this helps.
You can achieve this by:
Create a column in the user table in MySQL called "expDate" and assign an expDate value (mm-dd-yyyy) to each user.
In Drupal, on the page where users submit the message, write PHP code to grab the expDate from the database and compare it to the current date. Just copy and paste the PHP code into each page where you have the form (see the sketch below).
You can also pass the expDate from PHP to JS and do something fancier than a simple alert.
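A rough sketch of that comparison; the users table alteration and the expDate column come from the suggestion above, and the date handling is just one way to do it:

<?php
// Sketch only: compare the stored expDate (mm-dd-yyyy) with today's date for the current user.
global $user;
$expDate = db_query('SELECT expDate FROM {users} WHERE uid = :uid',
                    array(':uid' => $user->uid))->fetchField();

$deadline = DateTime::createFromFormat('m-d-Y', (string) $expDate);
if ($deadline === FALSE || $deadline < new DateTime('today')) {
  drupal_set_message(t('The deadline for submissions has expired.'), 'error');
}
else {
  // Show the form as usual.
}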

How can I track visitors’ paths from one page to another with full URLs?

Say I have two pages on a site called “Page 1” and “Page 10”. I'd like to be able to see the paths visitors take to get from “Page 1” to “Page 10” with full URLs intact. Many of the URLs (including those for “Page 1” and “Page 10”) will include query strings that are important.
Is this possible? If so, how?
Try using the Behavior Flow reports in Google Analytics. The report basically shows you how visitors click through your website. There are a lot of ways to customize the report, and you will need to play around with them to really answer your question. By default, the behavior flow focuses on entry and exit points of visitors, regardless of how many times they hit the different subpages in between. However, I'm sure you can set appropriate filters and settings to answer your question.
I use two methods for tracking where people have been on my website:
Track and store the information in my own SQL database. (details below)
Lead Forensics (paid subscription, but you can do a trial).
For tracking and storing my own data, I record unique visitors based on the IP address they're connecting from, and then have a separate table that records all page views and links back to the unique visitor table.
The Lead Forensics data simply allows me to link up those unique visitors with actual companies that have viewed my website.
Doing it yourself means you don't have to rely on Google for your records to work; in my experience Google Analytics tends to round numbers, so you don't get a true indication of the figures, and you can also remove bots and website crawlers from your data by checking the user-agent string. A bare-bones sketch of this approach follows below.
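Here is a minimal PHP/PDO sketch of the "track it yourself" approach; the database name, the visitors and page_views tables, and their columns are invented for the example:

<?php
// track.php - include this at the top of every page you want to record.
// Assumed schema: visitors(id, ip, user_agent, first_seen)
//                 page_views(id, visitor_id, url, viewed_at)
$pdo = new PDO('mysql:host=localhost;dbname=tracking', 'dbuser', 'dbpass');

$ip  = $_SERVER['REMOTE_ADDR'];
$ua  = $_SERVER['HTTP_USER_AGENT'] ?? '';
$url = $_SERVER['REQUEST_URI']; // full path including the query string

// Skip obvious bots and crawlers by checking the user-agent string.
if (preg_match('/bot|crawl|spider/i', $ua)) {
    return;
}

// Find or create the unique visitor record for this IP address.
$stmt = $pdo->prepare('SELECT id FROM visitors WHERE ip = ?');
$stmt->execute([$ip]);
$visitorId = $stmt->fetchColumn();

if ($visitorId === false) {
    $pdo->prepare('INSERT INTO visitors (ip, user_agent, first_seen) VALUES (?, ?, NOW())')
        ->execute([$ip, $ua]);
    $visitorId = $pdo->lastInsertId();
}

// Record this page view, linked back to the visitor.
$pdo->prepare('INSERT INTO page_views (visitor_id, url, viewed_at) VALUES (?, ?, NOW())')
    ->execute([$visitorId, $url]);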
As a somewhat ugly hack you could use transaction tracking. If you use the same transaction ID multiple times, subsequent products will be added to the existing data. So assign an ID at the start of the visit and on each page record a transaction with the current page URL as the product name (and the ID as the transaction ID). This will give you the complete path per user (I am frankly not too sure how this is useful - at some point you probably want aggregated data. Plus each transaction and product counts towards your quota for interaction counts, so on a large site you might run over the 10 million hits limit).
You can do it programmatically:
Keep a map in the backend which stores the user ID (assuming you have given each user a unique ID at login) against a list of strings (each string being a URL visited by that user).
Whenever the user hits another URL from Page 1 (and only from Page 1; check this using JS), send a POST request to the backend with the new URL in its data section.
In the backend, check whether the URL is Page 10 and, if not, add the URL as a string to the map for that corresponding user.
Finally, when the user clicks the Page 10 URL, you know the URLs on the way from Page 1 to Page 10 and can use them. A sketch of such an endpoint follows at the end of this answer.
Also, if JS is an option and I have not misunderstood your question, you can get the previous URL from the request header information using document.referrer.
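A sketch of that backend endpoint in PHP, keeping the visited-URL list in the session for simplicity (the endpoint name record_url.php and the /page10 URL pattern are hypothetical; a real implementation would key the map by user ID on the server):

<?php
// record_url.php - hypothetical endpoint the JS on Page 1 onwards POSTs each new URL to.
session_start();

$url = $_POST['url'] ?? '';
if ($url === '') {
    http_response_code(400);
    exit;
}

if (!isset($_SESSION['path'])) {
    $_SESSION['path'] = array();
}

if (strpos($url, '/page10') !== false) {
    // The visitor reached Page 10: the recorded path is complete, so return it.
    echo json_encode($_SESSION['path']);
} else {
    // Still on the way: append this URL to the visitor's path.
    $_SESSION['path'][] = $url;
    echo 'ok';
}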
Are you trying to do it from Google Tag Manager? I am not sure whether you are trying to trace the URLs client-side or server-side.

Parsing Web page with R

This is my first time posting here. I do not have much experience (less than a week) with HTML parsing/web scraping and am having difficulty parsing this web page:
https://www.jobsbank.gov.sg/
What I want to do is parse the content of all the available job listings on the site.
My approach:
Click search with an empty search bar, which returns all listed records. The resulting web page is: https://www.jobsbank.gov.sg/ICMSPortal/portlets/JobBankHandler/SearchResult.do
Provide the search result web address to R and identify all the job listing links.
Supply the job listing links to R and ask R to go to each listing and extract the content.
Look for the next page and repeat steps 2 and 3.
However, the problem is that the resulting web page I get from step 1 does not direct me to the search result page. Instead, it directs me back to the home page.
Is there any way to overcome this problem?
Supposing I manage to get the web address for the search results, I intend to use the following code:
library(RCurl)  # for getURLContent

base_url <- "https://www.jobsbank.gov.sg/ICMSPortal/portlets/JobBankHandler/SearchResult.do"
base_html <- getURLContent(base_url, cainfo = "cacert.pem")[[1]]
links <- strsplit(base_html, "a href=")[[1]]
Learn to use the web developer tools in your web browser (hint: Use Chrome or Firefox).
Learn about HTTP GET and HTTP POST requests.
Notice the search box sends a POST request.
See what the Form Data parameters are (they seem to be {actionForm.checkValidRequest}: YES and {actionForm.keyWord}: my search string).
Construct a POST request using one of the R http packages with that form data in.
Hope the server doesn't care about cookies; if it does, get the cookies and feed them back to it.
Hence you end up using postForm from the RCurl package:
p <- postForm(base_url, .params = list(checkValidRequest = "YES", keyword = "finance"))
And then just extract the table from p. Getting the next page involves constructing another form request with a bunch of different form parameters.
Basically, a web request is more than just a URL, there's all this other conversation going on between the browser and the server involving form parameters, cookies, sometimes there's AJAX requests going on internally to the web page updating parts.
There's a lot of "I can't scrape this site" questions on SO, and although we could spoonfeed you the precise answer to this exact problem, I do feel the world would be better served if we just told you to go learn about the HTTP protocol, and Forms, and Cookies, and then you'll understand how to use the tools better.
Note: I've never seen a job site or a financial site that likes you scraping its content - although I can't see a warning about it on this site, that doesn't mean it isn't there, and I would be careful about breaking the Terms and Conditions of Use. Otherwise you might find all your requests failing.

Shutterfly Order API

I found this site
http://www.shutterfly.com/documentation/api_OrderImage.sfly
but there are no examples of actually walking through the whole process. Does anyone have any good documentation on using this API to take a local photo and allow someone to order a print via shutterfly?
I went through these steps:
Sign up for an account
Sign up as a developer
Create an application (I called mine Test). Note the generated Application Id and Shared Secret
The Shutterfly API page has a list of references for various Domain-specific APIs:
Address Book
Album Data
Folder Data
Go To Shutterfly UE
Image Upload
Interactive Sign-in
Image Request
Order
Pricing
Seamless Sign-in
User Data
User Authentication
Each uses RESTful principles. The documentation looks pretty comprehensive to me; if you need some background, there are links for RESTful APIs and ROME that you may find useful.
There is also an API Explorer section on the same page that allows you to test the methods via a form on their site - for example, the form for CRUD operations on the album data.
Based on your comment, for your requirements, you would:
Use the Album GET to list albums, then get the data for a specific album.
Use the Image GET request to retrieve the image data, so your friend can verify the image(s) they want to purchase.
Authenticate the user.
Use the Pricing POST request to get the estimated pricing for the image.
Use the Order POST to submit the order over HTTPS.
Update: Found a page describing using a Greasemonkey script which adds Shutterfly print ordering capability to Flickr. This might provide the basis for a solution.
For Reference:
The original link above is a middle step of the Shutterfly Open API ordering procedure.
The whole process goes through a series of steps allowing you to control much more than just pushing photos into somebody's album in Shutterfly.
With this process, your application can actually carry out the entire procedure of:
specifying the images and the sizes and quantities, or other products
calculating shipping, taxes, and totals
paying, and
launching the processing
It also includes the ability to see when the packages will be delivered.
Thus if you have a solid application for mapping your images onto paper and products, you can pretty much control the entire process.
Once the order is submitted, it will appear in the Shutterfly account of the user the order was associated with.
Kudos to Shutterfly for making such a powerful tool! It would be great if other printing facilities had similar tools.
