Tell Haddock to use a different base URL for hyperlinked identifiers from other packages - haddock

I'd like to host my Haddock docs on my own site, but when I generate them on my machine they contain file:// links to values and types from base (e.g. Bool).
Is there a way to tell Haddock to use a different base URL for links to identifiers from outside my package?
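(A hedged sketch: when the docs are built with cabal-install, the --html-location flag takes a URL template that is expanded per dependency, which is one way to point cross-package links at Hackage instead of local file:// paths. The flag spelling varies across cabal versions - newer ones spell it --haddock-html-location - so check cabal haddock --help.)

```
# Assumption: docs are built via cabal-install; verify the flag name for your version.
# '$pkg' and '$version' are cabal template variables, hence the single quotes.
cabal haddock --html-location='https://hackage.haskell.org/package/$pkg-$version/docs'
```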

Related

Bing Custom Search - sort/order by file modification date

I have been asked by a client to see if Bing Custom Search can order results containing links to PDFs by file modification date.
I know results can be ordered by the date the content was indexed (or re-indexed), but they are concerned with the actual age of the PDF files as recorded in the filesystem timestamp, and want to order the results by that criterion.
I could not find anything in the Azure documentation, and personally I don't believe it is possible, but I wanted to check in with SO first.
I don't believe this specific scenario is supported at the time of writing. But if this is a feature you would like to see supported in the future, you may leave your feedback on UserVoice.

download data from website when no URL specified

I am trying to pull data related to trade (imports and exports) from the websites of different central banks and statistical offices in RStudio.
This is not a problem when a URL is associated with a file (.pdf, .csv, .xls, ...). However, I can't find a solution when the user has to manually specify the filters (e.g. years, months, sectors, ...) and no URL is associated with the query.
For example, I am trying to load the imports and exports of El Salvador at this url: http://www.bcr.gob.sv/bcrsite/?cdr=38
It appears that the data is not stored in the HTML code of the web page. I have tried web scraping, but the data cannot be found this way, as the user has to first make a query and then click "Export the results".
How can I automatically load these datasets into RStudio?
Looks like you need to use http://www.bcr.gob.sv/bcrsite/downloadsxls.php?exportCDR=1&cdr=38&xls=1 to get the XLS file, which you can then parse. Make sure they are OK with this endpoint being used as an API.
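For example, a minimal R sketch along those lines (it assumes the export URL above keeps working and that readxl is an acceptable choice for parsing the file):

```r
# Hit the same endpoint the "Export the results" button calls, then parse it.
url <- "http://www.bcr.gob.sv/bcrsite/downloadsxls.php?exportCDR=1&cdr=38&xls=1"
tmp <- tempfile(fileext = ".xls")
download.file(url, tmp, mode = "wb")  # "wb" avoids corrupting the binary file on Windows

library(readxl)
trade <- read_excel(tmp)  # readxl reads legacy .xls as well as .xlsx
head(trade)
```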

Can I duplicate the Easy Digital Downloads plugin, give it another name and use it along with EDD as a different store?

Can I duplicate the Easy Digital Downloads plugin (WordPress), give it another name and use it along with EDD as a different store?
My client wants to add 3 services separately; she does not want to use them as separate categories but wants them as 3 different post types. So will duplicating the plugin work?
Thanks in advance,
Charles

Software to scrape or crawl for website urls

I want to scrape/crawl (I don't know which is the better term) website URLs. For example, I want to get every URL on
www.Site.com/posts.html that contains www.Site.com/2015-04-01/1.
So I would type www.Site.com into the software and set the depth to 2 and the required URL text to www.Site.com/2015-04-01/1.
So the software should:
go to: www.Site.com/posts.html
Find matching URLs. Let's say it finds:
www.Site.com/2015-04-01/1/Working-Stuff.html
www.Site.com/2015-04-01/1/New-stuff.html
www.Site.com/2015-04-01/1/News.html
And now it goes to the first matched URL and looks for more URLs that contain www.Site.com/2015-04-01/1.
So for example it would look like this:
Main site: `www.Site.com/posts.html`
1)www.Site.com/2015-04-01/1/Working-Stuff.html
1a) www.Site.com/2015-04-01/1/Break.htm
1b) www.Site.com/2015-04-01/1/How-to.htm
1c) www.Site.com/2015-04-01/1/Lets-say.htm
1d) www.Site.com/2015-04-01/1/Gamer-life.htm
2) www.Site.com/2015-04-01/1/New-stuff.html
2a) www.Site.com/2015-04-01/1/My-Story-about.htm
3) www.Site.com/2015-04-01/1/News.html
3a) www.Site.com/2015-04-01/1/Go-to-hell.htm
3b) www.Site.com/2015-04-01/1/Leave.htm
Of course I don't need the 1), 2), 2a) prefix grouping etc.; I only want to grab the URLs.
I used:
A1 Website Scraper - but when I try to scrape from ......html it cuts off the .html part and does not give me the full URL list :/
[edited my previous slightly simplistic answer]
Screen scraping is the process of extracting data from a web page. The R package rvest is very good at screen scraping.
Web crawling is the process of traversing a website, moving from page to page. The R package RSelenium is very good at mimicking a user's movement from page to page, but only when you know the structure of the website.
It sounds like you want to crawl from page to page, starting from a head page and moving forward. I think you could code this up with a combination of the rvest and RSelenium packages; between the two you can customise the route taken, even when it is not known in advance.
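A rough rvest-only illustration, under the assumption that the matching pages are static HTML (the start page and URL pattern are taken from the question; RSelenium would only be needed if the pages require interaction):

```r
library(xml2)   # read_html(), url_absolute()
library(rvest)  # html_nodes(), html_attr()

start_page <- "http://www.Site.com/posts.html"  # head page from the question
pattern    <- "www.Site.com/2015-04-01/1"       # required URL text

# Return the unique absolute links on `url` that contain `pattern`.
get_links <- function(url) {
  page  <- read_html(url)
  links <- html_attr(html_nodes(page, "a"), "href")
  links <- url_absolute(links, url)             # resolve relative hrefs
  unique(links[grepl(pattern, links, fixed = TRUE)])
}

depth1   <- get_links(start_page)               # matches on the head page
depth2   <- unlist(lapply(depth1, get_links))   # matches one level deeper
all_urls <- unique(c(depth1, depth2))
all_urls
```

A greater depth is just a matter of repeating the lapply() step on the previous level's results.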

input controls for easy filtering in kibana dashboard?

I have successfully used kibana (4.3.1) to make a dashboard with several visualizations. Great! Now I would like to add some input controls to allow filtering. I know that you can manually enter filters in the query bar, for example 'myCol:[low TO high]' but this is problematic for a couple of reasons. First, the syntax is a little too advanced for casual users (although I could use the metadata visualization to document the syntax). Second, the query bar goes away when exporting the dashboard via iframe.
I have tried using the metric visualization to display min and max values. Unfortunately, the metric visualization is read-only.
I have tried a bar chart to allow range filtering but my users will need to select very specific ranges that result in selection areas of only a few pixels. This is error prone and not precise enough.
Any other ideas on how to create input controls for easy filtering? I was hoping to find some sort of dial tied to a column that would give users an easy way to apply filters.
Thanks,
Nathan
Check the "Working with filters" section in the documentation. Selecting a filter changes the dashboard URL to include a snippet like (filters:!((meta:(disabled:!t,index:'myIndex_*',key:MyTermToFilter,negate:!f,value:'MyValueToFilter')))). Once you have all the filters you want your users to change in the URL, they can navigate to that URL and enable/disable them in the UI.
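For illustration, such a URL could look roughly like this (the host, dashboard name, index, and field/value are placeholders mirroring the snippet above, and the exact shape of the state fragment varies between Kibana versions):

```
# Hypothetical dashboard URL carrying one pre-built, enabled filter:
https://mykibana:5601/#/dashboard/My-Dashboard?_a=(filters:!((meta:(disabled:!f,index:'myIndex_*',key:MyTermToFilter,negate:!f,value:'MyValueToFilter'))))
```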
