Embed graphics into .lyx files instead of linking an external file?

Intuitively I would have put this question on superuser.com, but searching on stackexchange.com showed that stackoverflow.com has become the go-to platform for asking about LyX usage.
LyX normally follows the usual LaTeX approach of keeping graphics as separate files inside a folder: the document contains only a reference to the (relative or absolute) path of each file, and the files are inserted into the output document only at compile time.
While this is desirable in many scenarios, it can also create usability issues, e.g. when copy/pasting content from an old document to a new one. By default, both documents will now point to the same file; if they are not in the same directory, the path will likely be pasted as an absolute path.
This has two undesirable side-effects:
Files no longer used by any document are not cleaned up automatically, and
Updating the graphic for the new document will also change the old document, if it is ever recompiled.
Manually copying files over before using them adds friction in terms of usability.
Is there some way to instead embed/attach files in the .lyx file itself, as would be the usual workflow with office suites?
Having such an option would also improve desktop integration, e.g. being able to open .lyx attachments directly from emails without first unpacking them, more homogeneous "filing" of mixed-program documents, easier browsing across files in file managers, better integration with desktop search, and so on. For example:
(Desired)

Documents/
|-- 2020-05-03 Work Report May.docx
|-- 2020-05-03 Work Report May.pdf
|-- 2020-05-28 Model Summary.lyx
|-- 2020-06-28 Model Summary.pdf
|-- 2020-06-13 Budget Report.ods
'-- 2020-06-13 Budget Report.pdf

(Current, with widespread directories-first sorting)

Documents/
|-- 2020-05-28 Model Summary/            <-- Requires additional navigation, cannot be viewed directly.
|-- 2020-05-03 Work Report May.docx
|-- 2020-06-03 Work Report May.pdf
|-- 2020-06-13 Budget Report.ods
'-- 2020-06-13 Budget Report.pdf

Documents/2020-05-28 Model Summary/      <-- Choice between redundant use of the name (inconvenient, quickly inconsistent) and generic names like document.lyx, which integrate badly with mailing documents and desktop search. Also, no cleanup of unused attachments.
|-- 2020-05-28 Model Summary.lyx
|-- 2020-06-28 Model Summary.pdf
|-- overview.eps
|-- pasted-image-1.png
'-- pasted-image-2.png

There is currently no way to do that in LyX. I agree this would be nice. I thought there was an enhancement request already filed for this but I couldn't find it. Feel free to open another one at https://www.lyx.org/trac.
The only current feature even loosely related to what you want is that you can go to File > Export > LyX Archive. This tries to put all of the dependencies (e.g., graphics) in one archive. I know this is not what you're looking for, but I mention it because you brought up the example of wanting to send one file as an email to someone.

Related

Trying to import data into R in a way that will allow anyone to access it when opening the markdown file/ accessing the html knit

I am currently working on a coding project and I am running into trouble with how I should import the data set. We are supposed to have it read in such a way that our instructor can open our markdown file, import the data, and run the code without changing file paths. I know about using relative file paths to make it accessible to anyone; however, I don't know how to get around the /users/owner part of the file path. Any help would be greatly appreciated, and if you have any further questions feel free to ask.
I've tried changing the working directory to a certain folder that both I and my instructor have named the same thing. However, as I said above, when I use read.csv to import the data frame I am still forced to use the /users/owner file path, which is obviously specific to my computer.
I can understand your supervisor; I request the same from my students. My recommended solution is to put both the data and the R script (or the .Rmd file) in the same folder. Then one does not need to add a path in the read.csv (or similar) function call.
If you use RStudio, move to the folder in the Files pane and then use the gear icon and select "Set as Working Directory".
Then send both files (.R or .Rmd) and the data to the supervisor, ideally as a zip file. The supervisor can then unpack it to an arbitrary folder and just double-click the .R/.Rmd file. The containing folder will then automatically become the working directory.
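For example (the file names here are hypothetical), if the script and the data sit in the same folder, which is also the working directory, the read call needs no path at all:

# my_analysis.R and my_data.csv live in the same folder,
# which is also the working directory after double-clicking the script
my_data <- read.csv("my_data.csv")
head(my_data)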
Other options are:
to use a subfolder for the data, or
to put the data in a publicly readable internet location, e.g. GitHub, and read it directly from there.
The last option of course requires that the data have a free license. Both options are sketched below.
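As a rough sketch of those two options (the subfolder, repository and file names below are hypothetical):

# option 1: data kept in a subfolder of the project
my_data <- read.csv("data/my_data.csv")

# option 2: data hosted in a publicly readable location,
# e.g. as a raw file in a GitHub repository
my_data <- read.csv("https://raw.githubusercontent.com/someuser/somerepo/main/my_data.csv")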

Schema file does not exist in XBRL Parse file

I have downloaded a zip file containing around 200,000 html files from Companies House.
Each file is in one of two formats: 1) inline XBRL format (.html file extension) or 2) XBRL format (.xml file extension). Looking at the most recent download available (6 December 2018) all the files seem to be the former format (.html file extensions).
I'm using the XBRL package in R to try and parse these files.
Question 1: is the XBRL package meant to parse inline XBRL format (.html) files, or is it only supposed to work on the XBRL (.xml) formats? If not, can anyone tell me where to look to parse inline XBRL format files? I'm not entirely sure what the difference is between inline and not inline.
Assuming the XBRL package is meant to be able to parse inline XBRL format files, I'm hitting an error telling me that the xbrl.frc.org.uk/FRS-102/2014-09-01/FRS-102-2014-09-01.xsd file does not exist. Here's my code:
install.packages("XBRL")
library(XBRL)
inst <- "./rawdata/Prod224_0060_00000295_20171130.html" # manually unzipped
options(stringsAsFactors = FALSE)
xbrl.vars <- xbrlDoAll(inst, cache.dir = "XBRLcache", prefix.out = NULL, verbose = TRUE)
and the error:
Schema: ./rawdata/https://xbrl.frc.org.uk/FRS-102/2014-09-01/FRS-102-2014-09-01.xsd
Level: 1 ==> ./rawdata/https://xbrl.frc.org.uk/FRS-102/2014-09-01/FRS-102-2014-09-01.xsd
Error in XBRL::xbrlParse(file) :
./rawdata/https://xbrl.frc.org.uk/FRS-102/2014-09-01/FRS-102-2014-09-01.xsd does not exists. Aborting.
Question 2. Can someone explain what this means in basic terms for me? I'm new to XBRL. Do I need to go and find this xsd file and put it somewhere? It seems to be located here, but I have no idea what to do with it or where to put it.
Here's a similar question that doesn't seem fully answered, and its links are all in Spanish, which I don't know.
Once I've been able to parse a single html XBRL file, my plan is to figure out how to parse all the XBRL files inside multiple zip files from that website.
I had exactly the same problem with the US SEC data.
I just followed pdw's guidance exactly and it worked!
FYI, the code I used in place of
if (substr(file.name, 1, 5) != "http:") {
is
if (!(substr(file.name, 1, 5) %in% c("http:", "https"))) {
and I hacked it in using trace('XBRL', edit=TRUE).
I'm not familiar with the XBRL package that you're using, but it seems clear that it's erroneously trying to resolve an absolute URL (https://...) as a local file.
A quick browse of the source code reveals the problem:
XBRL.R line 305:
fixFileName <- function(dname, file.name) {
if (substr(file.name, 1, 5) != "http:") {
[...]
i.e. it decides whether or not a URL is absolute by whether it starts with "http:", and your URL starts with "https:". It's easy enough to hack in a fix to allow https URLs to pass this test too, and I suspect that would fix your immediate problem, although it would be far better if this code used a URL library to decide whether a URL is absolute rather than guessing based on the protocol.
I'm not sure what the status is with respect to iXBRL documents. There's a note in the changelog saying "reported to work with inline XBRL documents", which I'm suspicious of. Whilst it might correctly find the taxonomy for an inline document, I can't see how it would correctly extract the facts without significant additional code, of which I can't see any sign.
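As a rough sketch of that kind of fix (the helper name below is my own, not part of the package, and the package's real code differs), the protocol test could accept both http and https:

# illustrative replacement for the hard-coded "http:" test in fixFileName()
is_absolute_url <- function(file.name) {
  grepl("^https?:", file.name)
}

# the condition on line 305 would then read:
# if (!is_absolute_url(file.name)) {
#   # ... resolve file.name relative to dname as before ...
# }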
You might want to take a look at the Arelle project as an alternative open source processor that definitely does support Inline XBRL.
As pdw stated, the issue is that the package is hard-coded to look for "http:" and erroneously treats "https:" paths as local paths. This happens because XBRL files can refer to external files for standard definitions of schemas, etc. In your example, this happens on line 116 of Prod224_0081_00005017_20191231.html.
Several people have forked the XBRL package on github and fixed this behavior. You can install one of the versions from https://github.com/cran/XBRL/network/members with devtools::install_git() and that should work out.
For example, using this fork, the example Companies House statement can be parsed:
# remotes::install_github("adamp83/XBRL")
library(XBRL)
x <- xbrlDoAll("https://raw.githubusercontent.com/stackoverQs/stackxbrlQ/main/Prod224_0081_00005017_20191231.html", cache.dir = "cache", verbose = TRUE)
Here are a few more general explanations to give some context.
Inline XBRL vs. XBRL
An XBRL file, put simply, is just a flat list of facts.
Inline XBRL is a more modern version of an XBRL instance that, instead of storing these facts as a flat list, stores them within a human-readable document, "stamping" the values onto it. From an abstract XBRL-processing perspective, both an XBRL file and an inline XBRL file are XBRL instances and are simply sets of facts.
DTS
An XBRL instance (either inline or not) is furthermore linked to a few, or a lot of, taxonomy files, known to XBRL users as the DTS (Discoverable Taxonomy Set). These files are either XML Schema files (.xsd) containing the report elements (concepts, dimensions, etc.) or XML Link files (.xml) containing the linkbases (graphs of report elements, labels, etc.).
The machinery linking an XBRL instance to a DTS is a bit complex and heterogeneous: schema imports, schema includes, simple links pointing to other files, etc. As a user, it suffices to understand that the DTS is made of all the files in the transitive closure of the instance via these links. It is the job of an XBRL processor (including the R package) to resolve the entire DTS.
Storage of DTS files
Typically, an XBRL instance points to a file (called entry point) located on the server of the taxonomy provider, and that file may itself point to further files on the same, and other servers.
However, many XBRL processors automatically cache these files locally in order to avoid overloading the servers, as is established practice. Normally you do not need to download them yourself; resolving the links by hand in order to fetch all the files manually is very cumbersome.
An alternate way is to download the entire DTS (as a zip file following a packaging standard) from the taxonomy provider's servers and use it locally. However, this also requires an XBRL processor to figure out the mapping between remote URLs and local files.
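For the R package used above, the cache.dir argument serves this purpose: remote schema and linkbase files are downloaded into that directory on the first run and reused afterwards. This just restates the call from the question, with a persistent cache directory as an illustrative choice:

# remote DTS files end up under ./xbrl-cache and are reused on later runs
xbrl.vars <- xbrlDoAll("./rawdata/Prod224_0060_00000295_20171130.html",
                       cache.dir = "xbrl-cache",
                       prefix.out = NULL,
                       verbose = TRUE)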

How to transfer a structure from one Plone to another

I have a Plone instance which contains some structures that I need to copy to a new Plone instance (but much more which should not be copied). Those structures are document trees ("books" of Archetypes folders and documents) which use resources (e.g. images and animations, referenced by UID) outside those trees, in a separate structure which of course contains lots of resources not needed by the trees to be copied.
I tried already to copy the whole data and delete the unneeded parts, but this takes very (!) long, so I'm looking for a better way.
Thus, the idea is to traverse my little forest of document trees and transfer them and the resources they need (sparsely rebuilding that separate structure) to the new Plone instance. I have full access to both of them.
Is there a suggested way to accomplish this? Or should I export all of them, including the resources structure, and delete all unneeded stuff afterwards?
I found out that each time that I make this type of migration by hand, I make mistakes that force me to do it again.
OTOH, if migration is automated, I can run it, find out what I did wrong, fix the migration, and do it all over again until I am satisfied.
In this context, to automate your migration, I advise you to look at collective.transmogrifier.
I recommend jsonmigrator - which is a twist on collective.transmogrifier mentioned by Godefroid. See my blog on it here
You can even use it to migrate from Archetypes to Dexterity types (you just need matching field names and, roughly speaking, matching types).
Trying to select the resources to import will be tricky though. Perhaps you can find a way to iterate through your document trees & "touch" (in a unix sense) any resource that you are using. Then copy across only resources whose "timestamp" indicates that they have been touched.

Disable typechecking from .hhconfig

Assume we have a project with the following structure:
root/
├── .hhconfig
├── directory1
├── directory2
├── directory3
...
└── directory10
Is there a way to have a single .hhconfig file and exclude only directory8 from the typechecking? I think it would be really painful to put separate .hhconfig files inside every directory, or to declare all the files in directory8 as UNSAFE, in order to exclude them from the typechecking.
This is not supported. A Hack project is designed to be checked as a single project, with full analysis going across all of the different parts of it. If it doesn't typecheck as a whole, then the behavior of HHVM on it is undefined.
You should really, really carefully consider why you're trying to exclude part of the project from typechecking. You really shouldn't have a large body of type-incorrect code. You may want to consider leaving that code back in PHP -- it sounds unlikely to be valid Hack code, or to become such soon. Hiding these type errors is crippling the typechecker's ability to help you find problems in the other code in your project.
You may also be able to use a different mode, decl mode, which excludes all the code in a file from having function bodies typechecked (but still makes the definitions available to other files). But again, this is just shoving the problem under the rug. Ideally you'd fix all of the type errors instead!
Also, definitely don't put separate .hhconfig files in each directory -- they'll be checked as separate subprojects and none of the analysis will look across the borders of the subdirectories!

Best backup strategy for checked-out and hijacked files in all ClearCase VOBs and views

Our policy here is that only "most important" CCase views are backed up.
All the important data are considered to be in the VOBs and also under non-CCase directories, but never in views.
However, a special case is the checked-out files in views.
People quite often forget that these become view-private files in their dynamic view.
Sometimes they cannot be found easily (or at all) under the dynamic view storage area.
In snapshot views, hijacked elements may also become important.
What is the best strategy to find and back up only those files (checked-out / hijacked) in every (dynamic / snapshot) view and VOB?
(It should be possible to script this in very few lines, I think: ct lsco, ct lspriv, ...).
Thank you very much in advance, Javier.
(FJCobas, Spain).
The idea is to use the SO question "Command line to delete all ClearCase view-private files", adapting it to select only checked-out, hijacked and/or eclipsed files.
With Unix:
cleartool ls -r -nxn | grep -E "(CHECKEDOUT|hijacked|eclipsed)"
Note: as mentioned in the SO question "ClearCase: Backup for only modified checked-out elements in all views", an optimized solution would check if a checkout file actually introduced any changes. But if you have lots of checkouts, this wouldn't scale: a full copy (of all files) every time will be faster.
You can then copy them to a safe backup location.
