IExpress command line example to create EXE packages

I need help with an example of using the IExpress command line to create an EXE package.
I have a folder with multiple files and folders inside, and I want to create a single EXE file from this folder. Could someone help me with an example of the command line for such a thing?

IExpress.exe uses SED (Self Extraction Directive) files, which are really just text files that describe the parameters used when building the package. To build a self-extracting installer on the command line, you just run IExpress with the SED file as an argument:
iexpress /N Your_SED_Script.sed
The /N switch invokes unattended package building. Without it, the IExpress GUI wizard will simply pop up.
You can generate SED files by going through the IExpress wizard, or you can generate them automatically with some of your own code (see the sketch after the full script below).
Let's look at the structure of an SED script to get you started.
Below is an example of an SED file I generated by going through the IExpress.exe GUI wizard once.
Most of these options aren't critical, but in the lower half you'll see TargetName, which specifies the filename of the resulting self-extracting package. FILE0, FILE1, FILE2 specify files in the package. [SourceFiles] begins the section that describes where IExpress should look for the files.
Source File Part
FILE0="TestProgram.exe"
FILE1="TestData.dat"
FILE2="TestLibrary.lib"
[SourceFiles]
SourceFiles0=C:\Users\user\Documents\Visual Studio 2010\Projects\TestProject\Debug\
SourceFiles1=C:\Users\user\Documents\Visual Studio 2010\Projects\TestProject\Debug\lib\
[SourceFiles0]
%FILE0%=
%FILE1%=
[SourceFiles1]
%FILE2%=
Here we have two different locations, defined as SourceFiles0 and SourceFiles1. They each get their own sub-section, [SourceFiles0] and [SourceFiles1], below which are references to each of the files in those locations.
[Strings]
.
.
.
AppLaunched=TestProgram.exe
The AppLaunched parameter in the [Strings] section sets the file to run after extraction. Here it contains the executable TestProgram.exe, but you can also set batch files (*.bat) to run after extraction. If AppLaunched is empty, the package will just extract the files.
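For example, to run a batch file after extraction, a common pattern is to route it through the command interpreter (an untested sketch on my part; install.bat is a hypothetical file name):
AppLaunched=cmd /c install.bat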
There are a few resources available online, but I'll admit it was pretty hard to find any information about how to build self-extracting packages as opposed to just opening them. The Wikipedia entry is a good starting point.
Wikipedia - IExpress
SED Overview
Full SED Script
[Version]
Class=IEXPRESS
SEDVersion=3
[Options]
PackagePurpose=InstallApp
ShowInstallProgramWindow=0
HideExtractAnimation=1
UseLongFileName=1
InsideCompressed=0
CAB_FixedSize=0
CAB_ResvCodeSigning=0
RebootMode=I
InstallPrompt=%InstallPrompt%
DisplayLicense=%DisplayLicense%
FinishMessage=%FinishMessage%
TargetName=%TargetName%
FriendlyName=%FriendlyName%
AppLaunched=%AppLaunched%
PostInstallCmd=%PostInstallCmd%
AdminQuietInstCmd=%AdminQuietInstCmd%
UserQuietInstCmd=%UserQuietInstCmd%
SourceFiles=SourceFiles
[Strings]
InstallPrompt=
DisplayLicense=
FinishMessage=
TargetName=C:\Users\user\Documents\TestSED.exe
FriendlyName=All your SEDs are belong to us
AppLaunched=TestProgram.exe
PostInstallCmd=<None>
AdminQuietInstCmd=
UserQuietInstCmd=
FILE0="TestProgram.exe"
FILE1="TestData.dat"
FILE2="TestLibrary.lib"
[SourceFiles]
SourceFiles0=C:\Users\user\Documents\Visual Studio 2010\Projects\TestProject\Debug\
SourceFiles1=C:\Users\user\Documents\Visual Studio 2010\Projects\TestProject\Debug\lib\
[SourceFiles0]
%FILE0%=
%FILE1%=
[SourceFiles1]
%FILE2%=
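As mentioned above, SED files can also be generated automatically with your own code. Below is a minimal, untested batch sketch that writes a SED for every file in one flat folder (no subfolders) and then builds the package. SRC_DIR, TARGET, the FriendlyName, and the reduced option set are all assumptions on my part; if your IExpress version rejects this minimal [Options] block, start from a wizard-generated SED instead.

@echo off
setlocal enabledelayedexpansion
rem Hypothetical values -- adjust to your folder and desired output.
set "SRC_DIR=C:\MyProject\Files"
set "TARGET=C:\MyProject\MyPackage.exe"
set "SED=%TEMP%\generated.sed"

rem Write the fixed header. Assumption: literal values directly in [Options]
rem instead of routing them through [Strings] as the wizard does.
> "%SED%" echo [Version]
>>"%SED%" echo Class=IEXPRESS
>>"%SED%" echo SEDVersion=3
>>"%SED%" echo [Options]
>>"%SED%" echo PackagePurpose=InstallApp
>>"%SED%" echo ShowInstallProgramWindow=0
>>"%SED%" echo HideExtractAnimation=1
>>"%SED%" echo UseLongFileName=1
>>"%SED%" echo RebootMode=I
>>"%SED%" echo TargetName=%TARGET%
>>"%SED%" echo FriendlyName=MyPackage
>>"%SED%" echo AppLaunched=
>>"%SED%" echo SourceFiles=SourceFiles
>>"%SED%" echo [Strings]

rem One FILEn="name" entry per file in the source folder.
set /a i=0
for %%F in ("%SRC_DIR%\*") do (
    >>"%SED%" echo FILE!i!="%%~nxF"
    set /a i+=1
)

rem A single source location, plus one %FILEn%= reference per file.
>>"%SED%" echo [SourceFiles]
>>"%SED%" echo SourceFiles0=%SRC_DIR%\
>>"%SED%" echo [SourceFiles0]
set /a i=0
for %%F in ("%SRC_DIR%\*") do (
    >>"%SED%" echo %%FILE!i!%%=
    set /a i+=1
)

iexpress /N "%SED%"

Note that AppLaunched is left empty in this sketch, so the resulting EXE only extracts the files.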

iexpress /N /Q NameOfSedFile.SED
(The /Q switch runs the build quietly.)

Use the IExpress wizard to create a SED file once. In the "Packaged files" step you can specify all the files to be packaged.
Then use
iexpress.exe /N sed_file_name

Related

The system cannot find the file specified in Rmarkdown

I am doing my current project in RStudio Desktop, writing it in R Markdown to transfer it later. I am having trouble with the error "the system cannot find the file specified" in R Markdown. At first it says that combined_databike was not found, but I literally created it in the same file already, and you can see in the upper right that all of the data frames mention "combined_databike". When I try to hit Knit, it gives me the error. Now the error points at tripdata_202006, which I cannot understand, because I imported every file from tripdata_202006 to tripdata_202105 using "Import Dataset".
I want to understand why it is not working and how to solve it.
That's because the file you want to read is not in the folder where your project, or in this case your R Markdown file, is located.
As you can see in the file list of your console, your directory contains 4 files, but not the folder "Bike data" where the file 202006-divvy-tripdata.csv is located, according to the path on line 83 of your code.
To solve it, I think you have two options: the first is to write the whole path to the folder location and then to the file; the second is to move the folder "Bike data", with the files in it, to where your R Markdown file is located.
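A quick sketch of both options in R (the absolute path below is hypothetical; adjust it to wherever the "Bike data" folder actually lives):
# Option 1: read the file via its full path.
trips <- read.csv("C:/Users/you/Documents/Bike data/202006-divvy-tripdata.csv")
# Option 2: after moving "Bike data" next to the .Rmd file, use a relative path,
# which also keeps the document knittable on other machines.
trips <- read.csv("Bike data/202006-divvy-tripdata.csv")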

Understanding R console vs writing R code in text file

What is the difference between using the R console and writing R code in a text file? I asked this question on Kaggle, but there were no previous questions on this matter.
When you supply code via a text file (.R file), you "run the file with R" without watching it execute, and it can stop somewhere due to an error (which can be handled, etc.). Running an .R file with R in batch mode (for example via a .bat file) also generates a .Rout file, which is basically a printout of the console plus some additional info such as the runtime.
If you feed the code into the console, each line is treated independently: even if there is an error you can still run an additional line (though if it depends on the failed command, it will fail as well), and you see each result as soon as the command is run. In contrast to the .R file, you have no copy of the code other than what is stored in the session, meaning you will need to save the code you have written to disk if you want it to persist between sessions. You can choose whatever text format you like for this task, from simple .txt to .docx, BUT if you use the .R format you can edit it with Notepad++ or Notepad and still run the file with R (via a .bat file, for example). If you opt against an .R file to store the written code, you will have to feed it to the console again to run it.
In RStudio you can open .R files, manage (extend, correct) your code, and feed it to the console command by command or as a block. So one could say you use .R files to manage your code, with the possibility of running those .R files directly with R, for example on a button click or repeatedly.
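For concreteness, here are the two common ways to run an .R file non-interactively (myscript.R is a placeholder name); both can be put in a .bat file on Windows, assuming R's bin directory is on the PATH:
R CMD BATCH myscript.R
Rscript myscript.R
R CMD BATCH writes the console printout to myscript.Rout, while Rscript prints output to the terminal instead.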
Not sure if that is what you are looking for?

Is there a way to crop multiple NetCDF files using CDO?

I have multiple global climate model (GCM) data files. I successfully cropped one file, but cropping over a thousand data files one by one manually takes too long. How can I crop multiple files at once? Thanks
What you need is a loop combined with some patience... ;-)
for file in *.nc ; do
  cdo sellonlatbox,lon1,lon2,lat1,lat2 "$file" "${file%???}_crop.nc"
done
The %??? chops off the ".nc" (the last three characters), and then "_crop.nc" is appended to form the output file name...
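For instance, with made-up bounds for a rough European domain, a single call inside the loop would be:
cdo sellonlatbox,-10,30,35,60 infile.nc infile_crop.nc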
I know I am a little late to add to the answer, but I still wanted to add my knowledge for those who pass through this question later.
I followed the code given by Adrian Tompkins and it seems to work exceptionally well. But there are some things to be considered which I'd like to highlight. As I am still a novice at programming, please forgive my limited answer. Here are my findings for the code above to work:
1. The code calls CDO (Climate Data Operators), which is a non-GUI standalone program that can be used from a Linux terminal. In my case, I used it on Windows 10 through WSL (Ubuntu 20.04 LTS). There are good videos on YouTube about setting up WSL.
2. The code initially did not work for me until I made a slight change. The code
for file in *.nc ; do
  cdo sellonlatbox,lon1,lon2,lat1,lat2 "$file" "${file%???}_crop.nc"
done
worked for me when I wrote it as
for file in *.nc ; do
  cdo sellonlatbox,lon1,lon2,lat1,lat2 "$file" "${file%???}_crop.nc";
done
Note the ; at the end of the second line of the loop.
3. The entire code (in my case) was put in a text file (it can also be saved as a script in other formats) and passed as a script to the Linux terminal. The procedure to execute the script file is as follows:
3.a) Create the .txt file containing the script above. Do note that the working directory should be checked in all steps.
3.b) Make the file executable by running chmod +x <name_of_the_textfile_with_extension> in the terminal.
3.c) Run the script (in my case the text file) with ./<name_of_the_textfile_with_extension>
The above procedures will give you cropped netcdf files for the corresponding netcdf files in the same folder.
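Putting it together, the script file (named crop_all.txt here purely as an example) would contain something like:
#!/bin/bash
# Crop every NetCDF file in the current directory; the bounds are placeholders.
for file in *.nc ; do
  cdo sellonlatbox,lon1,lon2,lat1,lat2 "$file" "${file%???}_crop.nc";
done
The #!/bin/bash first line is my addition; it tells the terminal which shell should interpret the script.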
cheers!

How can I avoid hardcoding a file path?

I am using RStudio to knit an .Rnw file to .pdf. This .Rnw file is stored in directory that is under git version control. This directory also contains a .RProj file for the project.
I collaborate with colleagues who don't know the first thing about .Rnw files and git. These colleagues want to open a Word file and track change their hearts out. So I give the people what they want.
Everyone needs access, so storing the Word file on a cloud service like Box makes sense. In the past I created a shared subfolder in my repo, keeping everything within the root directory, but this time around I needed to store the file in a shared folder that someone else created. So my solution was to copy the Word file from this shared directory to my repository.
Technical Approach
I don't know how to make this a reproducible problem, but hopefully you will give me some latitude since I'm trying to make my work fully reproducible ;)
Let's say that my .Rnw file is stored in repoRoot/subfolder. Since knitr changes the working directory to subfolder where this .Rnw file is located, the first chunk sets the root.dir one level up at the project root.
<<knitr, include=FALSE>>=
library(knitr)
opts_knit$set(root.dir=normalizePath('../')) # go up 1 level
@
The next chunk copies the Word file from the shared folder to my git repo and runs the analysis file. The shared directory path is hard coded to my machine, which is the problem I need your help solving.
file.copy(from='/Users/ericpgreen/Box Sync/Project/Paper/draft.docx',
          to='subfolder/draft.docx', # my repo
          overwrite=TRUE)
source('scripts/analysis.R') # generates objects we reference in the .docx file
After adding \begin{document}, I include a chunk where I convert the .docx file to .txt and then rename it to .Rnw.
# convert docx to txt
system("textutil -convert txt 'subfolder/draft.docx'")
# rename txt to .Rnw
file.rename('subfolder/draft.txt',
            'subfolder/draft.Rnw')
The next child chunk calls this .Rnw file that contains the text of the Word file with references to R objects included through \Sexpr{}:
<<include-draft, child='draft.Rnw', include=FALSE>>=
@
This works just fine for me. Whenever I knit the .Rnw file it grabs the latest version of the .docx file that my colleagues have edited (complete with track changes and comments) and, in a later step not shown here, returns the .pdf file to the shared folder.
Problem to Solve
This setup meets almost every need for me, except that the initial file.copy() command is hard coded to my machine. So if someone in my group clones my repo (e.g., research assistants who DO use version control), it won't run out of the box. Is there a workaround to hard coding in this type of case?
Ultimately you won't get around hard-coding paths that are outside your control, such as paths to network shares. What you can and should avoid is hard-coding these paths in your documents.
Instead, relegate them to configuration files and/or environment variables (which, again, will be controlled by configuration files, to wit .bashrc and similar). The simplest approach is then to use
network_share_path = Sys.getenv('NETWORK_SHARE_PATH')
# Sys.getenv()'s unset argument is evaluated eagerly, so check explicitly:
if (network_share_path == '') stop('no network share path configured')
file.copy(from = network_share_path, to = 'subfolder/draft.docx', overwrite = TRUE)
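Each collaborator then sets the variable once on their own machine, for example in a user-level .Renviron file that R reads at startup (NETWORK_SHARE_PATH is just the placeholder name used above):
NETWORK_SHARE_PATH=/Users/yourname/Box Sync/Project/Paper/draft.docx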

How to extract archives with same folder structure so that folder contents are merged?

I want to be able to create 2 or more tar.gz (or other) archives that will have some overlapping directories, but no overlapping files. I would like to extract my 2 or more archives into the same working area and have the contents of overlapping directories overlaid or merged, rather than end up with 'Folder' and 'Folder (2)'.
More detailed example:
arch_1.tar.gz contains the following
Project
  Documentation
    README_1.md
  Code
    app_xyz.c
    server_xyz.c
And arch_2.tar.gz contains this
Project
  Documentation
    README_2.md
  Code
    app_abc.c
    server_abc.c
Now I want to be able to extract the 2 example tar.gz archives from above and end up with this:
Project
  Documentation
    README_1.md
    README_2.md
  Code
    app_abc.c
    app_xyz.c
    server_abc.c
    server_xyz.c
But right now, when I test this, I get this (not desirable):
Project
  Documentation
    README_1.md
  Code
    app_xyz.c
    server_xyz.c
Project (2)
  Documentation
    README_2.md
  Code
    app_abc.c
    server_abc.c
Someone at work described this to me a while ago and it sounded great, but I haven't been able to implement it. Maybe it's just a matter of passing different options on the command line when I do the extraction.
In case it matters, I am using Windows 7 on the one machine that will be creating the tar.gz files, but the extraction will likely occur on Mint Linux (all other dev machines).
Use the command line tar utility rather than the default GUI option provided by your operating system.
tar -xvf arch_1.tar.gz
tar -xvf arch_2.tar.gz
does exactly what I wanted to do.
When I tried right-click > extract using Mint Linux it did not merge the folder contents.
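By default tar extracts into the current working directory and merges overlapping directories. To unpack both archives somewhere specific instead, the same merging behavior applies with an explicit target directory (~/workarea is just an example):
mkdir -p ~/workarea
tar -xvf arch_1.tar.gz -C ~/workarea
tar -xvf arch_2.tar.gz -C ~/workarea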
