I am working on a demo notebook. While working on it I am inserting commands and sometimes going back to a previous cell to insert more commands. At the end, I want the command numbering to be in increasing order, not the random order in which I entered them while preparing the notebook.
Any way to conveniently do this from the notebook? I can go and edit the .ipynb file associated with the notebook and change the "prompt_number" fields to get the command ordering I want, but a more convenient way would be better.
Since there were no responses, I am posting editing the underlying .ipynb file as the answer for rearranging the numbering.
When you edit the ipynb file, note that most of the time the numbers are paired (output and input):
"output_type": "pyout",
"prompt_number": 35,
"text": [
"Empty DataFrame\n",
"Columns: [utilization]\n",
"Index: []"
]
}
],
"prompt_number": 35
},
vim tip: I did a /prompt_n/b+16 search in vim, which takes you to the beginning of the number (the b+16 offset places the cursor 16 characters past the start of the match); then just doing cw and typing the new number works. For the second number of the pair, just press . to repeat the change.
Note: Be careful about the paired numbers. I found the first one is the output one and ends with a comma; the second one is the input one, without a comma. But if you have intentionally deleted a particular in/out cell from the web interface, the pair may be missing, so be careful to give the same new number to both members of an in/out pair, and not to give the same number to different in/out cells, which will garble the notebook and make the web interface complain about errors.
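If hand-editing gets tedious, the renumbering can also be scripted. Here is a minimal sketch in Python, assuming the pre-v4 notebook layout shown above (cells nested under "worksheets"); the file name is a placeholder:

import json

with open("notebook.ipynb") as f:      # placeholder file name
    nb = json.load(f)

n = 1
for ws in nb["worksheets"]:            # pre-v4 notebooks nest cells in worksheets
    for cell in ws["cells"]:
        if cell.get("cell_type") == "code":
            cell["prompt_number"] = n              # renumber the input prompt
            for out in cell.get("outputs", []):
                if "prompt_number" in out:
                    out["prompt_number"] = n       # keep the paired output in sync
            n += 1

with open("notebook.ipynb", "w") as f:
    json.dump(nb, f, indent=1)

This keeps each in/out pair on the same number, which avoids the garbling issue described above.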
Good morning.
First things first: I know next to nothing about shell scripting in Unix, so please pardon my naivety.
Here's what I'd like to do, and I think it's relatively simple: I would like to create a .ksh file to do two things: 1) take a user-provided numerical value (argument) and paste it into a new column at the end of a dataset (a separate .txt file), and 2) execute a different .ksh script.
I envision calling this script at the Unix prompt, with the input value added thereafter. Something like, "paste_and_run.ksh 58", where 58 would populate a new, final (un-headered) column in an existing dataset (specifically, it'd populate the 77th column).
To be perfectly honest, I'm not even sure where to start with this, so any input would be very appreciated. Apologies for the lack of code within the question. Please let me know if I can offer any more detail, and thank you for taking a look.
I have found the answer: the "nawk" command.
TheNumber=$3
PE_Infile=$1
Where the above variables correspond to the third and first arguments from the command line, respectively. "PE_Infile" represents the file (with full path) to be manipulated, and "TheNumber" represents the number to populate the final column. Then:
nawk -F"|" -v TheNewNumber=$TheNumber '{print $0 "|" TheNewNumber/10000}' $PE_Infile > $BinFolder/Temp_Input.txt
Here, the -F"|" dictates the delimiter, and the -v dictates what is to be added. For reasons unknown to me, declaring a new variable (TheNewNumber) was necessary to perform the arithmetic within the print statement. print $0 means that the whole line is printed, with the "|" symbol and the value of the command-line input divided by 10000 tacked onto the end. Finally, we have the input file and an output file (Temp_Input.txt, within a path represented by the $BinFolder variable).
Running the desired script afterward was as simple as typing out the script name (with path) and adding the corresponding arguments ($2 $3) after it as needed, each separated by a space.
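Putting the pieces together, a minimal sketch of what the whole paste_and_run.ksh could look like; the second script's name, the argument layout, and the $BinFolder value are illustrative assumptions:

#!/bin/ksh
# usage: paste_and_run.ksh <infile> <other-arg> <number>
PE_Infile=$1                    # dataset to modify (full path)
TheNumber=$3                    # value used to build the new final column
BinFolder=/path/to/work/dir     # assumed location for the temporary output

# append a new "|"-delimited column holding TheNumber/10000
nawk -F"|" -v TheNewNumber=$TheNumber '{print $0 "|" TheNewNumber/10000}' $PE_Infile > $BinFolder/Temp_Input.txt

# then run the second script with the remaining arguments
$BinFolder/second_script.ksh $2 $3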
For example, I have many HTML tabs to style; they use different classes and will have different backgrounds. Background image files have names corresponding to the class names.
The way I found to do it is to yank:
.tab.home {
background: ...home.jpg...
}
then paste, then :s/home/about.
This is to be repeated a few times. I found that & can be used to repeat the last substitute, but only with the same target string. What is the quickest way to repeat a substitute with a different target string?
Alternatively, probably there are more efficient ways to do such a thing?
I had a quick play with some vim macro magic and came up with the following idea. I apologise for the length; I thought it best to explain the steps.
First, place the text block you want to repeat into a register (I picked register z), so with the cursor at the beginning of the .tab line I pressed "z3Y (select reg z and yank 3 lines).
Then I entered the series of vim commands I wanted into the buffer as plain text: )"zp:.,%s/home/ (just press i and type the commands).
This translates to:
) go to the end of the current '{}' block,
"zp paste a copy of the text in register z,
.,%s/home/ which has two tricks.
The .,% ensures the substitution applies to everything from the start of the pasted .tab block to the end of the closing }, and,
The command is incomplete (i.e., it does not have a <CR> at the end), so vim will prompt me to complete the command.
Note that while %s/// will perform a substitution across every line of the file, it is important to realise that this is because % is an alias for the range 1,$. Using .,% as a range instead causes the % to be treated as the 'jump to matching parenthesis' operator, resulting in a range from the current line to the end of the % match (which in this example is the closing brace of the block).
Then, after placing the cursor on the ) at the beginning of the line, I typed "qy$ which means yank all characters to the end of the line into register q.
This is important, because simply yanking the line with Y will include a carriage return in the register, and will cause the macro to fail.
I then executed the content of register q with @q, and I was prompted to complete the s/home/ on the command line.
After typing the replacement text and pressing enter, the pasted block (from register z) appeared in the buffer with the substitutions already applied.
At this point you can repeat the last @q by simply typing @@. You don't even need to move the cursor down to the end of the block, because the ) at the start of the macro does that for you.
This effectively reduces the process of yanking the original text, inserting it, and executing two manual replace commands to a simple @@.
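For reference, the whole sequence condensed (home and the 3-line block size are from the example above):

"z3Y (with the cursor on the .tab line, yank 3 lines into register z)
i)"zp:.,%s/home/ then Esc (type the macro text into the buffer as plain text)
0"qy$ (back at the start of that line, yank it, without the newline, into register q)
@q (execute it, then complete the s/home/ prompt with the new name and press Enter)
@@ (repeat for each further block)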
You can safely delete the macro string from your edit buffer when done.
This is incredibly vim-ish, and you might waste a bit of time getting it right, but it could save you even more once you do.
Vim macros might be the trick you are looking for.
From the manual, I found :s//new-replacement, which reuses the last search pattern. Seemed to be too much typing.
Looking for a better answer.
I connect to different types of computers every day. When I telnet in, the first thing I do is run a command-line script that is about 1150 characters long. I have no problem with Linux-based systems, but if it is a Unix-based system (i.e. IRIX), then my command is truncated at ~256 characters.
The final result of the command is a data dump (the results of the commands) to the telnet window. This data is then copied and pasted into a tool for analysis. The command string being entered is a series of commands (mostly egreps) separated by semicolons, which gets very long when combined.
I need to be able to enter all 1150 characters on the command line. The systems I access are not mine, so I need to be as benign as possible when interacting with them.
Your help is appreciated.
If it's a parameter list that's making the command that long, then xargs is your friend.
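For example, if the length comes from a long list of file-name arguments rather than from the commands themselves, something along these lines (paths and pattern are placeholders) keeps each invocation under the limit:

# run egrep over many files in manageable batches
ls /var/adm/log* | xargs -n 20 egrep 'pattern'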
I'm not sure if this is the answer you're looking for, but as you stated in your comment, all of the individual commands are less than 256 characters. So you can break the command string up into 5-6 groups, being sure to split only at semicolons (not at pipes), and then execute each group in sequence. It's more work if you're used to just copying and pasting, but not much more if you already have the groups saved in a text file.
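As a sketch, with placeholder commands, instead of pasting one ~1150-character line:

egrep 'pat1' log; egrep 'pat2' log; egrep 'pat3' log; ... ; egrep 'patN' log

you would paste and run a few groups in turn, each under 256 characters and each split only at a semicolon:

egrep 'pat1' log; egrep 'pat2' log; egrep 'pat3' log
egrep 'pat4' log; egrep 'pat5' log; egrep 'pat6' log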
I am using the fread function in R for reading files to data.tables objects.
However, when reading the file I'd like to skip lines that start with #. Is that possible?
I could not find any mention of that in the documentation.
fread can read from a piped command that filters out such lines, like this:
fread("grep -v '^#' filename")
Not currently, but it's on the list to do.
Are the # lines at the top forming a header which is more than 30 lines long?
If so, that's come up before and the solution is:
fread("filename", autostart=60)
where 60 is chosen to be inside the block of data to be read.
From ?fread:
Once the separator is found on line autostart, the number of columns is determined. Then the file is searched backwards from autostart until a row is found that doesn't have that number of columns. Thus, the first data row is found and any human readable banners are automatically skipped. This feature can be particularly useful for loading a set of files which may not all have consistently sized banners. Setting skip>0 overrides this feature by setting autostart=skip+1 and turning off the search upwards step.
The default autostart=30 might just need bumping up a bit in your case.
Or maybe skip=n or skip="string" helps:
If -1 (default) use the procedure described below starting on line autostart to find the first data row. skip>=0 means ignore autostart and take line skip+1 as the first data row (or column names according to header="auto"|TRUE|FALSE as usual). skip="string" searches for "string" in the file (e.g. a substring of the column names row) and starts on that line (inspired by read.xls in package gdata).
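As a quick illustration of both routes, assuming a small file with a two-line # banner (the file name is made up):

library(data.table)
writeLines(c("# banner line 1", "# banner line 2", "a|b", "1|2", "3|4"), "tmp.txt")

# route 1: filter the comment lines out before fread sees them
fread("grep -v '^#' tmp.txt")

# route 2: skip the banner explicitly; line skip+1 becomes the header row
fread("tmp.txt", skip=2)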
Setting:
I have (simple) .csv and .dat files created from laboratory devices and other programs storing information on measurements or calculations. I have found solutions for other languages, but not for R.
Problem:
Using R, I am trying to extract values to quickly display results w/o opening the created files. Hereby I have two typical settings:
a) I need to read a priori unknown values after known key words
b) I need to read lines after known key words or lines
I can't make functions such as scan() and grep() work.
c) Finally, I would like to loop over dozens of files in a folder and produce a summary (to make the picture complete: I will manage this part).
I would appreciate any form of help.
OK, it works for the key value (although it is perhaps not very nice):
variable <- scan("file.csv", what=character(), sep="")
returns a character vector of everything in the file.
variable[grep("keyword", variable)+2] # +2 as the actual value is stored two places ahead
returns the sought values as characters.
as.numeric(gsub(",", ".", variable)) # gsub is vectorised, so no lapply is needed
For completeness: the data had to be converted to numeric, and the comma-versus-period decimal separator problem needed to be solved.
In one line:
data <- as.numeric(gsub(",", ".", variable[grep("Ks_Boden", variable)+2]))
Perseverance is not too bad an asset ;-)
The rest isn't finished yet; I will post it once it is.
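In the meantime, here is a minimal sketch for (b), reading lines after a known keyword; the file name, keyword, and block length are placeholders:

txt <- readLines("file.csv")             # read the file line by line
start <- grep("keyword", txt)[1]         # first line containing the keyword
block <- txt[(start + 1):(start + 5)]    # e.g. the five lines that follow it

And for (c), the loop over a folder can start from something like:

files <- list.files("data_folder", pattern = "\\.(csv|dat)$", full.names = TRUE)
all_lines <- lapply(files, readLines)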