Formatting a map[] in golang - dictionary

I have an inbound list of hosts in the form of a single comma-separated string.
EXAMPLE: "host01,host02,host03,"
I have code that builds a slice of strings, but I need it to be a map[string]interface{}.
Here is what I have so far; how do I make it a map[string]interface{}?
• First I remove any trailing comma:
hosts := []string{strings.TrimSuffix(hostlist, ",")}
• Later I split on the commas like this:
hosts = strings.Split(hosts[0], ",")
I just need the host names to become the keys; the values come from APIs later and are unknown, so they should be interface{}.
Thanks, and forgive me; I know this is super simple, I am just not seeing it.

Loop over your slice of strings. Set each map entry to nil.
There is no fancy syntax like Python's list comprehensions or Perl's freaky group assignments.
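For example, a minimal sketch (reusing the hostlist string from the question) might look like this:

package main

import (
    "fmt"
    "strings"
)

func main() {
    hostlist := "host01,host02,host03," // the inbound comma-separated string

    // Trim the trailing comma, then split on commas to get the host names.
    names := strings.Split(strings.TrimSuffix(hostlist, ","), ",")

    // Each name becomes a key; each value starts as nil until the APIs
    // provide something to store there.
    hosts := make(map[string]interface{}, len(names))
    for _, name := range names {
        hosts[name] = nil
    }

    fmt.Println(hosts) // map[host01:<nil> host02:<nil> host03:<nil>]
}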
And remember that StackOverflow's tag info is often really useful. See: https://stackoverflow.com/tags/go/info
From there you can get to the language specification. One bit that will help is https://golang.org/ref/spec#For_range if you aren't familiar with Go's for ... range syntax for looping over slices.

Related

How to set up custom automatic character replacement in emacs ess?

One of the useful features of ess-mode (Emacs speaks statistics) is to automatically replace the underscore _ with the assignment operator <-. Lately, I have been using a lot of pipes (written as %>%) and it would be great to not have to type three characters for each pipe.
Is it possible to define a custom key binding for the pipe, similar to the one converting _ into <-?
The simplest solution is to just bind a key to insert a string:
(define-key ess-mode-map (kbd "|") "%>%")
You can still insert | with C-q |. I'm not sure about the map's name; you'll almost certainly want to limit the key binding to ess-mode.
Check out yasnippet. You can use it to define something like "if this sequence of characters is followed by this key (which you can define however you like), then replace it with this other sequence of characters and leave the cursor in this place". There's more to yasnippet than this, but there's plenty of documentation online and even ready-made recipes similar to the example I just gave that you can try, such as yasnippet-ess-mode.
Alternatively, you can also try abbrev-mode and see if that works for you.
I, for one, like yasnippet better, since you can also specify where to leave the cursor after the expansion, but abbrev-mode seems to be easier to set up. As always in Emacs world, try multiple solutions, don't settle for the first one you put your hands on. What works best for others might not work for you, and vice-versa.
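For the abbrev route, a minimal sketch might look like the following (the abbreviation "pp" is just an arbitrary choice; use whatever trigger you prefer):

;; Enable abbrev-mode in ESS buffers and expand "pp" into the pipe operator.
;; The expansion fires when "pp" is followed by a space or punctuation.
(add-hook 'ess-mode-hook
          (lambda ()
            (abbrev-mode 1)
            (define-abbrev local-abbrev-table "pp" "%>%")))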

How to process latex commands in R?

I work with knitr and I wish to transform inline LaTeX commands like "\label" and "\ref", depending on the output target (LaTeX or HTML).
In order to do that, I need to (programmatically) generate valid R strings that correctly represent the backslash: for example "\label" should become "\\label". The goal would be to replace all backslashes in a text fragment with double-backslashes.
But it seems that I cannot even read these strings, let alone process them. If I define:
okstr <- function(str) "do something"
then when I call
okstr("\label")
I directly get an error "unrecognized escape sequence"
(of course, since \l is not a valid escape sequence)
So my question is: does anybody know a way to read strings (in R) without using the escaping mechanism?
Yes, I know I could do it manually, but that's the point: I need to do it programmatically.
There are many questions that are close to this one, and I have spent some time browsing, but I have found none that yields a workable solution for this.
Best regards.
Inside R code, you need to adhere to R’s syntactic conventions. And since \ in strings is used as an escape character, it needs to form a valid escape sequence (and \l isn’t a valid escape sequence in R).
There is simply no way around this.
But if you are reading the string from elsewhere, e.g. using readLines, scan or any of the other file reading functions, you are already getting the correct string, and no handling is necessary.
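For instance, a quick check (using textConnection as a stand-in for an actual file):

# The "file" contains the 13 characters \label{intro}; the only escaping below
# is in the R source line that builds the demo connection.
con <- textConnection("\\label{intro}")
x <- readLines(con)
close(con)
cat(x)     # prints \label{intro}
nchar(x)   # 13; the backslash is a single character in the string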
Alternatively, if you absolutely want to write LaTeX-like commands in literal strings inside R, just use a different character for \; for instance, +. Just make sure that your function correctly handles it everywhere, and that you keep a way of getting a literal + back. Here’s a suggestion:
okstr("+label{1 ++ 2}")
The implementation of okstr then needs to replace single + by \, and double ++ by + (making the above result in \label{1 + 2}). But consider in which order this needs to happen, and how you’d like to treat more complex cases; for instance, what should the following yield: okstr("1 +++label")?
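A sketch of such an okstr under the "+ stands for backslash" convention described above; it deals with the ordering question by protecting ++ before touching single +:

okstr <- function(str) {
  # Protect literal "++" first so it is not consumed as two single "+".
  str <- gsub("++", "\x01", str, fixed = TRUE)
  # A lone "+" becomes a backslash.
  str <- gsub("+", "\\", str, fixed = TRUE)
  # Restore the protected "++" as a literal "+".
  gsub("\x01", "+", str, fixed = TRUE)
}

cat(okstr("+label{1 ++ 2}"))   # \label{1 + 2}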

String continuation across multiple lines, no newline characters

I am using the RODBC library to bring data into R. I have a long query that I want to pass a variable to, much like this SO user.
The problem is that R interprets the whitespace/carriage returns in my query as newline '\n' characters.
The accepted solution for this question suggests to simply break up the text into chunks and then paste() together - which works, but ideally I'd like to keep the whitespace intact - makes it easier to test/verify the behavior of the query over in the database before pasting into R.
In other languages I'm familiar with there's a simple line continuation character - indeed, several of the comments on the accepted answer are looking for an approach similar to python's \.
I found an aside about a workaround using strwrap deep in the bowels of an R discussion list, so in the interest of making the internet better I will post it here. However, if someone can point the direction toward a more elegant/straightforward solution, I will happily accept your answer.
I don't know if you will find this helpful or not, but I have eventually gravitated towards keeping my SQL separate from my R scripts. Except for very short queries, I find that keeping them in the R script gets unreadable very quickly.
These days, I tend to keep queries that are more than a single line in their own separate .sql file. Then I can keep them nice and formatted and readable in a nice text editor, and read them into R as needed via something like this:
read_sql <- function(path){
  stopifnot(file.exists(path))
  sql <- readChar(path, nchars = file.info(path)$size)
  sql
}
For binding parameters into the queries, I just keep a %s where the parameter will go in the .sql file, and then add in the parameters in R using sprintf.
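For example (the file name, placeholder and parameter here are made up to illustrate the pattern):

# students_by_term.sql is a hypothetical file containing, among other things:
#   ... WHERE map.termname = '%s'
query <- sprintf(read_sql("sql/students_by_term.sql"), map_term)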
I've been much happier this way, as I was finding that cluttering up my R scripts with really long paste statements and multi-line character objects was making my code really hard to read.
R's strwrap will destroy whitespace, including newline characters, per the documentation.
Essentially, you can get the desired behavior by initially letting R introduce the line breaks/newline characters, and then immediately stripping them back out.
#make query using PASTE
query_1 <- paste("SELECT map.ps_studentid
,students.first_name || ' ' || students.last_name AS full_name
,map.testritscore
,map.termname
,map.measurementscale
FROM map$comprehensive_with_growth map
JOIN students
ON map.ps_studentid = students.id
WHERE map.termname = '",map_term,"'", sep='')
#remove newline characters introduced above.
#width is an arbitrary big number-
#it just needs to be longer than your string.
query_1 <- strwrap(query_1, width=10000, simplify=TRUE)
#execute the query
map_njask <- sqlQuery(XE, query_1)
Try using sprintf to get variable substitution, and then collapse the newlines and extra whitespace:
query <- gsub(pattern = '\\s+', replacement = " ", x = query)
See my answer to a similar question for details.

Programmatically getting a list of variables

Is it possible to get a list of declared variables with a VimL (aka VimScript) expression? I'd like to get the same set of values that will be presented for a command using -complete=expression. The goal is to augment that list for use in a user-defined command completion function.
You can use g: as a dictionary that holds all global variables, so:
let globals = keys(g:)
will give you all the names. The same applies to the other scopes: b:, s:, w:, etc. See :help internal-variables for the complete list.
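A sketch of how that list could feed a user-defined command completion (the command and function names here are made up):

" Offer g:-prefixed global variable names plus any extra candidates of your own.
function! MyComplete(ArgLead, CmdLine, CursorPos)
  let candidates = map(keys(g:), '"g:" . v:val') + ['my_extra_candidate']
  return filter(candidates, 'v:val =~# "^" . a:ArgLead')
endfunction

command! -nargs=1 -complete=customlist,MyComplete MyCmd echo <args>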
You can get something similar using the keys of the g:, b:, t:, w: and v: dictionaries, but beware of the following facts:
There is no equivalent to these dictionaries if you want to complete options.
Some variables like count (but not g:count or l:count), b:changedtick and, maybe, others are not present in these dictionaries.
Some vim hacker may add the key ### to the g: dictionary, but that won't make the expression g:### a valid variable name (though adding 000 there will). g:["###"] will, however, be a valid expression.

Word count of a string

How can I count the words in a document and get the same result as MS Office?
In theory you'd first have to define what you see as a word (see also Jason Williams' post). Then you open the document with whatever language you're planning to use for this. You translate the document from Microsoft's proprietary format to something nice and clean.
Then it's simply a matter of counting the occurrences of words matching the aforementioned definition.
The hard part here will be the parsing of the Office document. Luckily for you, Microsoft has released their proprietary format specification!
It's a bit long-winded, but perhaps you can find somebody who has done the hard work for you, or you can try doing it from scratch.
Alternatively, if you're willing to reveal what language you're planning on using and what operating system, things can be a lot easier (if you're on Windows and have Office installed, for example, you can use OLE plug-ins.)
Also, have a look at this blog post about the format of Office documents; it features some helpful information (courtesy of Google).
Without knowing your environment all I can tell you is that you would need to implement something like this:
Take the entire document as a string.
Split the string on whitespace.
The number of items in the resulting sequence will be the number of words in the document.
Basic word splitting uses whitespace and punctuation characters (.,?!"'- etc.; indeed, usually any non-alphanumeric character) to split the words.
Make sure you skip sequences of punctuation/whitespace instead of counting extra "words" between them.
You will have to decide whether numbers are "words" or not. And whether "$123,456.78" is one word or three.
You may also want to apply other rules. For example, if you are looking for words in source code, you may wish to treat +-=*/()&^%$ characters as "whitespace". If you have identifiers in camelCase or PascalCase styles, you may want to take the "words" you have found and check whether they have uppercase characters in the middle of the words.
Fundamentally, it's an easy problem - you just have to decide what a "word" is. You can be as simple or as complicated as you like about it.
The best way to get the same word count as Office would be to use macros or automation to have MS Word load the text and calculate the word count.
If you take the whole document as a String, this code (in Java) may work for you:
private int wordCount(String str) {
    // Split on runs of whitespace, then skip tokens that contain no word
    // characters (e.g. stray punctuation) so they are not counted as words.
    String[] tokens = str.trim().split("\\s+");
    int count = 0;
    for (String token : tokens) {
        if (!token.replaceAll("[^\\w]", "").isEmpty()) {
            count++;
        }
    }
    return count;
}
