I opened iPhoneMultichannelMixerTest from Apple's developer site in the new Xcode upgrade. Lots of fixes, but I'm stuck on one:
data argument not used by format string
This is it:
printf("BUS %%disON %lu\n", inputNum, isONValue);
I really don't have a clue. Can anybody help me fix this?
(A tiny marker appears under isONValue, so I think that might be the problem?)
You have escaped the format specifier for inputNum by doubling the % character, so printf prints a literal %d and never consumes that argument. Most likely this is what you want:
printf("BUS %d is ON %lu\n", inputNum, isONValue);
I'm working on solving a puzzle and I ran into a phrase (the hint quoted below) that so far seems more puzzling than the puzzle itself. Here's what the puzzle says:
The password is [64bit hex code here]. Use it to decode the message below.
[Hex string goes here].
(HINT: Text is in hex; Cross the OR with the pass.)
Really puzzled as to what "Cross the OR with the pass." means in English as far as decoding is concerned.
I'm happy to share the actual hex string and password in a direct message if desired.
Any help appreciated!
I solved it with XOR, just with the roles reversed: using the short hex (the password) as input and the long hex string as the key, and it revealed the hidden message! Thanks all, and Maarten, for your comments!
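For anyone who lands here with the same hint: "cross the OR" turned out to mean XOR the hex-decoded message with the hex-decoded password, repeating the password as a key. A minimal sketch in Java 17+ (the hex values below are placeholders, not the puzzle's real data):

import java.nio.charset.StandardCharsets;
import java.util.HexFormat; // Java 17+

public class XorDecode {
    public static void main(String[] args) {
        HexFormat hex = HexFormat.of();
        byte[] message = hex.parseHex("071b2130c4");        // placeholder ciphertext
        byte[] password = hex.parseHex("4f5e6d7c8b9aa0b1"); // a 64-bit password is 16 hex digits
        byte[] plain = new byte[message.length];
        for (int i = 0; i < message.length; i++) {
            // XOR each ciphertext byte with the password, repeating the password as needed.
            plain[i] = (byte) (message[i] ^ password[i % password.length]);
        }
        System.out.println(new String(plain, StandardCharsets.UTF_8)); // prints HELLO here
    }
}

Since XOR is its own inverse, running the same operation on the output recovers the ciphertext.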
stopwords_tr <- data.frame(word = stopwords::stopwords("tr",source="stopwords-iso"), stringsAsFactors = FALSE)
stopwords_tr
Some words in stopwords_tr contain characters that are not Turkish. For example:
1 acaba
2 acep
3 adamakıllı
4 adeta
5 ait
6 altmýþ <- should be: altmış
7 altmış
8 altý <- should be: altı
I'm looking for a way to fix them.
stopwords_tr$word<-gsub("ý","ı",stopwords_tr$word)
The result did not change.
I also tried these, but they didn't work either:
Encoding(stopwords_tr$word) <- "WINDOWS-1254"
Encoding(stopwords_tr$word) <- "LATIN-5"
Encoding(stopwords_tr$word) <- "UTF-8"
Another interesting thing: when you double-click stopwords_tr in RStudio to display it, the character appears as "ý", while in the console it looks like "y".
Is there a parameter to set encoding?
Thanks to everyone.
If you're sure this is an error, I think the best way to fix this is to fix the original source: post an issue to https://github.com/stopwords-iso/stopwords-iso/issues or https://github.com/stopwords-iso/stopwords-tr/issues (not sure which is better; try one, and if you get it wrong, they'll tell you!)
But check that it really is wrong. I don't know Turkish, but when I do a Google search for "altmýþ", I find it on several pages that look like Turkish to me, e.g. https://greatsong.net/PAROLES-ISMAIL-YK,ISTEMIYORUM-SENI,101646494.html. Probably an encoding error, but if it is a common one, maybe you really do want it in the list.
Regarding the display issues: it sounds like you're on Windows. R on Windows has trouble displaying characters outside the native locale. You probably don't have Icelandic support installed, so it will have trouble displaying a word like altmýþ (þ is an Icelandic letter).
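If you'd rather repair the strings yourself than wait for an upstream fix, the damage pattern here (Windows-1254 bytes that were decoded as Latin-1) can be reversed by re-encoding. A sketch in Java rather than R, since the byte-level trick is the same in either (assumes the JDK's windows-1254 charset is available, which standard builds include):

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class FixMojibake {
    public static void main(String[] args) {
        String garbled = "altmýþ"; // Windows-1254 bytes mis-decoded as Latin-1
        // Re-encode with Latin-1 to recover the original bytes...
        byte[] raw = garbled.getBytes(StandardCharsets.ISO_8859_1);
        // ...then decode those bytes with the encoding they were really written in.
        String fixed = new String(raw, Charset.forName("windows-1254"));
        System.out.println(fixed); // altmış
    }
}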
I followed @user2554330's advice, but I filed the report at a different address than the one he suggested.
I contacted the creator of stopwords-tr (Kenneth Benoit). The problem stems from a mis-encoded data source. I also noticed duplicated words and reported them. Together we solved the character problem, and stopwords-tr was updated at the following address:
(Fix Turkish #16)
https://github.com/quanteda/stopwords/pull/16
devtools::install_github("quanteda/stopwords", ref = "fix-tr")
stopwords("tr", source = "stopwords-iso")
"Turkish Stopwords" now seems to be properly encoded.
Greetings.
I am using the dataUsingEncoding(encoding: NSStringEncoding, allowLossyConversion: Bool = default) -> NSData? function to convert String to NSData, but I don't get what allowLossyConversion actually means.
Is it similar to lossy compression? Can anybody help me understand this?
If flag is YES and the receiver can’t be converted without losing some information, some characters may be removed or altered in conversion. For example, in converting a character from NSUnicodeStringEncoding to NSASCIIStringEncoding, the character ‘Á’ becomes ‘A’, losing the accent.
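The flag only matters when the target encoding cannot represent every character in the string: with it set, conversion degrades the text instead of failing. This is not the NSString API itself, but a rough Java analogue of the two lossy outcomes (substituting a placeholder versus stripping the accent):

import java.nio.charset.StandardCharsets;
import java.text.Normalizer;

public class LossyDemo {
    public static void main(String[] args) {
        String s = "Ádám";
        // String.getBytes is always lossy: unmappable characters become '?'.
        System.out.println(new String(s.getBytes(StandardCharsets.US_ASCII),
                                      StandardCharsets.US_ASCII)); // ?d?m
        // Closer to the documented 'Á' -> 'A' example: decompose accented
        // letters, then drop the combining marks.
        String stripped = Normalizer.normalize(s, Normalizer.Form.NFD)
                                    .replaceAll("\\p{M}+", "");
        System.out.println(stripped); // Adam
    }
}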
I'm trying to parse a file that looks sort of hex encoded but mostly not. I contacted support for the vendor who created the file, and they said that it can be parsed using "an 0x116 offset".
What is a 0x116 offset?
It took me two weeks to get an answer from the vendor on my first question, so I wanted to see if someone here could help me make sense of it. Thank you!
"0x116 offset" means nothing. It could be a value that needs to be added to words or subtracted to remove some naive encoding, or anything else for that matter.
Could you post a part of the file? Is it binary or text? Could you define "mostly not"?
What vendor/software package/device does this file come from?
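Until the vendor clarifies, here is a hedged sketch of the two most common readings of "offset"; both are guesses, and the file name and 16-bit big-endian word layout are made up for illustration:

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class OffsetGuesses {
    public static void main(String[] args) throws Exception {
        byte[] data = Files.readAllBytes(Path.of("vendor.dat")); // hypothetical file name

        // Guess 1: 0x116 is a file position -- the real payload starts 278 bytes in.
        byte[] payload = Arrays.copyOfRange(data, 0x116, data.length);
        System.out.println("payload bytes: " + payload.length);

        // Guess 2: 0x116 must be subtracted from each 16-bit word to undo a naive encoding.
        for (int i = 0; i + 1 < data.length; i += 2) {
            int word = ((data[i] & 0xFF) << 8) | (data[i + 1] & 0xFF); // assumes big-endian words
            int decoded = (word - 0x116) & 0xFFFF;
            System.out.printf("%04x ", decoded);
        }
    }
}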
How can I count the words in a document and get the same result as MS Office?
In theory, you'd first have to define what you consider a word (see also Jason Williams' post). Then you open the document with whatever language you're planning to use for this and translate it from Microsoft's proprietary format into something nice and clean.
Then it's simply a matter of counting the occurrences of the aforementioned word definition.
The hard part here will be parsing the Office document. Luckily for you, Microsoft has released their proprietary format specification!
It's a bit long-winded, but perhaps you can find somebody who has done the hard work for you, or you can try doing it from scratch.
Alternatively, if you're willing to reveal which language and operating system you're planning to use, things can be a lot easier (if you're on Windows and have Office installed, for example, you can use OLE automation).
Also, have a look at this blog post about the format of Office documents; it features some helpful information (courtesy of Google).
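As a concrete example of "somebody who has done the hard work for you": a library such as Apache POI can extract the text of a .docx, leaving only the counting. A sketch assuming POI 4+ on the classpath; the file name is made up, and the naive whitespace split will not exactly match Word's own count:

import java.io.FileInputStream;
import org.apache.poi.xwpf.extractor.XWPFWordExtractor;
import org.apache.poi.xwpf.usermodel.XWPFDocument;

public class DocxWordCount {
    public static void main(String[] args) throws Exception {
        try (FileInputStream in = new FileInputStream("report.docx"); // hypothetical file
             XWPFDocument doc = new XWPFDocument(in);
             XWPFWordExtractor extractor = new XWPFWordExtractor(doc)) {
            String text = extractor.getText().trim();
            int count = text.isEmpty() ? 0 : text.split("\\s+").length;
            System.out.println("Words: " + count);
        }
    }
}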
Without knowing your environment, all I can tell you is that you would need to implement something like this:
Take the entire document as a string.
Split the string on whitespace.
The number of items in the resulting sequence will be the number of words in the document.
Basic word splitting uses whitespace and punctuation characters (.,?!"'- etc.; indeed, usually any non-alphanumeric character) to split the words.
Make sure you skip sequences of punctuation/whitespace instead of counting extra "words" between them.
You will have to decide whether numbers are "words" or not. And whether "$123,456.78" is one word or three.
You may also want to apply other rules. For example, if you are looking for words in source code, you may wish to treat +-=*/()&^%$ characters as "whitespace". If you have identifiers in camelCase or PascalCase styles, you may want to take the "words" you have found and check whether they have uppercase characters in the middle of the words.
Fundamentally, it's an easy problem - you just have to decide what a "word" is. You can be as simple or as complicated as you like about it.
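For instance, here is a hedged Java sketch of those two refinements; the sample text and the camelCase rule are illustrative choices, not a standard:

public class SplitDemo {
    public static void main(String[] args) {
        String text = "parseHTTPResponse isn't $123,456.78";
        // Rule 1: treat any run of non-alphanumeric characters as a separator,
        // so "$123,456.78" becomes three tokens: 123, 456, 78.
        for (String token : text.split("[^\\p{Alnum}]+")) {
            if (token.isEmpty()) continue; // a leading separator yields one empty token
            // Rule 2: also break camelCase/PascalCase where a lowercase letter
            // is immediately followed by an uppercase one (fooBar -> foo, Bar).
            for (String word : token.split("(?<=\\p{Lower})(?=\\p{Upper})")) {
                System.out.println(word);
            }
        }
    }
}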
The best way to get the same word count as Office would be to use macros or automation to have MS Word load the document and calculate the word count itself.
If you take the whole document as a String, this code (in Java) may work for you:
private int wordCount(String str) {
    String trimmed = str.trim();
    if (trimmed.isEmpty()) {
        return 0; // splitting an empty string would otherwise count one "word"
    }
    // Split on runs of whitespace; stripping punctuation afterwards
    // (as the original replaceAll loop did) would not change the count.
    return trimmed.split("\\s+").length;
}