How to sum the digits in an integer using recursion? - recursion

Write a recursive method that computes the sum of the digits in an integer. Use the following method header:
public static int sumDigits(long n)
For example, sumDigits(234) returns 2 + 3 + 4 = 9. Write a test program that prompts the user to enter an integer and displays the sum of its digits.

Receive an integer as a parameter
Convert to string
Parse the string's individual characters
Remove a character (first or last doesn't matter)
Put the remaining characters back into a single string
Parse that string back into an integer
Call "result = removedChar As Integer + function(remainingChars as Integer)" <--- this is the recursion
In the future you should at least post one attempt of your own for others to help you edit when you ask an obvious homework question ;)


How can I determine if a character is alpha?

I have a list of phone numbers that sometimes have a person in parenthesis at the end. I need to extract the person's name (and add that as a note in a separate field). Here is an example of the data:
(517)234-6789(Bob)
701-556-2345
(325)663-5977
(215)789-8585
425-557-7745(Pauline)
There is always a () around the person's name, but often there is also a () around the area code, so I can't use the ( as a way to know a name has started. I'd like to create a loop that goes through the phone number string and if it sees alpha characters, builds a string that will be assigned to a variable as the name.
Something like this. I am making up the IS-ALPHA syntax, of course. That is what I am looking for, or something where I don't have to list every letter.
PROCEDURE CreatePhoneNote (INPUT cPhone AS CHARACTER):
    DEFINE VARIABLE cPersonName AS CHARACTER NO-UNDO.
    DEFINE VARIABLE cThisChar AS CHARACTER NO-UNDO.
    DEFINE VARIABLE iCount AS INTEGER NO-UNDO.
    DO iCount = 1 TO LENGTH(cPhone):
        cThisChar = SUBSTRING(cPhone, iCount, 1).
        IF IS_ALPHA(cThisChar) THEN cPersonName = cPersonName + cThisChar.
    END.
    // etc.....
END PROCEDURE.
Since these are the fun questions, just one more isAlpha answer that does not use hard-coded ASCII codes but leans on the property / assumption that an alpha character has distinct upper- and lower-case versions:
function isAlpha returns logical (
    i_cc as char
):
    return compare( upper( i_cc ), '<>', lower( i_cc ), 'case-sensitive' ).
end function.
With some code to test the function:
// test
def var ic as int.
do ic = 0 to 255:
    if isAlpha( chr(ic) ) then
        message ic chr( ic ).
end.
And then you see that the hard-coded ASCII answer did not take characters with diacritics into account. :-)
Watch it run on ProgressAblDojo.
Watch it run again on ProgressAblDojo with a fix to help ProgressAblDojo over its ignorance of its own codepage.
I can't comment, but one suggestion is to look at the character at index 0 of the string; if it is a ( then you know how to deal with that case. Another approach will only work for US numbers (which does seem to be what you have): check whether the length matches one of the known formats (for your sample data that is 12 characters for the 701-556-2345 style or 13 for the (517)234-6789 style), and if it doesn't, you know there is a name at the end. You would then split the string at the appropriate index.
You don't have to go through each character. You can use the open parenthesis to break up the string and get the data after the last parenthesis. This may run faster if you have a large amount of data.
DEFINE VARIABLE cPhone AS CHARACTER NO-UNDO INITIAL "(517)234-6789(Bob)".
DEFINE VARIABLE cPersonName AS CHARACTER NO-UNDO.
DEFINE VARIABLE iCount AS INTEGER NO-UNDO.
DEFINE VARIABLE iNum AS INTEGER NO-UNDO.
iCount = NUM-ENTRIES(cPhone, "("). /* See how many open parentheses there are */
cPersonName = ENTRY(iCount, cPhone, "("). /* Get the string after the last open paren */
iNum = INTEGER(SUBSTRING(cPersonName, 1, 1)) NO-ERROR. /* See if the first character is a number */
IF iNum > 0 THEN
    cPersonName = "". /* If it's a number, there is no name, so blank out the variable */
ELSE
    cPersonName = SUBSTRING(cPersonName, 1, LENGTH(cPersonName) - 1). /* Drop the closing paren */
MESSAGE cPersonName VIEW-AS ALERT-BOX INFORMATION.
You can do this using the ABL's ASC() function.
if (asc(cThisChar) ge 65 and asc(cThisChar) le 90)
or (asc(cThisChar) ge 97 and asc(cThisChar) le 122)
then
    cPersonName = cPersonName + cThisChar.
ASC() works simply enough for all codepages for codepoints between 0 and 255; for others it'll depend on your -cpinternal / session:cpinternal value.
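In the same spirit as the diacritics remark above, here is a rough sketch of an equivalent check in Go (an illustration only, not ABL): unicode.IsLetter classifies letters, including accented ones, without any hard-coded codepoint ranges.
package main

import (
    "fmt"
    "unicode"
)

// extractName keeps only the letter runes from a phone-number string,
// so "(517)234-6789(Bob)" yields "Bob".
func extractName(phone string) string {
    name := []rune{}
    for _, r := range phone {
        if unicode.IsLetter(r) {
            name = append(name, r)
        }
    }
    return string(name)
}

func main() {
    fmt.Println(extractName("(517)234-6789(Bob)")) // Bob
    fmt.Println(extractName("701-556-2345"))       // (empty)
}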

Split string based on byte length in golang

The HTTP request header has a 4 KB length limit.
I want to split the string that I want to include in the header based on this limit.
Should I use []byte(str) to split first, then convert each part back to a string with string([]byte)?
Is there any simpler way to do it?
In Go, a string is really just a sequence of bytes, and indexing a string produces bytes. So you could simply split your string into substrings by slicing it into 4 KB chunks.
However, since UTF-8 characters can span multiple bytes, there is the chance that you will split in the middle of a character sequence. This isn't a problem if the split strings will always be joined together again in the same order at the other end before decoding, but if you try to decode each individually, you might end up with invalid leading or trailing byte sequences. If you want to guard against this, you could use the unicode/utf8 package to check that you are splitting on a valid leading byte, like this:
package httputil

import "unicode/utf8"

const maxLen = 4096

// SplitHeader slices longString into chunks of at most maxLen bytes, backing
// the right boundary up as needed so no chunk ends in the middle of a UTF-8 sequence.
func SplitHeader(longString string) []string {
    splits := []string{}
    var l, r int
    for l, r = 0, maxLen; r < len(longString); l, r = r, r+maxLen {
        // step back until r sits on the first byte of a rune
        for !utf8.RuneStart(longString[r]) {
            r--
        }
        splits = append(splits, longString[l:r])
    }
    splits = append(splits, longString[l:]) // the final (possibly short) chunk
    return splits
}
Slicing the string directly is more efficient than converting to []byte and back because, since a string is immutable and a []byte isn't, the data must be copied to new memory upon conversion, taking O(n) time (both ways!), whereas slicing a string simply returns a new string header backed by the same array as the original (taking constant time).
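As a rough check of that behaviour, a small test (placed in the same hypothetical httputil package as the function above) can confirm that every chunk stays within the limit and is independently valid UTF-8:
package httputil

import (
    "strings"
    "testing"
    "unicode/utf8"
)

func TestSplitHeader(t *testing.T) {
    long := strings.Repeat("héllo wörld ", 1000) // multi-byte characters on purpose
    for i, part := range SplitHeader(long) {
        // no chunk exceeds maxLen bytes and none ends mid-rune
        if len(part) > maxLen || !utf8.ValidString(part) {
            t.Fatalf("chunk %d: len=%d valid=%v", i, len(part), utf8.ValidString(part))
        }
    }
}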

Hash collisions for golang built-in map and string keys?

I wrote this function to generate random unique IDs for my test cases:
func uuid(t *testing.T) string {
    uidCounterLock.Lock()
    defer uidCounterLock.Unlock()
    uidCounter++
    //return "[" + t.Name() + "|" + strconv.FormatInt(uidCounter, 10) + "]"
    return "[" + t.Name() + "|" + string(uidCounter) + "]"
}

var uidCounter int64 = 1
var uidCounterLock sync.Mutex
In order to test it, I generate a bunch of values from it in different goroutines and send them to the main goroutine, which puts the results in a map[string]int by doing map[v] = map[v] + 1. There is no concurrent access to this map; it's private to the main goroutine.
var seen = make(map[string]int)
for v := range ch {
    seen[v] = seen[v] + 1
    if count := seen[v]; count > 1 {
        fmt.Printf("Generated the same uuid %d times: %#v\n", count, v)
    }
}
When I just cast the uidCounter to a string, I get a ton of collisions on a single key. When I use strconv.FormatInt, I get no collisions at all.
When I say a ton, I mean I just got 1115919 collisions for the value [TestUuidIsUnique|�] out of 2227980 generated values, i.e. 50% of the values collide on the same key. The values are not equal. I do always get the same number of collisions for the same source code, so at least it's somewhat deterministic, i.e. probably not related to race conditions.
I'm not surprised integer overflow in a rune would be an issue, but I'm nowhere near 2^31, and that wouldn't explain why the map thinks 50% of the values have the same key. Also, I wouldn't expect a hash collision to impact correctness, just performance, since I can iterate over the keys in a map, so the values are stored there somewhere.
In the output, all runes printed are 0xEFBFBD. It's the same number of bits as the highest valid unicode code point, but that doesn't really match either.
Generated the same uuid 2 times: "[TestUuidIsUnique|�]"
Generated the same uuid 3 times: "[TestUuidIsUnique|�]"
Generated the same uuid 4 times: "[TestUuidIsUnique|�]"
Generated the same uuid 5 times: "[TestUuidIsUnique|�]"
...
Generated the same uuid 2047 times: "[TestUuidIsUnique|�]"
Generated the same uuid 2048 times: "[TestUuidIsUnique|�]"
Generated the same uuid 2049 times: "[TestUuidIsUnique|�]"
...
What's going on here? Did the go authors assume that hash(a) == hash(b) implies a == b for strings? Or am I just missing something silly? go test -race isn't complaining either.
I'm on macOS 10.13.2, and go version go1.9.2 darwin/amd64.
String conversion of an invalid rune returns a string containing the Unicode replacement character: "�".
Use the strconv package to convert an integer to its decimal text representation.
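A small sketch of that difference (the values below are chosen only to illustrate the point, not taken from the test run): every invalid code point converts to the same replacement character, so distinct counters can end up as identical map keys, whereas strconv always produces distinct text.
package main

import (
    "fmt"
    "strconv"
)

func main() {
    for _, n := range []int64{65, 0xD800, 0xDFFF, 0x110000} {
        // string(rune(n)) builds a one-rune string; surrogates and out-of-range
        // values all become "\uFFFD", while strconv.FormatInt stays unambiguous.
        fmt.Printf("%-8d string(rune(n)): %q   strconv.FormatInt: %q\n",
            n, string(rune(n)), strconv.FormatInt(n, 10))
    }
}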

What does this extra '+' represent in this code? Recursive function

Problem:
A digital root is the recursive sum of all the digits in a number. Given n, take the sum of the digits of n. If that value has two digits, continue reducing in this way until a single-digit number is produced. This is only applicable to the natural numbers.
example:
digital_root(16)
=> 1 + 6
=> 7
Here is a function that was written for it:
function digital_root(n) {
  if (n < 10) {
    return n;
  }
  return digital_root( n.toString().split('').reduce( function (a, b) {
    return a + +b;
  }, 0));
}
Can someone clarify what the extra + is doing in this line of code? return a + +b;
It's probably a sneaky way of converting a string to an integer. You don't say what language this is, but many dynamic languages allow variables to be any type without declaration and use + for both addition and string concatenation, with implicit conversions between strings and numbers. Such languages make it easy to accidentally get the wrong thing (concatenating when you intend to add, or vice versa).
However, using a unary + is (usually) a numeric identity, which will convert its argument to a number (if it happens to be a string -- it does nothing if the argument is already a number). So then the binary + will be add rather than concatenate.
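For contrast, here is a rough Go version of the same digit-summing step (a sketch only; the snippet above is from a dynamic language, not Go). With static typing, the string-to-number conversion that the unary + performs implicitly has to be written out explicitly:
package main

import (
    "fmt"
    "strconv"
)

// digitalRoot repeatedly sums the decimal digits of n until one digit remains.
func digitalRoot(n int) int {
    if n < 10 {
        return n
    }
    sum := 0
    for _, ch := range strconv.Itoa(n) {
        d, _ := strconv.Atoi(string(ch)) // the explicit counterpart of "+b" above
        sum += d
    }
    return digitalRoot(sum)
}

func main() {
    fmt.Println(digitalRoot(16)) // 7
}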

What's wrong with groovy math?

This seems quite bizarre to me and puts me squarely on the side of people willing to use plain Java. While writing a Groovy-based app I encountered the following:
int filesDaily1 = (item.filesDaily ==~ /^[0-9]+$/) ?
    Integer.parseInt(item.filesDaily) : item.filesDaily.substring(0, item.filesDaily.indexOf('.'))
def filesDaily = (item.filesDaily ==~ /^[0-9]+$/) ?
    Integer.parseInt(item.filesDaily) : item.filesDaily.substring(0, item.filesDaily.indexOf('.'))
So, knowing that item.filesDaily is a String with value '1..*', how can it possibly be that filesDaily1 is equal to 49 and filesDaily is equal to 1?
What's more is that when trying to do something like
int numOfExpectedEntries = filesDaily * item.daysToCheck
an exception is thrown saying that
Cannot cast object '111' with class 'java.lang.String' to class 'int'
pointing to that exact line of code with multiplication. How can that happen?
You're assigning this value to an int:
item.filesDaily.substring(0, item.filesDaily.indexOf('.'))
I'm guessing that Groovy is converting the single-character string "1" into the char '1' and then taking the Unicode value in the normal char-to-int conversion... so you end up with the value 49.
If you want to parse a string as a decimal number, use Integer.parseInt instead of a built-in conversion.
The difference between filesDaily1 and filesDaily here is that you've told Groovy that filesDaily1 is meant to be an int, so it's applying a conversion to int. I suspect that filesDaily is actually the string "1" in your test case.
I suspect you really just want to change the code to something like:
String text = (item.filesDaily ==~ /^[0-9]+$/) ? item.filesDaily :
    item.filesDaily.substring(0, item.filesDaily.indexOf('.'))
Integer filesDaily = text.toInteger()
This is a bug in the groovy type conversion code.
int a = '1'
int b = '11'
return different results because different converters are used. In the example a will be 49 while b will be 11. Why?
The single-character-to-int conversion (using String.charAt(0)) has a higher precedence than the integer parser.
The bad news is that this happens for all single character strings. You can even do int a = 'A' which gives you 65.
As long as you have no way of knowing how long the string is, you must use Integer.parseInt() instead of relying on the automatic type conversion.
