Recursive macro causes infinite recursion

I made a simple macro that returns the parameter it takes:
macro_rules! n {
    ($n:expr) => {{
        let val: usize = $n;
        match val {
            0 => 0,
            _ => n!(val - 1),
        }
    }};
}
When I compile this code with the option external-macro-backtrace, it raises an error:
error: recursion limit reached while expanding the macro `n`
--> src/main.rs:15:18
|
10 | macro_rules! n {
| _-
| |_|
| |
11 | | ($n:expr) => {{
12 | | let val: usize = $n;
13 | | match val {
14 | | 0 => 0,
15 | | _ => n!(val - 1),
| | ^^^^^^^^^^^
| | |
| | in this macro invocation
16 | | }
17 | | }};
18 | | }
| | -
| |_|
| |_in this expansion of `n!`
| in this expansion of `n!`
...
31 | | n!(1);
| | ------ in this macro invocation
|
= help: consider adding a `#![recursion_limit="128"]` attribute to your crate
I raised the recursion_limit to 128 and higher, but the limit in the compiler's error message just increases along with it. Even when I call n!(0) I get the same error. I think it is infinite recursion, but I can't find the reason.

Well, it really is an infinite recursion. Check what your macro invocation n!(0) will be expanded into:
{
    let val: usize = 0;
    match val {
        0 => 0,
        _ => n!(0 - 1),
    }
}
...and since there is no way for the argument of n! to stop growing, it will repeat (with n!(0 - 1 - 1) in the second match arm, then n!(0 - 1 - 1 - 1), and so on) forever.
The key point here is that macro expansion happens at compile time, while the match statement you're trying to use to limit the recursion runs only at run time and can't stop anything from being generated before that. Unfortunately, there's no easy way to do this, since Rust won't evaluate macro arguments before expansion (even if they are constant expressions), so just adding a (0) => {0} branch to the macro won't work either: the macro will be invoked as (for example) n!(1 - 1), not n!(0).
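If all you really need is the countdown, one workaround (a minimal sketch, not from the original answer; countdown is just an illustrative name) is to move the recursion into an ordinary function, where the match genuinely is evaluated at run time and can stop it:
// Sketch: an ordinary function recurses at run time, so the `match`
// can actually terminate the recursion.
fn countdown(val: usize) -> usize {
    match val {
        0 => 0,
        _ => countdown(val - 1),
    }
}

fn main() {
    assert_eq!(countdown(1), 0); // finishes normally instead of hitting a recursion limit
}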

Related

Rust: expected type, found struct

I've got the following code:
use actix_service::Service;
use actix_web::{web, App, HttpServer, Responder};
use actix_router::{Path, Url};
use actix_web::dev::{ServiceRequest, ServiceResponse};
use actix_web::error::ResponseError;
use actix_web::{http, http::StatusCode, Error, HttpRequest, HttpResponse};
async fn greet(req: HttpRequest) -> impl Responder {
    let name = req.match_info().get("name").unwrap_or("World");
    format!("Hello {}!", &name)
}
#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        let app = App::new()
            .wrap_fn(|req, srv| {
                let passed: bool;
                // change to false to simulate a failed check
                let check = true;
                if *&req.path().contains("/item/") {
                    passed = check;
                } else {
                    passed = true;
                }
                let fresh_result = match passed {
                    true => {
                        let fut = srv.call(req);
                        Box::pin(async {
                            let result = fut.await?;
                            Ok(result)
                        })
                    }
                    false => Box::pin(async {
                        let result = req.into_response(
                            HttpResponse::Found()
                                .header(http::header::LOCATION, "/login")
                                .finish()
                                .into_body(),
                        );
                        Ok(result)
                    }),
                };
                async {
                    let last_outcome = fresh_result.await?;
                    Ok(last_outcome)
                }
            })
            .route("/", web::get().to(greet));
        return app;
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}
However, I get the following error:
110 | let fresh_result = match passed {
| ________________________________________-
111 | | true => {
112 | | let fut = srv.call(req);
113 | | Box::pin(
| _|_____________________________-
114 | | | async {
115 | | | let result = fut.await?;
116 | | | Ok(result)
117 | | | }
118 | | | )
| |_|_____________________________- this is found to be of type `std::pin::Pin<std::boxed::Box<impl core::future::future::Future>>`
... |
121 | / | Box::pin(
122 | | | async {
123 | | | let result = req.into_response(
124 | | | HttpResponse::Found()
... | |
129 | | | }
130 | | | )
| |_|_____________________________^ expected generator, found a different generator
131 | | }
132 | | };
| |_____________________- `match` arms have incompatible types
|
::: /Users/maxwellflitton/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/src/rust/src/libcore/future/mod.rs:55:43
|
55 | pub const fn from_generator<T>(gen: T) -> impl Future<Output = T::Return>
| ------------------------------- the found opaque type
|
= note: expected type `std::pin::Pin<std::boxed::Box<impl core::future::future::Future>>` (generator)
found struct `std::pin::Pin<std::boxed::Box<impl core::future::future::Future>>` (generator)
I am completely stuck on this. I don't know how to make both arms resolve to the same type. If there is no match statement and there is just a single async block, it all runs fine. This is middleware for an Actix-web server: I am trying to redirect the user if their credentials don't check out.
You use Box::pin to create the boxed futures, but Rust still thinks you want the impl Future<...> version (meaning the future is boxed on the heap but doesn't use dynamic dispatch). This is why the type in the error is Pin<Box<impl Future<...>>>, and because any two impl Future<...>s are distinct types, you get the expected/found error.
You need to tell Rust you are interested in dynamic dispatch (because you have two different futures that can be stored in the Box, and which one really is stored there is only known at runtime), for example by explicitly annotating the type of fresh_result like this:
use std::pin::Pin;
use std::future::Future;

let fresh_result: Pin<Box<dyn Future<Output = _>>> = match passed {
    // ...
};
Now you will get a new error, cannot infer type for type parameter E, which can be resolved by specifying the Output of the future as well, so the annotation becomes:
let fresh_result: Pin<Box<dyn Future<Output = Result<ServiceResponse, Error>>>> = match passed {
    // ...
};
which will compile successfully!
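The underlying issue can be reproduced without actix at all. Here is a minimal, self-contained sketch (not from the original answer; pick is a made-up function) showing why the dyn annotation lets two different async blocks share one type:
use std::future::Future;
use std::pin::Pin;

// Each async block is its own anonymous type. Without the `dyn Future`
// in the signature, the two arms would be two different
// `Pin<Box<impl Future>>` types and the match could not unify them.
fn pick(flag: bool) -> Pin<Box<dyn Future<Output = i32>>> {
    match flag {
        true => Box::pin(async { 1 }),
        false => Box::pin(async { 2 }),
    }
}

fn main() {
    let _fut = pick(true); // constructing the future is enough to type-check
}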

What would the conversion of this from EBNF to BNF be? Also, what is the leftmost derivation?

I need to convert this from EBNF to BNF.
<statement> ::= <ident> = <expr>
<statement> ::= IF <expr> THEN <statement> [ ELSE <statement> ] END
<statement> ::= WHILE <expr> DO <statement> END
<statement> ::= BEGIN <statement> {; <statement>} END
Also, I'm stuck on this one:
E -> E+T | E-T | T
T -> T*F | T/F | F
F -> (E) | VAR | INT
VAR -> a | b | c
INT -> 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
After modifying the grammar to add a ^ operator, what is the leftmost derivation that your grammar assigns to the expression a^2^b*(c+1)? You may find it convenient to sketch the parse tree for this expression first, and then work out the leftmost derivation from that.
I added G -> F^G | G and then got G 2 G b E as my answer but am not sure if that is correct.

Teensy pointer on input string

I'm currently a college student and decided to jump ahead of my programming class and have a little fun with pointers. This is supposed to take a specific serial input and change the state of three LEDs I have attached to the Teensy++ 2.0. However, it seems to just give me back the first input.
http://arduino.cc/en/Serial/ReadBytesUntil
This is my reference for readBytesUntil(). The input goes #,#,### (1,1,255 being an example).
I guess basically my question is: does readBytesUntil() deal with commas? And if so, what's going on here?
EDIT -- I asked my teacher and even he has no clue why it doesn't work.
char *dataFinder(char *str){
    while (*str != ','){
        str++;
    }
    str++;
    return str;
}

void inputDecoder(){
    str = incomingText;
    whichLED = *str;
    dataFinder(str);
    onoff = *str;
    dataFinder(str);
    powerLevel = *str;
}

void loop(){
    int length;
    if (Serial.available() > 0){ // this is basically: if something is typed in, do something.
        length = Serial.readBytesUntil(13, incomingText, 10); // reads what is typed in, and stores it in incomingText
        incomingText[length] = 0; // swapping out CR with null
        inputDecoder();
        //ledControl();
        Serial.print("Entered:");
        //incomingText[9]=0;
        Serial.println(incomingText); // here for testing, to show what values I'm getting back.
        Serial.println(whichLED);
        Serial.println(onoff);
        Serial.println(powerLevel);
    }
    delay(1000);
}
The str in inputDecoder() is the global str, and it is not the same variable as the str parameter in dataFinder(), which is local to that function.
Imagine this ASCII picture is the layout of memory:
  str
+-----+-----+-----+-----+     +-----+-----+-----+-----+-----+-----+-----+-----+
|  *  |     |     |     | ... |  1  |  ,  |  1  |  ,  |  2  |  5  |  5  | \n  |
+--|--+-----+-----+-----+     +-----+-----+-----+-----+-----+-----+-----+-----+
   |
   |
   \-----------------------------^
When you pass str to dataFinder(), it creates a copy of the pointer, which I'll call str':
  str         str'
+-----+-----+-----+-----+     +-----+-----+-----+-----+-----+-----+-----+-----+
|  *  |     |  *  |     | ... |  1  |  ,  |  1  |  ,  |  2  |  5  |  5  | \n  |
+--|--+-----+--|--+-----+     +-----+-----+-----+-----+-----+-----+-----+-----+
   |           \-----------------^
   |
   \-----------------------------^
When dataFinder() increments str, it is really altering str':
  str         str'
+-----+-----+-----+-----+     +-----+-----+-----+-----+-----+-----+-----+-----+
|  *  |     |  *  |     | ... |  1  |  ,  |  1  |  ,  |  2  |  5  |  5  | \n  |
+--|--+-----+--|--+-----+     +-----+-----+-----+-----+-----+-----+-----+-----+
   |           \-----------------------------^
   |
   \-----------------------------^
Then, when you return to inputDecoder(), you dereference str, which is still pointing at the start of the string.
You can either assign the value of str' back to the global str using:
str = dataFinder(str);
or change dataFinder() so that it takes no argument and advances the global str directly, so that no copy is made.

Z80 DAA instruction

Apologies for this seemingly minor question, but I can't seem to find the answer anywhere. I'm just coming up to implementing the DAA instruction in my Z80 emulator, and I noticed in the Zilog manual that it adjusts the accumulator for binary-coded decimal arithmetic. It says the instruction is intended to be run right after an addition or subtraction instruction.
My questions are:
what happens if it is run after another instruction?
how does it know what instruction preceded it?
I realise there is the N flag - but this surely wouldn't definitively indicate that the previous instruction was an addition or subtraction instruction?
Does it just modify the accumulator anyway, based on the conditions set out in the DAA table, regardless of the previous instruction?
Does it just modify the accumulator anyway, based on the conditions set out in the DAA table, regardless of the previous instruction?
Yes. The documentation is only telling you what DAA is intended to be used for. Perhaps you are referring to the table at this link:
--------------------------------------------------------------------------------
|           | C Flag  | HEX value in | H Flag | HEX value in | Number  | C flag|
| Operation | Before  | upper digit  | Before | lower digit  | added   | After |
|           | DAA     | (bit 7-4)    | DAA    | (bit 3-0)    | to byte | DAA   |
|------------------------------------------------------------------------------|
|           |    0    |     0-9      |   0    |     0-9      |   00    |   0   |
|   ADD     |    0    |     0-8      |   0    |     A-F      |   06    |   0   |
|           |    0    |     0-9      |   1    |     0-3      |   06    |   0   |
|   ADC     |    0    |     A-F      |   0    |     0-9      |   60    |   1   |
|           |    0    |     9-F      |   0    |     A-F      |   66    |   1   |
|   INC     |    0    |     A-F      |   1    |     0-3      |   66    |   1   |
|           |    1    |     0-2      |   0    |     0-9      |   60    |   1   |
|           |    1    |     0-2      |   0    |     A-F      |   66    |   1   |
|           |    1    |     0-3      |   1    |     0-3      |   66    |   1   |
|------------------------------------------------------------------------------|
|   SUB     |    0    |     0-9      |   0    |     0-9      |   00    |   0   |
|   SBC     |    0    |     0-8      |   1    |     6-F      |   FA    |   0   |
|   DEC     |    1    |     7-F      |   0    |     0-9      |   A0    |   1   |
|   NEG     |    1    |     6-F      |   1    |     6-F      |   9A    |   1   |
|------------------------------------------------------------------------------|
I must say, I've never seen a dafter instruction spec. If you examine the table carefully, you will see that the effect of the instruction depends only on the C and H flags and the value in the accumulator -- it doesn't depend on the previous instruction at all. Also, it doesn't divulge what happens if, for example, C=0, H=1, and the lower digit in the accumulator is 4 or 5. So you will have to treat it as a NOP in such cases, or generate an error message, or something.
Just wanted to add that the N flag is what they mean when they talk about the previous operation. Additions set N = 0, subtractions set N = 1. Thus the contents of the A register and the C, H and N flags determine the result.
The instruction is intended to support BCD arithmetic but has other uses. Consider this code:
and 15
add a,90h
daa
adc a,40h
daa
It ends up converting the lower 4 bits of the A register into the ASCII characters '0', '1', ..., '9', 'A', 'B', ..., 'F'. In other words, a binary-to-hexadecimal converter.
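To see why that works, here is a quick simulation of those five instructions (a sketch in Rust rather than Z80 code, not part of the original answer), using the add-mode adjustment rules from the table above:
// Simplified add-mode DAA (N = 0): add $06 if the low nibble is A-F or
// H is set; add $60 and set carry if A > $99 or C is set.
fn daa_add(a: u8, h: bool, c: bool) -> (u8, bool) {
    let mut adjust = 0u8;
    let mut carry = c;
    if (a & 0x0F) > 9 || h {
        adjust |= 0x06;
    }
    if a > 0x99 || c {
        adjust |= 0x60;
        carry = true;
    }
    (a.wrapping_add(adjust), carry)
}

// 8-bit add with carry-in, returning (result, half-carry, carry) like ADD/ADC.
fn add(a: u8, b: u8, carry_in: bool) -> (u8, bool, bool) {
    let cin = carry_in as u16;
    let sum = a as u16 + b as u16 + cin;
    let half = (a & 0x0F) as u16 + (b & 0x0F) as u16 + cin > 0x0F;
    (sum as u8, half, sum > 0xFF)
}

fn nibble_to_hex_ascii(n: u8) -> u8 {
    let a = n & 0x0F;                    // AND 15
    let (a, h, c) = add(a, 0x90, false); // ADD A,90h
    let (a, c) = daa_add(a, h, c);       // DAA
    let (a, h, c) = add(a, 0x40, c);     // ADC A,40h
    let (a, _) = daa_add(a, h, c);       // DAA
    a
}

fn main() {
    for n in 0..16u8 {
        print!("{}", nibble_to_hex_ascii(n) as char); // prints 0123456789ABCDEF
    }
    println!();
}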
I found this instruction rather confusing as well, but I found this description of its behavior from z80-heaven to be most helpful.
When this instruction is executed, the A register is BCD corrected using the contents of the flags. The exact process is the following: if the least significant four bits of A contain a non-BCD digit (i. e. it is greater than 9) or the H flag is set, then $06 is added to the register. Then the four most significant bits are checked. If this more significant digit also happens to be greater than 9 or the C flag is set, then $60 is added.
This provides a simple pattern for the instruction:
if the lower 4 bits form a number greater than 9 or H is set, add $06 to the accumulator
if the upper 4 bits form a number greater than 9 or C is set, add $60 to the accumulator
Also, while DAA is intended to be run after an addition or subtraction, it can be run at any time.
This is code in production; it implements DAA correctly and passes the zexall/zexdoc/z80test Z80 opcode test suites.
Based on The Undocumented Z80 Documented, pages 17-18.
void daa()
{
    int t;

    t = 0;

    // 4 T states
    T(4);

    if (flags.H || ((A & 0xF) > 9))
        t++;

    if (flags.C || (A > 0x99))
    {
        t += 2;
        flags.C = 1;
    }

    // builds final H flag
    if (flags.N && !flags.H)
        flags.H = 0;
    else
    {
        if (flags.N && flags.H)
            flags.H = (((A & 0x0F)) < 6);
        else
            flags.H = ((A & 0x0F) >= 0x0A);
    }

    switch (t)
    {
        case 1:
            A += (flags.N) ? 0xFA : 0x06;  // -6:6
            break;
        case 2:
            A += (flags.N) ? 0xA0 : 0x60;  // -0x60:0x60
            break;
        case 3:
            A += (flags.N) ? 0x9A : 0x66;  // -0x66:0x66
            break;
    }

    flags.S = (A & BIT_7);
    flags.Z = !A;
    flags.P = parity(A);
    flags.X = A & BIT_5;
    flags.Y = A & BIT_3;
}
For visualising the DAA interactions, for debugging purposes, I have written a small Z80 assembly program that can be run on an actual ZX Spectrum or in an emulator that emulates DAA accurately: https://github.com/ruyrybeyro/daatable
As for how it behaves, here is a table of the flags N, C, H and register A before and after DAA, produced with the aforementioned assembly program: https://github.com/ruyrybeyro/daatable/blob/master/daaoutput.txt

Find out the language generated, given a context-free grammar?

Should I manually apply the production rules to find out the language generated by this grammar? That is tedious; is there any trick/tip to speed things up?
G = {{S, B}, {a, b}, P, S}
P = {S -> aSa | aBa, B -> bB | b}
EDIT: I found Matajon's answer a good one, that is, thinking about the language generated by each non-terminal symbol and then combining them.
But I'm still stuck when I have to solve some complicated examples like this:
G = {{S, R, T}, {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, P, S}
P = {S -> A | AS | BR | CT,
R -> AR | BT | C | CS,
T -> AT | B | BS | CR,
A -> 0 | 3 | 6 | 9,
B -> 1 | 4 | 7,
C -> 2 | 5 | 8}
Crazy, isn't it? Taken from past exams (programming languages course).
I don't know any general trick, but usually it helps to think about the language generated from each non-terminal.
In your example, the language generated from B is obviously L(B) = {b}^+. Then you think about the S rules: using the first rule, you can generate sentential forms {a^n.S.a^n | n >= 1}. If you use the second rule on these sentential forms, or on S alone, you can generate sentential forms {a^n.B.a^n | n >= 1}.
The rest is pretty easy: you combine these two things and get L(G) = {a^n.b^+.a^n | n >= 1}.
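As a sanity check (a small sketch, not part of the original answer), a direct membership test for the derived language is easy to write:
// Checks whether a string has the shape a^n b^+ a^n with n >= 1,
// i.e. belongs to the L(G) derived above.
fn in_language(s: &str) -> bool {
    let bytes = s.as_bytes();
    let lead = bytes.iter().take_while(|&&c| c == b'a').count();
    let trail = bytes.iter().rev().take_while(|&&c| c == b'a').count();
    if lead == 0 || lead != trail || lead + trail >= bytes.len() {
        return false;
    }
    // everything between the two blocks of a's must be one or more b's
    bytes[lead..bytes.len() - trail].iter().all(|&c| c == b'b')
}

fn main() {
    assert!(in_language("aba"));
    assert!(in_language("aabbbaa"));
    assert!(!in_language("aabaaa")); // unbalanced a-blocks
    assert!(!in_language("aaaa"));   // no b's in the middle
}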
By the way, in the definition of a grammar, the terminals and nonterminals are sets, not tuples, and the third component is the production rules, not the start symbol. So you should write G = {{S, B}, {a, b}, P, S}.
Edit
Actually, there is a way to solve your second example without much thinking, just by following something like a cookbook, because the language generated by your second context-free grammar is in fact regular.
When you substitute the rules for A, B and C into the first three rules, you get
P' = {S -> 0 | 3 | 6 | 9 | 0S | 3S | 6S | 9S | 1R | 4R | 7R | 2T | 5T | 8T
R -> 0R | 3R | 6R | 9R | 1T | 4T | 7T | 2 | 5 | 8 | 2S | 5S | 8S
T -> 0T | 3T | 6T | 9T | 1 | 4 | 7 | 1S | 4S | 7S | 2R | 5R | 8R}
And P' is a regular grammar. Because of that, you can convert it to a nondeterministic finite automaton (there is a really simple way, look it up) and then convert the resulting NFA to a regular expression (this is not so simple, but if you follow an algorithm and don't get lost, you should be OK). And from the regular expression it is easy to tell what language it describes.
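To make that step concrete, here is a small sketch (in Rust, not part of the original answer) of the grammar-to-NFA construction for P': each nonterminal becomes a state, a production like R -> 1T becomes a transition from R to T on 1, and a terminal-only production like R -> 2 becomes a transition into an extra final state F:
use std::collections::HashSet;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum State { S, R, T, F }

// Transitions of the NFA built from P'. Digits are grouped by their
// class: A = {0,3,6,9}, B = {1,4,7}, C = {2,5,8}, i.e. by digit % 3.
fn step(state: State, digit: u32) -> Vec<State> {
    use State::*;
    match (state, digit % 3) {
        (S, 0) => vec![S, F], // S -> A | AS
        (S, 1) => vec![R],    // S -> BR
        (S, 2) => vec![T],    // S -> CT
        (R, 0) => vec![R],    // R -> AR
        (R, 1) => vec![T],    // R -> BT
        (R, 2) => vec![S, F], // R -> C | CS
        (T, 0) => vec![T],    // T -> AT
        (T, 1) => vec![S, F], // T -> B | BS
        (T, 2) => vec![R],    // T -> CR
        (F, _) => vec![],     // the final state has no outgoing transitions
        _ => unreachable!(),
    }
}

// A word is accepted if F is reachable after reading all of its digits.
fn accepts(word: &str) -> bool {
    let mut current: HashSet<State> = HashSet::from([State::S]);
    for ch in word.chars() {
        let d = ch.to_digit(10).expect("digits only");
        current = current.into_iter().flat_map(|s| step(s, d)).collect();
    }
    current.contains(&State::F)
}

fn main() {
    // Spot-check a few words against whatever characterization you
    // derive for the homework part.
    for w in ["9", "12", "147", "25", "111"] {
        println!("{}: {}", w, accepts(w));
    }
}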
Also, once you have the NFA for this language, you can look at it and determine what it does logically (it has something to do with the counts of 1, 4, 7 and of 2, 5, 8 in the word, and their difference mod 3. Think it through, it is your homework, after all :-) ).
Of course, if you don't have a context-free grammar that generates a regular language, you can't use this trick. There is no general way to tell what language a grammar generates (the language equality problem for CFGs is undecidable); you have to think about every single example and look for similarities and patterns in its logical structure.
I think you'll just need to apply the production rules.
