New to posting here on Stack Overflow, so please forgive any transgressions.
So a little background....
My grandfather is a current computer science professor at a university. I have always taken a great interest in computers, and have really grown into dealing with the hardware side of things. However, him being how he is, he wants me to have a broader understanding of computing in general, including coding/programming.
So, to my question: he has given me a key, P5FW-93F6. He told me that if I am able to make other keys "with the same value", he will give me a reward. As I try to solve this problem, I haven't a clue where to start. In the beginning, I entered the key into Excel, followed the pattern of letter, number, letter, letter, etc., and used the random value function. However, none of these keys work in his program. He told me there is a massive number of different keys that will work, but he will not provide hints on how to solve the problem. What language should I learn to solve this? Should I be looking for a hash value that matches the one for the key listed above? I am completely lost... any help would be appreciated!
Thanks!!
P.S. I do have an unlimited number of attempts; however, I can only enter one key at a time, so I can't make batch entries.
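Edit: to make my question concrete, here is a toy sketch in Python of the kind of rule I imagine might be behind such keys, where the part after the dash is a checksum of the part before it. The checksum function is completely made up for illustration; I have no idea what his program actually checks.

import itertools
import string

# Hypothetical checksum rule -- NOT the actual scheme behind P5FW-93F6,
# just one way a key's second half could be derived from its first half.
ALPHABET = string.ascii_uppercase + string.digits  # base-36 "digits"

def checksum(left):
    total = sum(ALPHABET.index(c) for c in left)
    out = []
    for _ in range(4):                 # render the sum as 4 base-36 chars
        total, r = divmod(total, 36)
        out.append(ALPHABET[r])
    return "".join(reversed(out))

def is_valid(key):
    left, right = key.split("-")
    return checksum(left) == right

# Under this toy rule, every one of these generated keys is "valid":
candidates = ("".join(p) for p in itertools.product(ALPHABET, repeat=4))
for left in itertools.islice(candidates, 5):
    print(left + "-" + checksum(left))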
Please excuse my terminology, as I am still getting my head wrapped around everything. I am trying to put together my first parser and have been trying to find as many examples as I can while building my grammar. I've seen many situations where a non-terminal gets multiple productions
<F>::=(<E>)
<F>::=id
Which is the same as writing
<F>::=id|(<E>)
From everything I've read, this is perfectly fine. What I am looking to do is the following
<atsign>::=@
<expl>::=!
<special>::=!|@|#|$|%|^|&|*|(|)|+
Is there anything I need to be mindful of? Is my ordering correct for an LR parser? This isn't exactly homework as I am not in school at the moment, but it can be treated as such since I know this is a course I'll be taking in the future.
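For concreteness, here is how I picture those productions behaving, as a tiny hand-rolled Python sketch (the language is my choice, and <E> is a placeholder since I haven't defined it above). It shows that the two separate <F> productions and the single "|" form describe the same parse logic:

# parse_f implements  <F> ::= ( <E> ) | id  -- the same thing whether the
# productions are written on separate lines or joined with "|".
# Tokens are pre-split for simplicity.
def parse_f(tokens, pos=0):
    if tokens[pos] == "(":                      # first alternative: ( <E> )
        tree, pos = parse_e(tokens, pos + 1)
        assert tokens[pos] == ")", "expected ')'"
        return ("F", tree), pos + 1
    if tokens[pos].isidentifier():              # second alternative: id
        return ("F", tokens[pos]), pos + 1
    raise SyntaxError("unexpected token %r" % tokens[pos])

def parse_e(tokens, pos):
    # Placeholder: <E> is not defined above, so it just expands to <F> here.
    return parse_f(tokens, pos)

print(parse_f(["(", "x", ")"]))   # (('F', ('F', 'x')), 3)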
I am adding a feature to an application in which the students answer questions that are more descriptive in nature. I am curious to know if there's a way to make the system "smart" enough to grade these answers. Of course, I can run the answers through a set of keywords to ensure that the student has at least included the keywords in the answers, but obviously this is not smart enough.
I know there's no foolproof way of grading descriptive answers, but I was wondering if there are any technologies out there that I can look into.
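As a point of reference, the keyword baseline I describe might look like this minimal sketch (the rubric and weights are invented):

import re

def keyword_score(answer, keywords):
    """Fraction of total keyword weight present in the answer (0.0 to 1.0)."""
    words = set(re.findall(r"[a-z']+", answer.lower()))
    hit = sum(w for kw, w in keywords.items() if kw in words)
    return hit / sum(keywords.values())

# Invented rubric: keyword -> weight
rubric = {"photosynthesis": 2.0, "chlorophyll": 1.0, "sunlight": 1.0}
print(keyword_score("Plants use sunlight and chlorophyll to make food.", rubric))  # 0.5

Anything smarter (stemming, synonyms, semantic similarity) would build on top of a baseline like this.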
You could use Mechanical Turk, which is essentially an API for humans. That is probably as far as you can get with AI'ing your system: understanding and grading actual text is one of the last remaining problems where humans are way better than computers (i.e. computers suck).
One notable exception is Watson, which is actually really good at Jeopardy, but it runs on a huge computing cluster and includes some serious optimizations and smarts. That's nothing you just turn on. Sorry...
The answer is not so simple. There are "automated grading systems" out there, used, I believe, to grade GRE exams, for example. See this paper and this by ETS.
I have the responsibility of ensuring that a colleague who is just learning R knows the basics before a course where that is a requirement. The colleague has gone through a couple of tutorials so hopefully she is ok, but I would like to give her a test to gauge it.
I was therefore wondering if anyone knew of any materials on the web that would be suitable, and if possible had both questions and answers?
PS Cross-posted to r-help@stat.math.ethz.ch
There is a whole set of exercises with solutions from the book Data Analysis and Graphics Using R (Maindonald & Braun, 2nd edn, CUP 2007) available online: http://maths.anu.edu.au/~johnm/r-book/2edn/exercises/
Next to that, a quick search using the obscure randomized pagecollector Google brought me a set of exercises where you can pick out whatever you want. Try the magic phrase "R exercises". ;)
Some I found interesting :
http://www2.imperial.ac.uk/~das01/RCourse/Exercises.pdf (very nice)
http://dial.liacs.nl/Courses/MicroArrayDataAnalysis/Exercises/Introduction_to_R_Exercises_Nov_2004.pdf (rather basic)
http://www.shlrc.mq.edu.au/masters/students/raltwarg/altwargslp802.html (rather basic)
If this is just a one-time, single-person evaluation, then an oral-style exam is probably going to tell you a lot more than a set of fixed problems. Get a data set and have her read it into R, do some basic data manipulation, a couple of plots, and a standard analysis or two. Based on what she does well or struggles with, you can adjust the direction you take things and what additional questions you ask.
My teammates and I have a very challenging new project to do, and we are supposed to submit it next week. We don't have a single clue about how to do it, and really need help. We are undergraduate students, new to Information Retrieval and AI, and really need your ideas.
The project is roughly:
When an expert is cited in a document, find an expert with an opposing opinion & find out what he/she says about that topic.
We are free to use any programming language, but the programming itself is not our concern. We would like help getting started. Please give us a rough idea of how to design such a system and how to retrieve information from the internet. How should we find the cited expert's opinion, and then find an opposing one?
Simple: use Amazon's Mechanical Turk.
Without that (or an equivalent) you're in trouble. If there are no further constraints on the problem, then you will need a full-blown AI, the kind that doesn't yet exist. If there are severe constraints, then you might have a chance of doing this in a week. If the expert can be in any field (medicine, politics, history, fashion, science, comic books, etc.) then there will be no single, well-organized repository of essays. You'll have to use Google to find Dr. X's opinion. Once you find Dr. X's writing (and let's pray it's text, not audio), you'll have to do some kind of natural language processing to get the thrust of it, even if you're lucky enough to find a descriptive title ("Digital Photography Is Absolutely Great"). Then you have to figure out its opposite. What's the opposite of "Neil Gaiman draws on folklore for his story ideas"? Figuring out what opinion you're looking for will be a serious problem. After that, things actually get easier: you can google for the subject and use the same magic tools to find the opinion you're looking for.
So what do you have a chance of solving? A search for opinions that someone else has already organised into "pro" and "con". Some online political forums are organised that way. Wikipedia cites opposing views in a special section in some of its articles. Science journals print letters of rebuttal. Look around; you might find a site even more cut-and-dried. Choose a small enough arena and you'll have a tractable problem.
EDIT: Damn, Ben Dunlap beat me to all my major points in a comment. Sigh
Sounds like an NLP problem to me. As for the information about documents and cites, http://citeseerx.ist.psu.edu should be a good starting point.
For each paper, there are several citations which refer to it. At the very minimum, you would have to scan the abstract of the paper and those of the citing papers, and run your own algorithm to figure out whether any citation takes the opposing view. Maybe your professor can give you hints on an approximate heuristic, but as far as I know this is a really hard problem.
I would be watching this thread for more interesting approaches.
Automatically submit a Google search request similar to "expert_name sucks", "expert_name wrong", or something like that. Find the first result that has "PhD" with a document link in the same sentence and return the link.
I think you might be blowing this up a little too big... as an undergraduate project, I would approach it on a smaller scale.
Unless your specification says you must use actual internet resources, you would be better off creating your own database of custom short documents. Add metadata to each document stating the points it makes about certain topics.
Next, I would create a list of citations which link to each document, and add some metadata representing that expert's stance on the topic. When someone reads a document, I would augment the list of citations with links to documents that take alternative views on that topic.
Basically it would consist of these tables:
Document (id, data)
DocumentPoints (documentId, topic, stance)
Citation (documentId, topic, stance)
And when someone loads up a document, the citations are pulled up as well. For each citation, you search DocumentPoints for the same topic with a different stance, as in the sketch below. The most difficult part of this project would be creating the 5 or 6 documents you need to have data in your database. After that, the solution is trivial.
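Here's a quick sketch of that lookup using sqlite3 in Python; the table and column names follow the schema above, and the sample rows are invented:

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE Document       (id INTEGER PRIMARY KEY, data TEXT);
    CREATE TABLE DocumentPoints (documentId INTEGER, topic TEXT, stance TEXT);
    CREATE TABLE Citation       (documentId INTEGER, topic TEXT, stance TEXT);
""")
# Invented sample rows:
db.executemany("INSERT INTO DocumentPoints VALUES (?, ?, ?)", [
    (1, "digital photography", "pro"),
    (2, "digital photography", "con"),
])

def opposing_documents(topic, stance):
    """Documents on the same topic that take a different stance."""
    rows = db.execute(
        "SELECT documentId FROM DocumentPoints WHERE topic = ? AND stance != ?",
        (topic, stance),
    )
    return [r[0] for r in rows]

print(opposing_documents("digital photography", "pro"))  # [2]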
On a side note, most of these other answers are telling you to use some existing solution... don't do that unless the assignment tells you to. You'll be much better off understanding the problem and the various ways to solve it (this is definitely not the only or best one) if you work through the entire problem yourself. If the teacher asks for something not supported by whatever product you built your solution on, you won't be able to fix it. If you had written it yourself, you could just as easily implement the new spec.
I'm writing some children's Math Education software for a class.
I'm going to try to present randomly generated math problems of different types, in fun ways, to students of varying skill levels.
One of the frustrations of using computer-based math software is its rigidity. If you have ever taken an online math class, you'll know all about the frustration of taking an online quiz and having your correct answer thrown out because it isn't formatted exactly the way the software expects, or because of some weird spacing issue.
So, originally I thought, "I know! I'll use an expression parser on the answer box so I'll be able to evaluate anything they enter and even if it isn't in the same form I'll be able to check if it is the same answer." So I fire up my IDE and start implementing the Shunting Yard Algorithm.
This would solve the problem of it rejecting fractions that aren't in lowest terms, among other issues.
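As a rough sketch of what I mean (Python standing in for whatever the real implementation uses, with a simplified tokenizer and no unary minus):

import operator
import re

# Precedence and implementation for each operator.
OPS = {"+": (1, operator.add), "-": (1, operator.sub),
       "*": (2, operator.mul), "/": (2, operator.truediv)}

def evaluate(expr):
    """Shunting yard, going straight to a value instead of postfix output."""
    values, ops = [], []

    def apply():
        b, a = values.pop(), values.pop()
        values.append(OPS[ops.pop()][1](a, b))

    for tok in re.findall(r"\d+\.?\d*|[-+*/()]", expr):
        if tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":
                apply()
            ops.pop()                      # discard the "("
        elif tok in OPS:
            while ops and ops[-1] != "(" and OPS[ops[-1]][0] >= OPS[tok][0]:
                apply()
            ops.append(tok)
        else:
            values.append(float(tok))
    while ops:
        apply()
    return values[0]

# "2/5", "4/10" and "0.4" all compare equal once evaluated:
print(evaluate("(4/5)*(1/2)") == evaluate("4/10") == evaluate("0.4"))  # True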
However, it then hit me that a tricky student would simply be able to enter most of the problems into the answer box, and my expression parser would dutifully parse and evaluate them to the correct answer!
So, should I not be using an expression parser in this instance? Do I really have to generate a single form of the answer and do a string comparison?
One possible solution is to note how many steps your expression evaluator takes to evaluate the problem's original expression, and to compare this to the optimal answer. If there's too much difference, then the problem hasn't been reduced enough and you can suggest that the student keep going.
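As a narrow illustration of that idea, restricted to fractions (my simplification; the general case would count evaluator steps on arbitrary expressions):

from fractions import Fraction

def grade(expected, num, den):
    """expected: a Fraction; num/den: the fraction the student typed."""
    answer = Fraction(num, den)        # Fraction() normalises to lowest terms
    if answer != expected:
        return "wrong value"
    if (answer.numerator, answer.denominator) != (num, den):
        return "right value, but keep reducing"
    return "fully reduced"

expected = Fraction(4, 5) * Fraction(1, 2)
print(grade(expected, 4, 10))  # right value, but keep reducing
print(grade(expected, 2, 5))   # fully reduced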
Don't be surprised if students come up with better answers than your own definition of "optimal", though! I was a TA/grader for several classes, and the brightest students routinely had answers on their problem sets that were superior to the ones provided by the professor.
For simple problems where you're looking for an exact answer, then removing whitespace and doing a string compare is reasonable.
For more advanced problems, you might use the Shunting Yard Algorithm (or similar), but perhaps parametrize it so you can turn reductions on and off to guard against the tricky student. Notice that "simple" answers can still use the parser; you would just disable all reductions.
For example, on a division question, you'd disable the "/" reduction.
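A minimal sketch of that parametrisation, with Python's eval standing in for the real expression parser (purely an illustrative shortcut):

def check(answer, expected, banned=("/",)):
    """Evaluate the answer, but reject any banned operator outright."""
    if any(op in answer for op in banned):
        return False              # e.g. "/" disabled on a division question
    return abs(eval(answer) - expected) < 1e-9

print(check("0.4", 0.4))    # True
print(check("4/10", 0.4))   # False: the answer still contains a division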
This is a great question.
If you are writing an expression system and an evaluation/transformation/equivalence engine (isn't there one available somewhere? I am almost 100% sure there is an open-source one), then it's more of an education/algebra problem: is the student's answer algebraically closer to the original expression or to the expected expression?
I'm not sure how to answer that, but here's an idea (not necessarily practical): perhaps your evaluation engine can count transformation steps to equivalence. If the student's answer takes fewer steps to reach the expected expression than to reach the original, it might be OK. If it's too close to the original, it's not.
You could use an expression parser, but apply restrictions on the complexity of the expressions permitted in the answer.
For example, if the goal is to reduce (4/5)*(1/2) and you want to allow either (2/5) or (4/10), then you could restrict the set of allowable answers to expressions whose trees take the form (x/y) and which also evaluate to the correct number. Perhaps you would also allow "0.4", i.e. expressions of the form (x) which evaluate to the correct number.
This is exactly what you would (implicitly) be doing if you graded the problem manually -- you would be looking for an answer that is correct but which also falls into an acceptable class.
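A sketch of that restriction in Python (the allowed shapes and the regexes are illustrative assumptions):

import re
from fractions import Fraction

EXPECTED = Fraction(4, 5) * Fraction(1, 2)   # the worked example above

def is_acceptable(answer):
    answer = answer.replace(" ", "")
    m = re.fullmatch(r"\(?(\d+)/(\d+)\)?", answer)   # tree of form (x/y)
    if m:
        return Fraction(int(m.group(1)), int(m.group(2))) == EXPECTED
    if re.fullmatch(r"\d*\.?\d+", answer):           # tree of form (x), e.g. "0.4"
        return Fraction(answer) == EXPECTED
    return False

print(is_acceptable("2/5"), is_acceptable("4/10"), is_acceptable("0.4"))  # True True True
print(is_acceptable("(4/5)*(1/2)"))  # False: correct value, disallowed shape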
The usual way of doing this in mathematics assessment software is to allow the question setter to specify expressions/strings that are not allowed in a correct answer.
If you happen to be interested in existing software, there's the open-source Stack (http://www.stack.bham.ac.uk/) or various commercial options such as MapleTA. I suspect most of the problems you'll come across have also been encountered by Stack, so even if you don't want to use it, it might be educational to look at how it approaches things.