Calling a REST API in R

I recently discovered the DataForSEO API and tried to call it via R:
library(httr)
username <- 'mygmailadress#gmail.com'
password <- 'mypassword'
dataforseo_api <- POST('https://api.dataforseo.com/v2/op_tasks_post/$data',
authenticate(username,password),
body = list(grant_type = 'client_credentials'),
type = "basic",
verbose()
)
This is the message I have received:
<- HTTP/1.1 401 Unauthorized
<- Server: nginx/1.14.0 (Ubuntu)
<- Date: Sun, 08 Jul 2018 13:31:34 GMT
<- Content-Type: application/json
<- Transfer-Encoding: chunked
<- Connection: keep-alive
<- WWW-Authenticate: Basic realm="Rest Server"
<- Cache-Control: no-cache, must-revalidate
<- Expires: 0
<- Access-Control-Allow-Origin: *
<- Access-Control-Allow-Methods: POST, GET, OPTIONS
<- Access-Control-Allow-Headers: Content-Type, Access-Control-Allow-Headers, Authorization, X-Requested-With
Do you know where my issue comes from? Can you please help?

It looks like you're not setting up the config properly; I don't see a config= in your code. The body is also not encoded correctly.
Also, in the API documentation I don't see anything about grant_type. It looks like an array of tasks should go there, e.g. something like:
{882394209: {'site': 'ranksonic.com', 'crawl_max_pages': 10}}
Response:
{'results_count': 1, 'results_time': '0.0629 sec.', 'results': {'2308949': {'post_id': 2308949, 'post_site': 'ranksonic.com',
'task_id': 882394209, 'status': 'ok'}}, 'status': 'ok'}
OK, so first off we need set_config or config=:
username <- 'Hack-R#stackoverflow.com' # fake email
password <- 'vxnyM9s7FAKESeIO' # fake password
set_config(authenticate(username,password), override = TRUE)
GET("https://api.dataforseo.com/v2/cmn_se")
Response [https://api.dataforseo.com/v2/cmn_se]
Date: 2018-07-08 16:20
Status: 200
Content-Type: application/json
Size: 551 kB
{
"status": "ok",
"results_time": "0.0564 sec.",
"results_count": 2187,
"results": [
{
"se_id": 37,
"se_name": "google.com.af",
"se_country_iso_code": "AF",
"se_country_name": "Afghanistan",
...
GET("https://api.dataforseo.com/v2/cmn_se/$country_iso_code")
Response [https://api.dataforseo.com/v2/cmn_se/$country_iso_code]
Date: 2018-07-08 15:48
Status: 200
Content-Type: application/json
Size: 100 B
{
"status": "ok",
"results_time": "0.0375 sec.",
"results_count": 0,
"results": []
GET("https://api.dataforseo.com/v2/cmn_se/$op_tasks_post")
Response [https://api.dataforseo.com/v2/cmn_se/$op_tasks_post]
Date: 2018-07-08 16:10
Status: 200
Content-Type: application/json
Size: 100 B
{
"status": "ok",
"results_time": "0.0475 sec.",
"results_count": 0,
"results": []
That was one thing. Also, to POST data they need you to send it as JSON, e.g. encode = "json". From their docs:
All POST data should be sent in the JSON format (UTF-8 encoding). The
keywords are sent by POST method passing tasks array. The data should
be specified in the data field of this POST array. We recommend to
send up to 100 tasks at a time.
Further:
The task setting is done using POST method when array of tasks is sent to
the data field. Each of the array elements has the following
structure:
then it goes on to list 2 required fields and many optional ones.
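Putting those two fixes together, a minimal sketch of the POST might look like the following (assumptions on my part: the $data in your URL is a documentation placeholder rather than part of the path, and the task structure mirrors the example above, whose post_id key and field names I've reused):
library(httr)

username <- 'login'      # your DataForSEO credentials
password <- 'password'

# one task, keyed by your own post_id, nested under the "data" field the docs describe
task_list <- list(
  data = list(
    `882394209` = list(site = "ranksonic.com", crawl_max_pages = 10)
  )
)

res <- POST("https://api.dataforseo.com/v2/op_tasks_post",
            authenticate(username, password),
            body = task_list,
            encode = "json",   # httr serializes the body with jsonlite and sets the JSON content type
            verbose())
content(res)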
Note also that you can use reset_config() afterwards, as a better practice. If you're going to be running this a lot, sharing it, or using more than one computer, I would also suggest putting your credentials in environment variables instead of your script, for security and ease.
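For example (the variable names here are made up), keep them in ~/.Renviron and read them with Sys.getenv():
# in ~/.Renviron (restart R afterwards):
#   DATAFORSEO_USER=your_login
#   DATAFORSEO_PASS=your_password
username <- Sys.getenv("DATAFORSEO_USER")
password <- Sys.getenv("DATAFORSEO_PASS")
set_config(authenticate(username, password), override = TRUE)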
A final word of advice: you may want to just leverage their published Python client library and large compilation of examples. Since every new API request is something you'll be pioneering in R without their support, it may pay off to just do the data collection in Python.
This is an interesting API. If you get over to the Open Data Stack Exchange you should consider sharing it with that community.

Related

how to get data from the WTO API in R

library(httr)
library(jsonlite)
headers = c(
# Request headers
'Ocp-Apim-Subscription-Key' = '{subscription key}'
)
params = list()
# Request parameters
params['countries[]'] = '{array}'
resp <- GET(paste0("https://api.wto.org/tfad/transparency/procedures_contacts_single_window?"
, paste0(names(params),'=',params,collapse = "&")),
add_headers(headers))
if(!http_error(resp)){
jsonRespText<-fromJSON(rawToChar(content(resp,encoding = 'UTF-8')))$Dataset
jsonRespText
}else{
stop('Error in Response')
}
I don't know how to get a response from an API in R. I have executed this code, but the server is not responding...
If you examine the value of the resp object after running your code you'll notice a status code:
> resp
Response [https://tfadatabase.org/api/transparency/procedures_contacts_single_window?countries[]=%7Barray%7D]
Date: 2020-04-17 19:25
Status: 422
Content-Type: application/json
Size: 77 B
So the server actually did respond, it just didn't give you what you were hoping for. In the API documentation we can look up this code:
422 Unprocessable Entity
If a member cannot be found, or the request parameters are poorly
formed.
So I just went to the Query Builder and looked for a valid request URL and updated the code. It ran fine - i.e. Status 200.
This was the URL I used in the code:
https://api.wto.org/timeseries/v1/data?i=TP_A_0100&r=000&fmt=json&mode=full&lang=1&meta=false
and the value of resp was
Date: 2020-04-17 19:30
Status: 200
Content-Type: application/json; charset=utf-8
Size: 88 B
I cut out the subscription key in my results above. You can find the Query Builder here. Incidentally, in the Query Builder it automatically includes the subscription key and other "header" info in the URL. You can either remove that first and re-add it in your code, or just change your code to run GET() directly on their version of the URL.
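Putting that together, here is a minimal sketch (the header name is the same Ocp-Apim-Subscription-Key from the question; replace the placeholder with your own key):
library(httr)
library(jsonlite)

url  <- "https://api.wto.org/timeseries/v1/data?i=TP_A_0100&r=000&fmt=json&mode=full&lang=1&meta=false"
resp <- GET(url, add_headers('Ocp-Apim-Subscription-Key' = '{subscription key}'))
stop_for_status(resp)     # fails loudly on a 4xx/5xx instead of returning silently
wto_data <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))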

How to send POST to MongoDB Atlas using HTTPie?

I'm attempting to test my REST API by making a POST request to a MongoDB cloud Atlas DB server. I know Postman is available, but I wanted to use something different like Httpie. I already checked this question but I'm still stuck.
How to send a POST request using HTTPie?
I'm trying to get text='john smith'
when I use
`http -f POST :5000/api/posts text='john smith'`
I get this response.
HTTP/1.1 201 Created
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 0
Date: Tue, 19 Feb 2019 20:33:36 GMT
X-Powered-By: Express
But when I use...
http -f GET :5000/api/posts
I get back ...
[
{
"_id": "5c6c6820c2f6eb15ea9e8e08",
"createdAt": "2019-02-19T20:33:36.468Z",
"text": null
}
]
This is my Node.js API route for the POST:
router.post('/', async(req, res) => {
const posts = await loadPostCollection();
await posts.insertOne({
text: req.body.text,
createdAt: new Date()
});
res.status(201).send();
});
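A hedged guess at what is happening (an assumption, not something confirmed above): http -f sends application/x-www-form-urlencoded, so if the Express app only registers a JSON body parser, req.body.text is undefined and the document is stored with text: null. Dropping the -f flag makes HTTPie send JSON instead. For comparison with the rest of this page, the equivalent JSON POST in R with httr would be:
library(httr)

res <- POST("http://localhost:5000/api/posts",
            body = list(text = "john smith"),
            encode = "json")   # sends {"text":"john smith"} with Content-Type: application/json
status_code(res)               # expect 201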

ArangoDB can't send request with curl

I can't understand what I am doing wrong, but when I send the following request with curl, I get an error:
echo {"id":1,"question":"aaa"},{"id":2,"question":"bbb?"} | curl -X POST --data-binary #- --dump - http://localhost:8529/_db/otest/_api/document/?collection=sitetestanswers
HTTP/1.1 100 (Continue)
HTTP/1.1 400 Bad Request
Server: ArangoDB
Connection: Keep-Alive
Content-Type: application/json; charset=utf-8
Content-Length: 100
{"error":true,"errorMessage":"failed to parse json object: expecting EOF","code":400,"errorNum":600}
Any ideas? I tried wrapping it in [...], but nothing helped.
With [...] a validator marks this as valid.
Same with D. Here is my code:
void sendQuestionsToArangoDB(Json questions)
{
string collectionUrl = "http://localhost:8529/_db/otest/_api/document/?collection=sitetestanswers";
auto rq = Request();
rq.verbosity = 2;
string s = `{"id":"1","question":"foo?"},{"id":2}`;
auto rs = rq.post(collectionUrl, s, "application/json");
writeln("SENDED");
}
--
POST /_db/otest/_api/document/?collection=sitetestanswers HTTP/1.1
Content-Length: 37
Connection: Close
Host: localhost:8529
Content-Type: application/json
HTTP/1.1 400 Bad Request
Server: ArangoDB
Connection: Close
Content-Type: application/json; charset=utf-8
Content-Length: 100
100 bytes of body received
For D I use this lib: https://github.com/ikod/dlang-requests
Same issue with vibed.
ArangoDB does not understand the JSON if it comes as a bare array like [...]. It should be passed as key-value, so if you need to pass an array it should have a key, e.g. mykey: [].
Here is working code:
import std.stdio;
import requests.http;
void main(string[] args)
{
string collectionUrl = "http://localhost:8529/_db/otest/_api/document?collection=sitetestanswers";
auto rq = Request();
rq.verbosity = 2;
string s = `{"some_data":[{"id":1, "question":"aaa"},{"id":2, "question":"bbb"}]}`;
auto rs = rq.post(collectionUrl, s, "application/json");
writeln("SENDED");
}
otest - DB name
sitetestanswers - collection name (should be created in DB)
echo '[{"id":1,"question":"aaa"},{"id":2,"question":"bbb?"}]'
should do the trick. You need to put ticks around the JSON. The array brackets are necessary otherwise this is not valid JSON.
You are trying to send multiple documents. The data in the original question separates the documents by comma ({"id":1,"question":"aaa"},{"id":2,"question":"bbb?"}) which is invalid JSON. Thus the failed to parse json object answer from ArangoDB.
Putting the documents into square brackets ([ ... ]) as some of the commenters suggested will make the request payload valid JSON again.
However, you're sending the data to a server endpoint that handles a single document. The API for POST /_api/document/?collection=... currently accepts a single document at a time. It does not work with multiple documents in a single request. It expects a JSON object, and whenever it is sent something different it will respond with an error code.
If you're looking for batch inserts, please try the API POST /_api/import, described in the manual here: https://docs.arangodb.com/HttpBulkImports/ImportingSelfContained.html
This will work with multiple documents in a single request. ArangoDB 3.0 will also allow sending multiple documents to the POST /_api/document?collection=... API, but this version is not yet released. A technical preview will be available soon however.
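Since this page is otherwise about calling REST APIs from R, here is a minimal httr sketch of the single-document POST described above (assumptions: a local ArangoDB on port 8529 with the otest database and the sitetestanswers collection, and no authentication configured):
library(httr)
library(jsonlite)

url <- "http://localhost:8529/_db/otest/_api/document"
doc <- list(id = 1, question = "aaa")   # one JSON object per request for this endpoint

res <- POST(url,
            query = list(collection = "sitetestanswers"),
            body = toJSON(doc, auto_unbox = TRUE),
            content_type_json())
stop_for_status(res)
content(res)   # the response includes the new document's _id, _key and _rev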

Dealing with gzip encoded GET/OAUTH response in R

I'm new to both R and OAuth. I've learned a little using the Coursera examples on the GitHub API, where the OAuth request gave a plaintext response, but now I'm trying to do something that is practical for me and access the EVE-Online CREST OAuth API. Instead of what I got when I tried the GitHub API (I'm using the "httr" library):
Response [https://api.github.com/users/jtleek/repos]
Date: 2014-12-14 08:57
Status: 200
Content-type: application/json; charset=utf-8
Size: 154 kB
[
{
"id": 12441219,
"name": "ballgown",
"full_name": "jtleek/ballgown",
"owner": {
"login": "jtleek",
"id": 1571674,
"avatar_url": "https://avatars.githubusercontent.com/u/1571674?v=3",
"gravatar_id": "",
...
I got this BINARY BODY response:
Response [https://crest-tq.eveonline.com/market/10000002/orders/buy/?type=https://crest-tq.eveonline.com/types/185/]
Date: 2014-12-14 08:05
Status: 200
Content-type: application/vnd.ccp.eve.MarketOrderCollection-v1+json; charset=utf-8
Size: 7.61 kB
<BINARY BODY>
And frankly I have no idea what to do with it. I'm pretty sure it's gzip (I used the Chrome extension Postman to access the same information and the header says it's encoded with gzip), but I don't know how to uncompress it. Maybe there is a standard way of dealing with a binary/gzip response, but my google-fu has failed me.
Here is the exact code I'm running:
library(httr)
myapp <- oauth_app("my app name redacted", "my id redacted", "my secret redacted")
eve_token <- oauth2.0_token(oauth_endpoint(authorize = "https://login-tq.eveonline.com/oauth/authorize/",access = "https://login-tq.eveonline.com/oauth/token/"), myapp, scope = "publicData")
token <- config(token = eve_token)
req <- GET("https://crest-tq.eveonline.com/market/10000002/orders/buy/?type=https://crest-tq.eveonline.com/types/185/", token)
EDIT:
YES!!! :)
I managed to figure it out :)
result <- content(req, type = "application/json; charset=utf-8")
While the regular content(req) produced just raw binary data, the above translated it to JSON :)
Like I wrote above, what I needed to do was pass more information about the content type and encoding used to the content() function, like this:
result <- content(req, type = "application/json; charset=utf-8")
The gzip part, as it turned out, was handled automagically; the issue was the strange content type used by the EVE API. When I explicitly passed the desired content type, R was able to read the data as JSON without a problem.
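Equivalently (a sketch, not part of the original post), you can pull the body out as text yourself and hand it to jsonlite, which sidesteps the vendor-specific content type entirely:
library(jsonlite)

txt    <- content(req, as = "text", encoding = "UTF-8")   # httr handles the gzip decompression
orders <- fromJSON(txt)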

Debugging RCurl-based authentication & form submission

SourceForge Research Data Archive (SRDA) is one of the data sources for my dissertation research. I'm having difficulty debugging the following issue related to SRDA data collection.
Data collection from SRDA requires authentication and then submitting a Web form with an SQL query. Upon successful processing of the query, the system generates a text file with the query results. While testing my R code for SRDA data collection, I've changed the SQL request to make sure that the results file is being regenerated. However, I've discovered that the file contents stay the same (they correspond to the previous query). I think that the lack of refresh of the file contents could be due to a failure of either authentication or query form submission. The following is the debug output from the code (https://github.com/abnova/diss-floss/blob/master/import/getSourceForgeData.R):
make importSourceForge
Rscript --no-save --no-restore --verbose getSourceForgeData.R
running
'/usr/lib/R/bin/R --slave --no-restore --no-save --no-restore --file=getSourceForgeData.R'
Loading required package: RCurl
Loading required package: methods
Loading required package: bitops
Loading required package: digest
Retrieving SourceForge data...
Checking request "SELECT *
FROM sf1104.users a, sf1104.artifact b
WHERE a.user_id = b.submitted_by AND b.artifact_id = 304727"...
* About to connect() to zerlot.cse.nd.edu port 80 (#0)
* Trying 129.74.152.47... * connected
> POST /mediawiki/index.php?title=Special:Userlogin&action=submitlogin&type=login HTTP/1.1
Host: zerlot.cse.nd.edu
Accept: */*
Content-Length: 37
Content-Type: application/x-www-form-urlencoded
* upload completely sent off: 37 out of 37 bytes
< HTTP/1.1 200 OK
< Date: Tue, 11 Mar 2014 03:49:04 GMT
< Server: Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.25 with Suhosin-Patch
< X-Powered-By: PHP/5.2.4-2ubuntu5.25
* Added cookie wiki_db_session="c61...a3c" for domain zerlot.cse.nd.edu, path /, expire 0
< Set-Cookie: wiki_db_session=c61...a3c; path=/
< Content-language: en
< Vary: Accept-Encoding,Cookie
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Cache-Control: private, must-revalidate, max-age=0
< Transfer-Encoding: chunked
< Content-Type: text/html; charset=UTF-8
<
* Connection #0 to host zerlot.cse.nd.edu left intact
[1] "Before second postForm()"
* Re-using existing connection! (#0) with host zerlot.cse.nd.edu
* Connected to zerlot.cse.nd.edu (129.74.152.47) port 80 (#0)
> POST /cgi-bin/form.pl HTTP/1.1
Host: zerlot.cse.nd.edu
Accept: */*
Cookie: wiki_db_session=c61...a3c
Content-Length: 129
Content-Type: application/x-www-form-urlencoded
* upload completely sent off: 129 out of 129 bytes
< HTTP/1.1 500 Internal Server Error
< Date: Tue, 11 Mar 2014 03:49:04 GMT
< Server: Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.25 with Suhosin-Patch
< Vary: Accept-Encoding
< Connection: close
< Transfer-Encoding: chunked
< Content-Type: text/html
<
* Closing connection #0
Error: Internal Server Error
Execution halted
make: *** [importSourceForge] Error 1
I've tried to figure this out using the debug output as well as the network protocol analyzer in Firefox's embedded Developer Tools, but so far without much success. I would appreciate any advice and help.
UPDATE:
if (!require(RCurl)) install.packages('RCurl')
if (!require(digest)) install.packages('digest')
library(RCurl)
library(digest)
# Users must authenticate to access Query Form
SRDA_HOST_URL <- "http://zerlot.cse.nd.edu"
SRDA_LOGIN_URL <- "/mediawiki/index.php?title=Special:Userlogin"
SRDA_LOGIN_REQ <- "&action=submitlogin&type=login"
# SRDA URL that Query Form sends POST requests to
SRDA_QUERY_URL <- "/cgi-bin/form.pl"
# SRDA URL that Query Form sends POST requests to
SRDA_QRESULT_URL <- "/qresult/blekh/blekh.txt"
# Parameters for result's format
DATA_SEP <- ":" # data separator
ADD_SQL <- "1" # add SQL to file
curl <<- getCurlHandle()
srdaLogin <- function (loginURL, username, password) {
curlSetOpt(curl = curl, cookiejar = 'cookies.txt',
ssl.verifyhost = FALSE, ssl.verifypeer = FALSE,
followlocation = TRUE, verbose = TRUE)
params <- list('wpName1' = username, 'wpPassword1' = password)
if(url.exists(loginURL)) {
reply <- postForm(loginURL, .params = params, curl = curl,
style = "POST")
#if (DEBUG) print(reply)
info <- getCurlInfo(curl)
return (ifelse(info$response.code == 200, TRUE, FALSE))
}
else {
error("Can't access login URL!")
}
}
srdaConvertRequest <- function (request) {
return (list(select = "*",
from = "sf1104.users a, sf1104.artifact b",
where = "b.artifact_id = 304727"))
}
srdaRequestData <- function (requestURL, select, from, where, sep, sql) {
params <- list('uitems' = select,
'utables' = from,
'uwhere' = where,
'useparator' = sep,
'append_query' = sql)
if(url.exists(requestURL)) {
reply <- postForm(requestURL, .params = params, #.opts = opts,
curl = curl, style = "POST")
}
}
srdaGetData <- function(request) {
resultsURL <- paste(SRDA_HOST_URL, SRDA_QRESULT_URL,
collapse="", sep="")
results.query <- readLines(resultsURL, n = 1)
return (ifelse(results.query == request, TRUE, FALSE))
}
getSourceForgeData <- function (request) {
# Construct SRDA login and query URLs
loginURL <- paste(SRDA_HOST_URL, SRDA_LOGIN_URL, SRDA_LOGIN_REQ,
collapse="", sep="")
queryURL <- paste(SRDA_HOST_URL, SRDA_QUERY_URL, collapse="", sep="")
# Log into the system
if (!srdaLogin(loginURL, USER, PASS))
error("Login failed!")
rq <- srdaConvertRequest(request)
srdaRequestData(queryURL,
rq$select, rq$from, rq$where, DATA_SEP, ADD_SQL)
if (!srdaGetData(request))
error("Data collection failed!")
}
message("\nTesting SourceForge data collection...\n")
getSourceForgeData("SELECT *
FROM sf1104.users a, sf1104.artifact b
WHERE a.user_id = b.submitted_by AND b.artifact_id = 304727")
# clean up
close(curl)
UPDATE 2 (no functions version):
if (!require(RCurl)) install.packages('RCurl')
library(RCurl)
# Users must authenticate to access Query Form
SRDA_HOST_URL <- "http://zerlot.cse.nd.edu"
SRDA_LOGIN_URL <- "/mediawiki/index.php?title=Special:Userlogin"
SRDA_LOGIN_REQ <- "&action=submitlogin&type=login"
# SRDA URL that Query Form sends POST requests to
SRDA_QUERY_URL <- "/cgi-bin/form.pl"
# SRDA URL that Query Form sends POST requests to
SRDA_QRESULT_URL <- "/qresult/blekh/blekh.txt"
# Parameters for result's format
DATA_SEP <- ":" # data separator
ADD_SQL <- "1" # add SQL to file
message("\nTesting SourceForge data collection...\n")
curl <- getCurlHandle()
curlSetOpt(curl = curl, cookiejar = 'cookies.txt',
ssl.verifyhost = FALSE, ssl.verifypeer = FALSE,
followlocation = TRUE, verbose = TRUE)
# === Authentication ===
loginParams <- list('wpName1' = USER, 'wpPassword1' = PASS)
loginURL <- paste(SRDA_HOST_URL, SRDA_LOGIN_URL, SRDA_LOGIN_REQ,
collapse="", sep="")
if (url.exists(loginURL)) {
postForm(loginURL, .params = loginParams, curl = curl, style = "POST")
info <- getCurlInfo(curl)
message("\nLogin results - HTTP status code: ", info$response.code, "\n\n")
} else {
error("\nCan't access login URL!\n\n")
}
# === Data collection ===
# Previous query was: "SELECT * FROM sf0305.users WHERE user_id < 100"
query <- list(select = "*",
from = "sf1104.users a, sf1104.artifact b",
where = "b.artifact_id = 304727")
getDataParams <- list('uitems' = query$select,
'utables' = query$from,
'uwhere' = query$where,
'useparator' = DATA_SEP,
'append_query' = ADD_SQL)
queryURL <- paste(SRDA_HOST_URL, SRDA_QUERY_URL, collapse="", sep="")
if(url.exists(queryURL)) {
postForm(queryURL, .params = getDataParams, curl = curl, style = "POST")
resultsURL <- paste(SRDA_HOST_URL, SRDA_QRESULT_URL,
collapse="", sep="")
results.query <- readLines(resultsURL, n = 1)
request <- paste(query$select, query$from, query$where)
if (results.query == request)
message("\nData request is successful, SQL query: ", request, "\n\n")
else
message("\nData request failed, SQL query: ", request, "\n\n")
} else {
error("\nCan't access data query URL!\n\n")
}
close(curl)
UPDATE 3 (server-side debugging)
Finally, I was able to get in touch with a person responsible for the system, and he helped me narrow the issue down to (what I believe is) cookie management. Here's the error log record corresponding to a run of my code:
[Fri Mar 21 15:33:14 2014] [error] [client 54.204.180.203] [Fri Mar 21
15:33:14 2014] form.pl: /tmp/sess_3e55593e436a013597cd320e4c6a2fac:
at /var/www/cgi-bin/form.pl line 43
The following is the snippet of the server-side script (Perl) that generated that error (line #1 in the script is a bash interpreter directive, so the reported line number 43 is most likely line number 44):
42 if (-e "/tmp/sess_$file") {
43 $session = PHP::Session->new($cgi->cookie("$session_name"));
44 $user_id = $session->get('wsUserID');
45 $user_name = $session->get('wsUserName');
The following is the session information (1) after authentication and (2) after submitting the data request, obtained by tracing manual authentication and a manual data request form submission:
(1) "wiki_dbUserID=449; expires=Sun, 20-Apr-2014 21:04:14 GMT;
path=/wiki_dbUserName=Blekh; expires=Sun, 20-Apr-2014 21:04:14 GMT;
path=/wiki_dbToken=deleted; expires=Thu, 21-Mar-2013 21:04:13 GMT"
(2) wiki_db_session=aaed058f97059174a59effe44b137cbc;
_ga=GA1.2.2065853334.1395410153; EDSSID=e24ff5ed891c28c61f2d1f8dec424274; wiki_dbUserName=Blekh;
wiki_dbLoggedOut=20140321210314; wiki_dbUserID=449
Would appreciate any help in figuring out the problem with my code!
Finally, finally, finally! I have figured out what was causing this problem, which gave me such a headache (figuratively and literally). It forced me to spend a lot of time reading various Internet resources (including many SO questions and answers), debugging my code and communicating with people. I spent a lot of time, but not in vain, as I learned a lot about RCurl, cookies, Web forms and the HTTP protocol.
The reason turned out to be much simpler than I thought. While the direct reason for the form submission failure was related to cookie management, the underlying reason was the use of wrong parameter names (IDs) for the authentication form fields. The two pairs were very similar, and it took only one extra character to trigger the whole problem.
Lesson learned: when facing issues, especially ones dealing with authentication, it's very important to check all names and IDs multiple times and very carefully to make sure they correspond to the ones that are supposed to be used. Thank you to everyone who was helping or trying to help me with this issue!
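One way to do that check programmatically (a sketch using xml2, not something from the original debugging session) is to fetch the login page and list the form's input names, then compare them against what the script sends:
library(xml2)

login_page  <- read_html("http://zerlot.cse.nd.edu/mediawiki/index.php?title=Special:Userlogin")
input_names <- xml_attr(xml_find_all(login_page, "//form//input"), "name")
print(input_names)   # compare against the wpName*/wpPassword* parameters used above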
I've simplified the code still further:
library(httr)
base_url <- "http://srda.cse.nd.edu"
loginURL <- modify_url(
base_url,
path = "mediawiki/index.php",
query = list(
title = "Special:Userlogin",
action = "submitlogin",
type = "login",
wpName1 = USER,
wpPassword1 = PASS
)
)
r <- POST(loginURL)
stop_for_status(r)
queryURL <- modify_url(base_url, path = "cgi-bin/form.pl")
query <- list(
uitems = "user_name",
utables = "sf1104.users a, sf1104.artifact b",
uwhere = "a.user_id = b.submitted_by AND b.artifact_id = 304727",
useparator = ":",
append_query = "1"
)
r <- POST(queryURL, body = query, multipart = FALSE)
stop_for_status(r)
But I'm still getting a 500. I tried:
setting extra cookies that I see in the browser (wiki_dbUserID, wiki_dbUserName)
setting header DNT to 1
setting referer to http://srda.cse.nd.edu/cgi-bin/form.pl
setting user-agent the same as chrome
setting accept "text/html"
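For reference, a hedged sketch of what a couple of those attempts might look like in httr (the cookie values come from the session dump in UPDATE 3 above and are almost certainly stale):
r <- POST(queryURL,
          body = query,
          set_cookies(wiki_dbUserID = "449", wiki_dbUserName = "Blekh"),
          add_headers(Referer = "http://srda.cse.nd.edu/cgi-bin/form.pl", DNT = "1"),
          verbose())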
The following provides clarification for the scenario (error situation).
From W3C RFC 2616 - HTTP/1.1 Specification:
10.5 Server Error 5xx
Response status codes beginning with the digit "5" indicate cases in
which the server is aware that it has erred or is incapable of
performing the request. Except when responding to a HEAD request, the
server SHOULD include an entity containing an explanation of the error
situation, and whether it is a temporary or permanent condition. User
agents SHOULD display any included entity to the user. These response
codes are applicable to any request method.
10.5.1 500 Internal Server Error
The server encountered an unexpected condition which prevented it from
fulfilling the request.
My interpretation of paragraph 10.5 is that it implies there should be a more detailed explanation of the error situation beyond the one provided in paragraph 10.5.1. However, I recognize that it may well be that the message for status code 500 (paragraph 10.5.1) is considered sufficient. Confirmation of either interpretation is welcome!
