PayUMoney payment gateway integration in test mode generates the error 'Sorry, some error occurred' - payu

When I use service_provider = payu_paisa, it generates the error:
'Sorry, some error occurred'
Without service_provider, I get this error response:
{
mihpayid: "403993715515010643",
mode: "CC",
status: "failure",
unmappedstatus: "failed",
key: "gtKFFx",
txnid: "a9de074d7d44e69e2ada",
amount: "232354.00",
cardCategory: "domestic",
discount: "0.00",
net_amount_debit: "0.00",
addedon: "2016-09-28 11:37:25",
productinfo: "shopping",
firstname: "sunil",
lastname: "",
address1: "surat",
address2: "surat",
city: "surat",
state: "gujarat",
country: "",
zipcode: "",
email: "sunil.1023p#gmail.com",
phone: "8978678798",
udf1: "",
udf2: "",
udf3: "",
udf4: "",
udf5: "",
udf6: "",
udf7: "",
udf8: "",
udf9: "",
udf10: "",
hash: "9725118686ef231af41264bdd12ab9f735abded558d3fbec5902d22ba5a2a6655af2f53bbc2938e7f320f928b6f119a003b856854e29d1fadbb4c59e421555cb",
field1: "",
field2: "",
field3: "",
field4: "",
field5: " !ERROR!-GV00010-Missing data typeError Code: GV00010",
field6: "",
field7: "",
field8: "failed in enrollment",
field9: " !ERROR!-GV00010-Missing data typeError Code: GV00010",
payment_source: "payu",
PG_TYPE: "HDFCPG",
bank_ref_num: "",
bankcode: "CC",
error: "E500",
error_Message: "Unknown Error Received from PG",
name_on_card: "sunil",
cardnum: "512345XXXXXX2346",
cardhash: "This field is no longer supported in postback params.",
issuing_bank: "HDFC",
card_type: "MAST"
}
Can anyone help me?

In the new integration you do not need to send
service_provider = payu_paisa
The documentation has been updated with the new parameters. You can check the documentation at
https://developer.payubiz.in/documentation/Post-Request-%28Non-seamless%29/24
However, the development kits are yet to be updated.
Also,
there are some issues in processing the HDFC test card details shared as samples by PayUMoney. As an alternative, you can use any of the test cards from
https://www.paypalobjects.com/en_US/vhelp/paypalmanager_help/credit_card_numbers.htm
For example, you can use an Amex card:
American Express
378282246310005
Use any name, any CVV, and any expiry.
PayUMoney will redirect you to the Amex website. Since it is a test card, you can choose whichever response you want Amex to send.
Good luck.
-Anuj

Use a fake card number from
https://www.paypalobjects.com/en_US/vhelp/paypalmanager_help/credit_card_numbers.htm
if the number 5123456789012346 does not work.

OpenAI package leaving linebreak in response

I've started using the OpenAI API in R. I downloaded the openai package. I keep getting a double linebreak in the text response. Here's an example of my code:
library(openai)
vector = create_completion(
  model = "text-davinci-003",
  prompt = "Tell me what the weather is like in London, UK, in Celsius in 5 words.",
  max_tokens = 20,
  temperature = 0,
  echo = FALSE
)
vector_2 = vector$choices[1]
vector_2$text
[1] "\n\nRainy, mild, cool, humid."
Is there a way to get rid of this without 'correcting' the response text using other functions?
No, it's not possible.
The OpenAI API returns the completion starting with \n\n by default, and there's no parameter on the Completions endpoint to control this.
You need to remove the linebreak manually.
Example response looks like this:
{
"id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
"object": "text_completion",
"created": 1589478378,
"model": "text-davinci-003",
"choices": [
{
"text": "\n\nThis is indeed a test",
"index": 0,
"logprobs": null,
"finish_reason": "length"
}
],
"usage": {
"prompt_tokens": 5,
"completion_tokens": 7,
"total_tokens": 12
}
}
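For instance, a minimal sketch of that manual cleanup in R, assuming the text sits in vector_2$text as in the question (plain string post-processing, not an API option):
clean_text <- sub("^\\n+", "", vector_2$text)  # strip only the leading newlines
clean_text
# [1] "Rainy, mild, cool, humid."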

Use scrapy to collect information for one item from multiple pages (and output it as a nested dictionary)

I'm trying to scrape data from a tournaments site.
Each tournament has some information such as the venue, the date, prices etc.
And also the rank of teams that took part. The rank is a table that simply provides the name of the team, and its position in the rank.
Then, you can click on the name of the team, which takes you to a page where we can get the roster of players that the team selected for that tournament.
I'd like to scrape the data into something like:
[{
"name": "Grand Tournament",
"venue": "...",
"date": "...",
"rank": [
{"team_name": "Team name",
"rank": 1,
"roster": ["player1", "player2", "..."]
},
{"team_name": "Team name",
"rank": 2,
"roster": ["player1", "player2", "..."]
}
]
}]
I have the following spider to scrape a single tournament page (usage: scrapy crawl tournamentspider -a start_url="<tournamenturl>"):
import scrapy
from urllib.parse import urlparse

class TournamentSpider(scrapy.Spider):
    name = "tournamentspider"
    allowed_domains = ["..."]

    def start_requests(self):
        try:
            yield scrapy.Request(url=self.start_url, callback=self.parse)
        except AttributeError:
            raise ValueError("You must use this spider with argument start_url.")

    def parse(self, response):
        tournament_item = TournamentItem()
        tournament_item['teams'] = []
        tournament_item['name'] = "Tournament Name"
        tournament_item['date'] = "Date"
        tournament_item['venue'] = "Venue"
        ladder = response.css('#ladder')
        for row in ladder.css('table tbody tr'):
            row_cells = row.xpath('td')
            participation_item = PlayerParticipationItem()
            participation_item['team_name'] = "Team Name"
            participation_item['rank'] = "x"
            # Parse roster
            roster_url_page = row_cells[2].xpath('a/@href').get()
            # Follow link to extract the roster
            base_url = urlparse(response.url)
            absolute_url = f'{base_url.scheme}://{base_url.hostname}/{roster_url_page}'
            request = scrapy.Request(absolute_url, callback=self.parse_roster_page)
            request.meta['participation_item'] = participation_item
            yield request
            # Include the participation item in the tournament's team list
            tournament_item['teams'].append(participation_item)
        yield tournament_item

    def parse_roster_page(self, response):
        participation_item = response.meta['participation_item']
        participation_item['roster'] = ["Player1", "Player2", "..."]
        return participation_item
My problem is that this spider produces the following output:
[{
"name": "Grand Tournament",
"venue": "...",
"date": "...",
"rank": [
{"team_name": "Team name",
"rank": 1,
},
{"team_name": "Team name",
"rank": 2,
}
]
},
{"team_name": "Team name",
"rank": 1,
"roster": ["player1", "player2", "..."]
},
{"team_name": "Team name",
"rank": 2,
"roster": ["player1", "player2", "..."]
}]
I know that those extra items in the output are generated by the yield request line. When I remove it, I'm no longer scraping the roster page, so the extra items disappear, but I no longer have the roster data.
Is it possible to get the output I'm aiming for?
I know that a different approach could be to scrape the tournament information, and then teams with a field that identifies the tournament. But I'd like to know if the initial approach is achievable.
You can use scrapy-inline-requests to call parse_roster_page and get the roster data without yielding it as a separate item.
The only change you need is to add the @inline_requests decorator to parse_roster_page.
from inline_requests import inline_requests

class TournamentSpider(scrapy.Spider):
    def parse(self, response):
        ...

    @inline_requests
    def parse_roster_page(self, response):
        ...

Is there an R library or function for formatting international currency strings?

Here's a snippet of the JSON data I'm working with:
{
"item" = "Mexican Thing",
...
"raised": "19",
"currency": "MXN"
},
{
"item" = "Canadian Thing",
...
"raised": "42",
"currency": "CDN"
},
{
"item" = "American Thing",
...
"raised": "1",
"currency": "USD"
}
You get the idea.
I'm hoping there's a function out there that can take in a standard currency abbreviation and a number and spit out the appropriate string. I could theoretically write this myself, except I can't pretend to know all the ins and outs of currency formatting, and I'm bound to spend days and weeks being surprised by bugs or edge cases I didn't think of. I'm hoping there's a library (or at least a web API) already written that can handle this, but my Googling has yielded nothing useful so far.
Here's an example of the result I want (let's pretend "currency" is the function I'm looking for)
currency("USD", "32") --> "$32"
currency("GBP", "45") --> "£45"
currency("EUR", "19") --> "€19"
currency("MXN", "40") --> "MX$40"
Assuming your real JSON is valid, this should be relatively simple. I'll provide a valid JSON string, fixing the three invalid portions here: = should be :, ... is obviously a placeholder, and it should be a list wrapped in [ and ]:
js <- '[{
"item": "Mexican Thing",
"raised": "19",
"currency": "MXN"
},
{
"item": "Canadian Thing",
"raised": "42",
"currency": "CDN"
},
{
"item": "American Thing",
"raised": "1",
"currency": "USD"
}]'
with(jsonlite::parse_json(js, simplifyVector = TRUE),
paste(raised, currency))
# [1] "19 MXN" "42 CDN" "1 USD"
Edit: in order to change to specific currency characters, don't make this too difficult: just instantiate a lookup vector where "USD" (for example) prepends "$" and appends "" (nothing) to the raised string. (I say both prepend and append because I believe some currency symbols always go after the digits ... I could be wrong.)
pre_currency <- Vectorize(function(curr) switch(curr, USD="$", GBP="£", EUR="€", CDN="$", "?"))
post_currency <- Vectorize(function(curr) switch(curr, USD="", GBP="", EUR="", CDN="", "?"))
with(jsonlite::parse_json(js, simplifyVector = TRUE),
paste0(pre_currency(currency), raised, post_currency(currency)))
# [1] "?19?" "$42" "$1"
I intentionally left "MXN" out of the vector here to demonstrate that you need a default setting, "?" (pre/post) here. You may choose a different default/unknown currency value.
An alternative:
currency <- function(val, currency) {
pre <- sapply(currency, switch, USD="$", GBP="£", EUR="€", CDN="$", "?")
post <- sapply(currency, switch, USD="", GBP="", EUR="", CDN="", "?")
paste0(pre, val, post)
}
with(jsonlite::parse_json(js, simplifyVector = TRUE),
currency(raised, currency))
# [1] "?19?" "$42" "$1"

Elastic package in R: Sort for version > v5...not working

Using elastic version V5.1
I'm trying to use the example shakespeare index.
Tried:
Search(index="shakespeare", type="act", sort = '{"_source": ["speaker:desc"] }', size = 5)
and
Search(index="shakespeare",body = '{"_source": ["play_name", "speaker", "text_entry"] }',
sort='{"_source": ["text_entry" : {"order" : "desc"}] }' ,q="york", size = 5)
But I am not getting the right results.
Can someone help me with the correct syntax for sort in version 5 and above?
Thanks.
Okay, fix pushed.
Reinstall with devtools::install_github("ropensci/elastic").
The problem is explained at https://www.elastic.co/guide/en/elasticsearch/reference/current/fielddata.html
To allow sorting on a text field you need to enable fielddata on that field, so for the example above, do:
library(elastic)
connect()
mapping_create("shakespeare", "act", update_all_types = TRUE, body = '{
"properties": {
"speaker": {
"type": "text",
"fielddata": true
}
}
}')
res <- Search("shakespeare", "act", body = '{"sort":[{"speaker":{"order" : "desc"}}]}')
vapply(res$hits$hits, "[[", "", c("_source", "speaker"))
#> [1] "ARCHBISHOP OF YORK" "VERNON" "PLANTAGENET" "PETO" "KING HENRY IV"
#> [6] "HOTSPUR" "FALSTAFF" "CHARLES" ""
does that work for you?

Search website for phrase in R

I'd like to understand what applications of machine learning are being developed by the US federal government. The federal government maintains the website FedBizOps that contains contracts. The web site can be searched for a phrase, e.g. "machine learning", and a date range, e.g. "last 365 days" to find relevant contracts. The resulting search produces links that contain a contract summary.
I'd like to be able to pull the contract summaries, given a search term and a date range, from this site.
Is there any way I can scrape the browser-rendered data into R? A similar question exists on web scraping, but I don't know how to change the date range.
Once the information is pulled into R, I'd like to organize the summaries with a bubble chart of key phrases.
This may look like a site that uses XHR via JavaScript to retrieve the URL contents, but it's not. It's just a plain website that can easily be scraped via standard rvest & xml2 calls like html_session and read_html. It does keep the Location: URL the same, so it kinda looks like XHR even though it's not.
However, this is a <form>-based site, which means you could be generous to the community and write an R wrapper for the "hidden" API and possibly donate it to rOpenSci.
To that end, I used the curlconverter package on the "Copy as cURL" content from the POST request and it provided all the form fields (which seem to map to most — if not all — of the fields on the advanced search page):
library(curlconverter)
make_req(straighten())[[1]] -> req
httr::VERB(verb = "POST", url = "https://www.fbo.gov/index?s=opportunity&mode=list&tab=list",
httr::add_headers(Pragma = "no-cache",
Origin = "https://www.fbo.gov",
`Accept-Encoding` = "gzip, deflate, br",
`Accept-Language` = "en-US,en;q=0.8",
`Upgrade-Insecure-Requests` = "1",
`User-Agent` = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.41 Safari/537.36",
Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
`Cache-Control` = "no-cache",
Referer = "https://www.fbo.gov/index?s=opportunity&mode=list&tab=list",
Connection = "keep-alive",
DNT = "1"), httr::set_cookies(PHPSESSID = "32efd3be67d43758adcc891c6f6814c4",
sympcsm_cookies_enabled = "1",
BALANCEID = "balancer.172.16.121.7"),
body = list(`dnf_class_values[procurement_notice][keywords]` = "machine+learning",
`dnf_class_values[procurement_notice][_posted_date]` = "365",
search_filters = "search",
`_____dummy` = "dnf_",
so_form_prefix = "dnf_",
dnf_opt_action = "search",
dnf_opt_template = "VVY2VDwtojnPpnGoobtUdzXxVYcDLoQW1MDkvvEnorFrm5k54q2OU09aaqzsSe6m",
dnf_opt_template_dir = "Pje8OihulaLVPaQ+C+xSxrG6WrxuiBuGRpBBjyvqt1KAkN/anUTlMWIUZ8ga9kY+",
dnf_opt_subform_template = "qNIkz4cr9hY8zJ01/MDSEGF719zd85B9",
dnf_opt_finalize = "0",
dnf_opt_mode = "update",
dnf_opt_target = "", dnf_opt_validate = "1",
`dnf_class_values[procurement_notice][dnf_class_name]` = "procurement_notice",
`dnf_class_values[procurement_notice][notice_id]` = "63ae1a97e9a5a9618fd541d900762e32",
`dnf_class_values[procurement_notice][posted]` = "",
`autocomplete_input_dnf_class_values[procurement_notice][agency]` = "",
`dnf_class_values[procurement_notice][agency]` = "",
`dnf_class_values[procurement_notice][zipstate]` = "",
`dnf_class_values[procurement_notice][procurement_type][]` = "",
`dnf_class_values[procurement_notice][set_aside][]` = "",
mode = "list"), encode = "form")
curlconverter adds the httr:: prefixes to the various functions since you can actually use req() to make the request. It's a bona-fide R function.
However, most of the data being passed in is browser "cruft" and can be trimmed down a bit and moved into a POST request:
library(httr)
library(rvest)
POST(url = "https://www.fbo.gov/index?s=opportunity&mode=list&tab=list",
add_headers(Origin = "https://www.fbo.gov",
Referer = "https://www.fbo.gov/index?s=opportunity&mode=list&tab=list"),
set_cookies(PHPSESSID = "32efd3be67d43758adcc891c6f6814c4",
sympcsm_cookies_enabled = "1",
BALANCEID = "balancer.172.16.121.7"),
body = list(`dnf_class_values[procurement_notice][keywords]` = "machine+learning",
`dnf_class_values[procurement_notice][_posted_date]` = "365",
search_filters = "search",
`_____dummy` = "dnf_",
so_form_prefix = "dnf_",
dnf_opt_action = "search",
dnf_opt_template = "VVY2VDwtojnPpnGoobtUdzXxVYcDLoQW1MDkvvEnorFrm5k54q2OU09aaqzsSe6m",
dnf_opt_template_dir = "Pje8OihulaLVPaQ+C+xSxrG6WrxuiBuGRpBBjyvqt1KAkN/anUTlMWIUZ8ga9kY+",
dnf_opt_subform_template = "qNIkz4cr9hY8zJ01/MDSEGF719zd85B9",
dnf_opt_finalize = "0",
dnf_opt_mode = "update",
dnf_opt_target = "", dnf_opt_validate = "1",
`dnf_class_values[procurement_notice][dnf_class_name]` = "procurement_notice",
`dnf_class_values[procurement_notice][notice_id]` = "63ae1a97e9a5a9618fd541d900762e32",
`dnf_class_values[procurement_notice][posted]` = "",
`autocomplete_input_dnf_class_values[procurement_notice][agency]` = "",
`dnf_class_values[procurement_notice][agency]` = "",
`dnf_class_values[procurement_notice][zipstate]` = "",
`dnf_class_values[procurement_notice][procurement_type][]` = "",
`dnf_class_values[procurement_notice][set_aside][]` = "",
mode="list"),
encode = "form") -> res
This portion:
set_cookies(PHPSESSID = "32efd3be67d43758adcc891c6f6814c4",
sympcsm_cookies_enabled = "1",
BALANCEID = "balancer.172.16.121.7")
makes me think you should use html_session or GET at least once on the main URL to establish those cookies in the cached curl handler (which will be created & maintained automagically for you).
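A minimal sketch of that warm-up request (hedged: it assumes httr reuses the cached handle for the host, so cookies set here carry over to the later POST):
library(httr)
# hit the search page once so fbo.gov sets its session cookies (PHPSESSID, BALANCEID) on the cached handle
invisible(GET("https://www.fbo.gov/index?s=opportunity&mode=list&tab=list"))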
The add_headers() bit may also not be necessary but that's an exercise left for the reader.
You can find the table you're looking for via:
content(res, as="text", encoding="UTF-8") %>%
read_html() %>%
html_nodes("table.list") %>%
html_table() %>%
dplyr::glimpse()
## Observations: 20
## Variables: 4
## $ Opportunity <chr> "NSN: 1650-01-074-1054; FILTER ELEMENT, FLUID; WSIC: L SP...
## $ Agency/Office/Location <chr> "Defense Logistics Agency DLA Acquisition LocationsDLA Av...
## $ Type / Set-aside <chr> "Presolicitation", "Presolicitation", "Award", "Award", "...
## $ Posted On <chr> "Sep 28, 2016", "Sep 28, 2016", "Sep 28, 2016", "Sep 28, ...
There's an indicator on the page saying these are results "1 - 20 of 2008". You need to scrape that as well and deal with the paginated results. This is also left as an exercise to the reader.
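As a starting point for the pagination, here is a small sketch that pulls that total out of the rendered page text (assuming the counter appears verbatim in the form "1 - 20 of 2008"):
page_txt <- content(res, as = "text", encoding = "UTF-8")
counter  <- regmatches(page_txt, regexpr("[0-9]+ - [0-9]+ of [0-9,]+", page_txt))
total    <- as.integer(gsub(",", "", sub(".* of ", "", counter)))
n_pages  <- ceiling(total / 20)   # the listing shows 20 results per page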
