Why doesn't IFTTT like my RSS feed?

I have followed the instructions on IFTTT, and I have also checked the feed in a validator, which says that the feed is fine.
An example from the feed is here:
<rss version="2.0">
<channel>
<title>CG Tipster - Tips of the day</title>
<link>http://www.facebook.com/CGTipster</link>
<description>A small selection of tips from CG Tipster</description>
<item>
<title>349465</title>
<link>http://www.cgtipster.com</link>
<description>
Kuwait SC vs Al-Jahra, Kuwait - Premier League - 16:15 - Home Win
</description>
<pubDate>Thu, 29 Dec 2016 12:03:25 GMT</pubDate>
<guid isPermaLink="false">tag:349465</guid>
</item>
</channel>
</rss>
IFTTT says that only the GUID and date are required for each entry, which this feed has.
I have tried changing the titles, URLs, dates, and GUIDs, but it still won't trigger.
Any ideas would be greatly appreciated.
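One way to narrow this down is to look at the feed the way a generic parser sees it. Here is a minimal Python sketch using the feedparser library; the feed URL is a placeholder for wherever the feed actually lives:
import feedparser

# Placeholder URL - substitute the real address of the feed
feed = feedparser.parse("http://www.example.com/feed.xml")

# bozo is set when the parser hits a well-formedness problem
print("bozo:", feed.bozo)

for entry in feed.entries:
    # The GUID and date are the two fields IFTTT reportedly requires
    print(entry.get("id"), entry.get("published"))
If every entry prints a unique id and a parseable date here, the feed at least contains what IFTTT reportedly requires, and the problem is more likely on the polling side.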

Related

Can't webscrape with R the site of Fitch Ratings

I'm trying to scrape the Fitch Ratings website, and so far I can't get what I want: the list of ratings. When I scrape it with R, it returns the header of the website, and in the body it only gets an iframe from Google Tag Manager that hides the content that matters.
website: https://www.fitchratings.com/site/search?content=research&filter=RESEARCH%20LANGUAGE%5EPortuguese%2BGEOGRAPHY%5EAmericas%2BREPORT%20TYPE%5EHeadlines%5ERating%20Action%20Commentary
return:
[1] <head>\n<title>Search - Fitch Ratings</title>\n<!-- headerScripts --><!-- --><meta http-equiv="Content-Type" content="text/html; chars ...
[2] <body id="search-results">\n <div id="privacy-policy-tos-modal-container"></div>\n <!-- Google Tag Manager (noscript) -- ...
_____________
What I want:
Date;Research;Type;Text
04 Sep 2019; Fitch afirma Rating de Qualidade(...);Rating Action Commentary;Fitch Ratings-Sao Paulo - 04 September 2019: A Fitch Ratings Afirmou hoje, o Rating de Qualidade de Gestão de Investimento 'Excelente' (...)
02 Sep 2019; Fitch Eleva Rating (...); Rating Action Commentary; Fitch Ratings - Sao Paulo - 02 September 2019: A Fitch Ratings elevou hoje (...)
Code below:
library(rvest)  # read_html() is re-exported here from xml2

html_of_site <- read_html(url("https://www.fitchratings.com/site/search?content=research&filter=RESEARCH%20LANGUAGE%5EPortuguese%2BGEOGRAPHY%5EAmericas%2BREPORT%20TYPE%5EHeadlines%5ERating%20Action%20Commentary"))
html_of_site
Short Answer: Don't scrape this website.
Long Answer: Technically it is possible to scrape this site, but you need your code to act like a human. What this means is that you would need to convince Fitch Group's server that you are indeed a human visitor and not a bot.
To do this you need to (see the sketch after this list):
1. Send the same headers that your browser would send to the site.
2. Keep track of any cookies the site sends back to you, and return them in subsequent requests if necessary.
3. Evaluate any scripts sent back by the server (to actually load the data you want).
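If the site policy allowed it, steps 1 and 2 would look roughly like the following. This is a minimal sketch in Python with the requests library (the question uses R, but the same idea carries over to httr); it does not attempt step 3, which in practice means driving a real browser with something like RSelenium or Selenium:
import requests

# A Session keeps any cookies the server sets and sends them back on
# later requests (step 2).
session = requests.Session()

# Headers copied from what a real browser sends (step 1). The values
# below are illustrative, not known requirements of this particular site.
session.headers.update({
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept": "text/html,application/xhtml+xml",
    "Accept-Language": "en-US,en;q=0.9",
})

url = ("https://www.fitchratings.com/site/search?content=research"
       "&filter=RESEARCH%20LANGUAGE%5EPortuguese%2BGEOGRAPHY%5EAmericas"
       "%2BREPORT%20TYPE%5EHeadlines%5ERating%20Action%20Commentary")

response = session.get(url)
print(response.status_code)
print(response.text[:500])  # likely still script-rendered, hence step 3
Even with the right headers and cookies, the ratings list will not appear until the page's scripts run, which is why step 3 is the hard part.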
I wasn't able to access the site policy for thefitchgroup.com, but I assume it includes clauses about what bots are and are not allowed to do on the site. Since this company likely sells the data you are trying to scrape, you should probably avoid scraping this site.
In general, don't scrape sites without reading the site policies first. If the data is not freely available other than by scraping it, you probably shouldn't be scraping it.

Request External Password protected XML information and parse to my Classic ASP page

I am in charge of web design and marketing for the company I work for, and I kind of got thrown into the role of coding.
We use eCommerce software to help with site structure and product information. The site uses Classic ASP to build all pages, which makes the process easier.
Now I have been given instructions for an XML data feed, but I have no idea where to begin. I have read some different posts, but none really give examples of what I am trying to do.
What I need to do is submit a request to "https://ec.synnex.com/SynnexXML/PriceAvailability", which is password protected, and get the returned XML.
The request is sent to https://ec.synnex.com/SynnexXML/PriceAvailability using the XML below:
<?xml version="1.0" encoding="UTF-8" ?>
<priceRequest>
<customerNo>YOUR_ID</customerNo>
<userName>USERNAME</userName>
<password>PASSWORD</password>
<skuList>
<mfgPN>PRODUCTPARTNUMBER</mfgPN>
<lineNumber>1</lineNumber>
</skuList>
</priceRequest>
and it will return the following XML:
<?xml version="1.0" encoding="UTF-8" ?>
<priceResponse>
<customerNo>YOUR ACCOUNT NUMBER</customerNo>
<userName>YOUR ID</userName>
<PriceAvailabilityList>
<mfgPN>108R00645</mfgPN>
<mfgCode>13439</mfgCode>
<status>Active</status>
<description>IMAGING UNIT, PHASER 6300/6350</description>
<GlobalProductStatusCode>Active</GlobalProductStatusCode>
<price>228.48</price>
<totalQuantity>240</totalQuantity>
<AvailabilityByWarehouse>
<warehouseInfo>
<number>3</number>
<zipcode>94538</zipcode>
<city>Fremont, CA</city>
<addr>44211 Nobel Drive</addr>
</warehouseInfo>
<qty>30</qty>
</AvailabilityByWarehouse>
<AvailabilityByWarehouse>
<warehouseInfo>
<number>4</number>
<zipcode>30071</zipcode>
<city>Norcross, GA</city>
<addr>200 Best Friend Court, Suite# 250</addr>
</warehouseInfo>
<qty>27</qty>
</AvailabilityByWarehouse>
<AvailabilityByWarehouse>
<warehouseInfo>
<number>5</number>
<zipcode>75081</zipcode>
<city>Richardson, TX</city>
<addr>660 N Dorothy Drive, Suite 100</addr>
</warehouseInfo>
<qty>2</qty>
</AvailabilityByWarehouse>
<lineNumber>1</lineNumber>
</PriceAvailabilityList>
</priceResponse>
I have no idea where to even begin with this.
Once I send the request and the response comes back, I'm sure I can assign the values to Dim variables and then reference them in the code I already have for displaying our products, e.g. <%=whatevervalue%>.
Any help would be much appreciated.
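As a starting place, here is a rough sketch of the round trip in Python, assuming the endpoint accepts the request XML shown above as the raw POST body (the SYNNEX documentation should confirm the exact content type). In Classic ASP the same pattern is normally done with the MSXML2.ServerXMLHTTP component: build the request XML as a string, POST it, then pull values out of the response with XPath and drop them into your <%= %> variables:
import requests
import xml.etree.ElementTree as ET

# Request body exactly as in the documentation above; the credentials
# and part number are placeholders.
request_xml = """<?xml version="1.0" encoding="UTF-8" ?>
<priceRequest>
  <customerNo>YOUR_ID</customerNo>
  <userName>USERNAME</userName>
  <password>PASSWORD</password>
  <skuList>
    <mfgPN>PRODUCTPARTNUMBER</mfgPN>
    <lineNumber>1</lineNumber>
  </skuList>
</priceRequest>"""

response = requests.post(
    "https://ec.synnex.com/SynnexXML/PriceAvailability",
    data=request_xml.encode("utf-8"),
    headers={"Content-Type": "text/xml"},
)

# Read the fields you care about out of the priceResponse document.
root = ET.fromstring(response.content)
print("price:", root.findtext(".//price"))
print("total quantity:", root.findtext(".//totalQuantity"))
for warehouse in root.findall(".//AvailabilityByWarehouse"):
    print(warehouse.findtext(".//city"), warehouse.findtext("qty"))
Once that works from a console, porting the same two steps (HTTP POST, then XPath over the response DOM) into your ASP page should be straightforward.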

Why do some hotels show different available amenities in HOD than SOAP XML?

I've found that for at least one hotel amenity - Eco Friendly Certified - the HotelPropertyDescriptionLLSRQ/HotelPropertyDescriptionRQ call doesn't appear to return the correct data for any hotel I try. Here's an example hotel (0023585):
>HOD0023585
** DOUBLE CLICK ON HOTEL NAME FOR MAPS AND PHOTOS **
LX0023585 LEFAY RESORT SPA LAGO DI GARDA VRN
ADDR- VIA FELTRINELLI 136
GARGNANO IT 25084 LAKE GARDA
[...]
PROPERTY INFORMATION
[...]
ADAA -N- FSPA -Y- ADLT -N- ECOH -Y-
Note that it has a Y for ECOH. It also shows the hotel if you do an availability request requiring that flag to be set.
Yet when I issue the HotelPropertyDescriptionRQ request for it, here is the relevant section of the response:
<HotelPropertyDescriptionRS xmlns="http://webservices.sabre.com/sabreXML/2011/10" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:stl="http://services.sabre.com/STL/v01" Version="2.1.0">
<stl:ApplicationResults status="Complete">
<stl:Success timeStamp="2019-02-27T09:20:30-06:00" />
</stl:ApplicationResults>
<RoomStay>
<BasicPropertyInfo ChainCode="LX" GeoConfidenceLevel="0" HotelCityCode="VRN" HotelCode="0023585" HotelName="LEFAY RESORT SPA LAGO DI GARDA" Latitude="45.687153" Longitude="10.643102" NumFloors="2" RPH="001">
<Address>
<AddressLine>VIA FELTRINELLI 136</AddressLine>
<AddressLine>GARGNANO IT 25084</AddressLine>
<CountryCode>IT</CountryCode>
</Address>
[...]
<PropertyOptionInfo>
[...]
<EcoCertified Ind="false" />
I have not been able to find a single hotel where EcoCertified is actually set to true.
Is this something that's actually controlled in two different places on the back end (a setting for ECOH in cryptic and a setting for EcoCertified in XML)? Or am I somehow doing something wrong?
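For what it's worth, a quick way to check a batch of saved HotelPropertyDescriptionRS responses for this flag is a namespace-aware lookup. A small Python sketch, using the default namespace declared in the response above:
import xml.etree.ElementTree as ET

# Default namespace declared on HotelPropertyDescriptionRS above
NS = {"h": "http://webservices.sabre.com/sabreXML/2011/10"}

def eco_certified(response_xml):
    """Return the Ind attribute of EcoCertified, or None if absent."""
    root = ET.fromstring(response_xml)
    node = root.find(".//h:EcoCertified", NS)
    return node.get("Ind") if node is not None else None

# Example: scan a folder of saved responses for any true value
# import glob
# for path in glob.glob("responses/*.xml"):
#     print(path, eco_certified(open(path).read()))
That at least makes it easy to confirm whether EcoCertified ever comes back true across a larger sample of properties.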

Mailchimp RSS campaign only includes 1 post

Setting up an RSS campaign with Mailchimp, and I've hit a roadblock. The import seems to work and the design looks great, but we are only ever able to get one post -- the most recent one -- into the email.
The RSS feed is: https://our.news/feed/trending
We have verified that pubDate is included and properly formatted on all items, for example:
<item>
<title>The FBI is warning you to reboot your router to prevent a new attack here’s everything you need to do</title>
<link>https://our.news/2018/05/30/the-fbi-is-warning-you-to-reboot-your-router-to-prevent-a-new-attack-heres-everything-you-need-to-d/</link>
<comments>https://our.news/2018/05/30/the-fbi-is-warning-you-to-reboot-your-router-to-prevent-a-new-attack-heres-everything-you-need-to-d/#comments</comments>
<pubDate>Wed, 30 May 2018 07:33:04 +0000</pubDate>
<dc:creator><![CDATA[OurBot]]></dc:creator>
<category><![CDATA[Headlines]]></category>
<guid isPermaLink="false">https://our.news/?p=103857</guid>
<description><![CDATA[BUSINESSINSIDER.COM – On Friday, the FBI said anyone who uses a router to connect to the internet should reboot their routers. That will “temporarily disrupt...]]></description>
<wfw:commentRss>https://our.news/2018/05/30/the-fbi-is-warning-you-to-reboot-your-router-to-prevent-a-new-attack-heres-everything-you-need-to-d/feed/</wfw:commentRss>
<slash:comments>1</slash:comments>
<media:content xmlns:media="http://search.yahoo.com/mrss/" medium="image" type="image/jpeg" url="https://dsezcyjr16rlz.cloudfront.net/wp-content/uploads/2018/05/30023303/httpsamp.businessinsider.comimages5b0d64001ae66220008b47d5640320.1.jpg.jpg" width="150" height="75" />
</item>
The email design template we're using is simple; the relevant section is:
*|RSSITEMS:[$count=5]|*
<span style="float:left">*|RSSITEM:IMAGE|* </span>
*|RSSITEM:TITLE|*
*|END:RSSITEMS|*
This happens in Preview Mode, in the Test Email, AND in the actual weekly campaign sends. The campaign is set to send weekly, and when it does, it only includes the first item from the list. Ideally, we'd like this to always just include the most recent 5 items. Anyone have any ideas?
Try using a FeedBlock instead:
*|FEEDBLOCK:https://www.url.com/test.xml|*
*|FEEDITEMS:[$count=5]|*
<span style="float:left">*|FEEDITEM:IMAGE|* </span>
*|FEEDITEM:TITLE|*
*|END:FEEDITEMS|*

How to pull Deposits from QuickBooksOnline using IntuitAnywhere

I am attempting to pull all the General Ledger entries from QuickBooks Online into my C# ASP.NET application for a given date range. I have been able to successfully pull Bills, Checks, and JournalEntries that match the Profit and Loss Detail report I'm using for reference. However, I seem to be missing all "Deposit" types from that report. I am pulling data for Invoices and Payments, but they come back empty for the TxnDates I'm looking for.
In case it helps, I'm including the request and response XML logs for Invoices and Payments.
Invoice Request
Filter=TxnDate :AFTER: 2013-02-28T00:00:00-05:00 :AND: TxnDate :BEFORE: 2013-04-01T00:00:00-04:00&PageNum=1&ResultsPerPage=100
Invoice Response
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><qbo:SearchResults xmlns="http://www.intuit.com/sb/cdm/v2" xmlns:qbp="http://www.intuit.com/sb/cdm/qbopayroll/v1" xmlns:qbo="http://www.intuit.com/sb/cdm/qbo"><qbo:CdmCollections xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Invoices"/><qbo:Count>0</qbo:Count><qbo:CurrentPage>1</qbo:CurrentPage></qbo:SearchResults>
Payment Request
Filter=TxnDate :AFTER: 2013-02-28T00:00:00-05:00 :AND: TxnDate :BEFORE: 2013-04-01T00:00:00-04:00&PageNum=1&ResultsPerPage=100
Payment Response
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><qbo:SearchResults xmlns="http://www.intuit.com/sb/cdm/v2" xmlns:qbp="http://www.intuit.com/sb/cdm/qbopayroll/v1" xmlns:qbo="http://www.intuit.com/sb/cdm/qbo"><qbo:CdmCollections xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Payments"/><qbo:Count>0</qbo:Count><qbo:CurrentPage>1</qbo:CurrentPage></qbo:SearchResults>
Deposits are not the same thing as an invoice or a payment. A deposit is a separate transaction indicating that a payment was deposited to the bank.
According to Intuit's documentation, querying for deposits is not supported by the v2 APIs.
