I've been at this for hours now and I can't seem to get this feed to import into Google Calendar:
http://newtest.wpengine.com/programs/?ical=1
It imports fine when I download it and then import it as a file. I even tried making it bare-bones, with just a few lines. It gets 100% in this validator: http://icalvalid.cloudapp.net/. Can anyone spot what I'm missing? Here is the file:
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Blue Mandala Retreat Site
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Deryk Wenaus Blog
X-WR-TIMEZONE:
BEGIN:VEVENT
UID:http://ayahuas.ca/program/sample-program/
DTSTAMP:20130110T001222Z
CONTACT:
DESCRIPTION: a nice program. a nice program.
DTSTART;VALUE=DATE:20130124
DTEND;VALUE=DATE:20130127
LOCATION:
SUMMARY:sample program
URL:http://ayahuas.ca/program/sample-program/
END:VEVENT
END:VCALENDAR
The error I keep getting from Google Calendar is: "The address that you provided did not contain a calendar in a valid iCal or GData format."
I'm using header( 'Content-type: text/calendar; charset=UTF-8' ); as the header, and I've also tried header( 'Content-Disposition: inline; filename="calendar.ics"' ); with no luck.
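For context, the endpoint boils down to roughly this (a simplified sketch, not the actual plugin code; $ical is a stand-in for the generated feed text):
<?php
// Headers must be sent before any output; a stray space or BOM ahead of
// the opening tag can corrupt the feed.
header( 'Content-type: text/calendar; charset=UTF-8' );
header( 'Content-Disposition: inline; filename="calendar.ics"' );
echo $ical; // the iCalendar text shown above
exit;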
Google Calendar uses Googlebot to request the calendar from your site. Maybe your server is preventing Googlebot from reaching the iCal URL. Check your robots.txt.
It is most likely the fact that you have specified a URL property value which does not point to the .ics file itself.
The same iCalendar file without the URL property should work. (I uploaded your file to Google Calendar as a file rather than via the URL and it worked, but it kept trying to load a calendar; after removing the URL property this disappeared.)
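For reference, here is the same feed with the URL property removed (the empty CONTACT, LOCATION, and X-WR-TIMEZONE lines are also dropped here, since they carry no information):
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Blue Mandala Retreat Site
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Deryk Wenaus Blog
BEGIN:VEVENT
UID:http://ayahuas.ca/program/sample-program/
DTSTAMP:20130110T001222Z
DESCRIPTION:a nice program. a nice program.
DTSTART;VALUE=DATE:20130124
DTEND;VALUE=DATE:20130127
SUMMARY:sample program
END:VEVENT
END:VCALENDAR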
I want code in JavaScript or PHP to extract an m3u8 URL from a webpage.
That is, when we load a webpage, the code should automatically fetch its m3u8 link and display it on the page.
Does anyone know how to do this?
You have not provided much information, but in general:
You can use PHP with cURL to get the response from the page and then trim the data. If the m3u8 link is embedded or saved on the page directly, you can trim the data accordingly and then echo the URL.
In case the website loads the m3u8 from an API server or from a different URL after starting a session (e.g. site.com/link.php?name=video), you need to send a request to site.com/link.php?name=video with the proper headers, user agent, and cookies if required (API keys).
Then you can decode the JSON data and display your link.
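A rough sketch of the simple case, where the link is embedded directly in the HTML (the page URL is a placeholder and the regex is illustrative, not tied to any particular site):
<?php
// Fetch the page with cURL and pull out the first .m3u8 URL in its HTML.
$pageUrl = 'https://example.com/video-page';

$ch = curl_init( $pageUrl );
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
curl_setopt( $ch, CURLOPT_FOLLOWLOCATION, true );
curl_setopt( $ch, CURLOPT_USERAGENT, 'Mozilla/5.0' ); // some sites reject empty user agents
$html = curl_exec( $ch );
curl_close( $ch );

// Works only when the link sits directly in the page source; sites that
// load it from an API after starting a session need the extra headers
// and cookies described above instead.
if ( preg_match( '#https?://[^\'"\s]+\.m3u8[^\'"\s]*#', $html, $m ) ) {
    echo $m[0];
} else {
    echo 'No m3u8 link found in the page source.';
}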
I was going to post this in the Workbox GitHub repo, but I doubt it's a bug; more likely it's my own misunderstanding. I have found some slightly similar questions, but none of the answers seem to clearly explain how I can resolve my issue.
In my sw.js file I am precaching the Home URL and the Start URL. The Start URL is exactly the same as the Home URL, except that it appends ?utm_source=pwa. This is a technique I've read others use to track PWA usage in Google Analytics, and I like the idea.
However, now when a new user arrives at the website, they load the initial page, then Workbox fetches the Home URL, and then fetches the Start URL. This means that a user who arrives at the homepage will have loaded that page three times. I'd like to figure out how to get Workbox to realize that the Home URL and Start URL are essentially the same and skip that third fetch request.
I understand that ignoreURLParametersMatching defaults to [/^utm_/], which I would expect to do what I described above, but perhaps I'm understanding it incorrectly and it does not apply to precached URLs...? Does it apply automatically if I don't explicitly pass it to precacheAndRoute()?
To clarify, my expectation of ignoreURLParametersMatching is that it precaches the Home URL, and when it attempts to cache the Start URL it ignores (removes) the UTM parameter, sees that it already has that URL cached, and does not fetch it again. Then, when the Start URL is requested from the cache, it again ignores the UTM parameter and responds with the entry it has in the cache. Is this far off from reality? If so, how should I achieve both my tracking and avoid the "duplicate" fetch?
Here are some excerpts of my sw.js file:
const HOME_URL = 'https://gearside.com/nebula/';
const START_URL = 'https://gearside.com/nebula/?utm_source=pwa';
workbox.precaching.precacheAndRoute([
  // ...other precached files
  {url: HOME_URL, revision: revisionNumber},
  {url: START_URL, revision: revisionNumber},
]);
Both URLs are precached, and the Network tab shows both fetch requests (screenshots omitted).
Note: I've noticed this problem with or without revision numbers.
TL;DR
Do not include https://gearside.com/nebula/?utm_source=pwa in the precache manifest.
Use the workbox-google-analytics module:
import * as googleAnalytics from 'workbox-google-analytics';
googleAnalytics.initialize();
Long version
You should precache based on unique resources. Every entry defined in the precache manifest will be downloaded and cached.
If https://gearside.com/nebula/ and https://gearside.com/nebula/?utm_source=pwa serve the exact same content, only precache one of them (preferably the one without the query string).
The ignoreURLParametersMatching option takes an array of regular expressions that are tested against the names of the query parameters; any parameter that matches one of them is ignored when matching routes.
For example, this:
precacheAndRoute([
{url: '/styles/main.css', revision: '777'},
], {
ignoreURLParametersMatching: [/.*/]
});
will match any of these requests:
/styles/main.css
/styles/main.css?minified=0
/styles/main.css?minified=0&renew=1
and serve /styles/main.css, because the regex .* matches any query string.
The default value of ignoreURLParametersMatching is [/^utm_/]. If in the example above we omitted ignoreURLParametersMatching, any of the following requests would be matched (and resolved with the precached /styles/main.css):
/styles/main.css
/styles/main.css?utm_hello=yes
/styles/main.css?utm_yes_what=dunno&utm_really=yeah
But the following requests will not go through the precache:
/styles/main.css?remodelate=expensive&utm_pwa=no
/styles/main.css?utm_spa=neither&trees=awesome
because each of them includes a query parameter that does not start with utm_.
More info about the workbox-google-analytics module can be found here: Workbox Google Analytics
I am developing a web scraper and I need to download a .pdf file from a page. I can get the file name from the HTML tag, but I can't find the complete URL (or request body) that downloads the file.
I have tried to sniff the traffic with the Chrome and Firefox network tools and with Wireshark, with no success. I can see the page make a POST request to the exact same URL as the page itself, and I can't understand why this happens. My guess is that the filename is being sent inside the POST request body, but I also can't find that information in those tools. If I could see the variable name in the body, I could create a copy of the request and then get the file.
How can I get that information?
Here is the website I am talking about: http://www2.trt8.jus.br/consultaprocesso/formulario/ProcessoConjulgado.aspx?sDsTelaOrigem=ListarProcessos.aspx&iNrInstancia=1&sFlTipo=T&iNrProcessoVaraUnica=126&iNrProcessoUnica=1267&iNrProcessoAnoUnica=2010&iNrRegiaoUnica=8&iNrJusticaUnica=5&iNrDigitoUnica=24&iNrProcesso=1267&iNrProcessoAno=2010&iNrProcesso2a=0&iNrProcessoAno2a=0
EDIT: For those seeking to do something similar, take a look at this website: http://curl.trillworks.com/
It converts a cURL command into Python requests code. Very useful.
The POST data used for the request is encoded content generated by ASP.NET. It contains various state/session information about the page that the link is on. This makes it difficult to scrape the URL directly.
You can examine the request by exporting a HAR from the Network tab in Chrome DevTools (screenshot omitted).
The __EVENTVALIDATION data is used to ensure that events raised on the client originate from the controls rendered on the page by the server.
You might be able to achieve what you want by requesting the page the link is on first, then extracting the required POST data from the response (containing the page state and the embedded request for the file), and then making a new request with this information. This assumes the server doesn't expire the session in the meantime.
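A rough sketch of that two-step flow with PHP cURL follows. The __VIEWSTATE and __EVENTVALIDATION names are standard ASP.NET hidden fields, but the __EVENTTARGET value below is hypothetical; the real control name has to be read from the download link's onclick handler on the actual page.
<?php
$pageUrl = 'http://www2.trt8.jus.br/consultaprocesso/formulario/ProcessoConjulgado.aspx?...'; // full query string as given in the question

$ch = curl_init( $pageUrl );
curl_setopt_array( $ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_COOKIEJAR      => 'cookies.txt', // keep the ASP.NET session cookie
    CURLOPT_COOKIEFILE     => 'cookies.txt',
) );
$html = curl_exec( $ch );

// Step 1: extract the hidden state fields from the page the link is on.
preg_match( '/id="__VIEWSTATE" value="([^"]*)"/', $html, $viewstate );
preg_match( '/id="__EVENTVALIDATION" value="([^"]*)"/', $html, $validation );

// Step 2: replay the POST to the same URL with that state plus the
// control that triggers the PDF download.
curl_setopt( $ch, CURLOPT_POST, true );
curl_setopt( $ch, CURLOPT_POSTFIELDS, http_build_query( array(
    '__VIEWSTATE'       => $viewstate[1],
    '__EVENTVALIDATION' => $validation[1],
    '__EVENTTARGET'     => 'ctl00$LinkToPdf', // hypothetical control name
) ) );
$pdf = curl_exec( $ch );
curl_close( $ch );
file_put_contents( 'document.pdf', $pdf );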
I have a link on a website to let users add an ICS feed to their Google Calendar, using this code:
http://www.google.com/calendar/render?cid=https://<etc>
It worked for 3-4 years but not anymore. The message Google sends me is:
This email address isn't associated with an active Google Calendar account: https://<etc>
If I enter the ICS feed manually, things work OK: the feed is parsed as it should be. No errors.
Any idea where to look to fix this?
I too have this problem. Did you solve it?
My tests show that it works when using URLs with http, like:
www.google.com/calendar/render?cid=http%3A%2F%2Fwww.example.com%2FCalendar%2FPublic%2520Events.ics
But not with https, like:
www.google.com/calendar/render?cid=https%3A%2F%2Fwww.example.com%2FCalendar%2FPublic%2520Events.ics
My workaround for this is to use http in the link but redirect it to https on the web server. Not very elegant, but it works. The GET won't be encrypted, but at least the response is.
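In PHP, for instance, the feed endpoint itself could do the bounce (a minimal sketch of the workaround, assuming the same script serves the .ics):
<?php
// Redirect plain-http requests to https before serving the feed, so the
// published link can stay http while the response travels encrypted.
if ( empty( $_SERVER['HTTPS'] ) || $_SERVER['HTTPS'] === 'off' ) {
    header( 'Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301 );
    exit;
}
// ...then output the .ics content as usual.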
EDIT: Actually, sending the GET over http instead of https can be a huge security risk if you don't do any more authentication than query-string parameters, which can be hard to avoid for calendar feeds. Anyone who can sniff the GET can then send the same request over https themselves.
Right now it works, even with https.
https://www.google.com/calendar/render?cid=https%3A%2F%2Fwww.example.com%2FCalendar%2FPublic%20Events.ics
Use the webcal protocol for the calendar address:
https://www.google.com/calendar/render?cid=webcal%3A%2F%2Fwww.example.com%2FCalendar%2FPublic%20Events.ics
OK, so for me none of the other answers on this page worked, but I figured it out with bits and pieces of them.
Link to my calendar's ics:
https://example.com/calendar?id=a304036ea5a474ee5d80a100d79c231c
The right way to link it to Google Calendar:
https://calendar.google.com/calendar/r?cid=webcal%3A%2F%2Fexample.com%2Fcalendar%3Fid%3Da304036ea5a474ee5d80a100d79c231c
The key here is adding it using the webcal protocol instead of the https protocol.
Hope this helps someone.
They got stricter.
Now you have to use:
http://www.google.com/calendar/render?cid=http%3A%2F%2Fwww.example.com%2FCalendar%2FPublic%2520Events.ics
Note the encoding.
I searched Google for a good number of hours. Maybe I searched for the wrong keywords.
Here is what I want to do.
I'm posting data to a website which then makes an HTTP POST request and returns a .cgi webpage. I want to know the parameters the page uses to send that HTTP POST request, so that I can link directly from my own webpage to the final .cgi page, with the user entering the data on my webpage.
How do I achieve it?
Usually the POST body is piped into STDIN; just read it as a normal file.
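In PHP, for instance, the raw body is exposed on the php://input stream (a minimal sketch; a classic CGI script in another language would read STDIN directly):
<?php
// Read the raw POST body and decode the form-encoded name/value pairs.
$raw = file_get_contents( 'php://input' );
parse_str( $raw, $params );
var_dump( $params ); // the same parameters the .cgi page receives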