Serving a vCard (.vcf) file through Iron Router - meteor

I'm trying to wrap my head around how I can deliver a file through Iron Router. Here is what I am trying to accomplish:
1) User opens URL like http://website.com/vcard/:_id
2) Meteor generates vCard file
BEGIN:VCARD
VERSION:3.0
N:Gump;Forrest;;Mr.
FN:Forrest Gump
ORG:Bubba Gump Shrimp Co.
TITLE:Shrimp Man
PHOTO;VALUE=URL;TYPE=GIF:http://www.example.com/dir_photos/my_photo.gif
TEL;TYPE=WORK,VOICE:(111) 555-1212
TEL;TYPE=HOME,VOICE:(404) 555-1212
ADR;TYPE=WORK:;;100 Waters Edge;Baytown;LA;30314;United States of America
LABEL;TYPE=WORK:100 Waters Edge\nBaytown\, LA 30314\nUnited States of America
ADR;TYPE=HOME:;;42 Plantation St.;Baytown;LA;30314;United States of America
LABEL;TYPE=HOME:42 Plantation St.\nBaytown\, LA 30314\nUnited States of America
EMAIL;TYPE=PREF,INTERNET:forrestgump@example.com
REV:2008-04-24T19:52:43Z
END:VCARD
3) User gets the .vcf file and it opens on their phone, in Outlook, etc.
Thanks!

This has little to do with Iron Router itself. You need something that can return a plain text file. Here is a demo that does roughly that:
http://meteorpad.com/pad/TbjQfAnmTAFQcyZ5a/Leaderboard
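That said, Iron Router's server-side routes can serve the file directly. A minimal sketch, assuming a Contacts collection and a hypothetical buildVcard() helper that assembles the BEGIN:VCARD ... END:VCARD text; the response headers are what make phones and Outlook treat the download as a contact file:
Router.route('/vcard/:_id', function () {
  // buildVcard() is a hypothetical helper that renders the contact
  // document into the vCard text shown in the question
  var vcf = buildVcard(Contacts.findOne(this.params._id));

  this.response.writeHead(200, {
    'Content-Type': 'text/vcard',
    // Suggest a .vcf filename so clients save/open it as a contact
    'Content-Disposition': 'attachment; filename="contact.vcf"'
  });
  this.response.end(vcf);
}, { where: 'server' });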

Related

Search text for geocoding Kapellskär harbour

We are having difficulty geocoding a specific location. What would be an appropriate searchtext parameter to use to geocode the "Kapellskär" harbour? The harbour can be found on wego.here.com when searching for "Kapellskärs hamn, E18, SE-760 15 Norrtälje".
We have tried with:
Kapellskärs hamn, E18, SE-760 15 Norrtälje
E18, 76015 KAPELLSKÄR, SWEDEN
76015 KAPELLSKÄR, SWEDEN
Kappelskär 1, 76015 GRÄDDÖ, SWEDEN
Terminalbyggnaden, 76015 KAPELLSKÄR, SWEDEN
Gräddö, 76015 KAPELLSKÄR, SWEDEN
Finnlink, 76015 GRÄDDÖ, SWEDEN
Example request:
https://geocoder.api.here.com/6.2/geocode.json?searchtext=E18%2C%2076015%20KAPELLSK%C3%84R%2C%20SWEDEN&app_id=devportal-demo-20180625&app_code=9v2BkviRwi9Ot26kp2IysQ&gen=9
The closest we get is 3 km away, which is close but not close enough. The harbour is a bit special since it has no street address besides E18, which is 1,890 km long.
Try using Micro Point Unit Addressing with the Geocoder API:
https://geocoder.api.here.com/6.2/geocode.json?searchtext=E18%2C%2076015%20KAPELLSK%C3%84R%2C%20SWEDEN&app_id=xxxx&app_code=xxxxx&gen=9&additionaldata=IncludeMicroPointAddresses,true&locationattributes=mapReference
See the documentation below for more details:
developer.here.com/documentation/geocoder/topics/example-geocode-find-address-with-micropointaddress.html
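A minimal sketch of calling this from code and reading the coordinates back (Python and the requests library are just for illustration; app_id/app_code are placeholders, and the response layout follows the Geocoder API 6.2 JSON):
import requests

params = {
    "searchtext": "E18, 76015 KAPELLSKÄR, SWEDEN",
    "app_id": "xxxx",          # placeholder credentials
    "app_code": "xxxxx",
    "gen": "9",
    "additionaldata": "IncludeMicroPointAddresses,true",
    "locationattributes": "mapReference",
}
resp = requests.get("https://geocoder.api.here.com/6.2/geocode.json", params=params)
resp.raise_for_status()

# Results are nested under Response.View[].Result[] in the 6.2 JSON
for result in resp.json()["Response"]["View"][0]["Result"]:
    pos = result["Location"]["DisplayPosition"]
    print(result["MatchLevel"], pos["Latitude"], pos["Longitude"])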

How to extract data from HTML tags using python

I am trying to extract data from resumes. I'm using pypandoc to convert .docx files to HTML. Below is the code I used:
import pypandoc
output = pypandoc.convert_file('E:/cvparser/backupresumes/xyz.docx', 'html', outputfile="E:/cvparser/abc.html")
assert output == ""
print(output)
Here is the HTML file obtained:
<p>PROFILE SUMMARY</p>
<ul>
<li><p>4 years of experience working in corporate environment as a full stack developer. Strong technical skills in complex website development including web based application.</p></li>
<li><p>ERP application development & enhancement, service delivery and client relationship management in education and industrial domain.</p></li>
</ul>
<p>EDUCATION</p>
<p>MCA (Master of Computer Applications) from CMR Institute of Management Studies – Bangalore University with 78%</p>
<p>BCA (Bachelor of Computer Applications) from Shri SVK College of Business and Management studies - Gulbarga University with 74%.</p>
<p>TECHNICAL SKILLS</p>
<p>Web Technologies: HTML/HTML5, CSS, JavaScript, Ajax, JSON, Apache, Bootstrap.</p>
<p>WORK HISTORY</p>
<ul>
<li><p>Leviosa Consulting Pvt Ltd from Feb 2015 to till date as a sr. Software Developer.</p></li>
<li><p>DRDO – Defence Research and Development Organization from Nov 2014 to Feb 2015 as a contract engineer.</p></li>
</ul>
<p>PROJECTS</p>
<p><strong>I1ERP – Manufacturing Industry</strong></p>
<p>Technologies Used: PHP, MySQL, HTML, CSS, Ajax, Bootstrap, Angular 6.</p>
<p>Duration: 1 Year.</p>
<ul>
<li><p>I1ERP is a fully custom designed application software which itself builds another application without writing code.</p></li>
<li><p>Anyone having knowledge of computer can use this app and build application based on the user requirements.</p></li>
<li><p>This automate and streamline business processes with greater adoptability.</p></li>
<li><p>I1ERP integrates all facets of an operation including product planning, manufacturing, sales, invoice, marketing and Human Resource.</p></li>
</ul>
<p>This software has modules like Document Mgmt., Reminder System, Checklist System, Work Tracking System and Password Mgmt.</p>
<p>PERSONAL DETAILS</p>
<p>Date of Birth: 5<sup>th</sup> Feb 1990</p>
<p>Marital Status: Unmarried</p>
<p>Nationality: Indian</p>
<p>Languages Known: English, Kannada, Telugu and Hindi.</p>
Can someone explain how to extract Work History from this?
Here is one possible solution using BeautifulSoup. The variable data contains the HTML text from the question:
from bs4 import BeautifulSoup
soup = BeautifulSoup(data, 'html.parser')
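# Select the siblings that follow the WORK HISTORY <p> but stop before the
# next <p> heading: later <p> siblings and everything after them are excluded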
for tag in soup.select('p:contains("WORK HISTORY") ~ *:not(p:contains("WORK HISTORY") ~ p, p:contains("WORK HISTORY") ~ p ~ *)'):
print(tag.get_text(strip=True, separator='\n'))
Prints:
Leviosa Consulting Pvt Ltd from Feb 2015 to till date as a sr. Software Developer.
DRDO – Defence Research and Development Organization from Nov 2014 to Feb 2015 as a contract engineer.
I tried this:
import pypandoc
from bs4 import BeautifulSoup
output = pypandoc.convert_file('E:/cvparser/backupresumes/Bapuray.docx', 'html', outputfile="E:/cvparser/Bap.html")
assert output == ""
with open('E:/cvparser/Bap.html') as report:
    raw = report.readlines()
str = """""".join(raw)
#print(str)
soup = BeautifulSoup(str, 'html.parser')
for tag in soup.select('p:contains("WORK HISTORY") ~ *:not(p:contains("WORK HISTORY") ~ p, p:contains("WORK HISTORY") ~ p ~ *)'):
print(tag.get_text(strip=True, separator='\n'))
I got the below error:
NotImplementedError: Only the following pseudo-classes are implemented: nth-of-type
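That error comes from an older BeautifulSoup release whose built-in CSS engine only implements a handful of pseudo-classes; upgrading beautifulsoup4 (4.7+ delegates selectors to soupsieve) should make :contains work. Alternatively, here is a sketch that avoids CSS selectors entirely by walking the siblings of the WORK HISTORY paragraph until the next all-caps heading:
from bs4 import BeautifulSoup

soup = BeautifulSoup(data, 'html.parser')  # data holds the HTML from the question

# Locate the <p> whose text is exactly "WORK HISTORY"
start = soup.find('p', string='WORK HISTORY')

for sib in start.find_next_siblings():
    # Stop at the next section heading, e.g. <p>PROJECTS</p>
    if sib.name == 'p' and sib.get_text(strip=True).isupper():
        break
    print(sib.get_text(strip=True, separator='\n'))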

Extract relevant text from a .txt file in R

I am still at a basic beginner level with R. I am currently working on some natural language processing, and I use the ProQuest Newsstand database. Even though the database lets you download txt files, I don't need everything they provide. The files you can download there look like this:
###############################################################################
____________________________________________________________
Report Information from ProQuest 16 July 2016 09:58
____________________________________________________________
____________________________________________________________
Inhaltsverzeichnis
1. Savills cracks Granite deal to establish US presence ; COMMERCIAL PROPERTY
____________________________________________________________
Dokument 1 von 1
Savills cracks Granite deal to establish US presence ; COMMERCIAL PROPERTY
http:...
Kurzfassung: Savills said that as part of its plans to build...
Links: ...
Volltext: Property agency Savills yesterday snapped up US real estate banking firm Granite Partners...
Unternehmen/Organisation: Name: Granite Partners LP; NAICS: 525910
Titel: Savills cracks Granite deal to establish US presence; COMMERCIAL PROPERTY:   [FIRST Edition]
Autor: Steve Pain Commercial Property Editor
Titel der Publikation: Birmingham Post
Seiten: 30
Seitenanzahl: 0
Erscheinungsjahr: 2007
Publikationsdatum: Aug 2, 2007
Jahr: 2007
Bereich: Business
Herausgeber: Mirror Regional Newspapers
Verlagsort: Birmingham (UK)
Publikationsland: United Kingdom
Publikationsthema: General Interest Periodicals--Great Britain
Quellentyp: Newspapers
Publikationssprache: English
Dokumententyp: NEWSPAPER
ProQuest-Dokument-ID: 324215031
Dokument-URL: ...
Copyright: (Copyright 2007 Birmingham Post and Mail Ltd.)
Zuletzt aktualisiert: 2010-06-19
Datenbank: UK Newsstand
____________________________________________________________
Kontaktieren Sie uns unter: http... Copyright © 2016 ProQuest LLC. Alle Rechte vorbehalten. Allgemeine Geschäftsbedingungen: ...
###############################################################################
What I need is a way to extract only the full text to a CSV file. When I download hundreds of articles in one file, it is quite difficult to copy and paste the texts manually, and the file is quite structured. However, the length of the text varies; nevertheless, one could use the next header after the full text as a stop sign (I guess).
Is there any way to do this?
I really would appreciate some help.
Kind regards,
Steffen
Let's say you have all publication information in a single text file. Make a copy of the file first so you can reset if needed. Using Notepad++ and regex, go through the following steps:
Ctrl+F
Choose the Mark tab.
Search mode: Regular expression
Find what: ^Volltext:\s
Press Alt+M to check Bookmark line (only if it is unchecked)
Click on Mark All
From the main menu go to: Search > Bookmark > Remove Unmarked Lines
Then, in a second pass, go through the following steps:
Ctrl+H
Search mode: Regular expression
Find what: ^Volltext:\s (choose from dropdown)
Replace with: NOTHING (clear text field)
Click on Replace All
Done ...
Try this out:
# Read the whole file into a single string
con <- file("./R/sample text.txt")
content <- paste(readLines(con), collapse="\n")
# Collapse blank lines
content <- gsub(pattern = "\\n\\n", replacement = "\n", x = content)
close(con)
# Keep only the part from "Volltext:" up to the next long underscore rule
content.filtered <- sub(pattern = "(.*)(Volltext:.*?)(_{10,}.*)",
                        replacement = "\\2", x = content)
Results:
> cat(content.filtered)
Volltext: Property agency Savills yesterday snapped up US real estate banking firm Granite Partners...
Unternehmen/Organisation: Name: Granite Partners LP; NAICS: 525910
Titel: Savills cracks Granite deal to establish US presence; COMMERCIAL PROPERTY: [FIRST Edition]
Autor: Steve Pain Commercial Property Editor
Titel der Publikation: Birmingham Post
Seiten: 30
Seitenanzahl: 0
Erscheinungsjahr: 2007
Publikationsdatum: Aug 2, 2007
Jahr: 2007
Bereich: Business
Herausgeber: Mirror Regional Newspapers
Verlagsort: Birmingham (UK)
Publikationsland: United Kingdom
Publikationsthema: General Interest Periodicals--Great Britain
Quellentyp: Newspapers
Publikationssprache: English
Dokumententyp: NEWSPAPER
ProQuest-Dokument-ID: 324215031
Dokument-URL: ...
Copyright: (Copyright 2007 Birmingham Post and Mail Ltd.)
Zuletzt aktualisiert: 2010-06-19
Datenbank: UK Newsstand
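Note that sub() only handles the single-document case; for a file with hundreds of articles you need every match. A minimal sketch of one way to extend it, assuming each full text starts with "Volltext:" and that the following field header ("Unternehmen/Organisation:" in this sample; it may differ per article) serves as the stop sign the question mentions:
con <- file("./R/sample text.txt")
content <- paste(readLines(con), collapse = "\n")
close(con)

# Grab every "Volltext: ..." block, using the next field header as
# the stop sign (assumed to be "Unternehmen/Organisation:" here)
matches <- regmatches(content,
                      gregexpr("Volltext:.*?Unternehmen/Organisation:", content))[[1]]

# Strip the leading label and the trailing stop sign
fulltexts <- gsub("^Volltext:\\s*|\\s*Unternehmen/Organisation:$", "", matches)

# One article per row
write.csv(data.frame(fulltext = fulltexts), "fulltexts.csv", row.names = FALSE)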

RSS feed for gas prices and how to interpret the feed

I am trying to add an RSS feed of gas prices, based on location, to my application.
I googled for RSS feeds of gas prices and came across Motortrend's gas price feed:
http://www.motortrend.com/widgetrss/gas-
The feed itself seems fine, but the price values appear to be encoded as letters, as below:
Chevron 3921 Irvine Blvd, Irvine, CA 92602 (0.0 miles)
Monday, May 10, 2010 9:16 AM
Regular: ZEIECHK Plus: ZEHGIHC Premium: ZEGJEGE Diesel: N/A
How do I interpret these values to come up with the gas price? Or is the encoding internal to Motortrend and not usable elsewhere?
View the source of the Vista Motortrend Sidebar gadget by downloading the file and renaming its extension to .zip. Then unzip it and look at the /js/gas.js file. You will find a JS function called parsePrice(). It is basically a character conversion to get the Unicode value, plus some simple math.
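I don't have the widget's actual constants at hand, so the following is purely a hypothetical illustration of that technique (decodePrice() and the offsets are made up; the real mapping is whatever parsePrice() in gas.js does):
// Hypothetical sketch only -- the real constants live in parsePrice()
function decodePrice(encoded) {
  var digits = '';
  for (var i = 0; i < encoded.length; i++) {
    // charCodeAt() gives the Unicode value; subtracting an assumed base
    // character and reducing mod 10 maps each letter back to a digit
    digits += (encoded.charCodeAt(i) - 'A'.charCodeAt(0)) % 10;
  }
  // Assumed layout: first digit is dollars, the rest are cents/fractions
  return digits.charAt(0) + '.' + digits.slice(1);
}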

Does QExo XQuery string-join work?

When I run string-join, I get weird output like this:
<clinic>
<Name>Olive Street Pediatrics</Name>
<Address>1500 Olive St, Los Angeles, CA 90015</Address>
<PhoneNumberList>'\u04bc','\u04e4'</PhoneNumberList>
<NumberOfPatientGroups>4</NumberOfPatientGroups>
</clinic>
Notice the PhoneNumberList.
The output reported by Altova XMLSpy looks correct (using the same XQuery file)
<clinic>
<Name>Olive Street Pediatrics</Name>
<Address>1500 Olive St, Los Angeles, CA 90015</Address>
<PhoneNumberList>213-512-7457,213-512-7465</PhoneNumberList>
<NumberOfPatientGroups>4</NumberOfPatientGroups>
</clinic>
Does string-join work on Qexo?
Here is my XML file
Here is my XQuery file
I use kawa-1.9.1.jar
In this case Altova is producing the correct result. The bug is with Qexo.
Running the query in another processor (XQSharp) produced the same result as Altova.
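For reference, the spec-defined behavior is easy to check in isolation (literal values stand in for the question's phone-number sequence):
string-join(("213-512-7457", "213-512-7465"), ",")
(: per the XQuery Functions & Operators spec, this returns
   "213-512-7457,213-512-7465" :)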
