How to create a data:application link for a web font? - css

Since web fonts have some ins and outs pertaining to cross-domain hosting, being a developer who provides code for a multitude of clients who want to use such web fonts for their aesthetic quality can be challenging, especially when trying to detail the technical steps of hosting a file and making sure the URL path points to it properly.
Recently, I have come across a webfont that uses a
data:application/font-woff;charset=utf-8;base64, "longHash"
nomenclature and I am not familiar with this.
One great benefit of this is that it doesn't seem to have the cross-domain pitfalls of using a URL for a font; example here:
http://jsfiddle.net/9336yqkL/1/
If you look at the link you can see that it's a very long series of alphanumeric characters sitting where the URL path typically is.
I wonder, how does one create a path like this?
Help is always appreciated!

Base64 is an encoding scheme for binary data. You can use a decoder to recover the binary contents of the alphanumeric text that comes after base64, and creating such a link is simply the reverse: base64-encode the font file and splice the result in after base64,. The overall technique is called the data URI scheme, and it has a list of pros and cons: http://en.wikipedia.org/wiki/Data_URI_scheme#Advantages_and_disadvantages
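As a minimal sketch of the encoding step (the font file name is a placeholder):

    using System;
    using System.IO;

    class FontDataUri
    {
        static void Main()
        {
            // "myfont.woff" is hypothetical; point this at your own font file
            byte[] fontBytes = File.ReadAllBytes("myfont.woff");
            string base64 = Convert.ToBase64String(fontBytes);

            // The output goes wherever a URL would normally go, e.g. in CSS:
            // src: url(data:application/font-woff;charset=utf-8;base64,...) format('woff');
            Console.WriteLine("data:application/font-woff;charset=utf-8;base64," + base64);
        }
    }

On Linux, base64 -w 0 myfont.woff (GNU coreutils) produces the same payload without writing any code.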

Related

Is there any way to download all Google Webfonts in all formats?

First of all, I am well aware of this question, and I have a strong feeling that it was closed because it was asked by a Google critic.
My sub-question is: am I right to assume that the processing described here is done after the TTF creation, meaning it would not be possible to just batch convert them all together? If it is possible, please tell me how (on Linux).
Even if it were, I would prefer the actual files Google serves rather than having web services or tools convert them, because Google has probably done this the absolute best way possible!
I have a feeling that there is something out there, somewhere, because it may just need a script that spoofs the browser user agent, requests the CSS responses from Google, downloads the font files, and renames them based on the 'local' font name in the CSS. In addition, some web scraping to get the font names and the possible variations (bold/normal/...).
This is why I am trying this again. In addition to that: I am a web programmer and I need this for my localhost web development, so this is a perfectly valid question for Stack Overflow in my mind, and I am sure many have the desire to get their hands on all the fonts in an easy way. So please at least migrate it somewhere if you think it's not a valid question!
Extract the list of available fonts from https://fonts.google.com/metadata/fonts. Then use goog-webfont-dl to download all the formats of each font. I used the following command to download each font in the list (quoting $line so multi-word family names survive word splitting):

    cat ~/fonts.txt | while read line; do goog-webfont-dl -a -f "$line"; done

Here is a gist with all the fonts up to 2016-07-12.
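If you would rather build fonts.txt programmatically, here is a rough sketch. The )]}' prefix handling and the familyMetadataList/family field names are assumptions about how the metadata endpoint has responded in the past, so verify them against the live output before relying on this:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Net.Http;
    using System.Text.Json;

    class DumpFontList
    {
        static void Main()
        {
            using var http = new HttpClient();
            string raw = http.GetStringAsync("https://fonts.google.com/metadata/fonts").Result;

            // The endpoint has been known to prepend ")]}'" to defeat JSON
            // hijacking (assumption); skip ahead to the first brace.
            string json = raw.Substring(raw.IndexOf('{'));

            var names = new List<string>();
            using JsonDocument doc = JsonDocument.Parse(json);
            // "familyMetadataList" and "family" are assumed field names
            foreach (JsonElement fam in doc.RootElement.GetProperty("familyMetadataList").EnumerateArray())
                names.Add(fam.GetProperty("family").GetString());

            File.WriteAllLines("fonts.txt", names);
        }
    }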

Can anyone provide good info on the various uses of hash (#) in URLs?

I'm developing software which is going to provide in-depth information about URLs.
While the get-params are simple, I'm having trouble with the hash.
At first it was used to mark places in the document to navigate to, but we're past that now. I've seen JS frameworks using it to store params, similar to GET strings.
So, here's my question: is everything that comes after a hash fair game, or are there any conventions about what it should look like?
Try these, they could help: Fragment Identifier on Wikipedia, or a Google search for Pound Sign. Both have lists of examples you could use.
It all depends on what you need. Hashes are used in modern web applications that make asynchronous calls to the server using ajax. This, for example, allows the user to copy the link and receive the same content after pasting it: the actions taken are put into the hash, which changes the URL, which would otherwise remain static.
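One detail worth knowing for a URL-inspection tool: the fragment is never sent to the server in the HTTP request, which is exactly why client-side apps are free to use it as scratch space. A small illustration (the URL is made up):

    using System;

    class FragmentDemo
    {
        static void Main()
        {
            // a hypothetical ajax-style URL that keeps view state in the fragment
            var uri = new Uri("http://example.com/app#!/photos/42?tab=comments");

            Console.WriteLine(uri.Fragment);     // "#!/photos/42?tab=comments"
            Console.WriteLine(uri.PathAndQuery); // "/app" - all the server ever sees
        }
    }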
You want to read http://www.jenitennison.com/blog/node/154

Localization - First Steps

I'm pretty much after people's opinions/best practices and nuggets of experience here.
I need to produce a new website in ASP.net C# which has the requirement of changing the language based on the user profiles.
I've done a couple of simple samples before, but I'm curious about a slightly lower level; I'm really after resources which I can read and review.
What design patterns are in place for doing things like translating grids of data into different cultures?
If I'm going to store currency info, is it standard practice to store the exchange rates also?
If I'm going down the route of a standard ASP.net web application, can I use URL routing to help pick the culture to use? For instance, www.mynewsite.com/en-GB/default.aspx (there is a sketch of this after the answer below).
Wisdom/Thoughts welcome.
Thanks for looking and thanks more for answering,
Mike
A couple of things that I've learned:
Absolutely and brutally minimize the number of images you have that contain text. Doing so will make your life a billion percent easier since you won't have to get a new set of images for every friggin' language.
Be very wary of css positioning that relies on things always remaining the same size. If those things contain text, they will not remain the same size, and you will then need to go back and fix your designs.
If you use character types in your sql tables, make sure that any of those that might receive international input are unicode (nchar, nvarchar, ntext). For that matter, I would just standardize on using the unicode versions.
If you're building SQL queries dynamically, make sure that you include the N prefix before any quoted text if there's any chance that text might be unicode. If you end up with garbage in a SQL table, check whether that prefix is missing.
Make sure that all your web pages definitively state that they are in a unicode format. See Joel Spolsky's article, The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets.
You're going to be using resource files a lot for this project. That's good - ASP.NET 2.0 has great support for such. You'll want to look into the App_LocalResources and App_GlobalResources folder as well as GetLocalResourceObject, GetGlobalResourceObject, and the concept of meta:resourceKey. Chapter 30 of Professional ASP.NET 2.0 has some great content regarding that. The 3.5 version of the book may well have good content there as well, but I don't own it.
Think about fonts. Many of the standard fonts you might want to use aren't unicode capable. I've always had luck with Arial Unicode MS, MS Gothic, MS Mincho. I'm not sure about how cross-platform these are, though. Also, note that not all fonts support all of the Unicode character definition. Again, test, test, test.
Start thinking now about how you're going to get translations into this system. Go talk to whoever is your translation vendor about how they want data passed back and forth for translation. Think about the fact that, through your local resource files, you will likely be repeating some commonly used strings through the system. Do you normalize those into global resource files, or do you have some sort of database layer where only one copy of each text is generated? In our recent project, we used resource files which were generated from a database table that contained all the translations and the original, English version of the resource files.
Test. Generally speaking I will test in German, Polish, Hebrew or Arabic, and an Asian language (Japanese, Chinese, Korean). German and Polish are wordy and nearly guaranteed to stretch text areas, Asian languages use an entirely different set of characters which tests your unicode support, and Hebrew and Arabic are both right-to-left languages.
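On the URL-routing sub-question from above: a minimal sketch of resolving the culture from a URL segment such as en-GB. The fallback culture and the place you hook this in are illustrative; in ASP.NET you would typically do it early in the request pipeline:

    using System;
    using System.Globalization;
    using System.Threading;

    class CultureFromUrl
    {
        static void Main()
        {
            // e.g. pulled from the first segment of www.mynewsite.com/en-GB/default.aspx
            string segment = "en-GB";
            CultureInfo culture;
            try
            {
                culture = CultureInfo.GetCultureInfo(segment);
            }
            catch (ArgumentException) // unrecognized culture name
            {
                culture = CultureInfo.GetCultureInfo("en-US"); // illustrative fallback
            }

            Thread.CurrentThread.CurrentCulture = culture;   // dates, numbers, currency formatting
            Thread.CurrentThread.CurrentUICulture = culture; // which resource files get used

            Console.WriteLine(DateTime.Now.ToLongDateString());
        }
    }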

How to scrape websites such as Hype Machine?

I'm curious about website scraping (i.e. how it's done, etc.); specifically, I'd like to write a script to perform the task for the site Hype Machine.
I'm actually a Software Engineering undergraduate (4th year), but we don't really cover any web programming, so my understanding of JavaScript/RESTful APIs/all things web is pretty limited, as we're mainly focused on theory and client-side applications.
Any help or directions greatly appreciated.
The first thing to look for is whether the site already offers some sort of structured data, or if you need to parse through the HTML yourself. Looks like there is an RSS feed of latest songs. If that's what you're looking for, it would be good to start there.
You can use a scripting language to download the feed and parse it. I use Python, but you could pick a different scripting language if you like. Here are some docs on how you might download a URL in Python and parse XML in Python.
Another thing to be conscious of when you write a program that downloads a site or RSS feed is how often your scraping script runs. If you have it run constantly so that you'll get the new data the second it becomes available, you'll put a lot of load on the site, and there's a good chance they'll block you. Try not to run your script more often than you need to.
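The same idea works in C# if you'd rather not use Python (the feed URL below is a placeholder; substitute whatever feed you find on the site). The built-in syndication classes fetch and parse RSS in a few lines:

    using System;
    using System.ServiceModel.Syndication; // reference System.ServiceModel.Syndication
    using System.Xml;

    class FeedDemo
    {
        static void Main()
        {
            // placeholder URL - substitute the real feed you found on the site
            using XmlReader reader = XmlReader.Create("http://example.com/latest.rss");
            SyndicationFeed feed = SyndicationFeed.Load(reader);

            foreach (SyndicationItem item in feed.Items)
                Console.WriteLine($"{item.PublishDate:d}  {item.Title.Text}");
        }
    }

Running this on a schedule (rather than in a tight loop) keeps you within the polite-scraping advice above.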
You may want to check the following books:
"Webbots, Spiders, and Screen Scrapers: A Guide to Developing Internet Agents with PHP/CURL"
http://www.amazon.com/Webbots-Spiders-Screen-Scrapers-Developing/dp/1593271204
"HTTP Programming Recipes for C# Bots"
http://www.amazon.com/HTTP-Programming-Recipes-C-Bots/dp/0977320677
"HTTP Programming Recipes for Java Bots"
http://www.amazon.com/HTTP-Programming-Recipes-Java-Bots/dp/0977320669
I believe that the most important thing you must analyze is what kind of information you want to extract. If you want to crawl entire websites like Google does, probably your best option is to look at tools like Nutch from Apache.org or the Flaptor solution http://ww.hounder.org. If you need to extract particular areas of unstructured documents - websites, docs, PDFs - you can probably extend Nutch plugins to fit particular needs. nutch.apache.org
On the other hand, if you need to extract particular text or clipped areas of a website, where you set rules using the DOM of the page, you should probably check tools like mozenda.com. With those tools you will be able to set up extraction rules in order to scrape particular information from a website. Take into consideration that any change to a webpage may break your robot.
Finally, if you are planning to develop a website using outside information sources, you could purchase information from companies such as spinn3r.com, where they sell particular niches of information ready to be consumed. You would be able to save lots of money on infrastructure.
Hope it helps!
Sebastian.
Python has the feedparser module, located at feedparser.org, which handles RSS and Atom in their various flavours. No reason to reinvent the wheel.

What do I need to know to globalize an asp.net application?

I'm writing an asp.net application that will need to be localized to several regions other than North America. What do I need to do to prepare for this globalization? What are your top one or two resources for learning how to write a world-ready application?
A couple of things that I've learned:
Absolutely and brutally minimize the number of images you have that contain text. Doing so will make your life a billion percent easier since you won't have to get a new set of images for every friggin' language.
Be very wary of css positioning that relies on things always remaining the same size. If those things contain text, they will not remain the same size, and you will then need to go back and fix your designs.
If you use character types in your sql tables, make sure that any of those that might receive international input are unicode (nchar, nvarchar, ntext). For that matter, I would just standardize on using the unicode versions.
If you're building SQL queries dynamically, make sure that you include the N prefix before any quoted text if there's any chance that text might be unicode (see the sketch after this list). If you end up with garbage in a SQL table, check whether that prefix is missing.
Make sure that all your web pages definitively state that they are in a unicode format. See Joel's article, linked in another answer below.
You're going to be using resource files a lot for this project. That's good - ASP.NET 2.0 has great support for such. You'll want to look into the App_LocalResources and App_GlobalResources folder as well as GetLocalResourceObject, GetGlobalResourceObject, and the concept of meta:resourceKey. Chapter 30 of Professional ASP.NET 2.0 has some great content regarding that. The 3.5 version of the book may well have good content there as well, but I don't own it.
Think about fonts. Many of the standard fonts you might want to use aren't unicode capable. I've always had luck with Arial Unicode MS, MS Gothic, MS Mincho. I'm not sure about how cross-platform these are, though. Also, note that not all fonts support all of the Unicode character definition. Again, test, test, test.
Start thinking now about how you're going to get translations into this system. Go talk to whoever is your translation vendor about how they want data passed back and forth for translation. Think about the fact that, through your local resource files, you will likely be repeating some commonly used strings through the system. Do you normalize those into global resource files, or do you have some sort of database layer where only one copy of each text is generated? In our recent project, we used resource files which were generated from a database table that contained all the translations and the original, English version of the resource files.
Test. Generally speaking I will test in German, Polish, and an Asian language (Japanese, Chinese, Korean). German and Polish are wordy and nearly guaranteed to stretch text areas, Asian languages use an entirely different set of characters which tests your unicode support.
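A quick illustration of the N-prefix point above. The table and column names are made up, and the parameterized form is the better habit anyway, since ADO.NET sends string parameters as nvarchar by default:

    using System.Data.SqlClient;

    class UnicodeInsertDemo
    {
        static void Insert(SqlConnection conn)
        {
            // Literal form: without the N prefix the literal is varchar, and
            // text like '日本語' silently degrades to '???' in an nvarchar column:
            //   INSERT INTO Products (Name) VALUES (N'日本語')

            // Parameterized form: the string parameter arrives as nvarchar.
            using var cmd = new SqlCommand("INSERT INTO Products (Name) VALUES (@name)", conn);
            cmd.Parameters.AddWithValue("@name", "日本語");
            cmd.ExecuteNonQuery();
        }
    }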
Learn about the System.Globalization namespace.
Also, a good book is .NET Internationalization: The Developer's Guide to Building Global Windows and Web Applications.
It would be good to refresh a bit on Unicode if you are targeting other cultures and languages:
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
This is a hard problem. I live in Canada, so multilingualism is a big issue. In all my years of doing software development, I've never seen a solution that I liked. I've seen a lot of solutions that worked, and got the job done, but they've always felt like a big kludge. I would go with #harriyott, and make sure that none of your strings are actually in code. A resource file works well for desktop applications. However in ASP.Net, I'd recommend using the database. #John Christensen also has some good pointers.
Make sure you're compiling with Code Analysis turned on, and pay attention to the Globalization warnings that it gives you. Keep data in an invariant format (CultureInfo.InvariantCulture) until you display it to the user (then use CultureInfo.CurrentCulture).
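A small illustration of that invariant-versus-current split (the cultures shown are arbitrary):

    using System;
    using System.Globalization;

    class CultureRoundTrip
    {
        static void Main()
        {
            decimal price = 1234.56m;

            // Persist and round-trip with the invariant culture so stored data
            // never depends on whichever server/user locale happens to be active.
            string stored = price.ToString(CultureInfo.InvariantCulture);        // "1234.56"
            decimal parsed = decimal.Parse(stored, CultureInfo.InvariantCulture);

            // Format for a human only at display time, in the request's culture.
            Console.WriteLine(parsed.ToString("C", CultureInfo.GetCultureInfo("de-DE"))); // e.g. "1.234,56 €"
            Console.WriteLine(parsed.ToString("C", CultureInfo.GetCultureInfo("en-GB"))); // e.g. "£1,234.56"
        }
    }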
I would seriously consider reading the following CodeProject article:
Globalization and localization demystified in ASP.NET 2.0
It covers everything from cultures and locales, setting the thread's current culture, resource files, encodings, you name it!
And of course it's loaded with pretty pictures and examples :-). Good luck!
I would suggest:
Put all strings in either the database or resource files.
Allow extra space for translated text, as some languages (e.g. German) are wordier.
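Since resource files come up in nearly every answer here, a minimal sketch of a global resource lookup in ASP.NET (the resource file and key names are hypothetical):

    using System.Web;

    class ResourceDemo
    {
        static string Welcome()
        {
            // Assumes App_GlobalResources/Strings.resx with a "WelcomeMessage"
            // entry, plus culture-specific siblings such as Strings.de.resx.
            // The lookup follows the current thread's CurrentUICulture.
            return (string)HttpContext.GetGlobalResourceObject("Strings", "WelcomeMessage");
        }
    }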
