I am trying to optimize my page by replacing image URLs with data URIs, but the images are not rendering after converting them to data URIs.
Here is my code for converting normal URLs to data URIs:
$imgurl= "https://www.cashy.in/images/banners/0ad08aafdd0887ed79f9fcc4321d54ed.png";
$type=substr($imgurl, -3);
$newimg=base64_encode($imgurl);
$o_img="data:image/".$type.";base64,".$newimg;
As discussed above, you don't encode the URL of the image itself; you have to encode the actual image data.
As such, you should use something like the following code:
$imgurl= "https://www.cashy.in/images/banners/0ad08aafdd0887ed79f9fcc4321d54ed.png";
$type=substr($imgurl, -3);
$newimg=base64_encode(file_get_contents($imgurl));
$o_img="data:image/".$type.";base64,".$newimg;
However, when doing this you need to understand that you are increasing the size of your generated HTML by the size of the image (plus the 33% overhead inherent in base64 encoding). Only do this when the image itself is very small and the overheads of an extra HTTP request outweigh the extra download required.
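If you want to sanity-check that overhead, here is a rough sketch (in Python rather than PHP, using the same URL as above; not part of the original answer) that compares the raw size with the base64 size:

import base64, urllib.request

raw = urllib.request.urlopen("https://www.cashy.in/images/banners/0ad08aafdd0887ed79f9fcc4321d54ed.png").read()
encoded = base64.b64encode(raw)
print(len(raw), len(encoded), len(encoded) / len(raw))  # the ratio comes out around 1.33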
With the HTTP Range header, clients can request only a certain range of bytes from a server:
GET /myfile.jpg HTTP/1.1
Host: myhost
Range: bytes=1000-1200
If the server supports this feature (and perhaps advertises it with an Accept-Ranges header), the above request will return only the 201 bytes from offset 1000 through 1200.
Is it possible to get usable parts from a JPG image with this method? Say the actual JPG measures 800x1197 pixels. What would have to be done in order to request only a sub-image between the pixels 200x200 and 400x400?
To me it looks like it's only possible to receive horizontally cut slices of the image. But this would already be better than getting the full image file. So in the example above I'd say one could try to download the slice from 200 (y-axis) to 400 (y-axis) and then crop the result on the client side accordingly.
Assume we already know the content-length of the file as well as its actual image size, which may have been determined by a preceding HTTP request:
content length in bytes: 88073
jpg size: 800x1197
Which byte range would I have to request for this image? I assume that JPG has some metadata, which has to be taken into account as well. Or does the compression of JPG render this attempt impossible? It would be OK if the final cut-out does not contain any metadata from the original.
But still, it might be necessary to have an initial request which takes some bytes from the beginning, hoping to fetch the metadata, and based on this the actual byte range might be determined.
It would be very nice if someone could give me a hint on how to approach this.
JPEG encodes compressed data in one or more scans. The scans do not indicate their length. You have to actually decode to get to the end of the scan. The scans span the entire image.
If the JPEG stream is progressively encoded you can read the stream a block at a time, decode the scans, update the output image, and get successively refined views of the image.
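To check whether the server honours range requests at all, a minimal probe could look like this (a sketch using the Python requests library; the URL is a placeholder):

import requests

url = "https://example.org/myfile.jpg"                     # placeholder URL
resp = requests.get(url, headers={"Range": "bytes=1000-1200"})
print(resp.status_code)                    # 206 Partial Content if the range was honoured, 200 otherwise
print(resp.headers.get("Accept-Ranges"))   # typically "bytes" when ranges are supported
print(len(resp.content))                   # 201 bytes for the inclusive 1000-1200 range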
I am trying to cache rendered animations to the Apple Watch (these are generated at run time). I have saved the frames of each animation as JPEG @1x with a compression of 0.1. The sum of all the frames is less than 1.2 MB. I clear the cache before I start caching. However, only about half the animations are cached. The documentation says that the cache is 5 MB. What am I doing wrong?
If you want to send image data to the Watch programmatically (i.e. not at compile time), WKInterfaceDevice provides two methods:
addCachedImage:name: accepts a UIImage, encodes it as PNG image data, and transmits it to the cache. So, if you create a UIImage from JPEG data, you are actually decoding the JPEG data into an image, then re-encoding it as PNG before it's sent to the cache (thereby negating the effects of JPEG-encoding in the first place).
addCachedImageWithData:name: accepts NSData and transmits the unaltered data directly to the cache. So, if you encode your image to NSData using UIImageJPEGRepresentation and pass it to this method, you'll transmit and store less in the cache. I use this technique for all of my images, unless I need the benefits of a PNG image; in that case, I actually encode my own NSData using UIImagePNGRepresentation and send it using this method.
For debugging purposes, it's helpful to use the [[WKInterfaceDevice currentDevice] cachedImages] dictionary to find the size of the cached image data. The dictionary maps each cached image's name to an NSNumber with the size (in bytes) of the cache entry.
I just discovered that if you use this line of code:
[self.image setImageNamed:@"number"]
Your images should be named:
number1.png
number2.png
number3.png
number4.png
I was running into a similar error when I had my images named:
number001.png
number002.png
number003.png
number004.png
In HTTP there are two ways to POST data: application/x-www-form-urlencoded and multipart/form-data. I understand that most browsers are only able to upload files if multipart/form-data is used. Is there any additional guidance on when to use one of the encoding types in an API context (no browser involved)? This might, for example, be based on:
data size
existence of non-ASCII characters
existence of (unencoded) binary data
the need to transfer additional data (like filename)
I basically found no formal guidance on the web regarding the use of the different content-types so far.
TL;DR
If you have binary (non-alphanumeric) data (or a significantly sized payload) to transmit, use multipart/form-data. Otherwise, use application/x-www-form-urlencoded.
The MIME types you mention are the two Content-Type headers for HTTP POST requests that user-agents (browsers) must support. The purpose of both of those types of requests is to send a list of name/value pairs to the server. Depending on the type and amount of data being transmitted, one of the methods will be more efficient than the other. To understand why, you have to look at what each is doing under the covers.
For application/x-www-form-urlencoded, the body of the HTTP message sent to the server is essentially one giant query string -- name/value pairs are separated by the ampersand (&), and names are separated from values by the equals symbol (=). An example of this would be:
MyVariableOne=ValueOne&MyVariableTwo=ValueTwo
According to the specification:
[Reserved and] non-alphanumeric characters are replaced by `%HH', a percent sign and two hexadecimal digits representing the ASCII code of the character
That means that for each non-alphanumeric byte that exists in one of our values, it's going to take three bytes to represent it. For large binary files, tripling the payload is going to be highly inefficient.
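You can see the tripling directly (a quick sketch in Python; the byte values are arbitrary):

from urllib.parse import quote

payload = bytes([0x00, 0x01, 0xFF])   # three arbitrary raw bytes
print(quote(payload, safe=""))        # "%00%01%FF" -> nine characters on the wire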
That's where multipart/form-data comes in. With this method of transmitting name/value pairs, each pair is represented as a "part" in a MIME message (as described by other answers). Parts are separated by a particular string boundary (chosen specifically so that this boundary string does not occur in any of the "value" payloads). Each part has its own set of MIME headers like Content-Type, and particularly Content-Disposition, which can give each part its "name." The value piece of each name/value pair is the payload of each part of the MIME message. The MIME spec gives us more options when representing the value payload -- we can choose a more efficient encoding of binary data to save bandwidth (e.g. base 64 or even raw binary).
Why not use multipart/form-data all the time? For short alphanumeric values (like most web forms), the overhead of adding all of the MIME headers is going to significantly outweigh any savings from more efficient binary encoding.
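As a rough illustration of the trade-off (a sketch using the Python requests library; URLs and field names are made up), requests picks the encoding based on which arguments you pass:

import requests

# short alphanumeric fields: data= sends application/x-www-form-urlencoded
requests.post("https://api.example.org/login", data={"user": "alice", "pin": "1234"})

# binary payloads: files= makes requests build a multipart/form-data body with a boundary
with open("photo.jpg", "rb") as f:
    requests.post("https://api.example.org/upload", data={"caption": "hi"}, files={"photo": f})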
READ AT LEAST THE FIRST PARA HERE!
I know this is 3 years too late, but Matt's (accepted) answer is incomplete and will eventually get you into trouble. The key here is that, if you choose to use multipart/form-data, the boundary must not appear in the file data that the server eventually receives.
This is not a problem for application/x-www-form-urlencoded, because there is no boundary. x-www-form-urlencoded can also always handle binary data, by the simple expedient of turning one arbitrary byte into three 7-bit bytes. Inefficient, but it works (and note that the comment about not being able to send filenames as well as binary data is incorrect; you just send it as another key/value pair).
The problem with multipart/form-data is that the boundary separator must not be present in the file data (see RFC 2388; section 5.2 also includes a rather lame excuse for not having a proper aggregate MIME type that avoids this problem).
So, at first sight, multipart/form-data is of no value whatsoever in any file upload, binary or otherwise. If you don't choose your boundary correctly, then you will eventually have a problem, whether you're sending plain text or raw binary - the server will find a boundary in the wrong place, and your file will be truncated, or the POST will fail.
The key is to choose an encoding and a boundary such that your selected boundary characters cannot appear in the encoded output. One simple solution is to use base64 (do not use raw binary). In base64 3 arbitrary bytes are encoded into four 7-bit characters, where the output character set is [A-Za-z0-9+/=] (i.e. alphanumerics, '+', '/' or '='). = is a special case, and may only appear at the end of the encoded output, as a single = or a double ==. Now, choose your boundary as a 7-bit ASCII string which cannot appear in base64 output. Many choices you see on the net fail this test - the MDN forms docs, for example, use "blob" as a boundary when sending binary data - not good. However, something like "!blob!" will never appear in base64 output.
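For illustration, here is a sketch of that idea in Python (the field name, filename and boundary are made up; I used parentheses rather than "!" because they are also legal boundary characters per RFC 2046 and likewise never occur in base64 output):

import base64

with open("upload.bin", "rb") as f:               # hypothetical file
    encoded = base64.b64encode(f.read())
boundary = "(boundary)"                           # '(' and ')' never appear in base64 output
assert boundary.encode() not in encoded           # the check this answer insists on
body = (
    ("--%s\r\n" % boundary).encode()
    + b'Content-Disposition: form-data; name="file"; filename="upload.bin"\r\n'
    + b"Content-Transfer-Encoding: base64\r\n\r\n"
    + encoded
    + ("\r\n--%s--\r\n" % boundary).encode()
)
# the request would then be sent with: Content-Type: multipart/form-data; boundary="(boundary)"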
I don't think HTTP POST is limited to multipart or x-www-form-urlencoded. The Content-Type header is orthogonal to the HTTP POST method (you can use whatever MIME type suits you). This is also the case for typical HTML-representation-based web apps (e.g. JSON payloads have become very popular for transmitting the payload of AJAX requests).
Regarding RESTful APIs over HTTP, the most popular content types I have come in touch with are application/xml and application/json.
application/xml:
data-size: XML is very verbose, but this is usually not an issue when using compression, and considering that write access (e.g. through POST or PUT) is much rarer than read access (in many cases it is <3% of all traffic). There were rarely cases where I had to optimize write performance.
existence of non-ASCII chars: you can use UTF-8 as the encoding in XML
existence of binary data: you would need to use base64 encoding
filename data: you can encapsulate this inside a field in XML
application/json
data-size: more compact than XML, still text, but you can compress
non-ASCII chars: JSON is UTF-8
binary data: base64 (also see json-binary-question)
filename data: encapsulate as its own field inside the JSON (see the sketch after this list)
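A minimal sketch of that JSON shape (Python; the filename and field names are made up):

import base64, json

with open("photo.jpg", "rb") as f:                # hypothetical file
    payload = {
        "filename": "photo.jpg",
        "content": base64.b64encode(f.read()).decode("ascii"),
    }
body = json.dumps(payload)   # send with Content-Type: application/json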
binary data as its own resource
I would try to represent binary data as its own asset/resource. It adds another call but decouples things better. Example with images:
POST /images
Content-type: multipart/mixed; boundary="xxxx"
... multipart data
201 Created
Location: http://imageserver.org/../foo.jpg
In later resources you could simply inline the binary resource as a link:
<main-resource>
...
<link href="http://imageserver.org/../foo.jpg"/>
</main-resource>
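The client side of that flow might look roughly like this (a sketch with the Python requests library; note it builds multipart/form-data, while the example above shows multipart/mixed):

import requests

with open("foo.jpg", "rb") as f:
    resp = requests.post("http://imageserver.org/images", files={"image": f})
image_url = resp.headers["Location"]   # the URL to inline as a link in later resources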
I agree with much that Manuel has said. In fact, his comments refer to this url...
http://www.w3.org/TR/html401/interact/forms.html#h-17.13.4
... which states:
The content type "application/x-www-form-urlencoded" is inefficient for sending large quantities of binary data or text containing non-ASCII characters. The content type "multipart/form-data" should be used for submitting forms that contain files, non-ASCII data, and binary data.
However, for me it would come down to tool/framework support.
What tools and frameworks do you expect your API users to be building their apps with?
Do they have frameworks or components they can use that favour one method over the other?
If you get a clear idea of your users and how they'll make use of your API, then that will help you decide. If you make the upload of files hard for your API users, then they'll move away, or you'll spend a lot of time supporting them.
Secondary to this would be the tool support YOU have for writing your API and how easy it is for you to accommodate one upload mechanism over the other.
Just a little hint from my side for uploading HTML5 canvas image data:
I am working on a project for a print shop and had some problems uploading images to the server that came from an HTML5 canvas element. I was struggling for at least an hour and could not get the image to save correctly on my server.
Once I set the contentType option of my jQuery ajax call to application/x-www-form-urlencoded, everything went the right way and the base64-encoded data was interpreted correctly and successfully saved as an image.
Maybe that helps someone!
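For anyone doing the server side of this, a minimal sketch (in Python, since the poster's server stack isn't stated; the function and parameter names are made up) of decoding the posted canvas.toDataURL() string:

import base64

def save_canvas_upload(data_url, path):
    # data_url is the string produced by canvas.toDataURL() on the client,
    # e.g. "data:image/png;base64,...": split off the prefix and decode the rest
    header, b64data = data_url.split(",", 1)
    with open(path, "wb") as out:
        out.write(base64.b64decode(b64data))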
If you need to use Content-Type: application/x-www-form-urlencoded then DO NOT use FormDataCollection as the parameter: in ASP.NET Core 2+, FormDataCollection has no default constructor, which is required by the formatters. Use IFormCollection instead:
public IActionResult Search([FromForm]IFormCollection type)
{
return Ok();
}
In my case the issue was that the request's Content-Type was application/x-www-form-urlencoded, but the body of the request actually contained JSON. When we access request.data in Django it cannot convert it properly, so access request.body instead.
Refer this answer for better understanding:
Exception: You cannot access body after reading from request's data stream
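A minimal sketch of that workaround in a Django view (the view name is made up):

import json
from django.http import JsonResponse

def my_view(request):
    # the client sent JSON but labelled it application/x-www-form-urlencoded,
    # so parse the raw bytes yourself instead of relying on request.data
    payload = json.loads(request.body)
    return JsonResponse({"received": payload})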
I've heard a lot about the importance of using sprites in order to get your request count down. But my thinking is that rather than use sprites, you can use data URIs to accomplish the same thing, and much more easily (no sprite creation needed).
Is it better to use sprites or data URIs?
Base64-encoded data is about 1/3 larger than the raw bytes, so on pages where downloading all the image data takes more than three times as long as making a request, CSS sprites are superior from a performance standpoint.
Also, inline data URIs make the file itself take as long to load as the actual data plus the base64-encoded images. If the data URIs are on your actual HTML page, that means rendering stops and waits for the image to load. If the data URIs are in your stylesheet, that means any rules after the data URI have to wait for it before they can be processed. On the other hand, with a sprite file, the images can load concurrently with your other resources. That may be worth the cost of one extra request, especially when you factor in the base64 penalty.
I suppose that support for IE5, 6 and 7 would be a good reason to use sprites over URIs, if those targets are important to you.
How do people view encrypted pictures like the ones on this wiki page? Is there a special program to do it, or did someone decide to do some silly XOR just to make a point about ECB? I'm not a graphics person, so if there are programs to view encrypted pictures, what are they?
Encryption works on a stream of bytes. That is, it takes an array of bytes and outputs another array of bytes. Images are also just an array of bytes. We assign the "r" component of the top-left pixel to the first byte, the "g" component to the second byte, the "b" component to the third byte. The "r" component of the pixel next to that is the fourth byte and so on.
So to "encrypt" an image, you just take a byte array of the pixels in the first image, encrypt it (encryption usually doesn't change the number of bytes - apart from padding) and use those encrypted bytes as the pixel data for the second image.
Note that this is different from encrypting an entire image file. Usually an image file has a specific header (e.g. the JPEG header, etc). If you encrypted the whole file then the header would also be included and you wouldn't be able to "display" the image without decrypting the whole thing.
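A minimal sketch of that pixel-byte approach (Python with Pillow and the cryptography package; the filenames and the all-zeros key are just placeholders):

from PIL import Image
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

img = Image.open("tux.png").convert("RGB")        # hypothetical input file
raw = img.tobytes()                               # the pixel bytes, no file header involved
pad = (-len(raw)) % 16                            # AES works on 16-byte blocks
encryptor = Cipher(algorithms.AES(b"0" * 16), modes.ECB(), backend=default_backend()).encryptor()
enc = encryptor.update(raw + b"\x00" * pad) + encryptor.finalize()
Image.frombytes("RGB", img.size, enc[: len(raw)]).save("tux_ecb.png")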
To view an encrypted image, the image has to be in an uncompressed image format, for example BMP.
PNG, JPEG and so on are compressed images, so you won't be able to display those. Also, the image header has to stay intact (it must not be encrypted).
If you want to encrypt pictures like this, just convert the picture to an uncompressed format, open it with a hex editor and save the image header. After that you can encrypt the rest of the image with AES/ECB.
At last you have to reinsert the original image header. Now you should be able to view the encrypted image.
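A sketch of that header-preserving approach on a BMP file (Python, cryptography package; filenames and key are placeholders, and the pixel-data offset is read from the BMP header rather than assumed):

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

with open("picture.bmp", "rb") as f:              # hypothetical input file
    bmp = f.read()
offset = int.from_bytes(bmp[10:14], "little")     # BMP stores the pixel-data offset at bytes 10-13
header, pixels = bmp[:offset], bmp[offset:]
pad = (-len(pixels)) % 16
encryptor = Cipher(algorithms.AES(b"0" * 16), modes.ECB(), backend=default_backend()).encryptor()
enc = encryptor.update(pixels + b"\x00" * pad) + encryptor.finalize()
with open("picture_ecb.bmp", "wb") as f:
    f.write(header + enc[: len(pixels)])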
It's not just a silly XOR (most cipher modes use XOR somewhere), but yes, it's just there to emphasize that any scheme which converts the same input to the same output every time makes it easy to spot patterns that were present in the input. The image is there to show how easily we can spot Tux in the "encrypted" output. The author could have used any kind of data, but used an image because the human eye is very good at spotting patterns, so it makes a good example.
As the article says, better schemes use the output of the previous block to "randomize" the next block, so you can't see patterns in the output (a la the image on the right).