Is it possible to combine 2 images together into one on watchOS? - watchkit

I am upgrading my watch app from the first version of watchOS. In my first version I was placing UIImageViews on top of each other, rendering them off screen, converting the result to NSData with UIImagePNGRepresentation(), and transferring it across to the watch. As we know, there are limited layout options on the Apple Watch, so if you want cool blur effects behind images, or images on images, they have to be flattened off screen.
Now that I have re-created my targets for watchOS 2, images transferred as NSData through [[WCSession defaultSession] sendMessage:replyHandler:errorHandler:] suddenly come back with an error saying the payload is too large!
So as far as I can see, I either have to work out how to combine images strictly via WatchKit APIs, or use the transferFile: option on WCSession and still render them on the iPhone. The transferFile: option sounds slow and clumsy, since I will have to render the image, save it to disk on the iPhone, transfer it to the watch, and then load it into something I can set on a WatchKit component.
Does anyone know how to merge images on the watch? QuartzCore doesn't seem to be available as a dependency in watch land.

Instead of sendMessage, use transferFile. Note that the actual transfer will happen in a background thread at a time that the system determines to be best.
Sorry, but I have no experience with manipulating images on the watch.
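For what it's worth, the flatten-and-transfer route is only a few lines on the iPhone side. A minimal sketch, assuming an already-activated WCSession (the method name and temp-file name are my own):

#import <UIKit/UIKit.h>
#import <WatchConnectivity/WatchConnectivity.h>

- (void)sendFlattenedImageWithBottom:(UIImage *)bottom top:(UIImage *)top {
    // Draw both images into one off-screen context, bottom layer first.
    UIGraphicsBeginImageContextWithOptions(bottom.size, NO, 0.0);
    [bottom drawInRect:CGRectMake(0, 0, bottom.size.width, bottom.size.height)];
    [top drawInRect:CGRectMake(0, 0, bottom.size.width, bottom.size.height)];
    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Write the flattened PNG to a temporary file and queue it for transfer.
    NSData *png = UIImagePNGRepresentation(combined);
    NSURL *url = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:@"combined.png"]];
    [png writeToURL:url atomically:YES];
    [[WCSession defaultSession] transferFile:url metadata:nil];
}

On the watch side the file arrives in the session:didReceiveFile: delegate method, and the WCSessionFile's fileURL can be loaded with imageWithContentsOfFile: and set on a WKInterfaceImage.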

Related

"Unable to find image named XXX on watch" when I use image cache

I use the - (BOOL)addCachedImage:(UIImage *)image name:(NSString *)name API of WKInterfaceDevice to store images.
The issue is that most of the time, when I try to use those cached images via setImageNamed: on WKInterfaceImage, I get this "Unable to find image named XXX on watch" error, which results in an empty image on screen.
To be clear, this does NOT happen all the time.
This occurs both on simulator and on device.
Go to Assets.xcassets in the Project Navigator on the left sidebar in Xcode.
There choose the image that is not found.
For me it worked as soon as I had the image placed in the "2x" placeholder.
If you just import the images, they are placed in the 1x spot, but the Apple Watch seems to need at least the 2x resolution. Just optimize your images and it will all work seamlessly.
I figured out that image names that are too long (over 255 characters) trigger the bug. As soon as the name used to cache the image is shorter, I don't get the error anymore.
Seems like an Xcode bug; I am facing this issue in Xcode 7 beta 4 but not in beta 2. I researched and experimented with many things, including various suggestions, and it turns out that if the images are set as Universal they are not picked up (http://iphone.tagsstack.com/unable_to_find_image_named_ldquo_xxrdquo_on_iwatch_error). However, if you select them separately for each of the watch sizes, they show up and the issue is resolved.
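Whatever the root cause, it is worth coding defensively around the cache: addCachedImage:name: returns a BOOL, so you can fall back to pushing the image directly when caching fails. A minimal sketch (the interfaceImage outlet and the cache key are my own):

#import <WatchKit/WatchKit.h>

- (void)showImage:(UIImage *)image {
    // Keep the cache key short; names over ~255 characters trigger the bug above.
    NSString *name = @"wave-01";
    WKInterfaceDevice *device = [WKInterfaceDevice currentDevice];
    if ([device addCachedImage:image name:name]) {
        [self.interfaceImage setImageNamed:name]; // use the cached copy on the watch
    } else {
        [device removeAllCachedImages];       // caching failed (the cache is small); reclaim space
        [self.interfaceImage setImage:image]; // send the image across directly this time
    }
}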

Can we use iOS technologies in an Apple Watch app?

I want to create a music app in which the watch extension shows an audio waveform, so my question is: can we use iOS technologies like OpenGL in a watch app?
You can't run any code on the watch. You can only run code in a WatchKit extension of your iOS app and update a relatively static UI on the watch. You could generate images of the audio waveform in your extension, put them together into an animation, and then update the UI with that.
It would be possible to pass some information from your iOS app to the WatchKit extension running on the phone, which could then update a pre-defined interface in the Watch app. However, if you want to provide a real-time audio waveform, I think this could face major problems with latency.
Note that, as Stephen Johnson states, you could only do this by rendering static images which would then be sent to the watch for display, or by having pre-installed images in your watch interface that you rapidly show or hide to give the impression of levels changing. The latter would be a much more promising approach latency-wise, and given that Apple demonstrates a circular progress indicator made up of 360 images, it might even appear to animate smoothly. However, the key question is whether the peaks would appear on the Watch screen close enough to when they actually occur in the music for the user to see them as linked.
It might be possible to pre-process the audio and build in a delay to both the display of the peaks and the audio playback to manage the communication latency—but testing that would really only be possible once you had Watch hardware in your hand.
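To make the image-based approach concrete, here is a rough sketch of extension-side code that renders waveform frames and ships them as one animated image, so the frames play on the watch without per-frame round trips (frame size, color, and the waveImage outlet are illustrative):

#import <UIKit/UIKit.h>
#import <WatchKit/WatchKit.h>

// Render one waveform frame: a single bar whose height tracks a 0.0-1.0 level.
static UIImage *WaveFrame(CGFloat level, CGSize size) {
    UIGraphicsBeginImageContextWithOptions(size, NO, 2.0); // render at @2x for the watch screen
    [[UIColor greenColor] setFill];
    UIRectFill(CGRectMake(0, size.height * (1.0 - level), size.width, size.height * level));
    UIImage *frame = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return frame;
}

- (void)showLevels:(NSArray *)levels { // levels is an array of NSNumber values
    NSMutableArray *frames = [NSMutableArray array];
    for (NSNumber *level in levels) {
        [frames addObject:WaveFrame(level.doubleValue, CGSizeMake(40, 40))];
    }
    // WatchKit plays the frames on the watch itself once the animated image is set.
    UIImage *animation = [UIImage animatedImageWithImages:frames duration:1.0];
    [self.waveImage setImage:animation]; // self.waveImage is a WKInterfaceImage outlet
}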

How can I seed the iOS simulator photo albums?

I've got a suite of KIF tests for our app, but one part that I can't work out how to cover is where we use UIImagePickerControllers. Obviously I can't check the camera, but I'd like to write a scenario where the user chooses an image from their library. I know that there's +[KIFTestStep stepsToChoosePhotoInAlbum:atRow:column:], but what I don't know is how to set things up so that there's a consistent set of images for the test to choose from. How do I seed the simulator's photo albums?
There are 2 different ways (one involves programming) to populate the Photo Library of the iOS Simulator:
Open Safari in the iOS Simulator, search for some large images in Google, open one, and display it at full size. Then long-press the photo and choose Save. Repeat this with several photos to fill up the library.
Create a folder on your Mac with the images you want to populate the Photo Library with. Then write a small iOS application that iterates over that directory and creates an NSData object from each photo file, saving each one to the Photo Library using the
- (void)writeImageDataToSavedPhotosAlbum:(NSData *)imageData metadata:(NSDictionary *)metadata completionBlock:(ALAssetsLibraryWriteImageCompletionBlock)completionBlock
method of ALAssetsLibrary, as sketched below.
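A minimal sketch of that seeding app's core loop, assuming the photos live in a folder reference named "SeedPhotos" in the app bundle (my own name; the app also prompts for photo-library permission on first use):

#import <AssetsLibrary/AssetsLibrary.h>

- (void)seedPhotoLibrary {
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    // "SeedPhotos" is a hypothetical folder reference added to the app bundle.
    NSString *folder = [[NSBundle mainBundle] pathForResource:@"SeedPhotos" ofType:nil];
    NSArray *files = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:folder error:NULL];
    for (NSString *file in files) {
        NSData *imageData = [NSData dataWithContentsOfFile:[folder stringByAppendingPathComponent:file]];
        if (!imageData) continue; // skip anything unreadable
        [library writeImageDataToSavedPhotosAlbum:imageData
                                         metadata:nil
                                  completionBlock:^(NSURL *assetURL, NSError *error) {
            NSLog(@"Saved %@ -> %@ (error: %@)", file, assetURL, error);
        }];
    }
}

Run it once in the simulator and the seeded images persist until you reset content and settings.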
Here's a project that's working for me:
https://github.com/justin/PopulateSimulatorPhotos
It has proven very useful when you need to reset the simulator again and again, or want to test quickly on all the device types.

Is there a ready-made solution to send just part of an interlaced JPEG depending on the browser resolution?

I'm asking if you know if there is a ready-made solution, not really how to do it.
I'm quite sure I can pull it off myself, even though I've never touched the bytes of a JPEG manually. If you'd like to take a crack at it, you're invited to do so ;)
The basic idea is that you have a site with a few JPEG images, but you want to reduce load as much as possible for mobile users.
So you ensure that all of your JPEGs are progressive, send only the low-frequency scans first, idle the TCP connection, and wait for the client to report how much space is available in the browser window.
Or alternatively, you have some sort of browsercaps.ini or similar and rely on that to get the initial resolution, then have the reporter send a correction if necessary.
I actually need this for two entirely separate environments; one uses PHP and the other uses node.js (the latter is the more important one).
I'm quite sure Picasa Web Albums does this already, or at least did. You could view an image and it would load progressively; then you could enlarge it, and it got blocky but continued to load progressively. I remember being quite impressed by that!
(And it's unfair that Google keeps the cool stuff for themselves; remember their motto!)
Why not send the client a list of images that could be used for a specific img tag, then have the client determine which one it should use?
It is possible to determine the screen size of the device (e.g. document.write(screen.width + 'x' + screen.height);) or the size of the browser window. Instead of adding a src attribute for each image, add the possible sources to an HTML5 data- attribute like so:
<img data-img="mobile:some-img.jpg,desktop:other-img.jpg" />
JavaScript (with jQuery):
$('img').each(function () {
    var sources = $(this).attr('data-img').split(','); // ["mobile:some-img.jpg", "desktop:other-img.jpg"]
    var pick = screen.width < 768 ? sources[0] : sources[1]; // 768px is an arbitrary cutoff
    $(this).attr('src', pick.split(':')[1]);
});

Why do browsers not have a file upload progress bar?

I wonder why no browser out there has such a simple but essential feature. Am I missing something? Is there a technical reason?
I'm tired of all those JavaScript/Flash/Java hacks out there ...
There is no technical reason preventing the browser from calculating the total bytes to be sent and then tracking how many have been received by the server (thanks, Kibbee, for your comment). Firefox had a functional upload progress indicator until version 0.9, but it broke in that release in 2004.
Reading through the Bugzilla updates, it seems that this feature doesn't benefit enough users to get any traction from the developers.
Users who regularly upload very large files tend to use tools like FTP that are designed for this purpose, so they are not affected.
Adding to flamingLogos' argument: you might operate behind a proxy which takes your five megabytes of pure goodness within a second, and then sends it off to the server over a 56 kbit modem.
I perceive a wrong progress bar as slightly worse than no progress bar at all, and for many people it would be wrong all of the time.
Yes, it's silly, and for some reason browser makers are ignoring it.
I would strongly dispute that large-file users use FTP - hardly anyone knows about it anymore, and all the common web apps require HTTP uploads for video, audio, and pictures (e.g. YouTube).
Ironic that user participation and media is the key to Web 2.0, yet the main mechanism for user participation is so poorly handled by browsers.
For Firefox there have been bugs languishing for years, such as this request for a better upload progress display:
https://bugzilla.mozilla.org/show_bug.cgi?id=243468
Get voting! :)
The existing progress bar in the status bar has been broken for years - see bug 249338 - and it will let you silently abort an upload - see bug 432768.
If you are using Firefox, you can use the UploadProgress add-on (https://addons.mozilla.org/en-US/firefox/addon/221510/), designed for this purpose: it displays the progress of your uploads and an estimated remaining time.
You have to post back to upload a file, regardless of whether or not you are being "sneaky" about it (using hidden iframes, for example); the browser's own progress bar (usually down in the status bar) is the file upload progress bar in that sense, although not exactly.
It's just that you can't easily use that data for yourself, so you have to approximate it with a lot of client-to-server communication tricks.
There's no real technical reason you couldn't have a reasonable progress indicator as you do with downloads. You should suggest it as a feature request to your favorite browser.
That said, I think the main reason there are so many javascript/flash/ajax-based upload components isn't so much to provide progress bars (though that's a nice bonus). It's usually because they want to provide a better UI for selecting the data to be uploaded and to sometimes manipulate the data before uploading. The basic file upload feature that's in the HTML specs results in the "Browse..." button that pops up a file open dialog and uploads the raw file data as is to the server.
Chrome has an upload bar that shows the percentage completed.
Or, as Peuchele says, there's also an add-on for Firefox.
The web browser has always been that, a browser of the web. It is a mechanism for consumption. Our ability to upload information through the same portal is somewhat of a hack.
