Is there still documentation or support for https://api.projectoxford.ai/vision/v1? - microsoft-cognitive

I am trying to find more information, but I cannot get access to the API portal page https://www.projectoxford.ai/vision . Do you know if Microsoft is planning to remove the API?

Project Oxford was deprecated in favor of Cognitive Services.
The Vision APIs have the following main paths:
Computer Vision
Analyze content in images and video.
Custom Vision
Customize image recognition to fit your business needs.
Face
Detect and identify people and emotions in images.
Form Recognizer
Extract text, key-value pairs, and tables from documents.
Video Indexer
Analyze the visual and audio channels of a video, and index its content.
Dev API reference for all services ➡ https://eastus.dev.cognitive.microsoft.com/docs/services/
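For migration reference, here is a minimal sketch, assuming the Python requests package, of calling the Computer Vision v1.0 analyze endpoint that superseded api.projectoxford.ai/vision/v1; the region, key, and image URL are placeholders for your own values:

    import requests

    # Placeholders: take the real key and regional endpoint from the Azure portal.
    subscription_key = "YOUR_SUBSCRIPTION_KEY"
    endpoint = "https://eastus.api.cognitive.microsoft.com/vision/v1.0/analyze"

    response = requests.post(
        endpoint,
        params={"visualFeatures": "Categories,Description,Tags"},
        headers={
            "Ocp-Apim-Subscription-Key": subscription_key,
            "Content-Type": "application/json",
        },
        json={"url": "https://example.com/sample.jpg"},  # image to analyze
    )
    response.raise_for_status()
    print(response.json())  # categories, description, and tags for the image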

Related

Azure Cognitive Custom Vision

I would like to use training, prediction, etc. for my Custom Vision project via the REST API. I do not see anywhere that the GET and POST methods are clearly documented with JSON. Is there any reference documentation?
The training APIs are listed here.
The prediction APIs are listed here.
You can download Swagger/WSDL files from those links, if that's what you prefer.
Both of those links are reachable through the Overview page.
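As an illustration, a hedged sketch of calling the prediction endpoint from Python; the region, version path, and response field names here are assumptions based on the v1.0 API, so treat the reference links above as authoritative:

    import requests

    # Placeholders: find your real values on the Custom Vision portal.
    prediction_key = "YOUR_PREDICTION_KEY"
    project_id = "YOUR_PROJECT_ID"
    url = ("https://southcentralus.api.cognitive.microsoft.com/"
           "customvision/v1.0/Prediction/" + project_id + "/url")

    response = requests.post(
        url,
        headers={"Prediction-Key": prediction_key,
                 "Content-Type": "application/json"},
        json={"Url": "https://example.com/test-image.jpg"},
    )
    response.raise_for_status()
    for prediction in response.json()["Predictions"]:
        print(prediction["Tag"], prediction["Probability"])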

How is Branch app indexing different from Firebase app indexing?

I wanted to know which one is easier to implement. With the Branch app indexing method, is it required to implement app content sitemaps?
Full disclosure: I'm on the Branch.io team.
The way Firebase and Branch implement app indexing is fairly similar. In fact, Branch uses exactly the same methods for indexing as Firebase does, and adds some additional functionality on top. Branch acts as a wrapper for your own website, or as your full hosted website from the perspective of Firebase. So, when it comes to indexing with Google, you index a Branch link whereas Firebase requires you to submit your own site.
From the perspective of a developer, assuming the only thing you're trying to do is app indexing, Branch is slightly simpler to use and gives you rich analytics about the traffic from this channel but neither one is a lot of work. However, both platforms also provide other features that may sway your decision. If you're doing any sort of content sharing (i.e., your users create links to post on social media), Branch gives you app indexing basically 'for free' in the same library, whereas Firebase would require you to implement both features separately.
Both tools are free to use.
Firebase
Offers a lot of features (of which app indexing is just one), all implemented to a 'fairly good' level. This makes the Firebase platform an attractive choice for a small, new app that needs a lot of basic infrastructure and doesn't necessarily plan to require advanced functionality later on.
On Firebase, App Indexing for Android apps is implemented via integrating the Firebase App Indexing SDK and making a verified link between your website and your app (usually via Digital Asset Links or the Google Search Console). The 'Firebase App Indexing' SDK is actually just Google's old App Indexing SDK that's been rebranded and repackaged in a peculiar way.
You then register content items inside your app using the SDK and cross your fingers in hopes that Google will index them — there's no feedback on the process. App Indexing for iOS apps is based on crawling URLs that have been enabled for Apple's Universal Links. There is a Firebase App Indexing SDK for iOS, but to be honest I have no idea what it does. We've never seen any benefit or change to indexing behavior on iOS when it's integrated. On both platforms, you need to already have a live website, because every piece of content inside your app must also correspond to a specific URL on your site.
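For reference, the verified website-to-app link on Android usually comes down to serving a Digital Asset Links descriptor from your domain. A minimal sketch of generating one in Python, where the package name and certificate fingerprint are placeholders for your own values:

    import json

    # Placeholders: substitute your real package name and signing-cert fingerprint.
    assetlinks = [{
        "relation": ["delegate_permission/common.handle_all_urls"],
        "target": {
            "namespace": "android_app",
            "package_name": "com.example.yourapp",
            "sha256_cert_fingerprints": ["AA:BB:CC:..."],
        },
    }]

    # Serve the output at https://yourdomain.com/.well-known/assetlinks.json
    with open("assetlinks.json", "w") as f:
        json.dump(assetlinks, f, indent=2)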
Branch
A best-in-class, enterprise-grade tool for growth attribution and content sharing, used by many top apps such as Pinterest, Airbnb, Jet.com, etc.
Branch is based around the concept of a single link that works everywhere, on all platforms, and intelligently redirects to the appropriate destination. Every time your users share content or view a piece of content in your app, that action generates a link. Since Google's search index is really just a huge collection of links, this is a perfect match.
On both Android and iOS, Branch de-dupes your app's links for any that point to the same content, packages up the result into an 'app content sitemap' (you don't have to do this yourself if you're using Branch links — it's automatic as soon as you enable the feature) and ships that sitemap file over to Google. In addition, since your links are hosted by Branch, there is no need for you to have an existing website, and you also get access to things like iOS Spotlight Indexing. Branch is compatible with iOS Universal Links by default, and we take care of verifying the connection between your web content and your app. We also monitor the links so we can give you feedback on if/when Google decides to index your content, and so that you can pull out reports on traffic that comes in through app indexed links.
On Android, in addition to the approach above, the Branch SDK helps you to identify pieces of content inside your app and submit them to Google for indexing. This is exactly the same approach as Firebase uses, except since the traffic still goes through a Branch link, you get additional data for attribution and analytics.
Feel free to read the full Branch Google App Indexing integration guide for more details!
Of course, implied in all of this is the assumption Google actually cares about your content enough to display it in search results. They seem to be getting better about this, but at the moment it's still very much a black box without much feedback to you as the developer. At Branch, we're trying to provide as much insight into the process as we can, so at least if your content isn't being indexed by Google you'll know that instead of being left wondering.

Is Here Maps API suitable for our needs?

We are planning to develop a new application that should offer:
Android-capable
Turn by turn with voice navigation
Offline maps (and perhaps routing?)
Satellite maps
Truck issues
As far as I can see, all of the requirements (except offline routing) are included in different Here Maps developer plans. Nevertheless, I still have some questions:
On their web page (https://developer.here.com/plans/api/consumer-mapping), there are two main divisions (API plans and Mobile SDK plans). Which one is better for me, and what is the difference? I mean, it seems clear that I should go for the mobile plans, but I am not sure whether this will limit my development in the future.
There appear to be no pricing options for the Mobile SDKs. We are planning to make the app available to our customers for free, and they will be charged for enhanced services. But seeing that API plans are priced on a volume basis... how do the mobile plans work? (Do they have a cost depending on the number of transactions too?)
Finally, customized POIs are the main advantage of our app and are closed to other users (they will not be made publicly available). Does the Here API include the option to add our POIs, coming from another (our own) database, on the fly?
Thanks in advance,
Jose.
Turn-by-turn guidance is only available via the (Premium) MobileSDK. Via the REST APIs you can get routing, but not turn-by-turn voice guidance. Offline is likewise only available via the Premium MobileSDK. Besides this, the native MobileSDK offers native vector-based map rendering; with the REST APIs you would need to use the raster tiles. So in a nutshell: if you target mobiles, you should definitely go with the MobileSDK. If you need any feature that's only available via the web APIs (platform extensions, isoline routing, and some more), you can still combine these web APIs with the MobileSDK.
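For illustration, a rough sketch of a route calculation via the REST Routing API; the app_id/app_code credentials and coordinates are placeholders, and endpoint versions change over time, so check the developer portal for your plan:

    import requests

    # Placeholders: app_id/app_code come from your HERE developer account.
    response = requests.get(
        "https://route.api.here.com/routing/7.2/calculateroute.json",
        params={
            "app_id": "YOUR_APP_ID",
            "app_code": "YOUR_APP_CODE",
            "waypoint0": "geo!52.5160,13.3779",  # start
            "waypoint1": "geo!52.5206,13.3862",  # destination
            "mode": "fastest;car;traffic:disabled",
        },
    )
    response.raise_for_status()
    summary = response.json()["response"]["route"][0]["summary"]
    print(summary["distance"], "m,", summary["travelTime"], "s")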
Pricing depends on your use cases, so you should discuss your use case with HERE Sales: https://developer.here.com/contact-us?interest=mobile-sdk#contact-sales
"Customized POIs" is quite general, but of course you can load datasets from your own servers and show them as POIs on the map. You could also use the Custom Location Extension (CLE) platform extension, which additionally allows you to search within your dataset and is already seamlessly integrated into the MobileSDK.
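As a sketch of the CLE route, assuming you have already uploaded a POI layer: the endpoint, parameter names, layer ID, and credentials below are placeholders to verify against the CLE documentation:

    import requests

    # Placeholders throughout: layer ID, coordinates, and credentials are
    # illustrative, and the endpoint should be verified against the CLE docs.
    response = requests.get(
        "https://cle.api.here.com/2/search/proximity.json",
        params={
            "app_id": "YOUR_APP_ID",
            "app_code": "YOUR_APP_CODE",
            "layer_ids": "MY_POI_LAYER",          # your uploaded dataset
            "proximity": "52.5160,13.3779,5000",  # lat,lon,radius in meters
        },
    )
    response.raise_for_status()
    print(response.json())  # matching POIs from your own layer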

Do I need to use Azure Media Services and Player for audio only application

I'm looking to build an ASP.NET application that will allow users to upload audio files (only) and playback the media through the site. I was planning to use Azure Blob Storage to hold the media but do I need to use Azure Media Services and Player for upload and playback?
Is using Azure Media Services and Player over the top for audio, as all the examples I can find concentrate on video?
Any help or advice gratefully received.
You do not need to use Azure Media Services or the AMP (Azure Media Player) for upload and playback.
The key benefit (for me, at least) of using AMP
AMP can recognize which platform (iOS, Android, browser, etc.) your user is on and serve the appropriate video format (e.g., HLS on iOS, Smooth Streaming on Microsoft platforms, DASH where it is supported). You used to have to write scaffolding code for that; now the media player handles all of it for you.
Uploading video
You can do this programmatically, either through a web app or compiled app that you write, or through Visual Studio's built-in upload tool. You'll need the Azure SDK installed to do this.
Uploading directly to blob storage
You do not need to upload your video to Azure Media Services directly. Instead, you can upload to a blob storage account (this is what I do), and then have your Azure Media Services account point to your blob storage.
The reason I do this is that I can more easily organize and manage my files there, either by how I name my blobs or by the virtual folders within the container.
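As an illustration of that flow, a minimal sketch using the azure-storage-blob Python package; the connection string, container, and blob names are placeholders, and the same steps apply from .NET:

    from azure.storage.blob import BlobServiceClient

    # Placeholder connection string; use a key or SAS from your storage account.
    service = BlobServiceClient.from_connection_string("YOUR_STORAGE_CONNECTION_STRING")
    container = service.get_container_client("media-uploads")

    with open("episode-01.mp3", "rb") as data:
        # A "/" in the blob name acts as a virtual folder, which is what
        # makes organizing files inside the container easy.
        container.upload_blob(name="audio/episode-01.mp3", data=data, overwrite=True)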
Supported input formats for Azure Media Services
Various video, audio, and image file types can be uploaded to a Media Services account; there is no restriction on the types or formats of files that you can upload using the Media Services SDK. However, the Azure Management portal restricts uploads to the formats that are supported by the Azure Media Encoder.
Content encoded with the following video codecs may be imported into Media Services for processing by Azure Media Encoder:
H.264 (Baseline, Main, and High Profiles)
MPEG-1
MPEG-2 (Simple and Main Profile)
MPEG-4 v2 (Simple Visual Profile and Advanced Simple Profile)
VC-1 (Simple, Main, and Advanced Profiles)
Windows Media Video (Simple, Main, and Advanced Profiles)
DV (DVC, DVHD, DVSD, DVSL)
More information
I have more detailed information on all of this in a series of blog post tutorials on the subject. Let me know if there is anything more that I can do to clarify.

Embed Alfresco WebPreviewer in my own website

I have a Spring MVC application that connects to Alfresco using CMIS libraries. At the moment I can upload and download documents, but I need to integrate Alfresco's WebPreviewer to preview documents in my app.
I found some code here, but I don't know how to make use of it.
It's hard to say for certain because of the limited amount of information that you've provided, but I think the problem you're going to face when trying to use any of the existing previewer code is one of authentication. If you're only using CMIS, then you won't be able to use any of the WebScript-based REST APIs that the Alfresco widgets rely on.
There are two possible previewer widgets that you can use: the older YUI2-based previewer (which you'd currently find on the document details page and in the Document Library film strip view), and the newer Aikau component that you'd find in the faceted search previewer (from version 5.0 onwards).
I suspect that you won't be able to re-use either of these components without either authenticating against Alfresco in a way that allows you to access the WebScript based REST APIs or extending and customizing those widgets.
You've said that you have your own Spring MVC application, but you haven't said whether or not it uses the Surf extension. If it does, and you're using the authentication capabilities provided by Surf, then you will be authenticated to use those REST APIs, as Surf authentication provides access across all the APIs (including CMIS) via a single authentication.
If you are able to access those APIs, then you should be able to follow the steps outlined in the forum post and the blog posts linked in your own question and the previous answer. However, based on your question, I suspect that you can't do that.
If you've not come across it, you might be interested in the Aikau archetype that builds a ready-made Alfresco client using Surf (see this link) and that tutorial also shows how to use the Aikau previewer (see here).
Because this is providing you with a Spring MVC client that is preconfigured to authenticate against Alfresco, you might be able to port your application to use it.
Otherwise, as I said earlier, chances are you'll need to extend the existing widgets to use the CMIS APIs to render the previews. Again, Aikau is easier to extend than the old YUI2 widgets, but it is reliant on Surf.
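If you do end up rendering previews yourself on top of CMIS, pulling the content stream is the easy part. A minimal sketch using the Python cmislib client, where the URL, credentials, and path are placeholders (the equivalent calls exist in Apache Chemistry OpenCMIS for Java):

    from cmislib import CmisClient

    # Placeholders: Alfresco's CMIS 1.1 AtomPub URL, credentials, and path.
    client = CmisClient(
        "http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/atom",
        "admin", "admin")
    repo = client.defaultRepository

    # Fetch the document and save its content stream, e.g. to feed a viewer.
    doc = repo.getObjectByPath("/Sites/mysite/documentLibrary/report.pdf")
    with open("report.pdf", "wb") as f:
        f.write(doc.getContentStream().read())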
