The old version of Google Translate

I need the old version of Google Translate (the statistical model, the version used before 2016) for my research.
Is there any way to access the old version?
Thanks

See the update below
Original answer
Yes, as of 2020, Google Translate's statistical machine translation is still available as the phrase-based machine translation (PBMT) model.
https://cloud.google.com/translate/docs/basic/translating-text
Using the model parameter
You can specify which model to use for translation by using the model query parameter. Specify base to use the PBMT model, and nmt to use the NMT model. If you specify the NMT model in your request and the requested language translation pair is not supported for the NMT model, then the PBMT model is used.
There are similar options for the Microsoft API and the undocumented Google APIs.
My guess is that there are no statistical systems available for newly added language pairs - a major advantage of massive multilingual models is not having to train or deploy separate systems for the long tail.
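For context, here is a minimal sketch of what such a request looked like against the v2 (Basic) REST API, using Python's requests library. The API key is a placeholder, and (per the update below) requests for model=base are now silently served by NMT:

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder; a real Cloud API key is required
    url = "https://translation.googleapis.com/language/translate/v2"
    params = {
        "key": API_KEY,
        "q": "Hello, world",
        "source": "en",
        "target": "de",
        "model": "base",  # "base" = PBMT, "nmt" = neural
    }
    resp = requests.post(url, data=params)
    # Response shape: {"data": {"translations": [{"translatedText": ...}]}}
    print(resp.json()["data"]["translations"][0]["translatedText"])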
Update
No, Google ended statistical machine translation in August 2021.
https://cloud.google.com/translate/docs/release-notes#August_02_2021
August 02, 2021
changed
Removed the Phrase-Based Machine Translation (PBMT) model. For requests that specify the PBMT model, Cloud Translation uses the Neural Machine Translation (NMT) model instead.
https://cloud.google.com/translate/docs/basic/translating-text#model
Note: Translation previously offered a Phrase-Based Machine Translation (PBMT) model (also known as the base model). If you specify that model for translations, Translation uses the NMT model instead.

I doubt it is still running anywhere. The statistical system is a complicated pipeline that is expensive to run and difficult to maintain.
You can try contacting someone from Google Research who works on MT (just have a look at papers on arXiv; the authors' contact emails are there) and ask if they can run it for you.
Alternatively, you can build your own system with Moses, an open-source implementation of statistical MT; the results should be similar to what Google Translate produced (judging from the WMT competition results before 2016).

You can try checking Google Translate in the Web Archive.

Related

How do I deploy an R code model in Salesforce Einstein Analytics Plus? Or find another alternative to Shiny?

tl;dr I want to deploy "live" model results in Python and R, and while Salesforce Einstein advertises this functionality for R and Python, I have only found support for Python. Shiny is too expensive to justify for our limited R-language requirements. Does Einstein R support actually exist?
UPDATE: Tableau has a separate solution from Einstein Analytics that hosts both R and Python - see answer below. Not a feature-rich direct competitor to Shiny, but that's not our use-case.
According to the documentation for Salesforce Einstein Analytics Plus (aka Tableau CRM AI Analytics), data scientists can upload (operationalize) their Python, R, and Matlab code, as described here:
https://www.tableau.com/solutions/ai-analytics (see the section on "Data Science" at the bottom of the page).
I signed up for a trial of Einstein Analytics Plus, and found a link to the "Model Manager." Using Model Manager to deploy Python-language models is well-documented here:
https://help.salesforce.com/s/articleView?id=sf.bi_edd_model_upload_prepare.htm&type=5
For Python, this seems to match the advertised functionality. But there is no indication of how to deploy R language models, which may be part of my team's use case.
I would like to find the equivalent method for deploying an R-language model in Einstein. In particular, is there some other Salesforce / Tableau product I should try, or is this a feature that is simply not available in the trial version? Unlike with Python deployment, searching the documentation has not yielded answers.
The only reason we're interested in Einstein R support is that it appears to be about 1/10 the cost of Shiny, which is hideously expensive. So any recommendations regarding lightweight alternatives to Shiny would also be helpful.
TIA for anyone who can shine a light on this problem.
ANSWER: There is actually a separate feature in Tableau, distinct from Einstein Analytics, that supports both R and Python; documentation here:
https://help.tableau.com/current/prep/en-us/prep_scripts.htm
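As a rough illustration of that Prep scripting feature: a Python script step (run through TabPy) defines a function over a pandas DataFrame plus an output schema. Everything below is a hedged sketch; the function name and columns are hypothetical, and the prep_* type helpers are provided by Prep at runtime rather than imported:

    import pandas as pd

    # Hypothetical Prep script step: Prep passes the flow's rows in as a
    # pandas DataFrame and uses the returned DataFrame downstream.
    def add_score(df: pd.DataFrame) -> pd.DataFrame:
        df["score"] = df["revenue"] / df["visits"]  # hypothetical columns
        return df

    # Prep calls this to learn the output columns and types. prep_decimal()
    # and prep_int() are injected by Tableau Prep at runtime, not imported.
    def get_output_schema():
        return pd.DataFrame({
            "revenue": prep_decimal(),
            "visits": prep_int(),
            "score": prep_decimal(),
        })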

Google-analytics framework for predictive analysis

I'm trying to use the google-analytics framework to create predictive analysis tools. For example, I would like to cluster my webpage visitors.
In general, is there a list of machine learning algorithms implemented by this framework? For example: regression, clustering, classification, feature selection, etc.
Thank you for any help
Depending on your language of choice, you might want to export your Google Analytics metrics to flat files or a database and then start experimenting with ML models. Two popular languages with stable ML implementations are Python and R. R's caret package includes tools for building a predictive model pipeline, and Python's scikit-learn contains implementations of all major classes of ML algorithms.
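For instance, here is a minimal sketch of the visitor-clustering use case with scikit-learn, assuming a hypothetical ga_visitors.csv export with one row per visitor and made-up metric columns:

    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Hypothetical export: one row per visitor with behavioural metrics.
    df = pd.read_csv("ga_visitors.csv")
    features = df[["sessions", "avg_session_duration", "pages_per_session"]]

    # Standardize the metrics, then cluster visitors into four segments.
    X = StandardScaler().fit_transform(features)
    df["segment"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    print(df.groupby("segment")[features.columns].mean())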
When you say GA framework I'll assume you're referring to the set of Google Analytics APIs listed here. The framework by itself doesn't provide machine learning capabilities. It merely provides access to the processed and aggregated GA data stored in Google's servers. You can use the API and feed the data to a machine learning application/system/program that does all of the stuff you mentioned.

Options for deploying R models in production

There don't seem to be many options for deploying predictive models in production, which is surprising given the explosion in Big Data.
I understand that the open PMML standard can be used to export models as an XML specification. This can then be used for in-database scoring/prediction. However, it seems that to make this work you need the PMML plugin from Zementis, which means the solution is not truly open source. Is there an easier, open way to map PMML to SQL for scoring?
Another option would be to use JSON instead of XML to output model predictions. But in this case, where would the R model sit? I'm assuming it would always need to be mapped to SQL...unless the R model could sit on the same server as the data and run against that incoming data using an R script?
Any other options out there?
The following is a list of the alternatives that I have found so far for deploying an R model in production. Note that the workflow varies significantly between these products, but they are all somehow aimed at facilitating the process of exposing a trained R model as a service:
openCPU
AzureML
DeployR
yhat (already mentioned by @Ramnath)
Domino
Sense.io
The answer really depends on what your production environment is.
If your "big data" are on Hadoop, you can try this relatively new open source PMML "scoring engine" called Pattern.
Otherwise you have no choice (short of writing custom model-specific code) but to run R on your server. You would use save() to store your fitted models in .RData files, then load() them on the server and run the corresponding predict(). (That is bound to be slow, but you can always try throwing more hardware at it.)
How you do that really depends on your platform. Usually there is a way to add "custom" functions written in R. The term is UDF (user-defined function). In Hadoop you can add such functions to Pig (e.g. https://github.com/cd-wood/pigaddons) or you can use RHadoop to write simple map-reduce code that would load the model and call predict in R. If your data are in Hive, you can use Hive TRANSFORM to call external R script.
There are also vendor-specific ways to add functions written in R to various SQL databases. Again look for UDF in the documentation. For instance, PostgreSQL has PL/R.
You can create RESTful APIs for your R scripts using plumber (https://github.com/trestletech/plumber).
I wrote a blog post about it (http://www.knowru.com/blog/how-create-restful-api-for-machine-learning-credit-model-in-r/), using the deployment of credit models as an example.
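Once a plumber API is running, any production system can consume it over HTTP. Here is a minimal sketch of a client call in Python, with a made-up /predict endpoint and made-up fields (plumber maps a JSON body to the R function's arguments):

    import requests

    # Hypothetical plumber endpoint wrapping an R model's predict().
    resp = requests.post(
        "http://localhost:8000/predict",
        json={"age": 42, "income": 55000},
    )
    resp.raise_for_status()
    print(resp.json())  # e.g. {"probability": [0.73]}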
In general, I do not recommend PMML because the packages you used might not support translation to PMML.
A common practice is scoring a new/updated dataset in R and moving only the results (IDs, scores, probabilities, other necessary fields) into the production environment/data warehouse.
I know this has its limitations (infrequent refreshes, reliance upon IT, data set size/computing power restrictions) and may not be the cutting-edge answer many (of your bosses) are looking for; but for many use cases this works well (and is cost friendly!).
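A sketch of that hand-off, with hypothetical table and column names throughout: the R job scores the batch and writes a file, and a small loader ships only the IDs and scores to the warehouse:

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical warehouse connection and a scored batch produced by the R job.
    engine = create_engine("postgresql://user:password@warehouse:5432/analytics")
    scored = pd.read_csv("scored_batch.csv")

    # Move only the results, not the model or the full feature set.
    scored[["customer_id", "score", "probability"]].to_sql(
        "model_scores", engine, if_exists="append", index=False
    )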
It’s been a few years since the question was originally asked.
For rapid prototyping I would argue the easiest approach currently is to use the Jupyter Kernel Gateway. This allows you to add REST endpoints to any cell in your Jupyter notebook. This works for both R and Python, depending on the kernel you’re using.
This means you can easily call any R or Python code through a web interface. When used in conjunction with Docker it lends itself to a microservices approach to deploying and scaling your application.
Here’s an article that takes you from start to finish to quickly set up your Jupyter Notebook with the Jupyter Kernel Gateway.
Learn to Build Machine Learning Services, Prototype Real Applications, and Deploy your Work to Users
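To make the cell-annotation idea concrete, here is a hedged sketch of a notebook cell in the gateway's notebook-http mode (the endpoint and payload are made up; REQUEST is the JSON string the gateway injects into the cell, and whatever the cell prints becomes the HTTP response):

    # GET /predict
    import json

    req = json.loads(REQUEST)  # REQUEST is injected by the Kernel Gateway
    x = float(req.get("args", {}).get("x", ["0"])[0])  # query args arrive as lists
    print(json.dumps({"prediction": 2 * x}))

The gateway is then pointed at the notebook with something like jupyter kernelgateway --KernelGatewayApp.api=kernel_gateway.notebook_http --KernelGatewayApp.seed_uri=endpoints.ipynb (flag spelling per the Kernel Gateway docs; verify against your version).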
For moving solutions to production the leading approach in 2019 is to use Kubeflow. Kubeflow was created and is maintained by Google, and makes "scaling machine learning (ML) models and deploying them to production as simple as possible."
From their website:
You adapt the configuration to choose the platforms and services that you want to use for each stage of the ML workflow: data preparation, model training, prediction serving, and service management.
You can choose to deploy your workloads locally or to a cloud environment.
Elise from Yhat here.
Like @Ramnath and @leo9r mentioned, our software allows you to put any R (or Python, for that matter) model directly into production via REST API endpoints.
We handle real-time or batch, as well as all of the model testing and versioning + systems management associated with the process.
This case study we co-authored with VIA SMS might be useful if you're thinking about how to get R models into production (their data science team was recoding models into PHP prior to using Yhat).
Cheers!

Is there a tool to determine which Cognos 8.x models are being used?

We have a Cognos 8.x installation with hundreds of reports and dozens of models. We believe that many of the models are not currently in use on any reports and want to remove those models. Are there any tools that can be run against Cognos to list which reports are using which model?
Take a look at MotioPI... it's a third-party app built using the Cognos SDK. You run it against one of your dispatchers, and it proves quite handy for these tasks.
http://www.inmotio.com/investigator/home.do
Not to mention, it's free.
There's an audit package that comes with Cognos installations that you can deploy to log user activity. This will help you understand usage and determine any unused models.
Under a normal installation it is located at c8_location/webcontent/samples/Models/Audit/Audit.cpf
With this package you can, amongst other things, list reports that are used, by package. All the while, you're also setting up an auditing tool.
You can refer to your Administration & Security guide to get information on how to setup and use this package.

Patch vs. Hotfix vs. Maintenance Release vs. Service Pack vs

When you are somewhere between version 1 and version 2, what do you do to maintain your software?
The terms Patch, Hotfix, Maintenance Release, Service Pack, and others are all blurry from my point of view, with different definitions depending on who you talk to.
What do you call your incremental maintenance efforts between releases?
When I hear those terms this is what comes to mind:
Patch - Publicly released update to fix a known bug/issue
Hotfix - update to fix a very specific issue, not always publicly released
Maintenance Release - Incremental update between service packs or software versions to fix multiple outstanding issues
Service Pack - Large update that fixes many outstanding issues, normally includes all Patches, Hotfixes, and Maintenance Releases that predate the service pack
That being said, that isn't how we do updates at all. We just increment the version and/or build number (which is based on the date) and call it an "Update". For most software I find that easier: you can easily see that one computer is running 1.1.50 vs. 1.2.25 and know which is newer.
A hotfix is a fix for a specific issue which is applied while the system is still active (hot). This comes from the older terms like hotswapping and hotswitching. Yes, the term is commonly misused these days by people not involved in the industry.
I'd like to point to http://semver.org/ for an attempt to define version numbers in a sane manner; the definitions given there actually fit closely with how I use version numbers (or how I wish I used them :))
As for the term definitions, I find patch and hotfix very similar, except that a "hotfix" is usually not broadcast if done to a service.
Maintenance Release and Service Pack correspond fairly closely to two denominations of version numbers. If you have a version number structure like X.Y.Z, a Maintenance Release would be the Z and a Service Pack the Y. I've really only heard these terms in big, corporate products, though; I'm more acquainted with the minor/major version terms.
Of course, every shop has its own use of the terms, and it depends on which type of user you're targeting. For end-users of MMOs, for instance, every update is a "patch" because the user has to "patch their client" to apply it, while for end-users of more common software you often just have the terms "update" and "new version" (new major version).