I'm having some trouble using Prisma and PlanetScale.
{
"id": 1,
"name": "Flor Esmaltada",
"price": 75,
"image": "https://ik.imagekit.io/gabriellazcano/imajo/Flor_2/flor_2_blanca_transparente_5RPp4nKfA.png",
"description": "Flor de plata 925 con piedra austriaca en muchisimos colores, elige los que más te gusten. Hechos a mano en México.",
"categories": [
"piedras"
],
"alternativeImages": [
"https://ik.imagekit.io/gabriellazcano/imajo/flor_con_tallo/flor_con_tallo_lado_Hx8HBrH7e.png",
"https://ik.imagekit.io/gabriellazcano/imajo/flor_con_tallo/flor_con_tallo_lateral_CP5MsqTGQ.png"
],
"colors": []
}
I'm getting this object in dev:
{
"id": 1,
"name": "Flor Esmaltada",
"price": 75
}
And this one in production
In the PlanetScale shell I get the first object and all the fields are there, but when the app is deployed to Vercel it only returns those three fields.
I'm expecting to get the first object in production.
I'm new to the Make environment. I'm trying to send a WhatsApp message with this template:
Here is your tracking number: {{1}}
Remember that to check its status, you have to go to:
andreani.com
oca.com.ar
Your order number is # {{2}} if you need to make an enquiry.
This is my code:
{
"messaging_product": "whatsapp",
"to": "{{2.countryCallingCode}}{{2.phone}}",
"type": "template",
"template": {
"name": "ecommerce_delivery",
"language": {
"code": "es"
},
"components": [{
"type": "body",
"parameters": [{
"type": "text",
"text": "{{1.metaData[33].value}}"
},{
"type": "text",
"text": "{{1.id}}"
}]
}
]
}
When I test it, I get this error:
{"error": {"message": "(#100) The parameter messaging_product is required.", "type": "OAuthException"}}
I can send a message with the code below, but it's just the template without the variables filled in:
{
"messaging_product": "whatsapp",
"to": "543815462685",
"type": "template",
"template": {
"name": "hello_world",
"language": {
"code": "en_US"
}
}
}
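For comparison, here is the full payload with both body variables filled in, expressed as a Python dict so the structure is easy to check. This is a sketch, not a definitive fix: `template_payload` and the sample argument values are my own; the template name `ecommerce_delivery`, the `es` language code, and the two text parameters come from the question.

```python
def template_payload(to, tracking_number, order_id):
    """Build a WhatsApp Cloud API template message with two body variables.

    Each {{n}} placeholder in the template body must have a matching
    "text" parameter, in order, inside a "body" component.
    """
    return {
        "messaging_product": "whatsapp",
        "to": to,
        "type": "template",
        "template": {
            "name": "ecommerce_delivery",
            "language": {"code": "es"},
            "components": [
                {
                    "type": "body",
                    "parameters": [
                        {"type": "text", "text": tracking_number},  # fills {{1}}
                        {"type": "text", "text": order_id},         # fills {{2}}
                    ],
                }
            ],
        },
    }

# Outside Make, the same payload could be POSTed with requests
# (token and phone-number ID are placeholders):
# requests.post(
#     "https://graph.facebook.com/v17.0/<PHONE_NUMBER_ID>/messages",
#     headers={"Authorization": "Bearer <TOKEN>"},
#     json=template_payload("543815462685", "AB123456789", "42"),
# )
```

Note that `messaging_product`, `to`, and `type` sit at the top level of the JSON body, exactly as in the working `hello_world` call; the error above is what the API returns when it cannot find them there.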
HERE REST API (fleet.ls.hereapi.com)
Consider the following REST API call. Note that the lat/longs are in Adelaide, Australia, which has a TZ of +9:30.
https://fleet.ls.hereapi.com/2/calculateroute.json?waypoint0=-34.8751,138.5276&waypoint1=-34.9042,138.5708;sort&waypoint2=stopOver,600!-34.893,138.5546;sort&departure=2021-01-08T17:15:00&mode=fastest;car;traffic:enabled&legAttributes=-li,-mn,le,bt,tt,-tm,sh&routeAttributes=sm,wp&apikey=xxxxxx
The Departure time is set to:
departure=2021-01-08T17:15:00
However the summary returns the following:
"summary": {
"travelTime": 1010,
"distance": 5102,
"baseTime": 882,
"trafficTime": 1010,
"flags": [],
"departure": "2021-01-08T17:15:00+10",
"arrival": "2021-01-08T17:31:49+10"
}
The absolute time is incorrect, as the location has a timezone of +9.5 (or +10.5 during DST). This is then passed through to the other algorithms used and messes everything up by half an hour.
It appears you are using version 7 of the Calculate Route API, as shown here. Its examples show timestamps with only whole-hour offsets, such as 2013-07-04T17:00:00+02.
The docs here for version 8+ of the API show a very different output format, including timestamps with full hours and minutes in the offset, such as 2019-12-09T16:05:05+01:00. The full example in the docs is:
{
"routes": [
{
"id": "bfaed7d0-19c7-4e72-81b7-24eeb148b62b",
"sections": [
{
"arrival": {
"place": {
"location": {
"lat": 52.53232637420297,
"lng": 13.378873988986015
},
"type": "place"
},
"time": "2019-12-09T16:05:05+01:00"
},
"departure": {
"place": {
"location": {
"lat": 52.53098367713392,
"lng": 13.384566977620125
},
"type": "place"
},
"time": "2019-12-09T16:03:02+01:00"
},
"id": "85357f8f-00ad-447e-a510-d8c02e0b264f",
"summary": {
"duration": 123,
"length": 538
},
"transport": {
"mode": "car"
},
"type": "vehicle"
}
]
}
]
}
I suggest you use the latest v8 of the API (8.14.0 at the time of writing). It should give the correct offsets for Adelaide.
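For the same Adelaide trip, the v8 request could look roughly like the sketch below. The `v8_route_url` helper is mine, the API key is a placeholder, and I have dropped the v7 waypoint-sorting and stopover options for brevity, keeping only origin, destination, and departure time.

```python
from urllib.parse import urlencode

def v8_route_url(origin, destination, departure, api_key):
    """Build a HERE Routing API v8 request URL for a car route.

    origin/destination are (lat, lng) tuples; v8 responses carry
    full hour:minute offsets such as +09:30 in their timestamps.
    """
    params = {
        "transportMode": "car",
        "origin": f"{origin[0]},{origin[1]}",
        "destination": f"{destination[0]},{destination[1]}",
        "departureTime": departure,
        "return": "summary",
        "apikey": api_key,
    }
    return "https://router.hereapi.com/v8/routes?" + urlencode(params)

url = v8_route_url(
    (-34.8751, 138.5276),   # first waypoint from the v7 call
    (-34.9042, 138.5708),   # second waypoint from the v7 call
    "2021-01-08T17:15:00",
    "xxxxxx",
)
```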
I'm looking at the template generated by the Azure portal for adding a Web App. I chose .NET Core as the runtime, and it's passed to the metadata field in the generated ARM template below with a value of dotnetcore. The end result is a resource created in Azure with all the stuff you expect from a web app. I can't find this field documented, nor an explanation of how it's used. Is it some internal know-how, or how does this process work?
"resources": [
{
"apiVersion": "2018-11-01",
"name": "[parameters('name')]",
"type": "Microsoft.Web/sites",
"location": "[parameters('location')]",
"tags": {},
"dependsOn": [],
"properties": {
"name": "[parameters('name')]",
"siteConfig": {
"appSettings": [
{
"name": "ANCM_ADDITIONAL_ERROR_PAGE_LINK",
"value": "[parameters('errorLink')]"
}
],
"metadata": [
{
"name": "CURRENT_STACK",
"value": "[parameters('currentStack')]"
}
],
"phpVersion": "[parameters('phpVersion')]",
"alwaysOn": "[parameters('alwaysOn')]"
},
"serverFarmId": "[concat('/subscriptions/', parameters('subscriptionId'),'/resourcegroups/', parameters('serverFarmResourceGroup'), '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]",
"hostingEnvironment": "[parameters('hostingEnvironment')]",
"clientAffinityEnabled": true
}
}
]
When I send text to the Watson NLU API mentioning my city, which is located in India, I get an empty entities array. It should come back with a Location entity. How can I solve this problem in Watson NLU?
The sentence being sent is:
mba college in bhubaneswar
where Bhubaneswar is the city.
So based on your comment sentence of:
"mba college in bhubaneswar"
Putting that into NLU and entity detection fails with:
Error: unsupported text language: unknown, Code: 400
The first issue is that because no language is specified, it tries to guess the language. But there is not enough there to guess (even if it is obvious to you).
The second issue is that, even if you specify the language, it will not fully recognise the text, because it's not a real sentence, it's a fragment.
NLU doesn't just do a keyword lookup. It tries to understand the parts of speech (POS) and, from that, determine what each word means.
So if I give it a real sentence, it will work. For example:
I go to an MBA college in Bhubaneswar
I used this sample code:
import json
from watson_developer_cloud import NaturalLanguageUnderstandingV1
from watson_developer_cloud.natural_language_understanding_v1 import Features, EntitiesOptions, RelationsOptions
ctx = {
"url": "https://gateway.watsonplatform.net/natural-language-understanding/api",
"username": "USERNAME",
"password": "PASSWORD"
}
version = '2017-02-27'
text = "I go to an MBA college in Bhubaneswar"
#text = "mba college in bhubaneswar"
nlu = NaturalLanguageUnderstandingV1(version=version, username=ctx.get('username'),password=ctx.get('password'))
entities = EntitiesOptions()
relations = RelationsOptions()
response = nlu.analyze(text=text, features=Features(entities=entities,relations=relations),language='en')
print(json.dumps(response, indent=2))
That gives me the following results.
{
"usage": {
"text_units": 1,
"text_characters": 37,
"features": 2
},
"relations": [
{
"type": "basedIn",
"sentence": "I go to an MBA college in Bhubaneswar",
"score": 0.669215,
"arguments": [
{
"text": "college",
"location": [
15,
22
],
"entities": [
{
"type": "Organization",
"text": "college"
}
]
},
{
"text": "Bhubaneswar",
"location": [
26,
37
],
"entities": [
{
"type": "GeopoliticalEntity",
"text": "Bhubaneswar"
}
]
}
]
}
],
"language": "en",
"entities": [
{
"type": "Location",
"text": "Bhubaneswar",
"relevance": 0.33,
"disambiguation": {
"subtype": [
"IndianCity",
"City"
],
"name": "Bhubaneswar",
"dbpedia_resource": "http://dbpedia.org/resource/Bhubaneswar"
},
"count": 1
}
]
}
If it's the case that you are only going to get fragments to scan, then @ReeceMed's solution will resolve it for you.
If the NLU service does not recognise the city you have entered, you can create a custom model using Watson Knowledge Studio, which can then be deployed to the NLU service, giving customised entities and relationships.
More specifically, I would like to get the labels in French, so instead of:
"labelAnnotations": [
{
"mid": "/m/019sc",
"description": "black",
"score": 0.95744693
},
{
"mid": "/m/07s6nbt",
"description": "text",
"score": 0.9175479
},
{
"mid": "/m/01zbnw",
"description": "screenshot",
"score": 0.8477094
}]
I would like to get:
"labelAnnotations": [
{
"mid": "/m/019sc",
"description": "noir",
"score": 0.95744693
},
{
"mid": "/m/07s6nbt",
"description": "texte",
"score": 0.9175479
},
{
"mid": "/m/01zbnw",
"description": "capture d'écran",
"score": 0.8477094
}]
Or is Google Translate API my only solution currently?
Labels returned by the Google Cloud Vision API LABEL_DETECTION feature are in English only. The Cloud Translation API can translate the English labels into any of a number of other languages.
Reference: Detect Labels
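One way to wire the two together is to post-process the label annotations, replacing only the English description and keeping mid and score untouched. A minimal sketch, with assumptions labelled: the `localize_labels` helper and the stub dictionary are mine; in real use, `translate_fn` would wrap the Cloud Translation API, e.g. `translate_v2.Client().translate(text, target_language="fr")["translatedText"]`.

```python
def localize_labels(annotations, translate_fn):
    """Return copies of Vision label annotations with each English
    "description" run through translate_fn; "mid" and "score" are kept."""
    return [
        {**ann, "description": translate_fn(ann["description"])}
        for ann in annotations
    ]

# Example with a stub dictionary standing in for the Translation API:
stub = {"black": "noir", "text": "texte", "screenshot": "capture d'écran"}
labels = [
    {"mid": "/m/019sc", "description": "black", "score": 0.95744693},
    {"mid": "/m/07s6nbt", "description": "text", "score": 0.9175479},
]
print(localize_labels(labels, lambda s: stub.get(s, s)))
```

Keeping the `mid` values means you can always recover the original English concept later, since they identify Knowledge Graph entities independently of language.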