I'm communicating with a TensorFlow model server via gRPC in order to get predictions on my data. The Python 3 app used to establish the connection is based on a tutorial and does its job successfully.
However, the tutorial code uses Python gRPC's beta API, and I am trying to be as up to date as possible. What is the Python gRPC GA API's equivalent code to establish a secure connection to a model server and to perform a prediction?
Here's the code that does work, using dummy data:
# Requirements:
# tensorflow-serving-api==1.9.0
# tensorflow==1.11.0
import os

port = os.getenv("PORT")
host = os.getenv("HOST")
cert = os.getenv("CERT")    # SSL certificate
token = os.getenv("TOKEN")  # Authorization token

import tensorflow as tf
from grpc.beta import implementations as grpc_beta_impl
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

creds = grpc_beta_impl.ssl_channel_credentials(root_certificates=cert.encode())
channel = grpc_beta_impl.secure_channel(host, int(port), creds)
stub = prediction_service_pb2.beta_create_PredictionService_stub(
    channel,
    metadata_transformer=lambda metadata: tuple(metadata) + (("authorization", "Bearer " + token),))

data = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
request = predict_pb2.PredictRequest()
request.model_spec.name = "housing"
request.model_spec.signature_name = 'serving_default'

tfutil = tf.contrib.util
t_proto = tfutil.make_tensor_proto(data,
                                   shape=[1, len(data)],
                                   dtype="float")
request.inputs["input_data"].CopyFrom(t_proto)
pred = stub.Predict(request, 1500)  # 1500 s timeout
price = pred.outputs['dense_5/BiasAdd:0'].float_val[0]
print(str(price))
I've tried substituting the beta API calls with their GA API equivalents. However, there is no such thing as a metadata_transformer on the stub anymore, and I neither understand its purpose nor know where to put the metadata instead.
Here's the pertinent piece of code, which now requires tensorflow-serving-api 1.11, followed by the error message.
# Requirements: tensorflow-serving-api==1.11.0
# ... load envvars. Same as before
import grpc
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

creds = grpc.ssl_channel_credentials(root_certificates=cert.encode())
channel = grpc.secure_channel(host + ":" + port, creds)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
# ... Same as above
Error: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAUTHENTICATED
details = "no authorization metadata found"
debug_error_string = "{"created":"#1567680306.923100899","description":"Error received from peer ipv4:xx.xxx.xxx.xx:443","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"no authorization metadata found","grpc_status":16}"
>
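From reading the grpc docs, my assumption is that the authorization header now has to be supplied either as per-call metadata or as call credentials attached to the channel. Here's a minimal sketch of what I plan to try next (same environment variables as above, untested against this particular server):

import grpc
from tensorflow_serving.apis import prediction_service_pb2_grpc

creds = grpc.ssl_channel_credentials(root_certificates=cert.encode())

# Option 1: pass the authorization header as per-call metadata.
channel = grpc.secure_channel(host + ":" + port, creds)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
pred = stub.Predict(request, 1500,
                    metadata=(("authorization", "Bearer " + token),))

# Option 2: attach the token as call credentials, so every call on this
# channel sends an "authorization: Bearer <token>" header automatically.
call_creds = grpc.access_token_call_credentials(token)
channel = grpc.secure_channel(host + ":" + port,
                              grpc.composite_channel_credentials(creds, call_creds))
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
pred = stub.Predict(request, 1500)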
I don't have any control over the model server, and I don't know whether what I'm attempting is even possible (i.e., is there a server-side counterpart to the Python gRPC beta API, which would have to implement the GA API in order for me to succeed?).
Any help is greatly appreciated.
Related
After a successful login from the consent screen, I am getting the access_token. The next step is to fetch all the view IDs from the Google Analytics account. Please help me out.
Example: this is the access_token ("ya29.A0ARrdaM8IvLg8jjVHWgxneSp_mxgFYHpKt4LwPGZEVqzOphMA2Cll6mjMxlQRFanbJHh1WrBEYVe2Y1BvBU6j7h_17nVeY4h-FWdUuv5bo0rzETTz_-xw4t5ZNBYpj26Cy3u4Y1trZnqVIA4")
You should check the Management API quickstart for Python:
"""A simple example of how to access the Google Analytics API."""
import argparse
from apiclient.discovery import build
import httplib2
from oauth2client import client
from oauth2client import file
from oauth2client import tools
def get_service(api_name, api_version, scope, client_secrets_path):
"""Get a service that communicates to a Google API.
Args:
api_name: string The name of the api to connect to.
api_version: string The api version to connect to.
scope: A list of strings representing the auth scopes to authorize for the
connection.
client_secrets_path: string A path to a valid client secrets file.
Returns:
A service that is connected to the specified API.
"""
# Parse command-line arguments.
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
parents=[tools.argparser])
flags = parser.parse_args([])
# Set up a Flow object to be used if we need to authenticate.
flow = client.flow_from_clientsecrets(
client_secrets_path, scope=scope,
message=tools.message_if_missing(client_secrets_path))
# Prepare credentials, and authorize HTTP object with them.
# If the credentials don't exist or are invalid run through the native client
# flow. The Storage object will ensure that if successful the good
# credentials will get written back to a file.
storage = file.Storage(api_name + '.dat')
credentials = storage.get()
if credentials is None or credentials.invalid:
credentials = tools.run_flow(flow, storage, flags)
http = credentials.authorize(http=httplib2.Http())
# Build the service object.
service = build(api_name, api_version, http=http)
return service
def get_first_profile_id(service):
# Use the Analytics service object to get the first profile id.
# Get a list of all Google Analytics accounts for the authorized user.
accounts = service.management().accounts().list().execute()
if accounts.get('items'):
# Get the first Google Analytics account.
account = accounts.get('items')[0].get('id')
# Get a list of all the properties for the first account.
properties = service.management().webproperties().list(
accountId=account).execute()
if properties.get('items'):
# Get the first property id.
property = properties.get('items')[0].get('id')
# Get a list of all views (profiles) for the first property.
profiles = service.management().profiles().list(
accountId=account,
webPropertyId=property).execute()
if profiles.get('items'):
# return the first view (profile) id.
return profiles.get('items')[0].get('id')
return None
def get_results(service, profile_id):
# Use the Analytics Service Object to query the Core Reporting API
# for the number of sessions in the past seven days.
return service.data().ga().get(
ids='ga:' + profile_id,
start_date='7daysAgo',
end_date='today',
metrics='ga:sessions').execute()
def print_results(results):
# Print data nicely for the user.
if results:
print 'View (Profile): %s' % results.get('profileInfo').get('profileName')
print 'Total Sessions: %s' % results.get('rows')[0][0]
else:
print 'No results found'
def main():
# Define the auth scopes to request.
scope = ['https://www.googleapis.com/auth/analytics.readonly']
# Authenticate and construct service.
service = get_service('analytics', 'v3', scope, 'client_secrets.json')
profile = get_first_profile_id(service)
print_results(get_results(service, profile))
if __name__ == '__main__':
main()
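Since you already have an access_token, you can also skip the full OAuth flow above and build the service directly from the token. Here is a minimal sketch using oauth2client's AccessTokenCredentials; the token is the one from your question (shortened here), 'my-agent/1.0' is an arbitrary user-agent string, and note that raw access tokens expire, so this is only suitable for quick testing:

import httplib2
from apiclient.discovery import build
from oauth2client.client import AccessTokenCredentials

# Wrap the raw access token in a credentials object.
credentials = AccessTokenCredentials('ya29.A0ARrdaM8...', 'my-agent/1.0')
http = credentials.authorize(httplib2.Http())
service = build('analytics', 'v3', http=http)

# List every view (profile) id across all accounts and properties
# the token is allowed to read.
profiles = service.management().profiles().list(
    accountId='~all', webPropertyId='~all').execute()
for p in profiles.get('items', []):
    print('View id: %s (%s)' % (p.get('id'), p.get('name')))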
We have different microservices (function apps, VM servers, etc.) logging to Application Insights. A simple Python HTTP server is hosted on a Linux VM; I want this server to receive a traceparent HTTP header (W3C tracing) and log the information to Application Insights. This Python server should appear as a separate node in the Application Map.
I am able to extract the span context from the traceparent HTTP header and use it to log the information, but I am not able to view it as a separate node in the Application Map.
There are middlewares for Flask and Django for tracing requests, but there is no ready-made solution for Python's simple HTTP server.
The goal is to have this Python server on the VM represented as a separate node in the Application Map.
Attaching my Python script for reference (this code was written using the code from the Flask middleware):
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace import attributes_helper, config_integration, samplers
from opencensus.trace import span as span_module
from opencensus.trace import tracer as tracer_module
from opencensus.trace.propagation import trace_context_http_header_format

HTTP_HOST = attributes_helper.COMMON_ATTRIBUTES['HTTP_HOST']
HTTP_METHOD = attributes_helper.COMMON_ATTRIBUTES['HTTP_METHOD']
HTTP_PATH = attributes_helper.COMMON_ATTRIBUTES['HTTP_PATH']
HTTP_ROUTE = attributes_helper.COMMON_ATTRIBUTES['HTTP_ROUTE']
HTTP_URL = attributes_helper.COMMON_ATTRIBUTES['HTTP_URL']
HTTP_STATUS_CODE = attributes_helper.COMMON_ATTRIBUTES['HTTP_STATUS_CODE']

EXCLUDELIST_PATHS = 'EXCLUDELIST_PATHS'
EXCLUDELIST_HOSTNAMES = 'EXCLUDELIST_HOSTNAMES'

# Let the logging integration attach traceId/spanId to log records.
config_integration.trace_integrations(['logging'])

trace_parent_header = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
APP_INSIGHTS_KEY = "KEY HERE"

logging.basicConfig(
    format='%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s')
log = logging.getLogger(__name__)


def callback_function(envelope):
    envelope.tags['ai.cloud.role'] = 'Pixm Agent'


handler = AzureLogHandler(
    connection_string='InstrumentationKey=' + APP_INSIGHTS_KEY)
handler.setFormatter(logging.Formatter('%(traceId)s %(spanId)s %(message)s'))
handler.add_telemetry_processor(callback_function)
log.addHandler(handler)

propagator = trace_context_http_header_format.TraceContextPropagator()
sampler = samplers.ProbabilitySampler(rate=1.0)
exporter = AzureExporter(
    connection_string='InstrumentationKey=' + APP_INSIGHTS_KEY)
exporter.add_telemetry_processor(callback_function)

try:
    # Parse the incoming W3C traceparent header into a span context so the
    # spans below join the existing distributed trace.
    span_context = propagator.from_headers(
        {"traceparent": trace_parent_header})
    log.info("he...")
    tracer = tracer_module.Tracer(
        span_context=span_context,
        sampler=sampler,
        exporter=exporter,
        propagator=propagator)

    span = tracer.start_span()
    span.span_kind = span_module.SpanKind.SERVER
    # Set the span name as the name of the current module name
    span.name = '[{}]{}'.format('get', 'testurl')
    tracer.add_attribute_to_current_span(HTTP_HOST, 'testurlhost')
    tracer.add_attribute_to_current_span(HTTP_METHOD, 'get')
    tracer.add_attribute_to_current_span(HTTP_PATH, 'testurlpath')
    tracer.add_attribute_to_current_span(HTTP_URL, str('testurl'))
    # execution_context.set_opencensus_attr(
    #     'excludelist_hostnames',
    #     self.excludelist_hostnames
    # )

    with tracer.span(name="main-ashish"):
        for i in range(0, 10):
            log.warning("identity logs..." + str(i))

    # End the outer SERVER span so it gets exported as well.
    tracer.end_span()
except Exception:  # pragma: NO COVER
    log.error('Failed to trace request', exc_info=True)
The Application Map finds components by following HTTP dependency calls made between servers with the Application Insights SDK installed.
OpenCensus Python telemetry processors
You can modify cloud_RoleName by changing the ai.cloud.role attribute in the tags field.
def callback_function(envelope):
    envelope.tags['ai.cloud.role'] = 'new_role_name'

# AzureLogHandler
handler.add_telemetry_processor(callback_function)

# AzureExporter
exporter.add_telemetry_processor(callback_function)
Correlation headers using W3C TraceContext to log the information to Application Insights
Application Insights is transitioning to W3C Trace-Context, which defines:
traceparent: Carries the globally unique operation ID and unique identifier of the call.
tracestate: Carries system-specific tracing context.
The latest version of the Application Insights SDK supports the Trace-Context protocol.
The correlation HTTP protocol, also called Request-Id, is being deprecated. This protocol defines two headers:
Request-Id: Carries the globally unique ID of the call.
Correlation-Context: Carries the name-value pairs collection of the distributed trace properties.
import logging

from opencensus.trace import config_integration
from opencensus.trace.samplers import AlwaysOnSampler
from opencensus.trace.tracer import Tracer

config_integration.trace_integrations(['logging'])
logging.basicConfig(format='%(asctime)s traceId=%(traceId)s spanId=%(spanId)s %(message)s')

tracer = Tracer(sampler=AlwaysOnSampler())
logger = logging.getLogger(__name__)

logger.warning('Before the span')
with tracer.span(name='hello'):
    logger.warning('In the span')
logger.warning('After the span')
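To continue an incoming W3C trace rather than start a new one, the traceparent value can first be parsed into a span context with the same TraceContextPropagator the question's script uses. A minimal sketch (the header value and instrumentation key are illustrative):

from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.propagation import trace_context_http_header_format
from opencensus.trace.samplers import AlwaysOnSampler
from opencensus.trace.tracer import Tracer

# Illustrative traceparent: version-traceid-spanid-flags.
headers = {"traceparent": "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"}
propagator = trace_context_http_header_format.TraceContextPropagator()
span_context = propagator.from_headers(headers)

# Spans created by this tracer join the incoming distributed trace.
tracer = Tracer(span_context=span_context,
                sampler=AlwaysOnSampler(),
                exporter=AzureExporter(
                    connection_string='InstrumentationKey=<your-key>'))
with tracer.span(name='incoming-request'):
    pass  # handle the request here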
You can refer to Application Map: Triage Distributed Applications, Telemetry correlation in Application Insights, and Track incoming requests with OpenCensus Python
I have manually created and distributed the required certificates for Corda nodes. Now for the nodes to start, among other things, they need to have a network parameter. The problem is that if I use the Corda network bootstrapper tool to generate the network parameter, the file will be signed by another issuer ("C=UK, L=London, OU=corda, O=R3, CN=Corda Node Root CA") which is different from the issuer of my certificates. My question is how can I manually create a network parameter so I can specify the correct issuer to avoid conflicts during node startup?
So I have figured out a way to create network parameters:
private fun getSignedNetworkParameters(): NetworkParameters {
    // Load the notary from a keystore. This avoids having to start a flow
    // from a node in order to retrieve NotaryInfo.
    val notary = loadKeyStore("\\path-to-keystore\\keystore.jks", "keystore-password")
    val certificateAndKeyPair: CertificateAndKeyPair = notary.getCertificateAndKeyPair("entry-alias", "entry-password")
    val notaryParty = Party(certificateAndKeyPair.certificate)
    val notaryInfo = listOf(NotaryInfo(notaryParty, false))

    // Map each contract ID to the SHA-256 hash of its CorDapp contracts & states JAR file.
    val whitelistedContractImplementations = mapOf(
        TestContract.TEST_CONTRACT_ID to listOf(getCheckSum(contractFile))
    )

    // n below is a placeholder: substitute your own size limits (in bytes).
    return NetworkParameters(minimumPlatformVersion = 3, notaries = notaryInfo,
        maxMessageSize = n, maxTransactionSize = n, modifiedTime = Instant.now(),
        epoch = 1, whitelistedContractImplementations = whitelistedContractImplementations)
}
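The getCheckSum() helper referenced above isn't shown; here is a minimal sketch of what it could look like, assuming the whitelist expects the SHA-256 hash of the CorDapp JAR bytes (getCheckSum and contractFile are the placeholder names from the snippet above):

import java.io.File
import net.corda.core.crypto.SecureHash

// Hash the CorDapp contracts & states JAR with SHA-256, the hash format
// used for whitelisted contract implementations.
fun getCheckSum(contractFile: File): SecureHash.SHA256 =
    SecureHash.sha256(contractFile.readBytes())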
You could sign your certificates with the development certificate that is used by the network bootstrapper: https://github.com/corda/corda/tree/master/node-api/src/main/resources/certificates
If that doesn't work for you, you could try this experimental tool: https://github.com/corda/corda/blob/master/experimental/netparams/src/main/kotlin/net.corda.netparams/NetParams.kt. I can't promise that it works with Corda 3.3, though.
I am using the svSocket package in R to create a socket server. I have successfully created the server using startSocketServer(...). I am able to connect my application to the server and send data from the server to the application, but I am struggling with reading messages sent by the application. I couldn't find any example of that on the internet. I found only the processSocket(...) example in the svSocket documentation (see below), which describes the function that processes a command coming from the socket. But I only want to read socket messages coming to the server in a repeat block and print them on the screen for testing.
## Not run:
# ## A simple REPL (R eval/process loop) using basic features of processSocket()
# repl <- function ()
# {
#     pars <- parSocket("repl", "", bare = FALSE)  # Parameterize the loop
#     cat("Enter R code, hit <CTRL-C> or <ESC> to exit\n> ")  # First prompt
#     repeat {
#         entry <- readLines(n = 1)              # Read a line of entry
#         if (entry == "") entry <- "<<<esc>>>"  # Exit from multiline mode
#         cat(processSocket(entry, "repl", ""))  # Process the entry
#     }
# }
# repl()
# ## End(Not run)
Thanks for your input.
EDIT:
Here is a more specific example of socket server creation and message sending:
require(svSocket)

# start server
svSocket::startSocketServer(
    port = 9999,
    server.name = "test_server",
    procfun = processSocket,
    secure = FALSE,
    local = FALSE
)

# test calls
svSocket::getSocketClients(port = 9999)       # ip and port of the connected client
svSocket::getSocketClientsNames(port = 9999)  # name of the connected client
svSocket::getSocketServerName(port = 9999)    # name of the socket server given during creation
svSocket::getSocketServers()                  # server name and port

# send message to client
svSocket::sendSocketClients(
    text = "send this message to the client",
    sockets = svSocket::getSocketClientsNames(port = 9999),
    serverport = 9999
)
...and the output of the code above is:
> require(svSocket)
>
> #start server
> svSocket::startSocketServer(
+ port = 9999,
+ server.name = "test_server",
+ procfun = processSocket,
+ secure = FALSE,
+ local = FALSE
+ )
[1] TRUE
>
> #test calls
> svSocket::getSocketClients(port = 9999) #ip and port of client connected
sock0000000005C576B0
"192.168.2.1:55427"
> svSocket::getSocketClientsNames(port = 9999) #name of client connected
[1] "sock0000000005C576B0"
> svSocket::getSocketServerName(port = 9999) #name of socket server given during creation
[1] "test_server"
> svSocket::getSocketServers() #server name and port
test_server
9999
>
> #send message to client
> svSocket::sendSocketClients(
+ text = "send this message to the client",
+ sockets = svSocket::getSocketClientsNames(port = 9999),
+ serverport = 9999
+ )
>
What you can see is:
successful creation of the socket server
successful connection of the external client sock0000000005C576B0 (192.168.2.1:55427) to the server
successful sending of a message to the client (no explicit output appears in the console, but the client reacts as expected)
What I am still not able to implement is fetching client messages sent to the server. Could somebody provide an example of that?
For interaction with the server from the client side, see ?evalServer.
Otherwise, it is your processSocket() function (either the default one, or a custom function you provide) that is the entry point triggered when the server receives data from a connected client. From there, you have two possibilities:
The simplest one is just to use the default processSocket() function. Besides some special code between <<<>>>, which is interpreted as special commands, the default version will evaluate R code on the server side. So, just call the function you want on the server. For instance, define f <- function(txt) paste("Fake process ", txt) on the server, and call evalServer(con, "f('some text')") on the client. Your custom f() function is executed on the server. Just take care that you need to double quote expressions that contain text here.
An alternate solution is to define your own processSocket() function to capture messages sent by the client to the server. This is safer for a server that needs to process a limited number of message types without parsing and evaluating arbitrary R code received from the client; see the sketch after this list.
Note that the server is asynchronous: you still have the prompt available on the server while it is listening to client(s) and processing their requests.
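For completeness, here is a minimal sketch of such a custom procfun. It assumes the same signature as the default processSocket(msg, socket, serverport, ...), where msg is the text received from the client, and that the return value is sent back to the client as the reply:

require(svSocket)

# Custom processing function: just print incoming client messages.
# Assumed signature: same as the default processSocket().
printProcFun <- function(msg, socket, serverport, ...) {
    cat("Client", socket, "on port", serverport, "sent:", msg, "\n")
    return("")  # sent back to the client as the reply
}

svSocket::startSocketServer(
    port = 9999,
    server.name = "test_server",
    procfun = printProcFun,
    secure = FALSE,
    local = FALSE
)

On the client side, any plain TCP client should work for testing; for example, writing a line to a socketConnection("server-ip", 9999) from another R session should trigger printProcFun on the server.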
I'm trying to implement the following pattern in a program that interfaces with the DGS website using the HTTP library:
log in
get some data
let the user muck with the data
send the modified data back
start again at step two
It works great on Linux, but on Windows the program prints Network.Browser.request: Error raised ErrorClosed at step four. I've distilled the above pattern into a minimal test case below:
import Control.Concurrent
import Network.Browser
import Network.HTTP
import Network.URI

auth = URIAuth
    { uriRegName = "dragongoserver.sourceforge.net"
    , uriUserInfo = ""
    , uriPort = ""
    }

uri path = nullURI
    { uriScheme = "http:"
    , uriAuthority = Just auth
    , uriPath = '/' : path
    }

get path = request . formToRequest . Form GET (uri path)

main = browse $ do
    get "login.php" [("quick_mode", "1"), ("userid", "smartypants"), ("passwd", "smartypants")]
    ioAction (threadDelay 5000000)
    get "sgf.php" [("gid", "491179")]
How can I keep the connection open?