How do I include multiple proto files in a .NetCore gRPC project?

I'm creating a .NetCore C# desktop app using gRPC.
I'm attempting to use the Orchestrator pattern (microservices).
The basic design is:
Client (requester) <--> Orchestrator <--> multiple different request handlers
Each RequestHandler will have its own set of messages.
Is it possible to have multiple "Client" GrpcServices within the same project file,
or do I need to include all of my proto message definitions in a single file?
With the setup listed below, my projects fail to load.
Here is the related snippet from my .csproj file:
<ItemGroup>
<Protobuf Include="../Protos/linuxservice.proto" GrpcServices="Client" ProtoRoot="../Protos/">
<Link>Protos/linuxservice.proto</Link>
</Protobuf>
<Protobuf Include="../Protos/orchestrator.proto" GrpcServices="Server" ProtoRoot="../Protos/">
<Link>Protos/orchestrator.proto</Link>
</Protobuf>
<Protobuf Include="../Protos/service.proto" GrpcServices="Client" ProtoRoot="../Protos/">
<Link>Protos/service.proto</Link>
</Protobuf>
<Protobuf Include="../Protos/windowsservice.proto" GrpcServices="Client" ProtoRoot="../Protos/">
<Link>Protos/windowsservice.proto</Link>
</Protobuf>
</ItemGroup>

Well, after a lot of trial and error I managed to create several protos, each of which is consumed by its respective service.
Create your proto file.
Check the properties of the file:
Build Action => Protobuf compiler; gRPC stub classes => Server only;
Compile Protobuf => yes.
Recompile the project and this automatically generates the base class.
Add your service in app.MapGrpcService.
Here is the link to the guide that helped me; I hope it helps you too:
https://medium.com/@lukastosic/part-1-grpc-its-fairly-simple-c-example-d4b266c4c72e
Blessings
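As a rough sketch of where the generated pieces end up (assuming the .NET 6 minimal hosting model and the Grpc.AspNetCore / Grpc.Net.Client packages; OrchestratorService and LinuxService are hypothetical names that depend on the service declarations in your .proto files):
// Program.cs - a sketch only, not the poster's exact code
using Grpc.Net.Client;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddGrpc();

var app = builder.Build();

// Server stub generated from orchestrator.proto (GrpcServices="Server")
app.MapGrpcService<OrchestratorService>();

// Client stub generated from linuxservice.proto (GrpcServices="Client")
var channel = GrpcChannel.ForAddress("https://localhost:5001"); // hypothetical handler address
var linuxClient = new LinuxService.LinuxServiceClient(channel);

app.Run();
All of the Protobuf items from the question can coexist in one project; the GrpcServices value on each item only controls which stubs are generated for that file.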

Related

How to get config file for Cordapp?

How do I get the config file for a CorDapp in Java, like in the sample?
serviceHub.getAppContext().config
After getAppContext(), I can't find anything about the config file.
This feature is not available in Corda 3. As of this answer, it is only part of the unstable master branch. It will be included in a future release of Corda.
However, you can implement this manually for now as follows: How to provide a CorDapp with custom config in Corda?.

BizTalk Deployment and Business Rules

I am a newbie to BizTalk development, having only been using it properly for 6-7 weeks, so forgive my naivety.
I have a basic BizTalk 2013 application in development and am ready to deploy to a test environment.
I am using business rules to define the Outbound Transport Location; after all transforms have been done, this sends the data to a stored procedure in SQL Server, which inserts/updates the record:
mssql://.//db1?
When we deploy to our test/live environments, we will not be able to set the Outbound Transport Location to the local machine as the databases will be stored on separate servers to the application. For example:
mssql://dbserver//db1?
I have looked through the BizTalk Deployment Framework to see if business rules can be modified depending on environment, but couldn't find anything.
So my question is, what is the best (lowest maintenance) way to manage environment based settings for business rules? Using the BizTalk Deployment Framework would be preferable.
I'll post the solution I used for future reference and to help anyone who comes across this in the future.
In the BizTalk Deployment Framework, it is possible to add additional XML files to the build, and pre-process them in the same way binding files are pre-processed depending on environment.
Below are some snippets from the deployment.btdfproj file. Don't forget that with the BizTalk Deployment Framework, order is essential:
<!-- Add the policy file as an additional item to the build -->
<ItemGroup>
<AdditionalFiles Include="my_policy_file.xml">
<LocationPath>..\$(ProjectName)\location_to_policy</LocationPath>
</AdditionalFiles>
</ItemGroup>
<!-- Processes the additional XML policy files added to the MSI main build folder. -->
<ItemGroup>
<FilesToXmlPreprocess Include="my_policy_file.xml">
<LocationPath>..\</LocationPath>
</FilesToXmlPreprocess>
</ItemGroup>
<!-- You still have to add the business rule to the build. It is overwritten later. -->
<ItemGroup>
<RulePolicies Include="my_policy_file.xml">
<LocationPath>..\$(ProjectName)\location_to_property</LocationPath>
</RulePolicies>
</ItemGroup>
<!-- Towards the end of the file the pre-processed file overwrites the originally included policy file. -->
<Target Name="CopyXMLPreprocessedPoliciesToBRE" AfterTargets="PreprocessFiles">
<Copy SourceFiles="..\my_policy_file.xml" DestinationFolder="..\BRE\Policies"/>
</Target>
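With that in place, the environment-specific value inside the policy file can be written as a preprocess token and resolved per environment from the settings spreadsheet. A sketch of the idea (the ${DbServerName} property name is hypothetical, and the exact placeholder syntax depends on how your XmlPreprocess settings are set up):
<!-- In my_policy_file.xml the outbound transport location is written with a token: -->
<!--   mssql://${DbServerName}//db1?                                                -->
<!-- After preprocessing for the Test environment it becomes, for example:          -->
<!--   mssql://dbserver//db1?                                                        -->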
For more information, check out this thread on the BizTalk Deployment Framework site: https://biztalkdeployment.codeplex.com/discussions/392801

What is the api-paste.ini file in OpenStack?

I've seen api-paste.ini as a conf file after installing openstack.
It looks like it substitutes some URL prefixes with Python implementations, but I have no clue about it.
Here, my questions are:
What kind of script is it?
It has a rather bizarre-looking grammar, like the following:
[composite:metadata]
use = egg:Paste#urlmap
/: meta
How does it work within a Python script?
See documentation for Paste Deploy.
The api-paste.ini is the configuration for the above web-services framework. Paste Deploy allows you to separate the concerns of writing your application and its middleware/filters from the composition of them into a web service. You define your WSGI applications and any middleware filters in the configuration file, then you compose pipelines that include the middleware/filters you want into your web service, e.g. authentication, rate limiting, etc.
If you want to temporarily remove authentication, you take it out of your pipeline and restart your web service.
The declaration above is declaring a composite application, but with only one application binding (a bit unnecessary - normally you would expect to see more than one binding, e.g. for different versions of the application). The WSGI application app:meta will be bound to /, and you should have a declaration of app:meta later in the file. The implementation of the composite app is declared via the use line; egg:Paste#urlmap is a simple reference implementation.
You load this in your program with paste.deploy.loadwsgi.loadapp().
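To make that concrete, here is a sketch of how a composite, a pipeline, a filter and an app typically hang together in an api-paste.ini (here meta is declared as a pipeline rather than a bare app, which urlmap also accepts; the factory module paths are illustrative and differ between OpenStack projects and releases):
[composite:metadata]
use = egg:Paste#urlmap
/: meta

[pipeline:meta]
pipeline = authtoken metaapp

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory

[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory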
There is a proposal/recommendation(?) to move away from Paste Deploy/WebOb to WSME/Pecan; see OpenStack Common WSGI.

Cleanest Jetty Configuration for Development?

EDIT: I think I should clarify my intent...
I'm trying to simplify the development iteration cycle of write-code >> build WAR >> deploy >> refresh >> repeat. I'd like to be relatively independent of IDE (i.e., I don't want Eclipse or IntelliJ plug-ins doing the work). I want to be able to edit code/static files and build as needed into my WAR source directory, and just have run/debug setup as a command line call to a centralized Jetty installation.
Later I'd like to be able to perform the actual deployment using generally the same setup but with a packaged up WAR. I don't want to have my app code specific to my IDE or Jetty.
So perhaps a better way to ask this question is: what have you found to be the cleanest way to use Jetty as your dev/debug app server?
Say I want to have a minimal Jetty 7 installation. I want as little XML configuration as possible; I just need the raw Servlet API, no JSP, no filtering, etc. I just want to be able to have some custom servlets and have static files served up if they exist. This will be the only WAR and it will sit as the root for a given port.
Ideally, for ease of deployment I'd like to have the Jetty directory just be the standard download, and my WAR / XML config be separate from these standard Jetty files. In my invocation of Jetty I'd like to pass in this minimal XML and go.
I'm finding that the documentation is all over the place and much of it is for Jetty 6 or specific to various other packages (Spring, etc.). I figure if I have this minimal configuration down then adding additional abstractions on top will be a lot cleaner. Also it will allow me to more cleanly deal with embedded-Jetty scenarios.
This SO question is an example of a scenario where this XML would be useful: Jetty Run War Using only command line
What would be the minimal XML needed for specifying this one WAR location and the hosts/port to serve it?
Thanks in advance for any snippets or links.
Jetty has migrated to Eclipse, and the information about this is easy to miss. The move also brought a change in package names, which is another level of nuance. They did publish a utility to convert Jetty 6 settings to Jetty 7 settings, but again, it is not well known. I am disappointed with the Eclipse Jetty forum. Here is where you should look for documentation on Jetty 7 onwards: http://wiki.eclipse.org/Jetty/Starting
I think this is the minimal jetty.xml taken from http://wiki.eclipse.org/Jetty/Reference/jetty.xml
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
<Configure id="Server" class="org.eclipse.jetty.server.Server">
</Configure>
But, I would rather like to start from a copy of $JETTY_HOME/etc/jetty.xml and would modify from there.
If you are okay with the $JETTY_HOME/webapps directory, you can set up the port by modifying this part:
<Configure id="Server" class="org.eclipse.jetty.server.Server">
...
<Call name="addConnector">
<Arg>
<New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
<Set name="host"><Property name="jetty.host" /></Set>
<Set name="port"><Property name="jetty.port" default="7777"/></Set>
<Set name="maxIdleTime">300000</Set>
<Set name="Acceptors">2</Set>
<Set name="statsOn">false</Set>
<Set name="confidentialPort">8443</Set>
<Set name="lowResourcesConnections">20000</Set>
<Set name="lowResourcesMaxIdleTime">5000</Set>
</New>
</Arg>
</Call>
....
</Configure>
Else, I will modify context.xml the way explained here (for Jetty 7): How to serve webapp A from port A and webapp B from port B
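For the single-WAR-at-root case in the question, the context XML can be as small as this sketch (the WAR path is a placeholder; drop the file into Jetty's contexts/ directory with the default deployer enabled):
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <Set name="contextPath">/</Set>
  <Set name="war">/path/to/myapp.war</Set>
</Configure>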
Also refer these pages:
http://wiki.eclipse.org/Jetty/Reference/jetty.xml
http://wiki.eclipse.org/Jetty/Reference/jetty.xml_syntax
http://communitymapbuilder.org/display/JETTY/JNDI
....
Edit #1: sorry for the wrong URL for webapp-per-connector. I have updated the link to How to serve webapp A from port A and webapp B from port B so that it points to the doc that is meant for Jetty 7.
Update on 'how you deal with Jetty on various environments?'
Dev
We use Maven, so embedded Jetty works for us. We just run mvn clean install jetty:run and the port is configured in Maven's config file, namely pom.xml. This is not IDE dependent; plus, Jetty can just as easily be embedded using Ant, though I have never tried it.
Test
We have a stand-alone Jetty running. I've configured the port and tuned parameters, removed the default apps (e.g. root.war etc.) and created a context.xml with app-specific ports and deployment directory. (Unfortunately, I asked this question on the Eclipse Jetty mailing list and no one bothered to answer.) This is a one-time setup.
For test builds/deployments, we have a build script that builds the WAR as per the test environment specs and then uploads it to the test environment. After that, we invoke a shell script that (1) stops Jetty, (2) copies the WAR file to myApp's webapp directory and (3) restarts Jetty.
However, an easier way to do this is to use Maven's Cargo plugin. The bad luck was that I was using Jetty 7.1.6, which was incompatible with Cargo. They later fixed it, but by then I had already got the job done with a custom script.
Prod
Prod has almost the same procedure as Test, except that the tuning is done for higher security and load balancing. From a deployment point of view, there is nothing different between Test and Prod.
Notice that I have not bothered about which XML files exist and how many there must be. I have just used the ones that concern me -- jetty.xml and context.xml. Plus, I found it's much cleaner to use jetty.conf and jetty.sh for passing JVM params and custom XMLs, and for starting and stopping.
Hope this helps.
On hot deployment:
Now, if you use Maven and embedded Jetty, it just knows when the code has changed -- like a "gunshot sniffer". In the dev environment, you run Jetty, make changes, refresh the page, and see your changes -- hot deployment. Find out more here: http://docs.codehaus.org/display/JETTY/Maven+Jetty+Plugin -- look for scanIntervalSeconds.
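The plugin configuration behind that is roughly the following sketch (version omitted; for the Jetty 7-era plugin the artifactId becomes jetty-maven-plugin):
<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>maven-jetty-plugin</artifactId>
  <configuration>
    <!-- rescan compiled classes every 3 seconds and redeploy on change -->
    <scanIntervalSeconds>3</scanIntervalSeconds>
  </configuration>
</plugin>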
This doesn't fully answer your question, but in case it helps, here's some pretty minimal code using embedded Jetty 7 to fire up a server with one root servlet:
HandlerCollection handlers = new HandlerCollection();
ServletContextHandler root = new ServletContextHandler(handlers, "/", ServletContextHandler.NO_SESSIONS|ServletContextHandler.NO_SECURITY);
root.addServlet(new ServletHolder(new MyServlet()), "/*");
Server server = new Server(8080);
server.setHandler(handlers);
server.start();
See of course http://wiki.eclipse.org/Jetty/Tutorial/Embedding_Jetty.
If you are building with Maven (which is IDE independent) then you should debug with the Maven Jetty plugin. Basically you run the app with "mvn jetty:run" on the command line and it all just works without having to do any redeployment. Most good IDEs now have Maven support built in and let you run/debug the app as a Maven goal; meaning that Maven is run, which starts the Jetty plugin, which starts the app, and you can debug it. Since everything is running out of the IDE source and bin folders you don't even need a Jetty server install.
Here is a demo project which runs that way https://github.com/simbo1905/ZkToDo2/blob/master/commandline.build.and.run.txt and here is how to run it under eclipse https://github.com/simbo1905/ZkToDo2/blob/master/eclipse.indigo.build.and.debug.txt but any IDE which understands maven should work. Take a look at the pom.xml where it sets up the maven jetty plugin.
I would use Gradle and scan the build output folder every few seconds for changes in the build.
In a build.gradle file:
apply plugin: 'jetty'
...
jettyRun.doFirst {
// set system properties, etc here for bootstrapping
}
jettyRun {
httpPort = 8989
reload = 'automatic'
scanIntervalSeconds = 3
daemon = false
}
That's it. You can choose to have the IDE auto-build for you, and point at that directory. But you can also choose not to. This solution is not tied at all to an IDE.
I thought I'd update with what I now do. I've written a tiny command line app / Maven archetype which works the way I thought this all should have worked in the first place. The bootstrap app lets you launch your servlet container of choice (Jetty, Tomcat, GlassFish) by just passing it the path to the WAR and your port.
Using Maven, you can create and package your own instance of this simple app:
mvn archetype:generate \
-DarchetypeGroupId=org.duelengine \
-DarchetypeArtifactId=war-bootstrap-archetype \
-DarchetypeVersion=0.2.1
Then you launch it like this:
java -jar bootstrap.jar -war myapp.war -p 8080 --jetty
Here's the source for the utility and the archetype: https://bitbucket.org/mckamey/war-bootstrap

Visual Studio basicHttpBinding and endpoint problems

I have a WPF application in VS 2008 with some web service references. For varying reasons (max message size, authentication methods) I need to manually define a number of settings in the WPF client's app.config for the service bindings.
Unfortunately, this means that when I update the service references in the project we end up with a mess - multiple bindings and endpoints. Visual Studio creates new bindings and endpoints with a numeric suffix (i.e. "Service1" as a duplicate of "Service"), resulting in an invalid configuration, as there may only be a single binding per service reference in a project.
This is easy to duplicate - just create a simple "Hello World" ASP.Net web service and WPF application in a solution, change the maxBufferSize and maxReceivedMessageSize in the app.config binding and then update the service reference.
At the moment we are working around this by simply undoing checkout on the app.config after updating the references but I can't help but think there must be a better way!
Also, the settings we need to manually change are:
<security mode="TransportCredentialOnly">
<transport clientCredentialType="Ntlm" />
</security>
and:
<binding maxBufferSize="655360" maxReceivedMessageSize="655360" />
We use a service factory class so if these settings are somehow able to be set programmatically that would work, although the properties don't seem to be exposed.
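Something along these lines is what I have in mind; a rough sketch with a hypothetical IMyService contract and address, bypassing the generated proxy and building the channel from a ChannelFactory:
// Sketch: configuring the binding in code instead of app.config
var binding = new System.ServiceModel.BasicHttpBinding
{
    MaxBufferSize = 655360,
    MaxReceivedMessageSize = 655360,
    Security =
    {
        Mode = System.ServiceModel.BasicHttpSecurityMode.TransportCredentialOnly,
        Transport = { ClientCredentialType = System.ServiceModel.HttpClientCredentialType.Ntlm }
    }
};
var factory = new System.ServiceModel.ChannelFactory<IMyService>(
    binding, new System.ServiceModel.EndpointAddress("http://server/MyService")); // hypothetical address
IMyService client = factory.CreateChannel();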
Create a .bat file which uses svcutil for proxy generation, with the settings that are right for your project. It's fairly easy: clicking the bat file to generate new proxy files whenever the interface has changed is simple.
The batch file can then later be used in automated builds. That way you only need to set up the app.config (or web.config) once. We generally separate the configs for different environments, such as dev, test and prod.
Example (watch out for linebreaks):
REM generate meta data
call "SVCUTIL.EXE" /t:metadata "MyProject.dll" /reference:"MyReference.dll"
REM making sure the file is writable
attrib -r "MyServiceProxy.cs"
REM create new proxy file
call "SVCUTIL.EXE" /t:code *.wsdl *.xsd /serializable /serializer:Auto /collectionType:System.Collections.Generic.List`1 /out:"MyServiceProxy.cs" /namespace:*,MY.Name.Space /reference:"MyReference.dll"
Rather than changing the generated endpoint, you could add a second endpoint and binding definition with the configuration you need, then in your code just put the name of the new endpoint in your service client constructor.
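For example (the names here are illustrative; the generated endpoint is left untouched and the custom one sits alongside it):
<bindings>
  <basicHttpBinding>
    <binding name="LargeMessageBinding" maxBufferSize="655360" maxReceivedMessageSize="655360">
      <security mode="TransportCredentialOnly">
        <transport clientCredentialType="Ntlm" />
      </security>
    </binding>
  </basicHttpBinding>
</bindings>
<client>
  <endpoint name="LargeMessageEndpoint"
            address="http://server/MyService.asmx"
            binding="basicHttpBinding"
            bindingConfiguration="LargeMessageBinding"
            contract="ServiceReference.MyServiceSoap" />
</client>
Then construct the client as new MyServiceSoapClient("LargeMessageEndpoint") and it keeps using the custom endpoint regardless of what the code generator regenerates.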
Somehow I prefer using svcutil.exe directly rather than the "Add Service Reference" feature of Visual Studio :P This is what we're doing on our WCF projects.
I take your point; svcutil is definitely the more advanced way of adding and updating service references. It's just a fair bit more manual work when "right click, update reference" is so close to just working in a single step.
I guess we could create some batch files or something to just output the reference code. Even then, manually checking out and updating the service code with svcutil will probably be more work than just undoing the checkout on the config.
Thanks for the advice in any case.
What we do is we check out (from source control) the app.config and *.cs files that are autogenerated by the svcutil.exe utility, then we run a batch file that runs svcutil.exe to retrieve the service metadata. When it's done, we recompile the code, make sure it works, then check the updated app.config and *.cs files back in. It's a whole lot more reliable than using the oft-buggy "Add Service Reference" with Visual Studio.
