How can I make a Java program, containing a main(String[] args) method, that connects to Windchill (8) and executes against it? This would allow me to retrieve Windchill data such as life cycles, for example.
You have 2 ways to do this.
The first is a pure Java application calling Windchill services through RMI calls.
This is really the hard way, as you have to manage the class dependencies and many other issues.
The easiest way is to use an HTTP client and call Infoengine tasks, either with plain HTTP calls or with web services.
Any Infoengine task can be called with a URL like this one:
http://server/Windchill/servlet/IE/tasks/MyTask.xml
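For instance, a minimal sketch of calling such a task from a standalone Java main with plain java.net (the server name, task path and credentials are placeholders; most Windchill installations will require authentication, shown here as HTTP Basic):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class CallInfoEngineTask {
    public static void main(String[] args) throws Exception {
        // Placeholder task URL and credentials -- replace with your own.
        URL url = new URL("http://server/Windchill/servlet/IE/tasks/MyTask.xml");
        String credentials = Base64.getEncoder()
                .encodeToString("user:password".getBytes("UTF-8"));

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Authorization", "Basic " + credentials);

        // Read the task response (typically XML) line by line.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        conn.disconnect();
    }
}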
We have a requirement where we need to send a .avro file as an input request to our APIs. I'm really stuck at this point. A detailed example would be much appreciated.
Just use Java interop: https://github.com/intuit/karate#calling-java
You need to write a helper (start with a static method) to convert JSON to Avro and vice versa. I know teams using this for gRPC. Read this thread for tips: https://github.com/intuit/karate/issues/412
There is also a "karate-grpc" project: https://github.com/pecker-io/karate-grpc
Also see:
https://twitter.com/KarateDSL/status/1128170638223364097
https://twitter.com/KarateDSL/status/1417023536082812935
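As a starting point for such a helper, here is a minimal sketch of the JSON-to-Avro direction using the Apache Avro library (the class name AvroUtils and the idea of passing the schema as a JSON string are assumptions for illustration, not anything Karate mandates):

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.io.JsonDecoder;

import java.io.ByteArrayOutputStream;

public class AvroUtils {

    // Converts a JSON string to Avro binary bytes using the given schema (in its JSON form).
    public static byte[] jsonToAvro(String json, String schemaJson) throws Exception {
        Schema schema = new Schema.Parser().parse(schemaJson);

        // Read the JSON input into a GenericRecord.
        JsonDecoder decoder = DecoderFactory.get().jsonDecoder(schema, json);
        GenericDatumReader<GenericRecord> reader = new GenericDatumReader<>(schema);
        GenericRecord record = reader.read(null, decoder);

        // Write the record out as Avro binary.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
        encoder.flush();
        return out.toByteArray();
    }
}

From a Karate feature you would then load it with Java.type('AvroUtils') (fully qualified in a real project) and call jsonToAvro(...) before sending the bytes in the request.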
Is it possible to call a Test Library constructor from Robot Framework?
We are using the Remote library interface (NRobot.Server) to connect from RF to a Test Library (implemented in C#).
Currently it exposes all public methods implemented in the Test Library, except the constructors.
There are multiple Test Libraries in our project where some functionality is implemented as part of the constructors.
Hence we need a way to call a constructor as a test step, to execute certain functionality whenever required.
If that is not possible, we may need to move the functionality from the constructors into new public methods, but we want to avoid that if possible.
Thanks in advance...
In short - no.
When calling a remote library, you are really just the client in an XML-RPC exchange; it is the server's responsibility to have the library instantiated, so that it (that very same library instance) can process your instructions and act on them. Thus the library is normally already instantiated by the time you call it from your RF code - too late to invoke its constructor.
Naturally, this could be implemented differently - the remote library server could instantiate the target library on a (special) call, which would let you provide constructor arguments - but that requires a design/code change in the library itself.
This is in contrast to local libraries, which are instantiated in your local interpreter when they are imported.
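Whether the server grows a special re-init call or the logic moves into a regular keyword, the shape of the library change is much the same; here is a minimal sketch in Java (the idea transfers directly to a C# library served through NRobot.Server; all names here are illustrative):

// Illustrative library: the constructor work is exposed as an explicit keyword
// so it can be (re)run as a test step over the remote interface.
public class MyTestLibrary {

    private boolean initialized;

    public MyTestLibrary() {
        // Keep the constructor trivial; the remote server calls it once at startup.
    }

    // Exposed as the "Initialize Environment" keyword; runs what the constructor used to do.
    public void initializeEnvironment(String connectionString) {
        // ... former constructor logic goes here ...
        this.initialized = true;
    }
}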
I have a process which I will be invoking manually for the first time in a production environment. The thing is, the process stops when the server goes down or is stopped. In that scenario I will not be able to invoke the process manually every time, since it is a production environment and that is not feasible. So I need to know: how can I invoke a process automatically once the server is up?
I have heard that one way is to write a custom component that starts the process using a LiveCycle implementation class.
Please let me know how to go about it.
Any help regarding this is much appreciated!
Thanks
There are at least two ways you can do this.
First is the custom component route. You invoke the process on component life-cycle start to ensure that the invocation happens every time your component is deployed.
Second is the servlet route. You invoke the process in the servlet's initialisation, which guarantees the server has started.
The servlet implementation is a better fit for the purpose; the only downside is that you need to package and deploy it separately, as it won't be part of the LCAs.
You can find code samples on how to invoke LC processes using the APIs in the Adobe docs. You can use the Java API, the WS API or REST, whichever you are more comfortable with.
http://help.adobe.com/en_US/livecycle/9.0/programLC/help/index.htm
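For the servlet route, a rough sketch along these lines is a common starting point (the process name, operation, endpoint and credentials below are placeholders; the connection properties follow the LiveCycle Java API documented in the link above):

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

import com.adobe.idp.dsc.InvocationRequest;
import com.adobe.idp.dsc.clientsdk.ServiceClientFactory;

// Load-on-startup servlet that kicks off a LiveCycle process once the server is up.
public class StartProcessServlet extends HttpServlet {

    @Override
    public void init() throws ServletException {
        try {
            // Placeholder connection settings -- adjust to your environment.
            Properties props = new Properties();
            props.setProperty("DSC_DEFAULT_EJB_ENDPOINT", "jnp://localhost:1099");
            props.setProperty("DSC_TRANSPORT_PROTOCOL", "EJB");
            props.setProperty("DSC_SERVER_TYPE", "JBoss");
            props.setProperty("DSC_CREDENTIAL_USERNAME", "administrator");
            props.setProperty("DSC_CREDENTIAL_PASSWORD", "password");

            ServiceClientFactory factory = ServiceClientFactory.createInstance(props);

            // Placeholder process name and (empty) input parameters.
            Map<String, Object> params = new HashMap<String, Object>();
            InvocationRequest request = factory.createInvocationRequest(
                    "MyApplication/MyProcess", // process (service) name
                    "invoke",                  // operation name
                    params,                    // input parameters
                    true);                     // synchronous call
            factory.getServiceClient().invoke(request);
        } catch (Exception e) {
            throw new ServletException("Failed to invoke LiveCycle process on startup", e);
        }
    }
}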
I am very new to remoting in Flex. I am using Flex 4.5 and talking to a web application built by someone else on the team using AMF. They have used Zend_AMF to serialize and unserialize the data.
One of the main issues I am facing at the moment is that I will need to talk to a lot of services (about 60 or so).
From the remoting examples I have seen online and from Adobe, it seems that I need to define a RemoteObject for EACH service:
<mx:RemoteObject id="testservice" fault="testservice_faultHandler(event)" showBusyCursor="true" destination="account"/>
With so many services, I think I might have to define about 60 of those, which I don't think is very elegant.
At the same time, I have been playing with Pinta to test out the AMF endpoint. Pinta seems to allow one to define an arbitrary number of services, methods and parameters without any of these limitations. Digging through the source, I found that they have actually drilled down deep into the remoting layer and are handling a lot of low-level stuff themselves.
So, the question is: is there a way to approach this problem without having to define loads of RemoteObjects, and without having to go too deep and start handling low-level remoting events ourselves?
Cheers
It seems unusual for an application to require that many RemoteObjects. I've worked on extremely large applications, and we typically end up with no more than ~6-10 RemoteObject declarations.
Although you don't give a lot of specifics in your post about the variations of RemoteObjects, I suspect you may be confusing RemoteObject with Operation.
You typically declare a RemoteObject instance for every end-point in your application. However, that endpoint can (and normally does) expose many different methods to be invoked. Each of these server-side methods results in a client-side Operation.
You can explicitly declare these if you wish, however the RemoteObject builds Operations for you if you don't declare them:
// assumes a destination named "account" is configured for your channel set
var remoteObject:RemoteObject = new RemoteObject("account");

// creates an Operation for the saveAccount RPC call, invokes it,
// and returns the AsyncToken
var token:AsyncToken = remoteObject.saveAccount(account);
token.addResponder(this);
//... etc
If you're interacting with a single server layer, you can often get away with a single RemoteObject, pointing to a single destination on the API, which exposes many methods. This approach is often referred to as an API Façade, and can be very useful if backed with a solid dependency injection discipline on the API.
Another common approach is to segregate your API methods by logical business area, eg., AccountService, ShoppingCartService, etc. This has the benefit of being able to mix & match protocols between services (eg., AccountService may run over HTTPS).
How you choose to split up these RemoteObjects is up to you. However, 60 in a single application sounds a bit suspect to me.
I would like to pass an ArrayList of objects to a servlet from a Java program.
Can someone please tell me how this can be done?
Look at this link; they describe the process in detail:
http://www2.sys-con.com/ITSG/virtualcd/java/archives/0309/darby/index.html
Please note that if you are going to serialize objects back and forth, the compiled versions must be in sync on both the client and the server, or you will get errors. I would recommend converting your objects to either XML or JSON and then reading them from that on the server side. That way, if your client and server code get out of sync, it will still work.
For the client I would recommend Apache's HttpClient (or whatever they have renamed it to)
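A rough sketch of the JSON-over-HTTP approach, using Gson and Apache HttpClient 4.x on the client side (the servlet URL and the payload below are made up for illustration):

import java.util.ArrayList;
import java.util.List;

import com.google.gson.Gson;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class SendListClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical payload; any serializable POJO works the same way.
        List<String> names = new ArrayList<String>();
        names.add("Alice");
        names.add("Bob");

        String json = new Gson().toJson(names);

        // Hypothetical servlet URL -- replace with your own.
        HttpPost post = new HttpPost("http://localhost:8080/myapp/myServlet");
        post.setEntity(new StringEntity(json, ContentType.APPLICATION_JSON));

        try (CloseableHttpClient client = HttpClients.createDefault()) {
            client.execute(post).close();
        }
    }
}

On the servlet side, new Gson().fromJson(request.getReader(), new TypeToken<List<String>>(){}.getType()) turns the request body back into a list.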
Have you considered using a web service framework for this instead of coding a naked servlet? The whole business might be about 10 lines of code using, for example, an Apache CXF JAX-RS service and client. If the objects are complex, you might want to use a full SOAP service.
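For a flavour of what that looks like, here is a minimal JAX-RS sketch using standard JAX-RS annotations and the JAX-RS 2.0 client API, both of which CXF implements (the path, URL and payload are made up, and a JSON provider such as Jackson is assumed to be registered):

import java.util.Arrays;
import java.util.List;

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.GenericEntity;
import javax.ws.rs.core.MediaType;

// Server side: a resource that accepts a JSON list.
@Path("/items")
public class ItemResource {

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public void addItems(List<String> items) {
        // ... process the list ...
    }
}

// Client side: post the list with the JAX-RS 2.0 client API.
class ItemClient {
    public static void main(String[] args) {
        List<String> items = Arrays.asList("one", "two");
        ClientBuilder.newClient()
                .target("http://localhost:8080/services/items")
                .request()
                .post(Entity.json(new GenericEntity<List<String>>(items) {}));
    }
}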