I want to use ActiveMQ in .NET Core. I am using Apache.NMS.ActiveMQ for this, but I have a problem.
I see this error in the ActiveMQ admin console:
Cannot display ObjectMessage body. Reason: Failed to build body from bytes. Reason: java.io.StreamCorruptedException: invalid stream header: 00010000
This is the relevant part of my code:
private const String QUEUE_DESTINATION = "test-queue";
private IConnection _connection;
private ISession _session;

public MessageQueue()
{
    IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616?wireFormat.maxInactivityDuration=5000000");
    _connection = factory.CreateConnection();
    _connection.Start();
    _session = _connection.CreateSession();
}

// Elsewhere, the message is produced and sent:
IDestination dest = _session.GetQueue(QUEUE_DESTINATION);
using (IMessageProducer producer = _session.CreateProducer(dest))
{
    var objectMessage = producer.CreateObjectMessage(newDoc);
    producer.Send(objectMessage);
}
The fact that the admin console can't display the body of an ObjectMessage isn't really an error. It's the expected behavior. Remember, from the broker's perspective the message body is just an array of bytes. It could be text data (encoded many different ways), image data, custom binary data, etc. The broker has no idea how to decode it. It will try to display the body as text, but if it fails it won't try anything else.
To be clear, in order to see the contents of an ObjectMessage the web console would have to have the object's definition in order to deserialize it. There is no mechanism to tell the web console about arbitrary data formats so it can deserialize message bodies reliably (other than simple text). This is one reason, among many, to avoid ObjectMessage.
I recommend you use a simple text format (e.g. JSON, XML) to represent your data and send that in your message rather than using ObjectMessage.
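For example, here is a minimal sketch of sending the same document as JSON text (this assumes a JSON serializer such as Newtonsoft.Json; newDoc is the object from your code):

IDestination dest = _session.GetQueue(QUEUE_DESTINATION);
using (IMessageProducer producer = _session.CreateProducer(dest))
{
    // Serialize the document to a plain string and send it as a text message.
    string json = Newtonsoft.Json.JsonConvert.SerializeObject(newDoc);
    ITextMessage textMessage = _session.CreateTextMessage(json);
    producer.Send(textMessage);
}

The admin console can display the body of a text message directly, and any consumer, in any language, can parse it.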
I am attempting to have my Ethereum smart contract connect to an external HTTP endpoint using Chainlink. Following along with Chainlink's documentation (https://docs.chain.link/docs/advanced-tutorial/), I deployed this contract onto the Rinkeby testnet.
pragma solidity ^0.8.7;

import "github.com/smartcontractkit/chainlink/blob/develop/contracts/src/v0.8/ChainlinkClient.sol";

// MyContract inherits the ChainlinkClient contract to gain the
// functionality of creating Chainlink requests
contract getHTTP is ChainlinkClient {
    using Chainlink for Chainlink.Request;

    bytes32 private thisDoesNotWork;
    address private owner;
    address private ORACLE_ADDRESS = 0x718Cc73722a2621De5F2f0Cb47A5180875f62D60;
    bytes32 private JOBID = stringToBytes32("86b489ec4d84439c96181a8df7b22223");
    string private url = "<myHTTPAddressAsString>";
    // This endpoint URL is hard coded in my contract, and stored as a string (as in the example code).
    // I control it and can have it reply with whatever I want, which might be an issue,
    // returning data in a format that the oracle rejects.
    uint256 constant private ORACLE_PAYMENT = 100000000000000000;

    constructor() public {
        // Set the address for the LINK token for the network
        setPublicChainlinkToken();
        owner = msg.sender;
    }

    function requestBytes() public onlyOwner {
        Chainlink.Request memory req = buildChainlinkRequest(JOBID, address(this), this.fulfill.selector);
        req.add("get", url);
        sendChainlinkRequestTo(ORACLE_ADDRESS, req, ORACLE_PAYMENT);
    }

    function fulfill(bytes32 _requestId, bytes32 recVal) public recordChainlinkFulfillment(_requestId) {
        thisDoesNotWork = recVal;
    }

    function cancelRequest(
        bytes32 _requestId,
        uint256 _payment,
        bytes4 _callbackFunctionId,
        uint256 _expiration
    ) public onlyOwner {
        cancelChainlinkRequest(_requestId, _payment, _callbackFunctionId, _expiration);
    }

    // withdrawLink allows the owner to withdraw any extra LINK on the contract
    function withdrawLink() public onlyOwner {
        LinkTokenInterface link = LinkTokenInterface(chainlinkTokenAddress());
        require(link.transfer(msg.sender, link.balanceOf(address(this))), "Unable to transfer");
    }

    modifier onlyOwner() {
        require(msg.sender == owner);
        _;
    }

    // A helper function to make the string a bytes32
    function stringToBytes32(string memory source) private pure returns (bytes32 result) {
        bytes memory tempEmptyStringTest = bytes(source);
        if (tempEmptyStringTest.length == 0) {
            return 0x0;
        }
        assembly { // solhint-disable-line no-inline-assembly
            result := mload(add(source, 32))
        }
    }
}
I found a node on the Chainlink market (https://market.link/jobs/529c7194-c665-4b30-8d25-5321ea49d9cc) that is currently active on rinkeby (according to Etherscan it has been active within the past 3 days and presumably still working).
I deploy the contract and fund the contract with LINK. I call the requestBytes() function through remix and everything works as expected. Metamask pays the gas, the LINK is removed from my contract, I get a transaction hash, and no errors.
However, my endpoint never logs a request attempt, the oracle never lists a transaction on its Etherscan page, and my data is not present.
I have attempted to use other jobs from the Chainlink market with similar outcomes.
I have also attempted to use other HTTP endpoints, like the ones from the Chainlink examples, with similar outcomes. However, I doubt this is the issue, since it appears the HTTP request is never even made (my endpoint does not log any request).
Without an error message, and being new to Web3 dev, I am not sure where to start debugging. I found this comment on GitHub (https://github.com/smartcontractkit/documentation/issues/513) and implemented the suggestion there, without luck.
I also found this: Chainlink - Job not being fulfilled, but it was not helpful either.
My current considerations for where the error might be:
The oracles are whitelisted and reject my request outright. I have considered creating my own node, but want to avoid that at this stage if possible.
I have a type error in how I am formatting the request in my contract, like the example in the GitHub exchange I found and referenced above.
EDIT: I am also open to other options beyond Chainlink to connect my contract to an HTTP GET endpoint, if anyone has any suggestions. Thanks!
I've been working on something similar recently and would suggest you try using the Kovan network and the oracle that Chainlink has there. More specifically, I think it would be a good idea to confirm you can get it working using the API, oracle, and job ID listed in the example on the page you are following, here:
https://docs.chain.link/docs/advanced-tutorial/#contract-example
Once you get that example working, you can modify it for your usage. The job ID in that tutorial is for returning a (multiplied) uint256, which I think is not what you want for your API, since you are after bytes32. When you try it with your API that returns bytes32, the job ID would be 7401f318127148a894c00c292e486ffd, as seen here:
https://docs.chain.link/docs/decentralized-oracles-ethereum-mainnet/
Another thing that might be your issue is your API. You say you control what it returns; I think it might have to return a response in bytes format, as Patrick says in his response (and his comments on it) here:
Get a string from any API using Chainlink Large Response Example
Hope this is helpful. If you cannot get the example in the Chainlink docs to work, let me know.
I have a Hub that takes a file, saves it into a private static field, and later sends the file back to the calling user.
public class TestHub : Hub
{
    private static string _file;

    public async Task SendAudio(IAsyncEnumerable<string> stream)
    {
        var enumerator = stream.GetAsyncEnumerator();
        await enumerator.MoveNextAsync();
        _file = enumerator.Current;
    }

    public async IAsyncEnumerable<string> ReceiveFile([EnumeratorCancellation] CancellationToken cancellationToken)
    {
        yield return _file;
    }
}
The problem occurs when I look in the WebSocket panel.
The first red frame (send file) shows a length of 57122 bytes.
The second red frame (receive file) shows a length of 146515 bytes.
Why is the difference so great?
It looks like you're trying to send binary data. JSON doesn't support binary data; instead, you're supposed to base64-encode your data before giving it to JSON, and on the server side you would either base64-decode it or store it as a base64 blob. The reason you're seeing a difference here is that your client side is taking the bytes you gave it and using them directly as UTF-8 values. However, when the server sends the same data back, it sees that some of the UTF-8 data isn't safe and does some extra encoding to make sure it is, hence the different size.
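A minimal sketch of the base64 round trip (the file name is illustrative, not from your code):

using System;
using System.IO;

class Base64Demo
{
    static void Main()
    {
        byte[] audioBytes = File.ReadAllBytes("clip.wav");           // raw binary data
        string safeForJson = Convert.ToBase64String(audioBytes);     // safe to embed in a JSON payload
        byte[] roundTripped = Convert.FromBase64String(safeForJson); // decode on the receiving side
        Console.WriteLine(roundTripped.Length == audioBytes.Length); // True
    }
}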
If you want to avoid having to base64-encode your blobs, you can give the MessagePack protocol a try, which supports byte[] directly: https://learn.microsoft.com/aspnet/core/signalr/messagepackhubprotocol?view=aspnetcore-3.1
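Enabling it is a one-liner in Startup.ConfigureServices (this assumes the Microsoft.AspNetCore.SignalR.Protocols.MessagePack NuGet package is installed):

public void ConfigureServices(IServiceCollection services)
{
    // MessagePack is a binary protocol, so byte[] payloads are sent as-is,
    // with no base64 or UTF-8 re-encoding involved.
    services.AddSignalR().AddMessagePackProtocol();
}

Your hub methods could then stream IAsyncEnumerable<byte[]> instead of strings.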
Using BizTalk 2013r2 CU1, I have created a property schema for my inbound XSD and deployed the application.
When I receive a sample XML document using a standard "xml receive" pipeline, I can see that the required element is promoted into the context as expected.
I then created a custom pipeline which contains the "XML disassembler" component in the "Disassemble" stage and a custom component in the "Validate" stage. This custom component needs to read the promoted property from the context. However, I find that when I switch the Receive Location from the "xml receive" pipeline to my custom pipeline, my property does not get promoted. I am using the following code within my custom component to write out a list of the items in the message context:
// (Declarations added for context; 'inmsg' is the IBaseMessage passed to Execute.)
string name, nspace, contextItems = string.Empty;
bool promotedPropFound = false;
IBaseMessageContext contextList = inmsg.Context;

for (int x = 0; x < contextList.CountProperties; x++)
{
    contextList.ReadAt(x, out name, out nspace);
    string value = contextList.Read(name, nspace).ToString();
    contextItems += "Name: " + name + " - " + "Namespace: " + nspace + " - " + value + "\r\n";
    if (name == _ContextPropertyName && nspace == _ContextPropertyNamespace)
        promotedPropFound = true;
}

Helpers.EventLogHelper eventHelper = new Helpers.EventLogHelper();
eventHelper.LogEvent(string.Format("Context items: {0}", contextItems));

if (promotedPropFound == false)
    throw new Exception(string.Format("Unable to find promoted property with name [{0}] and namespace [{1}]", _ContextPropertyName, _ContextPropertyNamespace));
From the output in the event log I can see that certain properties such as MessageType have been promoted, but my custom property has not. Again, if I change the receive location back to the standard "xml receive" pipeline, the property is promoted from a copy of the same XML document (I check this by stopping the subscribing send port and viewing the context from the admin console).
I find this very strange, since the same "XML disassembler" component is present in the same "Disassemble" stage of both pipelines, with the same (default) configuration. I'm starting to think perhaps there's a problem with 2013r2 CU1 - has anyone else encountered the same?
By the time the XML Disassembler has executed in your custom pipeline, there is no guarantee that your properties have been promoted.
The incoming message arrives in the pipeline as a stream with the data pointer set at the start of the stream.
I think the XML Disassembler does not read the stream; it wraps it in a stream wrapper class that will populate the promoted properties when the stream actually gets read.
The stream will have to be read at least once: when the message gets inserted into the message box. So there is a guarantee that the properties will get promoted, but you cannot assume it will be done before the "Validate" stage executes.
To make sure this is really the problem you are encountering, check your message AFTER it has been imported into the message box.
If your promoted property is there, what I described is probably what is happening.
Solutions:
To make your custom pipeline component work, the best solution would be to do just as the XML Disassembler does: get the incoming stream and wrap it in a stream wrapper class that can trigger whatever functionality you need.
The assembly Microsoft.BizTalk.Streaming.dll has a wrapper class that might interest you: CForwardOnlyEventingReadStream.
This class has an AfterLastReadEvent event. You can create an event handler and subscribe to this event to trigger your custom functionality only after the stream has been fully read and all properties have been promoted.
Your custom component would look like this:
public IBaseMessage Execute(IPipelineContext context, IBaseMessage message)
{
    Stream stream = message.BodyPart.GetOriginalDataStream();
    CForwardOnlyEventingReadStream eventingReadStream = new CForwardOnlyEventingReadStream(stream);
    eventingReadStream.AfterLastReadEvent += new AfterLastReadEventHandler(DoSomething);
    message.BodyPart.Data = eventingReadStream;
    return message;
}

private static void DoSomething(object src, EventArgs args)
{
    // Runs only after the stream has been fully read and all properties promoted.
}
A less efficient way to solve your problem would be to read the stream fully in your custom component at the "Validate" stage and then seek back to the start of the stream.
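A minimal sketch of that approach (assuming the message body is small enough to buffer in memory; for large messages, prefer the eventing stream above):

public IBaseMessage Execute(IPipelineContext context, IBaseMessage message)
{
    // Force a full read so the disassembler's wrapper stream promotes its properties.
    MemoryStream buffered = new MemoryStream();
    message.BodyPart.GetOriginalDataStream().CopyTo(buffered);
    buffered.Seek(0, SeekOrigin.Begin); // rewind for downstream components
    message.BodyPart.Data = buffered;
    context.ResourceTracker.AddResource(buffered); // let BizTalk dispose the stream

    // The promoted property should now be readable from message.Context.
    return message;
}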
Microsoft has some guidelines for when you're manipulating the message stream in pipeline components:
https://msdn.microsoft.com/en-us/library/aa577699.aspx
Update:
The OP needs to pass the message context to the event handler.
This is possible using a lambda expression:
public IBaseMessage Execute(IPipelineContext context, IBaseMessage message)
{
    Stream stream = message.BodyPart.GetOriginalDataStream();
    CForwardOnlyEventingReadStream eventingReadStream = new CForwardOnlyEventingReadStream(stream);
    eventingReadStream.AfterLastReadEvent += new AfterLastReadEventHandler((src, args) => DoSomething(src, args, message.Context));
    message.BodyPart.Data = eventingReadStream;
    return message;
}

private static void DoSomething(object src, EventArgs args, IBaseMessageContext messageContext)
{
    // The message context is available here once the stream has been fully read.
}
This SO question is a useful reference for passing the additional parameter:
Pass parameter to EventHandler
Can you do whatever you had planned for the Validate Stage in an Orchestration? That would be much easier.
If not, the most common solution to this specific problem is an intermediate Pipeline Component that forces a full read on the stream, though technically, you'd only have to read until the Promoted node is hit.
I might be confused about something, but when I store a custom object from the Java Riak client and then try to read that object using the Python Riak client, I end up with a raw JSON string instead of a dict.
However, if I store the object in Python, I get a Python dictionary back when fetching that object.
I could simply use a JSON library on the Python side to resolve this, but the very fact that I am experiencing this discrepancy makes me think I am doing something wrong.
On the Java side, this is my object:
class DocObject
{
    public String status; // FEEDING | PERSISTED | FAILED | DELETING
    public List<String> messages = new ArrayList<String>();
}

class PdfObject extends DocObject
{
    public String url;
    public String base_url;
}
This is how I am storing that object in Riak:
public void feeding(IDocument doc) throws RiakRetryFailedException {
    PdfObject pdfObject = new PdfObject();
    pdfObject.url = doc.getElement("url").getValue().toString();
    pdfObject.base_url = doc.getElement("base_url").getValue().toString();
    pdfObject.status = "FEEDING";
    String key = hash(pdfObject.url);
    pdfBucket.store(key, pdfObject).execute();
}
And this is what I am doing in Python to fetch the data:
# Connect to Riak.
client = riak.RiakClient()
# Choose the bucket to store data in.
bucket = client.bucket('pdfBucket')
doc = bucket.get('7909aa2f84c9e0fded7d1c7bb2526f54')
doc_data = doc.get_data()
print type(doc_data)
The result of the above python is:
<type 'str'>
I am expecting that to be <type 'dict'>, just like in the example here:
http://basho.github.com/riak-python-client/tutorial.html#getting-single-values-out
I am perplexed as to why when the object is stored from Java it is stored as a JSON string and not as an object.
I would appreciate if anybody could point out an issue with my approach that might be causing this discrepancy.
Thanks!
It would appear you've found a bug in our Python client with the HTTP protocol/transport.
Both the version you're using and the current one in master are not decoding JSON properly. Another dev and I looked into this this morning; it appears to stem from an issue with the charset parameter being returned from Riak with the content-type, as Christian noted in his comment ("application/json; charset=UTF-8").
We've opened an issue on GitHub (https://github.com/basho/riak-python-client/issues/227) and will get this corrected.
In the meantime, the only suggestion I have is to decode the returned JSON string yourself, or to use the 1.5.2 client (the latest stable release on PyPI) with the Protocol Buffers transport:
client = riak.RiakClient(port=8087, transport_class=riak.RiakPbcTransport)
It will return the decoded JSON as a dict, as you're expecting.
I am attempting to develop a generic BizTalk application for configuring dynamic ports. I have an orchestration that pulls back all the configuration settings for each port, and I want to loop through these settings and configure the ports. The settings are held in MSSQL; for instance, two of the properties are PortName and Address. From within the orchestration I would like to reference the port by the string variable PortName. So, is there some way to get a collection of all the ports in an orchestration, or to reference a port via a string variable, i.e. Port['MyPortName'](Microsoft.XLANGs.BaseTypes.Address) = "file://c:\test\out\%MessageId%.xml"? Thanks
In order to dynamically configure Dynamic Logical Send Ports from within an orchestration, one has to store the settings in a persistent datastore (e.g. a database or configuration file) and implement a way to assign those properties dynamically at runtime.
But first, we need to understand what happens when configuring a Dynamic Send Port.
How to Configure a Dynamic Logical Send Port
Configuring the properties of a dynamic logical send port from within an orchestration involves two steps:
First, the TransportType and target Address properties must be specified on the Send Port. This is usually done in an Expression Shape with code similar to this:
DynamicSendPort(Microsoft.XLANGs.BaseTypes.TransportType) = "FILE";
DynamicSendPort(Microsoft.XLANGs.BaseTypes.Address) = "C:\Temp\Folder\%SourceFileName%";
Second, any additional transport properties must be specified on the context of the outgoing message itself. Virtually all BizTalk adapters have additional properties that are used for the communication between the Messaging Engine and the XLANG/s Orchestration Engine. For instance, the ReceivedFileName context property is used to dynamically set a specific name under which the FILE adapter will save the outgoing message at its target location. This is best performed inside an Assignment Shape, as part of constructing the outgoing message:
OutgoingMessage(FILE.ReceivedFileName) = "HardCodedFileName.xml";
You'll notice that most configuration properties must be specified on the context of the outgoing message, specifying a namespace prefix (e.g. FILE), a property name (e.g. ReceivedFileName) and, obviously, the value that gets assigned to the corresponding property.
In fact, all the context properties are classes that live inside the well-known Microsoft.BizTalk.GlobalPropertySchemas.dll assembly. This is confirmed by looking up this assembly in Visual Studio's Object Browser.
Even though most context properties that are necessary to configure Dynamic Logical Send Ports live inside this specific assembly, not all of them do. For instance, the MSMQ BizTalk adapter uses a separate assembly to store its context properties. Obviously, third-party or custom adapters come with additional assemblies as well.
Therefore, in order to set up a context property on a Dynamic Send Port using a flexible approach like the one described below, four pieces of information are necessary:
The fully qualified name of the assembly containing the context property classes.
The namespace prefix.
The property name.
The property value.
Storing Port Settings in a Persistent Medium
One possible .XSD structure for serializing port settings is a ContextProperties root element containing repeated ContextProperty elements, each carrying an assembly name, a namespace prefix, a property name and a value.
Once serialized, the specified context properties can then be stored in a SQL database or a configuration file very easily.
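For instance, a serialized settings document might look like this (the element and attribute names are inferred from the helper code further down, so treat this sample as an assumption):

<ContextProperties>
    <!-- assembly and namespace may be left empty to default to
         Microsoft.BizTalk.GlobalPropertySchemas / Microsoft.XLANGs.BaseTypes -->
    <ContextProperty assembly="Microsoft.BizTalk.GlobalPropertySchemas"
                     namespace="FILE"
                     name="ReceivedFileName">HardCodedFileName.xml</ContextProperty>
</ContextProperties>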
A Flexible Approach to Configuring Dynamic Logical Send Ports
With a simple helper library, setting up the dynamic port configuration is very easy. First, you have to retrieve the serialized settings from the persistent medium. This can easily be achieved using the WCF-SQL adapter and a simple stored procedure.
Once retrieved, those properties can be deserialized into a strongly-typed C# object graph. For this, first create a C# representation of the ContextProperties schema shown above, using the following command-line utility:
xsd.exe /classes /language:cs /namespace:Helper.Schemas .\ContextProperties.xsd
This generates a partial class that can be extended with the following method:
namespace Helper.Schemas
{
    public partial class ContextProperties
    {
        public static ContextProperties Deserialize(string text)
        {
            using (MemoryStream stream = new MemoryStream())
            {
                byte[] buffer = Encoding.UTF8.GetBytes(text);
                stream.Write(buffer, 0, buffer.Length);
                stream.Seek(0, SeekOrigin.Begin);
                return (ContextProperties)Deserialize(stream, typeof(ContextProperties));
            }
        }

        public static Object Deserialize(Stream stream, Type type)
        {
            XmlSerializer xmlSerializer = new XmlSerializer(type);
            return xmlSerializer.Deserialize(stream);
        }
    }
}
Second, applying this configuration involves creating an XLANG/s message from code and setting its context properties dynamically using reflection, based on the descriptions of the context property classes in the deserialized ContextProperties object graph.
For this, I use a technique borrowed from Paolo Salvatori's series of articles on dynamic transformations, which consists of creating a custom BTXMessage-derived class, used internally by the BizTalk XLANG/s engine.
namespace Helper.Schemas
{
    using Microsoft.BizTalk.XLANGs.BTXEngine; // found in Microsoft.XLANGs.BizTalk.Engine
    using Microsoft.XLANGs.Core;              // found in Microsoft.XLANGs.Engine

    [Serializable]
    public sealed class CustomBTXMessage : BTXMessage
    {
        public CustomBTXMessage(string messageName, Context context)
            : base(messageName, context)
        {
            context.RefMessage(this);
        }

        public void SetContextProperty(string assembly, string ns, string name, object value)
        {
            if (String.IsNullOrEmpty(ns))
                ns = "Microsoft.XLANGs.BaseTypes";
            if (String.IsNullOrEmpty(assembly))
                assembly = "Microsoft.BizTalk.GlobalPropertySchemas";

            StringBuilder assemblyQualifiedName = new StringBuilder();
            assemblyQualifiedName.AppendFormat("{0}.{1}, {2}", ns, name, assembly);

            Type type = Type.GetType(assemblyQualifiedName.ToString(), true, true);
            SetContextProperty(type, value);
        }

        internal void SetContextProperty(string property, object value)
        {
            int index = property.IndexOf('.');
            if (index != -1)
                SetContextProperty(String.Empty, property.Substring(0, index), property.Substring(index + 1), value);
            else
                SetContextProperty(String.Empty, String.Empty, property, value);
        }
    }
}
Now, the last piece of the puzzle is how to make use of this custom class from within an Orchestration. This is easily done in an Assignment Shape using the following helper code:
namespace Helper.Schemas
{
    using Microsoft.XLANGs.BaseTypes;
    using Microsoft.XLANGs.Core; // found in Microsoft.XLANGs.Engine

    public static class Message
    {
        public static XLANGMessage SetContext(XLANGMessage message, ContextProperties properties)
        {
            try
            {
                // create a new XLANGMessage
                CustomBTXMessage customBTXMessage = new CustomBTXMessage(message.Name, Service.RootService.XlangStore.OwningContext);

                // add the parts of the original message to it
                for (int index = 0; index < message.Count; index++)
                    customBTXMessage.AddPart(message[index]);

                // set the specified context properties
                foreach (ContextPropertiesContextProperty property in properties.ContextProperty)
                    customBTXMessage.SetContextProperty(property.assembly, property.@namespace, property.name, property.Value);

                return customBTXMessage.GetMessageWrapperForUserCode();
            }
            finally
            {
                message.Dispose();
            }
        }
    }
}
You can use this static method inside your Assignment Shape like the code shown hereafter, where OutboundMessage represents the message whose context you want to set:
OutboundMessage = Helper.Schemas.Message.SetContext(OutboundMessage, contextProperties);
In the first place, you shouldn't attempt to make configuration changes like this from an orchestration. Technically it's feasible to do what you are attempting, but as a practice you shouldn't mix your business process with administration.
The best way to do such things is with ordinary scripts or PowerShell.
To answer your question, you can get the data you want from the BtsOrchestration class in ExplorerOM:
http://msdn.microsoft.com/en-us/library/microsoft.biztalk.explorerom.btsorchestration_members(v=bts.20)
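As a minimal sketch (this assumes a reference to Microsoft.BizTalk.ExplorerOM.dll and a default local BizTalk installation; verify the member names against the documentation above), enumerating every orchestration and its logical ports:

using System;
using Microsoft.BizTalk.ExplorerOM;

class ListOrchestrationPorts
{
    static void Main()
    {
        // Connect to the BizTalk management database (connection string is an assumption).
        BtsCatalogExplorer explorer = new BtsCatalogExplorer();
        explorer.ConnectionString = "Server=.;Database=BizTalkMgmtDb;Integrated Security=SSPI;";

        foreach (Application app in explorer.Applications)
        {
            foreach (BtsOrchestration orch in app.Orchestrations)
            {
                Console.WriteLine(orch.FullName);

                // Each orchestration exposes its logical ports by name.
                foreach (OrchestrationPort port in orch.Ports)
                {
                    Console.WriteLine("  Port: " + port.Name);
                }
            }
        }
    }
}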