I have a requirement to add a new send port to a send port group in my BizTalk 2013 application. The send port should send to a File location with the destination folder set to \\ResearchServer\ResearchTeam\R&DReports
Although I can set the file handler settings to this path, I can't save the send port. This is because BizTalk automatically generates a URI of \\ResearchServer\ResearchTeam\R&DReports, which contains the invalid character & (ampersand).
Obviously one way to "fix" this would be to rename the folder but this would have serious consequences for other apps and users who access this folder (and is, quite frankly, avoiding the issue rather than finding a fix).
I've also considered creating a dynamic port and then setting the destination at runtime. However this seems extremely complicated and would require downtime on a live server.
I've looked on TechNet and StackOverflow for ideas and it seems I'm the first person to encounter any issues with ampersands in BizTalk file destinations.
Any ideas, please help!
I have two websites running on IIS 7. Both require SSL. The ports for the websites are http:8080/https:443 and http:8087/https:443 respectively. I've created self-signed certificates and put them into the Trusted Root store. The contents of both websites are the same. Here are my questions:
Do I have to make some changes to the hosts file as well? If so, what changes exactly, both on the server and on the clients?
What do I have to type in the address bar in order to open them (like 172.16.10.1/website1)? Do I have to specify the port numbers?
For HTTP traffic, you can have many websites that differ by IP, port, host header, or a combination of the three.
So in your case it is simple. For website1 you have a site binding on port 8080, so the URL becomes http://172.16.10.1:8080. Ditto for website2: http://172.16.10.1:8087.
To make things simpler, you can do a site binding on a host header: bind the IP 172.16.10.1 with the default port 80 to a host header, say "www.website1.com", for the first website, and similarly bind the same combination to "www.website2.com" for the other. Now you don't need to specify the port in the URL; you can simply open both websites by their respective names.
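For example, such a binding can be added from an elevated command prompt with appcmd (a sketch; the site name and host header simply mirror the example above):

%windir%\system32\inetsrv\appcmd set site /site.name:"website1" /+bindings.[protocol='http',bindingInformation='172.16.10.1:80:www.website1.com']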
However, in the case of https it becomes a bit tricky. Certificates are installed on a per-server basis, so you have to specify different IP-port combinations, and host header binding won't work.
One option is to use a wildcard certificate, which you can then secure-bind to each host header.
The other option is to get a SAN Certificate (Subject Alternative Name Certificate). This will allow you to do a binding on different host headers with the same IP-port combination.
This excellent article on MSDN will help you understand it better: http://blogs.msdn.com/b/varunm/archive/2013/06/18/bind-multiple-sites-on-same-ip-address-and-port-in-ssl.aspx
Regarding the first part of your question:
You don't need to do anything with the hosts file. If you have a proper third-party certificate, it only needs to be registered on the server. The Intermediate and Trusted roots are already available on the clients. So nothing to be done on the client-side. You can open up "options" in IE and then check "certificates" under the "content" tab to see that a list of publishers is already there.
However, if you are using a self-signed certificate, the client part is tricky, because the clients will keep getting a "certificate is invalid" warning every time. One way out of this is to manually install the certificate on each client. Another way is to deploy the certificate to all clients using group policy.
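If you go the manual route, the import can at least be scripted; for example (a sketch, assuming the certificate has been exported to mycert.cer), run this from an elevated prompt on each client:

certutil -addstore Root mycert.cer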
I am developing a C# ASP.NET 4.0 application that will reside on a Windows Server 2003 machine. By accessing this application from a networked computer, any user can upload files to the server. Once these files are stored on the server, the user should also be able to copy them from the Windows server to another networked computer.
I have found a way to upload files to a specified location on the server's disk, but now I need to send those files from the server's disk to the client computers.
My question is: is there any way to send or copy files from the server to other client computers (not the one that is accessing the web service) without needing a program receiving those files on the client computers? FTP, WCF, cmd commands, sockets?
Any idea?
If you want users of your webapp to download files, I'd look into an "ashx generic handler." It will allow you to send files back down to clients over HTTP(s).
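As a minimal sketch of such a handler (the ~/Uploads folder and the "file" query-string parameter are assumptions for illustration, not part of the question):

using System.IO;
using System.Web;

public class Download : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Strip any path the client supplies; never trust the raw name.
        string fileName = Path.GetFileName(context.Request.QueryString["file"]);
        string fullPath = context.Server.MapPath("~/Uploads/" + fileName);

        if (!File.Exists(fullPath))
        {
            context.Response.StatusCode = 404;
            return;
        }

        // Stream the file back to the browser as a download.
        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition",
            "attachment; filename=\"" + fileName + "\"");
        context.Response.TransmitFile(fullPath);
    }

    public bool IsReusable { get { return true; } }
}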
If you are looking to have remote users tell your webserver to copy files to other servers ON THE SAME LAN AS THE SERVER, you would write that using normal System.IO operations.
Over a LAN, if you have the correct permissions and so on, you can write to a disk on a different machine using File.Copy -- there's nothing special about that.
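For example (a sketch; the share and paths are hypothetical, and the application pool identity needs write permission on the target share):

using System.IO;

// Copy an uploaded file from the web server to a shared folder
// on another machine on the same LAN, addressed by UNC path.
string source = @"C:\Uploads\report.pdf";
string destination = @"\\CLIENT-PC\Dropoff\report.pdf";
File.Copy(source, destination, true); // true = overwrite if present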
If we're talking about remote machines over the internet, that's a different story. Something has to be listening whether it's FTP, WCF, DropBox, etc.
If the problem is that it can be painful to get something like WCF to work from a client due to problems like firewall issues under Windows 7, you could take a different route and have the client periodically ping the server looking for new content. To give the server a point of reference, the ping could contain the name or creation date of the most recent file received. The server could reply with a list of new files, and then the client could make several WCF calls, one by one, to pull the content down. This pattern keeps all the client traffic outbound.
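A rough sketch of that polling contract (all names here are hypothetical):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IFileSync
{
    // The client reports the creation date of the newest file it has,
    // and the server answers with the names of anything newer.
    [OperationContract]
    string[] GetNewFiles(DateTime newestReceived);

    // The client then pulls each file down with a separate outbound call.
    [OperationContract]
    byte[] DownloadFile(string fileName);
}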
You can, if you can run the program as an account that has access to that computer. However, this sort of access on your network, which would let the outside world put an unfiltered file on your internal network, is just asking to be hacked.
Finally, I decided to install a FileZilla FTP server on each client computer, and my page is working very well. Another option is to create a workgroup on the Windows server and put every client computer into that workgroup, so that the Windows server has access to the computers in the same workgroup.
Here are some links that may help with setting up the workgroup:
http://helpdeskgeek.com/networking/cannot-see-other-computers-on-network-in-my-network-places/
http://www.computing.net/answers/windows-2003/server-2003-workgroup-setup-/1004.html
I have an ASP.NET application that frequently reads the contents of a web page using HttpWebRequest. There's no problem with the remote address, and my application normally works fine.
Without my changing anything, sometimes (about once a day) I get this error:
the remote name could not be resolved.
Why does a previously resolved DNS name sometimes fail to resolve?
The intermittent nature of this is going to be extremely difficult to resolve and it's going to take a configuration change instead of a code solution. (hint: read everything ;)
I would guess that the remote server's DNS record is set to expire pretty often, probably daily or maybe even every 12 hours or so. This is the TTL (time to live) setting. Admins sometimes set it to an artificially low value if they need the ability to quickly move the site to a new server.
You can determine how often it expires by going to a command prompt and running:
nslookup
set debug
www.theserverdomain.com
At the top of this will be a section that says "AUTHORITY RECORDS:" with an item under it that says "ttl".
Now (and I'm making an educated guess here), what's probably happening is this: when you query your DNS server to resolve that host name, your server has the value cached.
However, once the value expires, your server has to contact another server upstream to get the IP address resolution; this is called DNS forwarding. If there are a lot of hops between your server and the remote one, or if one of the DNS servers between the sites is overloaded, the lookup could time out and send back the message you are receiving.
If this is true, then the ONLY thing you can do is hardcode the DNS name and IP address combination in your web server's hosts file. This is a file named "hosts", usually at C:\Windows\System32\drivers\etc. There is an example of how to properly edit it within the file itself.
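An entry looks like this (the IP address below is a placeholder; use whatever address the name actually resolves to):

# IP address, whitespace, then the host name
203.0.113.10    www.theserverdomain.com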
Once you create the host mapping in that file, your web server will no longer have to contact the DNS server to perform name resolution and it won't matter what the TTL is set to.
The only danger here is if they move the web site to a new IP address, at which point you could simply update your hosts file again...
The first thing I would check is whether DNS is misconfigured or malfunctioning.
Try (from a Windows command line)
nslookup MyDnsNameHere
and see if you get the IP you would expect.
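You can also run the same check from code at the moment the error occurs; a small sketch using the host name from the earlier example:

using System;
using System.Net;

// Logs what the web server itself currently resolves for the remote host.
// Dns.GetHostEntry throws a SocketException when resolution fails,
// which mirrors the error the application is seeing.
IPHostEntry entry = Dns.GetHostEntry("www.theserverdomain.com");
foreach (IPAddress address in entry.AddressList)
    Console.WriteLine(address);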
I'm thinking about configuring the remind calendar program so that I can use the same .reminders file from my Ubuntu box at home and from my Windows box at work. What I'm going to try is to make the directory on my home machine that contains the file externally visible through webdav on Apache. (Security doesn't really concern me, because my home firewall only forwards ssh; to hit port 80 on my home box, you need to use ssh tunneling.)
Now my understanding is that webdav was designed to arbitrate simultaneous access attempts. My question is whether this is compatible with direct file access from the host machine. That is, I understand that if I have two or more remote webdav clients trying to edit the same file, the webdav protocol is supposed to provide locking, so that only one client can have access, and hence the file will not be corrupted.
My question is whether these protections will also protect against local edits going through the filesystem, rather than through webdav. Should I mount the webdav directory, on the host machine, and direct all local edits through the webdav mount? Or is this unnecessary?
(In this case, with only me accessing the file, it's exceedingly unlikely that I'd get simultaneous edits, but I like to understand how systems are supposed to work ;)
If you're not accessing the files over the WebDAV protocol, you're not honoring locks set via the LOCK and UNLOCK methods, and you therefore open the potential to overwrite changes made by another client. This situation is described in the WebDAV RFC here: https://www.rfc-editor.org/rfc/rfc4918#section-7.2
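For reference, a WebDAV client takes a lock with a request along these lines (a sketch based on RFC 4918; the path and owner are hypothetical):

LOCK /reminders/.reminders HTTP/1.1
Host: example.com
Timeout: Second-600
Content-Type: application/xml; charset=utf-8

<?xml version="1.0" encoding="utf-8"?>
<D:lockinfo xmlns:D="DAV:">
  <D:lockscope><D:exclusive/></D:lockscope>
  <D:locktype><D:write/></D:locktype>
  <D:owner><D:href>mailto:user@example.com</D:href></D:owner>
</D:lockinfo>

A local process editing through the filesystem never sends such a request, so the server has no way to protect the file from it.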
I need to send files using the BizTalk 2006 (non wcf) FTP adapter. After successful transmission of each file it needs to be renamed from an "A" prefix to a "U" prefix. I don't want to issue a command using wildcards because I can't be sure of other files in the destination folder.
Any ideas?
Thanks
Rob.
I am guessing that you are doing this to avoid having the file picked up by another process while it's in transit on the wire. You have two options. First, in the After Put property of the FTP Send Port, type in the rename command. Second, use the Temporary Folder property of the FTP Send Port. This temp folder on the FTP site is where the file will be deposited during transfer; once it's all there, the file will be moved to the destination. The temporary folder will also allow you to recover from transfer failures where a connection might be lost.
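For the first option, the After Put property takes raw FTP commands separated by semicolons, so the rename would look something like this (a sketch with a literal file name; whether the adapter expands macros such as %SourceFileName% in this property is worth verifying before relying on it):

RNFR AReport_001.txt;RNTO UReport_001.txt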