If UDP is a connectionless protocol, then why does UDPConn have a Close method? The documentation says "Close closes the connection", but UDP is connectionless. Is it good practice to call Close on a UDPConn object? Is there any benefit?
http://golang.org/pkg/net/#UDPConn.Close
Good question. Let's look at the code behind UDPConn.Close:
http://golang.org/src/pkg/net/net.go?s=3725:3753#L124
func (c *conn) Close() error {
    if !c.ok() {
        return syscall.EINVAL
    }
    return c.fd.Close()
}
It closes c.fd, but what is c.fd?
type conn struct {
    fd *netFD
}
It is a *netFD, a network file descriptor. Let's look at its Close method:
func (fd *netFD) Close() error {
    fd.pd.Lock() // needed for both fd.incref(true) and pollDesc.Evict
    if !fd.fdmu.IncrefAndClose() {
        fd.pd.Unlock()
        return errClosing
    }
    // Unblock any I/O. Once it all unblocks and returns,
    // so that it cannot be referring to fd.sysfd anymore,
    // the final decref will close fd.sysfd. This should happen
    // fairly quickly, since all the I/O is non-blocking, and any
    // attempts to block in the pollDesc will return errClosing.
    doWakeup := fd.pd.Evict()
    fd.pd.Unlock()
    fd.decref()
    if doWakeup {
        fd.pd.Wakeup()
    }
    return nil
}
Notice the reference counting (IncrefAndClose / decref): the final decref is what actually closes the underlying fd.sysfd.
So, to answer your question: yes, it is good practice. If you don't call Close, you will leave network file descriptors hanging around in memory.
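For example, a common pattern is to defer the Close right after dialing, so the descriptor is always released. A minimal sketch (the address here is made up, not from the question):

conn, err := net.Dial("udp", "127.0.0.1:9999") // allocates a local socket descriptor; no handshake happens
if err != nil {
    log.Fatal(err)
}
// Close releases that descriptor even though UDP never established a
// connection on the wire.
defer conn.Close()

if _, err := conn.Write([]byte("ping")); err != nil {
    log.Fatal(err)
}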
I made a round trip in my client and set it to retry once. The question is: sometimes my client gets a "context canceled" error, and the whole call seems to finish in about 300ms, but I didn't set a timeout. How is the cancellation triggered?
func RoundTrip(req *http.Request) (res *http.Response, err error) {
    for i := 0; i < 1; i++ {
        transport := http.DefaultTransport
        res, err = transport.RoundTrip(req)
        if err == nil {
            break
        }
    }
    return
}
You can add the context to the request with the Request.WithContext method or NewRequestWithContext function. When you cancel the context it should propagate to all functions handling the request.
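A minimal sketch of that (the URL and timeout are made-up examples, not taken from your code):

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

// Tie the request to ctx: cancelling ctx (or hitting its deadline) aborts the
// in-flight RoundTrip with "context canceled" / "context deadline exceeded".
req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://example.com", nil)
if err != nil {
    return nil, err
}
return http.DefaultTransport.RoundTrip(req)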
Update
Sorry, I thought you wanted to cancel the request, not that you were asking why it was cancelled. Anyhow, given your comment on this answer:
but where canceled the context in 300ms
300ms is pretty short. If it were longer I would recommend adjusting the timers in http.DefaultTransport; instead, I would guess that the underlying TCP connection is being refused or limited, causing the RoundTrip to fail. The TCP limit could be something internal (a DNS lookup error, etc.) or on the server side. You would need to scope the problem further to debug it.
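If you do want to rule the transport timers out, one way (a sketch on my part, not code from the question) is to clone the default transport with longer timeouts and then look at exactly which error RoundTrip returns:

t := http.DefaultTransport.(*http.Transport).Clone() // Clone is available since Go 1.13
t.TLSHandshakeTimeout = 30 * time.Second
t.ResponseHeaderTimeout = 30 * time.Second
t.DialContext = (&net.Dialer{Timeout: 30 * time.Second}).DialContext

res, err := t.RoundTrip(req)
if errors.Is(err, context.Canceled) {
    // the context attached to req was cancelled upstream,
    // not by a transport timeout
}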
I'm implementing a Thrift client in order to connect to a built-in Scribe server.
Everything works fine if I use the standard Log method, like this:
public boolean log(List<LogEntry> messages) {
    boolean ret = false;
    PooledClient client = borrowClient();
    try {
        if ((client != null) && (client.getClient() != null)) {
            ResultCode result = client.getClient().Log(messages);
            ret = (result != null && result.equals(ResultCode.OK));
            returnClient(client);
        }
    } catch (Exception ex) {
        logger.error(LogUtil.stackTrace(ex));
        invalidClient(client);
    }
    return ret;
}
However, when I use send_Log instead:
public void send_Log(List<LogEntry> messages) {
    PooledClient client = borrowClient();
    try {
        if ((client != null) && (client.getClient() != null)) {
            client.getClient().send_Log(messages);
            returnClient(client);
        }
    } catch (Exception ex) {
        logger.error(LogUtil.stackTrace(ex));
        invalidClient(client);
    }
}
It actually causes some problems:
The total number of network connections to port 1463 (the default port for a Scribe server) grows rapidly, and the connections are always in the CLOSE_WAIT state.
My application gets stuck without throwing any error; I think it may be an issue with the network connection.
What if I send without recv?
As this is clearly TCP, the sender will block (in blocking mode), or incur EAGAIN/EWOULDBLOCK in non-blocking mode, once the send and receive buffers fill up. EDIT: It is now clear that you want to send without receiving the reply. You can do that by just sending and then closing the socket, but that may cause the peer to incur ECONNRESET, which may upset it. You should really implement the application protocol correctly.
1/ Total network connections to port 1463 (the default port for a Scribe server) keep increasing, and are always in a CLOSE_WAIT state.
Lots of connections in the CLOSE_WAIT state indicate a socket leak on the part of the local application.
2/ My application gets stuck without throwing any error. I think it may be an issue with the network connection.
It is an issue with sending and not receiving.
Since you labelled this as a Thrift-related question, the answer is oneway.
service foo {
    oneway void FireAndForget(1: some args)
}
The oneway keyword does exactly what the name suggests. You get a client implementation that only sends and does not wait for anything to be returned from the server. This rule also includes exceptions. Hence a oneway method must always be void and can't throw any exceptions.
However, when I use send_Log instead ...
client.getClient().send_Log(messages);
Neither of the Thrift-generated send_Xxx and recv_Xxx methods is meant to be public. That's why they are usually either private or protected methods. They should not be called directly, unless you are sure that you know what you are doing (and very obviously the latter is not the case here).
And since the real question is about performance: why don't you just delegate the call(s) to a secondary thread? That way the I/O will not block the UI.
I'm working my way through Boost's Asio tutorial. I'm looking into their chat example. More specifically, I'm trying to split their chat client from a sender+receiver into just a sender and just a receiver, but I'm seeing some behaviour that I can't explain.
The setup consists of:
boost::asio::io_service io_service;
tcp::resolver::iterator endpoint = resolver.resolve(...);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
boost::asio::async_connect(socket, endpoint, bind(handle_connect, ... ));
The sending portion effectively consists of:
while (std::getline(std::cin, str))
    io_service.post(boost::bind(do_write, str));
and
void do_write(string str)
{
    boost::asio::async_write(socket, str, bind(handle_write, ...));
}
The receive section consists of:
void handle_connect(...)
{
    boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}

void handle_read(...)
{
    std::cout << read_msg_;
    boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}
If I comment out the content of handle_connect to isolate the send portion, my other client (compiled using the original code) does not receive anything. If I revert, then comment out the content of handle_read, my other client only receives the first message.
Why is it necessary to call async_read() in order to be able to post() an async_write()?
The full unmodified code is linked above.
The problem here is that your io_service runs out of work and stops processing requests even before you start sending your chat messages.
If you comment out the body of handle_connect, then the only work it had to do was to dispatch the handle_connect handler and then execute it once the connection was done.
std::size_t scheduler::run(asio::error_code& ec)
{
    .....
    mutex::scoped_lock lock(mutex_);

    std::size_t n = 0;
    for (; do_run_one(lock, this_thread, ec); lock.lock())
        if (n != (std::numeric_limits<std::size_t>::max)())
            ++n;
    return n;
}
So, you have to provide it with something in its operation queue. In the original code this was done by the handle_read_header handler, which always has work pending until the client receives something from the server.
You can do what you want to do by providing work to the io_service.
asio::io_context io_context;
asio::io_context::work wrk(io_context); // make `run` run forever
tcp::resolver resolver(io_context);
tcp::resolver::results_type endpoints = resolver.resolve(argv[1], argv[2]);
chat_client c(io_context, endpoints);
asio::thread t(boost::bind(&asio::io_context::run, &io_context));
I want to assign specific information to the server's characters as well as the client's characters. Now, how do I know if the player is the host or the client? I tried using isServer and isClient, but they both return true. Are these the correct keywords that I should use?
void Update () {
    if (isServer) {
        Debug.Log("I'm the server");
    }
    if (isClient) {
        Debug.Log("I'm the client");
    }
}
If you're connecting as a "host", you're actually both the "client" and "server" at the same time. This is in contrast to running a "dedicated server", which acts as the server authority, but doesn't represent a "client" connection. Like you suggest in your own answer, you can use isServer and !isServer, or probably:
void Update() {
    if (isServer) {
        Debug.Log("I'm the server (or host)");
    } else {
        Debug.Log("I'm the client");
    }
}
Instead of using isClient to determine whether the player is the client, I use !isServer instead.
void Update () {
    if (isServer) {
        Debug.Log("I'm the server");
    }
    if (!isServer) {
        Debug.Log("I'm the client");
    }
}
Not sure if this applies to every situation, so I apologize if it does not. I am using a plugin called NATTraversal for Unity, and I was having a similar issue: I needed to find out which connection is the host. Since I am not using the relay servers (this is for those of you who are avoiding the relay), I found that I can do this check:
using UnityEngine.Networking;

void Start() {
    if (NetworkServer.connections.Count > 0) {
        Debug.Log("This is the host.");
    } else {
        Debug.Log("This is a client.");
    }
}
This works in my scenario because the client's connection list is empty, but the host's is not. There may well be a better way to do this, but I didn't know of one without a previously built list of NetworkIdentities.
The Network.isServer bool always returns false for me, so this is how I got around it. Hopefully it helps someone out there.
Edit: (Adding crucial information)
Please note that this is AFTER matchmaking and connections have been established.
Another way I have found is to listen for OnServerConnect in the NATLobbyManager.
public override void OnServerConnect(NetworkConnection conn){ }
That event only triggers for the host with the NATTraversal plugin. More info for anyone who may come across this while trying to figure all this stuff out. :)
In Go, a TCP connection (net.Conn) is an io.ReadWriteCloser. I'd like to test my network code by simulating a TCP connection. There are two requirements that I have:
the data to be read is stored in a string
whenever data is written, I'd like it to be stored in some kind of buffer which I can access later
Is there a data structure for this, or an easy way to make one?
No idea if this existed when the question was asked, but you probably want net.Pipe(), which provides you with two full-duplex net.Conn instances that are linked to each other.
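A minimal sketch of using net.Pipe in a test (the messages and names here are only illustrative):

func TestWithPipe(t *testing.T) {
    server, client := net.Pipe() // two connected, in-memory net.Conn values
    defer server.Close()
    defer client.Close()

    // Stand-in for the code under test, driving the "server" end.
    go func() {
        buf := make([]byte, 4)
        server.Read(buf)             // receives "ping"
        server.Write([]byte("pong")) // replies
    }()

    // The test acts as the peer on the "client" end.
    client.Write([]byte("ping"))
    reply := make([]byte, 4)
    client.Read(reply)
    if string(reply) != "pong" {
        t.Errorf("unexpected reply: %q", reply)
    }
}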
EDIT: I've rolled this answer into a package which makes things a bit simpler - see here: https://github.com/jordwest/mock-conn
While Ivan's solution will work for simple cases, keep in mind that a real TCP connection is actually two buffers, or rather pipes. For example:
Server | Client
---------+---------
reads <=== writes
writes ===> reads
If you use a single buffer that the server both reads from and writes to, you could end up with the server talking to itself.
Here's a solution that allows you to pass a MockConn type as a ReadWriteCloser to the server. The Read, Write and Close functions simply proxy through to the functions on the server's end of the pipes.
type MockConn struct {
    ServerReader *io.PipeReader
    ServerWriter *io.PipeWriter
    ClientReader *io.PipeReader
    ClientWriter *io.PipeWriter
}

func (c MockConn) Close() error {
    if err := c.ServerWriter.Close(); err != nil {
        return err
    }
    if err := c.ServerReader.Close(); err != nil {
        return err
    }
    return nil
}

func (c MockConn) Read(data []byte) (n int, err error)  { return c.ServerReader.Read(data) }
func (c MockConn) Write(data []byte) (n int, err error) { return c.ServerWriter.Write(data) }

func NewMockConn() MockConn {
    serverRead, clientWrite := io.Pipe()
    clientRead, serverWrite := io.Pipe()

    return MockConn{
        ServerReader: serverRead,
        ServerWriter: serverWrite,
        ClientReader: clientRead,
        ClientWriter: clientWrite,
    }
}
When mocking a 'server' connection, simply pass the MockConn in place of where you would use the net.Conn. (This obviously implements only the ReadWriteCloser interface; you could easily add dummy methods for LocalAddr() etc. if you need to support the full net.Conn interface.)
In your tests you can act as the client by reading and writing to the ClientReader and ClientWriter fields as needed:
func TestTalkToServer(t *testing.T) {
    /*
     * Assumes that NewMockConn has already been called and
     * the server is waiting for incoming data
     */

    // Send a message to the server
    fmt.Fprintf(mockConn.ClientWriter, "Hello from client!\n")

    // Wait for the response from the server
    // (note that ReadString keeps the trailing newline)
    rd := bufio.NewReader(mockConn.ClientReader)
    line, err := rd.ReadString('\n')
    if err != nil {
        t.Fatal(err)
    }
    if line != "Hello from server!\n" {
        t.Errorf("Server response not as expected: %s\n", line)
    }
}
Why not use bytes.Buffer? It's an io.ReadWriter and has a String method to get the stored data. If you need to make it an io.ReadWriteCloser, you could define your own type:
type CloseableBuffer struct {
    bytes.Buffer
}
and define a Close method:
func (b *CloseableBuffer) Close() error {
    return nil
}
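Usage is then straightforward. A small sketch (writeGreeting is a made-up stand-in for whatever code of yours expects an io.ReadWriteCloser):

buf := &CloseableBuffer{}

// Pass buf anywhere an io.ReadWriteCloser (e.g. a net.Conn stand-in) is expected.
writeGreeting(buf)

// Later, inspect everything the code under test wrote.
if got := buf.String(); got != "hello\n" {
    t.Errorf("unexpected output: %q", got)
}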
In the majority of cases you do not need to mock net.Conn.
You only have to mock things that would add time to your tests, prevent tests from running in parallel (for example, by using shared resources like a hardcoded file name), or could lead to outages (you can potentially exhaust the connection limit or ports, but in most cases that is not a concern when you run your tests in isolation).
Not mocking has the advantage that you test what you actually want to test, more precisely, against the real thing.
https://www.accenture.com/us-en/blogs/software-engineering-blog/to-mock-or-not-to-mock-is-that-even-a-question
Instead of mocking net.Conn, you can write a mock server, run it in a goroutine in your test, and connect to it using a real net.Conn.
A quick and dirty example:
port := someRandomPort()
srv := &http.Server{Addr: port}

go func() {
    http.HandleFunc("/hello", myHandleFunc)
    srv.ListenAndServe()
}()

myTestCodeUsingConn(port)

srv.Shutdown(context.TODO())
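A slightly more robust variant of the same idea (my own sketch, not from the answer above) listens on port 0 so the OS picks a free port, which avoids needing a someRandomPort helper; myHandler and myTestCodeUsingConn remain placeholders:

ln, err := net.Listen("tcp", "127.0.0.1:0") // port 0: the OS chooses a free port
if err != nil {
    t.Fatal(err)
}
defer ln.Close()

srv := &http.Server{Handler: myHandler} // myHandler is a placeholder for your handler
go srv.Serve(ln)
defer srv.Shutdown(context.TODO())

// ln.Addr().String() now holds the real address, e.g. "127.0.0.1:54321"
myTestCodeUsingConn(ln.Addr().String())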