How should the request import logic work between systems? - sap-gui

I'm changing a report's source code in the DEV system and transporting the request to the QA system to test the report. Assume that I have many requests, each containing only the one object, that I've transported from DEV to QA to test the report after making changes to the same program.
Requests imported to QA are like:
1037 ABDK923094 400 MTOK Z_SAMPLE BUG_FIX 1
1038 ABDK923098 400 MTOK Z_SAMPLE BUG_FIX 2
1039 ABDK923100 400 MTOK Z_SAMPLE BUG_FIX 3
1040 ABDK923100 400 MTOK Z_SAMPLE BUG_FIX 4
1041 ABDK923100 400 MTOK Z_SAMPLE BUG_FIX 5
1042 ABDK923100 400 MTOK Z_SAMPLE BUG_FIX 6
1043 ABDK923100 400 MTOK Z_SAMPLE BUG_FIX 7
1044 ABDK923100 400 MTOK Z_SAMPLE BUG_FIX 8
1045 ABDK923100 400 MTOK Z_SAMPLE BUG_FIX 9
They all contain the same single object:
LIMU REPS Z_SAMPLE
Now I want to transport all the requests from QA to the Production system.
Should I transport the requests one by one from QA to Production, or delete all the requests except the last one and transport only that last request to PROD (to clear out the unnecessary requests)?
Which one is more appropriate?

In your very unusual, specific case, where LIMU REPS <PROG> is the ONLY object in N transports, importing only the last transport would be sufficient.
But you would then need to remove the other transports from the import buffer, to avoid a later out-of-sequence import resetting the program to an earlier version.
Importing ALL of these transports is best practice.
R3trans sorts the transport content during an Import All, giving the same net effect that importing only the last transport would have, and the production system does not pass through N interim states along the way.
Importing the transports individually, in the order they sit in the queue, is also OK, but then you have to live with the intermediate state of the program in production until the last import is committed.
If in doubt, use Import All.
Of course, Import All will also import the other transports in the queue, so you must make sure every transport in the queue is tested and ready to go to production.
Importing transports out of order means you must either delete the other transports or deal with the issue of out-of-sequence imports: a recipe for problems waiting to happen.
I have a rule: only GOD is allowed to import out of sequence.
If you are that good, know how the R3trans import works, and understand ALL the implications, then go ahead and import out of sequence and/or ignore or delete transports.
Without GOD-like knowledge, be prepared for problems.
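
For context, here is a rough sketch of what the two options would look like at the tp command line, assuming a production SID of PRD and client 400 (both placeholders); the exact transport profile and unconditional-mode parameters depend on your landscape, and in practice you would normally trigger the import from STMS:

# Import the whole buffer into production - the "Import All" case, no interim states
tp import all PRD client=400

# Import a single request out of sequence - cleaning up the buffer is then your problem
tp import ABDK923100 PRD client=400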

Related

How do I force HTTP connections to use IPv4 instead of IPv6 in Python?

When attempting to connect to the Binance REST API from a Windows 11 computer, I got the following error:
ccxt.base.errors.ExchangeError: binanceus {"code":-71012,"msg":"IPv6 not supported"}
In this particular example I am using the ccxt Python library, but I assume the problem exists for any HTTP connection / library since the error comes from Binance. Here's sample code that replicates the issue on Windows 11 (works fine on a Linux machine):
import ccxt
exchange = ccxt.binanceus({'apiKey': '<your_key>', 'secret': '<your_secret>'})
print(exchange.fetch_balance())
I speculate that this is happening because by default in Windows 11 IPv6 has a higher priority than IPv4:
> netsh interface ipv6 show prefixpolicies
Querying active state...
Precedence Label Prefix
---------- ----- --------------------------------
50 0 ::1/128
40 1 ::/0
35 4 ::ffff:0:0/96
30 2 2002::/16
5 5 2001::/32
3 13 fc00::/7
1 11 fec0::/10
1 12 3ffe::/16
1 3 ::/96
I suppose one solution is to change this priority so IPv4 is used by default, but I don't want to do this, and I also prefer the code to work everywhere.
Under the hood, the ccxt library uses the Python requests library, and allows the user to construct a custom Session object.
I came up with the following hack/workaround, which works for me. Is there a cleaner/better/more robust way to achieve the same thing?
import ccxt
import requests
from requests_toolbelt.adapters import source
import socket

def get_ipv4_session():
    # Bind outgoing HTTPS connections to the machine's local IPv4 address,
    # so the IPv6 route is never used.
    local_ipv4 = socket.gethostbyname(socket.gethostname())
    src = source.SourceAddressAdapter(local_ipv4)
    session = requests.Session()
    session.mount("https://", src)
    return session

exchange = ccxt.binanceus({
    'apiKey': '<your key>',
    'secret': '<your secret>',
    'session': get_ipv4_session()
})
print(exchange.fetch_balance())
I read the answers here as well but they seem more complicated: Force requests to use IPv4 / IPv6
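For reference, the least invasive approach from that linked question is to monkey-patch urllib3 (which requests uses under the hood) so that name resolution only ever returns IPv4 addresses. A minimal sketch, assuming a urllib3 version that exposes urllib3.util.connection.allowed_gai_family; it relies on an internal helper, so it may break in future versions:

import socket
import urllib3.util.connection as urllib3_cn

def allowed_gai_family():
    # Resolve hostnames with IPv4 only (AF_INET), so requests never
    # attempts an IPv6 connection.
    return socket.AF_INET

urllib3_cn.allowed_gai_family = allowed_gai_family

After the patch, the ccxt code from the question can be used unchanged, since its requests session goes through urllib3 for connection setup.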

Are TCP/UDP IP packets with a source port below 1024 possible?

I am analyzing some events against DNS servers running Unbound. In the course of this investigation I am running into traffic involving queries to the DNS servers that are reported, in some cases, as having a source port between 1 and 1024. As far as my knowledge goes, these ports are reserved for services, so there should never be traffic originating/initiated from them toward a server.
Since I also know this is a practice that evolved over time, not a law, I know there is no technical limitation preventing any value from being put in the source-port field of a packet. So my conclusion would be that these queries were generated by some tool that fills the source port with a random value (the frequency is roughly evenly divided over 0-65535, except for a peak around 32768) and that this is a deliberate attack.
Can someone confirm/deny the source port theory and vindicate my conclusion or declare me a total idiot and explain why?
Thanks in advance.
Edit 1: adding more precise info to settle some disputes below that arose due to my incomplete reporting.
It's definitely not a port scan. The traffic arrived on UDP port 53, and Unbound apparently accepted it as an (almost) valid DNS query while generating the following error messages for each packet:
notice: remote address is <ipaddress> port <sourceport>
notice: sendmsg failed: Invalid argument
$ cat raw_daemonlog.txt | egrep -c 'notice: remote address is'
256497
$ cat raw_daemonlog.txt | egrep 'notice: remote address is' | awk '{printf("%s\n",$NF)}' | sort -n | uniq -c > sourceportswithfrequency.txt
$ cat sourceportswithfrequency.txt | wc -l
56438
So 256497 messages, 56438 unique source ports used
$ cat sourceportswithfrequency.txt | head
5 4
3 5
5 6
So the lowest source port seen was 4 which was used 5 times
$ cat sourceportswithfrequency.txt | tail
8 65524
2 65525
14 65526
1 65527
2 65528
4 65529
3 65530
3 65531
3 65532
4 65534
So the highest source port seen was 65534 and it was used 4 times.
$ cat sourceportswithfrequency.txt | sort -n | tail -n 25
55 32786
58 35850
60 32781
61 32785
66 32788
68 32793
71 32784
73 32783
88 32780
90 32791
91 32778
116 2050
123 32779
125 37637
129 7077
138 32774
160 32777
160 57349
162 32776
169 32775
349 32772
361 32773
465 32769
798 32771
1833 32768
So the peak around 32768 is real.
My original question still stands: does this traffic pattern suggest an attack, or is there a logical explanation for, for instance, the traffic with source ports < 1024?
As far as my knowledge goes these are reserved for services so there should never be traffic originating / initiated from those to a server.
It doesn't matter what the source port number is, as long as it's between 1 and 65,535. It's not like a source port of 53 means that there is a DNS server listening on the source machine.
The source port is just there to allow multiple connections / in-flight datagrams from one machine to another machine on the same destination port.
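To make that concrete, here is a hedged Python sketch (the resolver address is a placeholder) that sends a hand-built DNS query for example.com from an arbitrary low source port; on the sending host, binding below 1024 merely requires root or CAP_NET_BIND_SERVICE, and nothing on the wire forbids it:

import socket

# Minimal DNS query: header (id=0xabcd, RD=1, QDCOUNT=1),
# QNAME example.com, QTYPE=A, QCLASS=IN
query = bytes.fromhex(
    "abcd01000001000000000000"
    "076578616d706c6503636f6d00"
    "00010001"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5))                  # deliberately pick source port 5
sock.settimeout(2)
sock.sendto(query, ("192.0.2.1", 53))      # replace with your resolver's address
print(sock.recvfrom(512))                  # any reply comes back to port 5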
See also Wikipedia: Ephemeral port:
The Internet Assigned Numbers Authority (IANA) suggests the range 49152 to 65535 [...] for dynamic or private ports.[1]
That sounds like a port scan.
There are 65536 distinct and usable port numbers. (ibid.)
FYI: The TCP and UDP port 32768 is registered and used by IBM FileNet TMS.

phpcs reported errors with a note that phpcbf can fix these marked errors, but phpcbf claimed no fixable errors were found

.../vendor/squizlabs/php_codesniffer/phpcs --standard=PSR2 myfile.php
The report is:
FOUND 10 ERRORS AND 5 WARNINGS AFFECTING 15 LINES
...
236 | ERROR | [x] Line indented incorrectly; expected 4 spaces,
...
PHPCBF CAN FIX THE 10 MARKED SNIFF VIOLATIONS AUTOMATICALLY
Then I run:
.../vendor/squizlabs/php_codesniffer/phpcbf --standard=PSR2 myfile.php
Processing myfile.php [PHP => 3001 tokens in 437 lines]... DONE in 77ms (10 fixable violations)
=> Fixing file: 1/10 violations remaining [made 50 passes]... ERROR in 4.43 secs
No fixable errors were found
Time: 4.68 secs; Memory: 26.75Mb
I am running php_codesniffer locally from the Laravel vendor folder.
Thanks

Why is my Hello World go server getting crushed by ApacheBench?

package main

import (
	"io"
	"net/http"
)

func hello(w http.ResponseWriter, r *http.Request) {
	io.WriteString(w, "Hello world!\n")
}

func main() {
	http.HandleFunc("/", hello)
	http.ListenAndServe(":8000", nil)
}
I've got a couple of incredibly basic HTTP servers, and all of them are exhibiting this problem.
$ ab -c 1000 -n 10000 http://127.0.0.1:8000/
This is ApacheBench, Version 2.3 <$Revision: 1604373 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
apr_socket_recv: Connection refused (61)
Total of 5112 requests completed
With a smaller concurrency value, things still fall over. For me, the issue seems to show up around the 5k-6k mark usually:
$ ab -c 10 -n 10000 http://127.0.0.1:8000/
This is ApacheBench, Version 2.3 <$Revision: 1604373 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
apr_socket_recv: Operation timed out (60)
Total of 6277 requests completed
And in fact, you can drop concurrency entirely and the problem still (sometimes) happens:
$ ab -c 1 -n 10000 http://127.0.0.1:8000/
This is ApacheBench, Version 2.3 <$Revision: 1604373 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
apr_socket_recv: Operation timed out (60)
Total of 6278 requests completed
I can't help but wonder if I'm hitting some kind of operating system limit somewhere? How would I tell? And how would I mitigate?
In short, you're running out of ports.
The default ephemeral port range on OS X is 49152-65535, which is only 16,384 ports. Since each ab request is HTTP/1.0 (without keep-alive in your first examples), each new request takes another port.
As each port is used, it gets put into a queue where it waits out the TCP "Maximum Segment Lifetime", which is configured to be 15 seconds on OS X. So if you use more than 16,384 ports in 15 seconds, you're effectively going to be throttled by the OS on further connections. Depending on which process runs out of ports first, you will get connection errors from the server or hangs from ab.
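To see these limits for yourself on OS X (treat the exact sysctl names as a sketch; they can differ between OS versions):

$ sysctl net.inet.ip.portrange.first net.inet.ip.portrange.last
$ sysctl net.inet.tcp.msl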
You can mitigate this by using an HTTP/1.1-capable load generator like wrk, or by using the keepalive (-k) option for ab, so that connections are reused according to the tool's concurrency settings.
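For example, re-running the first benchmark with keep-alive enabled reuses connections instead of burning a new ephemeral port per request:

$ ab -k -c 1000 -n 10000 http://127.0.0.1:8000/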
Now, the server code you're benchmarking does so little that the load generator is being taxed just as much as the server itself, with the local OS and network stack likely making a good contribution. If you want to benchmark an HTTP server, it's better to do some meaningful work, driven from multiple clients that are not running on the same machine.

Scaling nginx with static files -- non-Persistent requests kill req/s

Working on a project where we need to serve a small static XML file at ~40k requests/s.
All incoming requests are sent to the server from HAProxy. However, none of the requests will be persistent.
The issue is that when benchmarking with non-persistent requests, the nginx instance caps out at 19,114 req/s. When persistent connections are enabled, performance increases by nearly an order of magnitude, to 168,867 req/s. The results are similar with G-WAN.
When benchmarking non-persistent requests, CPU usage is minimal.
What can I do to increase performance with non-persistent connections and nginx?
[root@spare01 lighttpd-weighttp-c24b505]# ./weighttp -n 1000000 -c 100 -t 16 "http://192.168.1.40/feed.txt"
finished in 52 sec, 315 millisec and 603 microsec, 19114 req/s, 5413 kbyte/s
requests: 1000000 total, 1000000 started, 1000000 done, 1000000 succeeded, 0 failed, 0 errored
status codes: 1000000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 290000000 bytes total, 231000000 bytes http, 59000000 bytes data
[root@spare01 lighttpd-weighttp-c24b505]# ./weighttp -n 1000000 -c 100 -t 16 -k "http://192.168.1.40/feed.txt"
finished in 5 sec, 921 millisec and 791 microsec, 168867 req/s, 48640 kbyte/s
requests: 1000000 total, 1000000 started, 1000000 done, 1000000 succeeded, 0 failed, 0 errored
status codes: 1000000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 294950245 bytes total, 235950245 bytes http, 59000000 bytes data
Your two tests are identical except for HTTP keep-alives:
./weighttp -n 1000000 -c 100 -t 16 "http://192.168.1.40/feed.txt"
./weighttp -n 1000000 -c 100 -t 16 -k "http://192.168.1.40/feed.txt"
And the one with HTTP Keep-Alives is 10x faster:
finished in 52 sec, 19114 req/s, 5413 kbyte/s
finished in 5 sec, 168867 req/s, 48640 kbyte/s
First, HTTP keep-alives (persistent connections) make HTTP requests run faster because:
Without HTTP keep-alives, the client must establish a new CONNECTION for EACH request (this is slow because of the TCP handshake).
With HTTP keep-alives, the client can send all requests over the SAME CONNECTION. This is faster because there is less work to do.
Second, you say that the static XML file is "small".
Is "small" nearer to 1 KB or 1 MB? We don't know, but that makes a huge difference in terms of the options available to speed things up.
Huge files are usually served through sendfile() because it works in the kernel, freeing the usermode server from the burden of reading from disk and buffering.
Small files can use more flexible options available for application developers in usermode, but here also, file size matters (bytes and kilobytes are different animals).
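If your XML really is in the small-kilobytes range, a minimal nginx sketch along these lines is a common starting point (directive values are illustrative, not tuned for your hardware):

sendfile        on;
tcp_nopush      on;
tcp_nodelay     on;
open_file_cache max=1000 inactive=20s;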
Third, you are using 16 threads in your test. Are you really enjoying 16 PHYSICAL CPU cores on BOTH the client and the server machines?
If that's not the case, then you are simply slowing down the test to the point that you are no longer testing the web servers.
As you see, many factors have an influence on performance. And there are more with OS tuning (the TCP stack options, available file handles, system buffers, etc.).
To get the most out of a system, you need to examine all those parameters and pick the best settings for your particular exercise.
