TCP flow out of sync


In the attached screenshot of a TCP flow capture, the acknowledgment and sequence numbers are out of sync. The IP address starting with 83 is the external address of the web client; the IP address starting with 194 is the external address of the web server.

In my opinion the server is sending a sequence number of 4381 in packet 27 that is incorrectly numbered, as the previous (captured) packet from server to client has SEQ 1 with a length of 0. A previous packet (or multiple, counting up to SEQ 4380) with SEQ 1 and length 4380 is missing from server to client. Wireshark notices this discrepancy and reports that a previous segment is missing. Do you agree with this analysis?
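The arithmetic behind this reading can be sketched in a few lines (an illustration only; the SEQ and length values are the ones described above, not parsed from the capture):

```python
def next_seq(seq, payload_len):
    # The next sequence number a TCP sender uses is the current
    # SEQ plus the number of payload bytes in the segment.
    return seq + payload_len

# Last captured server->client segment: SEQ 1, length 0,
# so the next server segment should also start at SEQ 1.
expected = next_seq(1, 0)

# Instead the server sends SEQ 4381 in packet 27, leaving a gap:
missing_bytes = 4381 - expected
print(missing_bytes)  # 4380 bytes unaccounted for
```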

2018_10_11_pppoe1_interface_pcap_Copy

anonymous user
asked 2018-10-11 14:39:18 +0000


2 Answers


Though the math is difficult to follow, given that the tcplen values can only be inferred from the sequence numbers sent by the 194. endpoint (and those are not displayed in the segments labeled "Continuation"), it looks like heavy packet loss (on the order of 3×MSS somewhere between frames 24 and 27). As a suggestion (only), you may want to modify the 'len' column to display the tcplen value (or add that column). Another suggestion is to disable relative sequence numbering. It makes the numbers bigger, but they eventually become easier to follow at a glance.
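Inferring tcplen from the sequence numbers, as described above, is just a difference of consecutive SEQ values in the same direction of the flow (a minimal sketch with made-up numbers, not values read from this capture):

```python
def infer_lengths(seqs):
    # Payload length of each segment = next segment's SEQ minus
    # this segment's SEQ (valid while segments arrive in order
    # with no retransmissions).
    return [later - earlier for earlier, later in zip(seqs, seqs[1:])]

# Hypothetical server-side sequence numbers:
print(infer_lengths([1, 1461, 2921, 4381]))  # three full 1460-byte segments
```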

That being said, there are a few odd items of note in your screen capture.

One 'odd' thing that seems noteworthy is that there is ~1.5+ sec between segment 24 and segment 27. The 194. endpoint sent 3×MSS worth of segments to the 83. endpoint, so I would expect that the local retransmission (retx) timer would have expired for a number of those segments. It's unusual for a local retx timer to be set on the order of seconds (the Linux default for the first retx is 200 ms, IIRC), but without knowing the configuration of these endpoints I suppose it could be configured with higher (possibly non-default) values.

Regardless, there is no indication that proper recovery is occurring on the 194. endpoint. We see that the ACK number from the 83. endpoint continues to be 1, as if NO segments have been seen from the 194. endpoint.

That being said, there MAY be SACK option information populated in the segments sent from the 83. endpoint, but it is not visible in your output. (I would expect to see truncated information at the far right of the "Info" field if SLE/SRE info was contained in the segments from the 83. endpoint.)
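One way to check for SACK blocks directly, rather than squinting at the truncated Info column, is a display filter on the SACK option fields (assuming Wireshark's standard TCP dissector field names):

```
tcp.options.sack_le or tcp.options.sack_re
```

If segments from the 83. endpoint match this filter, the client is reporting the gap via SACK and the server is simply not acting on it.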

Malinomadon
answered 2018-10-11 15:54:02 +0000

Comments

New screenshot with LEN values for all packets.

Latency-capture-absolute-seq-pic1

--JayJay-- (2018-10-11 17:30:29 +0000)

As a hint: tracking TCP conversation behavior in a screenshot isn't fun. Maybe you can share a sanitized pcap file instead? Check this tutorial on how to do that: https://blog.packet-foo.com/2016/11/t...

Jasper (2018-10-12 13:20:39 +0000)

I did not know this was available. Will consider it for sure.

--JayJay-- (2018-10-12 13:48:17 +0000)

Keeping in mind that you're using PPPoE, and that exactly 3 fully-loaded 1460-byte segments were lost while smaller ones passed through, I'd say this looks like a pattern of MTU mismatch / PMTU discovery problem / ICMP black hole. Try to tune the PPPoE router or client to use a smaller MSS.
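The numbers line up with PPPoE overhead (a quick sanity check, assuming standard 20-byte IP and TCP headers with no options):

```python
ETHERNET_MTU = 1500
PPPOE_OVERHEAD = 8   # 6-byte PPPoE header + 2-byte PPP protocol ID
IP_HEADER = 20
TCP_HEADER = 20

pppoe_mtu = ETHERNET_MTU - PPPOE_OVERHEAD       # 1492
max_mss = pppoe_mtu - IP_HEADER - TCP_HEADER    # 1452

# A 1460-byte payload (the MSS for plain Ethernet) overshoots the
# PPPoE link by 8 bytes, so full-sized segments are dropped while
# smaller ones pass through.
print(pppoe_mtu, max_mss)
```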

Packet_vlad
answered 2018-10-11 18:05:54 +0000

Comments

I checked whether traffic was dropped due to exceeding the MTU size. This is indeed the case. A maximum MTU/MSS change is going to be configured on the router.

--JayJay-- (2018-10-12 13:28:35 +0000)

MSS clamping is now configured on the gateway, which rewrites the TCP maximum segment size option, limiting it to 1452 bytes (MTU 1492). LAN hosts, whether Windows, Android, macOS, iOS, Linux or whatever, configured with the default MTU of 1500 can now successfully access internet resources without reconfiguration, as the gateway lowers the MSS in the TCP handshake SYN packet.
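What the gateway does to each SYN can be sketched as follows (a simplified illustration of the clamping rule, not actual router code):

```python
def clamp_mss(advertised_mss, clamp=1452):
    # The gateway rewrites the MSS option in SYN packets,
    # only ever lowering the value, never raising it.
    return min(advertised_mss, clamp)

print(clamp_mss(1460))  # host with default MTU 1500 is lowered to 1452
print(clamp_mss(1400))  # host already below the clamp is left at 1400
```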

--JayJay-- (2018-10-28 17:28:38 +0000)
