Segmentation fault (core dumped)

Good afternoon! I've run into a problem: when opening a file over 17 GB, I get a core dump. The problem occurs on both Windows and CentOS, and the Wireshark versions are the latest everywhere.

"Segmentation fault (core dumped)"
g_ascii_strcasecmp: assertion 's1 != NULL' failed
Segmentation fault (core dumped)

I have 660 GB of RAM and 72 cores for this task.

Anatoliy.Kransov
asked 2020-11-24 11:02:54 +0000, updated 2020-11-24 14:28:26 +0000

Comments

Have you tried splitting the capture? Note that dissection expands on the original capture size as packets are decompressed and decrypted, and as associations and references between packets are built.

grahamb (2020-11-24 15:06:27 +0000)

3 Answers

Unless you're providing terabytes of RAM, this dissection won't fit. You can chop up the capture file into sizeable parts of, let's say, 500 MiB using editcap and see what happens then.
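For illustration, here's a rough sketch of what that could look like on the command line. The file names are placeholders, and since editcap splits by packet count (-c) rather than by byte size, the packets-per-chunk value is only an estimate you'd tune until the pieces come out around 500 MiB:

    # check the total packet count and file size first (capinfos ships with Wireshark)
    capinfos -c -s big_capture.pcapng

    # split into chunks of, say, 500000 packets each; adjust the number
    # until each chunk is roughly 500 MiB for your traffic mix
    editcap -c 500000 big_capture.pcapng chunk.pcapng

editcap then writes a numbered series of files based on the output name, which you can open one at a time.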

Jaap
answered 2020-11-24 11:20:43 +0000

Comments

I have 660 GB of RAM. Is that small?

Anatoliy.Kransov (2020-11-24 14:14:31 +0000)

No, that's not small. That's an amount I've never seen used for Wireshark before. So we're venturing into uncharted territory, where new bugs may appear that have never been hit before. A detailed issue report including a stack trace would be the least that's needed to get an investigation started into possibly buggy code. And then testing of any solution would be virtually impossible due to the limited resources available to the average developer.

Jaap (2020-11-24 16:41:32 +0000)

You've probably run out of memory; see the wiki page on this, which has some suggestions on how to tackle it.

For your case, splitting the capture into smaller files is likely the solution.

grahamb
answered 2020-11-24 11:18:00 +0000

Comments

"free -h" shows 88 Gbyte "free" memory. Not enough for opening 17 GB pcap file?

Igor Ivanov (2020-11-24 16:18:30 +0000)

Apparently not. Try splitting the file.

grahamb (2020-11-24 16:27:20 +0000)

Then how much memory would be enough? Or what is the maximum file size that can be opened in Wireshark?

Splitting is just a workaround.

Igor Ivanov (2020-11-24 16:39:28 +0000)

There's no definitive answer; it depends on the protocols in the capture, whether there is keying material to decrypt traffic, and various other dissector options.

grahamb (2020-11-24 16:52:29 +0000)

Maybe a better question is: what are you trying to do with a 17 GB capture? Surely you aren't visually inspecting it, so what result do you want?

grahamb (2020-11-24 16:54:53 +0000)

"Segmentation fault (core dumped)" g_ascii_strcasecmp: assertion 's1 != NULL'

That does not appear, at first glance, to be the sort of crash that would occur from running out of memory. Most of the memory allocation in the Wireshark code is done with GLib's memory allocation wrappers, which, by default, crash on failure rather than returning NULL, so those would be crashes in the memory allocator, not g_ascii_strcasecmp() crashes.

This is probably some other routine that can return NULL, such as a lookup routine, where the caller isn't checking for an error.

The fact that it happened with a very large file may just be due to a large file having more packets and thus being more likely to have a packet in which something that "can't happen" or "shouldn't happen" nevertheless happens.

Unfortunately, there are several calls to g_ascii_strcasecmp() in Wireshark (many text-based protocols have case-insensitive keywords, and case-insensitive comparisons must be done in a locale-independent fashion, to, for example, avoid having "I" and "i" treated as different), so, as @Jaap said, "A detailed issue report including stack trace would be the least that's needed to get an investigation started into possibly buggy code."
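If it helps, here is a minimal sketch of one way to get such a stack trace on the CentOS box, reading the file with tshark just to reproduce the crash on the command line. The capture file name is a placeholder, and the name and location of the core file depend on your system; on systemd-based distributions the dump may be handled by coredumpctl instead:

    # allow core dumps in the current shell, then reproduce the crash
    ulimit -c unlimited
    tshark -r big_capture.pcapng > /dev/null

    # load the program and the resulting core file into gdb and print the backtrace
    gdb $(which tshark) core
    (gdb) bt full

Attaching the output of "bt full" (plus the exact Wireshark version) to the issue report gives the developers something concrete to start from.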

Guy Harris
answered 2020-11-24 20:10:15 +0000

Comments

Thanks Guy! The best answer here. Ok, I've opened an issue on this failure.

Igor Ivanov (2020-11-25 09:06:50 +0000)