[Bug 1903817] Comment bridged from LTC Bugzilla

bugproxy 1903817 at bugs.launchpad.net
Fri Nov 20 15:40:37 UTC 2020


------- Comment From Stefan.Schulze.Frielinghaus at ibm.com 2020-11-20 10:37 EDT-------
The problem boils down to a buffer overflow.  Consider the following code snippet from file pair_dist.c

int iChunkStarts[iNumberOfThreads];
int iChunkEnds[iNumberOfThreads];

// ...

for(iChunk = 0; iChunk <= iNumberOfThreads; iChunk++)
{
    iChunkEnd = iChunkStart;
    if (iChunk == iNumberOfThreads - 1){
        iChunkStart = 0;
    }
    else if (iend == jend){
        iChunkStart = iend - ((double)(iend - istart) * sqrt(((double)iChunk + 1.0)/(double)iNumberOfThreads));
    }
    else {
        iChunkStart = iend - (iend - istart) * (iChunk + 1) / (double)(iNumberOfThreads);
    }
    iChunkStarts[iChunk] = iChunkStart;
    iChunkEnds[iChunk] = iChunkEnd;
    /*printf("%s:%d: C=%d, ie=%d, is=%d, je=%d, js=%d, Cstart=%d, Cend=%d, diff=%d\n",
           __FILE__, __LINE__, iChunk, iend, istart, jend, jstart, iChunkStart, iChunkEnd, iChunkEnd-iChunkStart);*/
}

if (PAIRDIST_KTUPLE == pairdist_type) {

    Log(&rLog, LOG_INFO, "Calculating pairwise ktuple-distances...");

    NewProgress(&prProgress, LogGetFP(&rLog, LOG_INFO),
                "Ktuple-distance calculation progress", bPrintCR);
#ifdef HAVE_OPENMP
#pragma omp parallel for private(iChunk) schedule(dynamic)
#endif
    for(iChunk = 0; iChunk < iNumberOfThreads; iChunk++)
    {
        KTuplePairDist((*distmat), mseq, iChunkStarts[iChunk],
                       iChunkEnds[iChunk], jstart, jend, NULL, prProgress,
                       &ulStepNo, ulTotalStepNo);
    }

The first loop iterates iNumberOfThreads + 1 times and therefore writes one element past the end of the VLAs iChunkStarts and iChunkEnds.
The closure for the upcoming OMP loop

.omp_data_o.21.iChunkEnds.3 = iChunkEnds.3_72;
.omp_data_o.21.iChunkStarts.1 = iChunkStarts.1_70;
.omp_data_o.21.ulTotalStepNo = ulTotalStepNo_81;
.omp_data_o.21.jend = jend_77(D);
.omp_data_o.21.jstart = jstart_91(D);
.omp_data_o.21.mseq = mseq_93(D);
.omp_data_o.21.distmat = distmat_78(D);
.omp_data_o.21.ulStepNo = &ulStepNo;
.omp_data_o.21.prProgress = &prProgress;
__builtin_GOMP_parallel (PairDistances._omp_fn.0, &.omp_data_o.21, 0, 0);

is placed adjacent to those VLAs iChunkStarts and iChunkEnds on s390x.
The overflowing store overwrites the high part of

.omp_data_o.21.iChunkEnds.3

which finally results in a SIGSEGV once the corresponding OMP function
is executed and the clobbered pointer is dereferenced.

Long story short:

diff --git a/src/clustal/pair_dist.c.orig b/src/clustal/pair_dist.c
index e6dbdc3..bb79e61 100644
--- a/src/clustal/pair_dist.c.orig
+++ b/src/clustal/pair_dist.c
@@ -321,7 +321,7 @@ PairDistances(symmatrix_t **distmat, mseq_t *mseq, int pairdist_type, bool bPerc

     /* FIXME: can get rid of iChunkStart, iChunkEnd now that we're using the arrays */
     iChunkStart = iend;
-    for(iChunk = 0; iChunk <= iNumberOfThreads; iChunk++)
+    for(iChunk = 0; iChunk < iNumberOfThreads; iChunk++)
     {
         iChunkEnd = iChunkStart;
         if (iChunk == iNumberOfThreads - 1){

should fix this.

-- 
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to gcc-10 in Ubuntu.
https://bugs.launchpad.net/bugs/1903817

Title:
  Clustalo 1.2.4-6 segfaults on s390x

Status in Ubuntu on IBM z Systems:
  New
Status in clustalo package in Ubuntu:
  New
Status in gcc-10 package in Ubuntu:
  New

Bug description:
  Hi,
  with gcc-10.2 clustalo segfaults on s390x.

  First of all I beg your pardon, but I didn't find an upstream bug tracker for clustalo, but
  I think you should be aware. Furthermore I think this might eventually be a gcc bug (or at least needs the s390x gcc experts to look at it).
  I decided to open this bug to track things and have a joint conversation, and will then ping the clustalo mail contact about it and let it be mirrored to IBM.

  Issue:
  I see this with the test used in Debian:
    # Run additional test from python-biopython package to verify that
    # this will work as well
    src/clustalo -i debian/tests/biopython_testdata/f002 --guidetree-out temp_test.dnd -o temp_test.aln --outfmt clustal --force

  We run into this segfault:
  Thread 9 "clustalo" received signal SIGSEGV, Segmentation fault.
  [Switching to Thread 0x3fff9ef8870 (LWP 55818)]
  0x000002aa000176e2 in PairDistances._omp_fn.0 () at pair_dist.c:353
  353	                KTuplePairDist((*distmat), mseq, iChunkStarts[iChunk], 
  (gdb) bt
  #0  0x000002aa000176e2 in PairDistances._omp_fn.0 () at pair_dist.c:353
  #1  0x000003fffdaa2066 in gomp_thread_start (xdata=<optimized out>) at ../../../src/libgomp/team.c:123
  #2  0x000003fffd709556 in start_thread (arg=0x3fff9ef8870) at pthread_create.c:463
  #3  0x000003fffd921d46 in thread_start () at ../sysdeps/unix/sysv/linux/s390/s390-64/clone.S:65

  Debugging showed that this is depending on the optimization, when I build
  with -O0 (for debugging) the problem goes away.

  A usual build uses -O3 (from the build system) followed by -g -O2 (from the
  default Debian build flags). For the time being we can avoid the issue by
  setting -O0 there. But I wanted to ask if this is something you could look into?

  In valgrind I see this reported as "Invalid read of size 4"

  In the backtrace it is:
  gdb) p $_siginfo
  $3 = {si_signo = 11, si_errno = 0, si_code = 1, _sifields = {_pad = {0, -16384, 0 <repeats 26 times>}, _kill = {si_pid = 0, si_uid = 4294950912}, _timer = {si_tid = 0, si_overrun = -16384,
        si_sigval = {sival_int = 0, sival_ptr = 0x0}}, _rt = {si_pid = 0, si_uid = 4294950912, si_sigval = {sival_int = 0, sival_ptr = 0x0}}, _sigchld = {si_pid = 0, si_uid = 4294950912,
        si_status = 0, si_utime = 0, si_stime = 0}, _sigfault = {si_addr = 0xffffc000}, _sigpoll = {si_band = 4294950912, si_fd = 0}}}

  The instructions are
  │   0x2aa000176d6 <PairDistances._omp_fn.0+246>     lg      %r2,40(%r9)                                                                                                                       │
  │   0x2aa000176dc <PairDistances._omp_fn.0+252>     sllg    %r1,%r10,2                                                                                                                        │
  │  >0x2aa000176e2 <PairDistances._omp_fn.0+258>     lgf     %r5,0(%r1,%r3)                                                                                                                    │
  │   0x2aa000176e8 <PairDistances._omp_fn.0+264>     lgf     %r4,0(%r1,%r8)

  So it tries to load from
    r3 = 0xffffcf80 (4294954880)
  + r1 = 0x24 (36)
  into r5

  And that matches the segfault address of si_addr = 0xffffc000

  @IBM
  to reproduce:
  1. get an Ubuntu 20.10 system on s390x (or anything with gcc-10.2; OTOH it seems earlier gcc-10 was fine).
  2. edit /etc/apt/sources.list
    2a) add deb-src lines to be able to get the source
    2b) enable proposed to be able to get clustalo 1.2.4-6
  3. run the build
    $ ./debian/rules build
  This will end in the crash that is to debug.

  @Clustalo people:
  If you need s390x system access please check out the IBM Community cloud [1][2]
  which should give you a free VM.

  [1]: https://developer.ibm.com/components/ibm-linuxone/gettingstarted/?_ga=2.85909726.636290536.1605082467-259352313.1597225455
  [2]: https://zcloud.marist.edu/#/login
