[Bug 2044852] Re: libgcrypt < 1.10.2 returns wrong sha3 hashes for inputs > 4 GiB

Dimitri John Ledkov 2044852 at bugs.launchpad.net
Fri Dec 1 11:56:36 UTC 2023


Uploading to ubuntu (via ftp to upload.ubuntu.com):
  Uploading libgcrypt20_1.9.4-3ubuntu3.1.dsc: done.
  Uploading libgcrypt20_1.9.4-3ubuntu3.1.debian.tar.xz: done.  
  Uploading libgcrypt20_1.9.4-3ubuntu3.1_source.buildinfo: done.
  Uploading libgcrypt20_1.9.4-3ubuntu3.1_source.changes: done.
Successfully uploaded packages.

Thank you

(inline: used the correct SRU version number and added the bug reference)

** Changed in: libgcrypt20 (Ubuntu Jammy)
       Status: New => In Progress

-- 
You received this bug notification because you are a member of Ubuntu
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/2044852

Title:
  libgcrypt < 1.10.2 returns wrong sha3 hashes for inputs > 4 GiB

Status in libgcrypt20 package in Ubuntu:
  Fix Released
Status in libgcrypt20 source package in Jammy:
  In Progress
Status in libgcrypt20 source package in Lunar:
  Fix Released
Status in libgcrypt20 source package in Mantic:
  Fix Released
Status in libgcrypt20 source package in Noble:
  Fix Released

Bug description:
  [ Impact ]

  SHA3 produces wrong results for inputs bigger than 4 GiB

  [ Test Plan ]

  Calculate sha3 hash of a big input file and compare with output of
  another implementation like OpenSSL.

  Expected behavior: same output
  Actual behavior: different output

  Run the reproducer attached below (if your machine can afford to
  allocate 5 GiB of RAM at once) and verify that the patch fixes the
  assertion error.
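
  As an illustration, here is a minimal sketch of such a reproducer
  using libgcrypt's gcry_md API (an assumption about its shape; the
  attached hashtest.c may differ in detail).  It hashes the same 5 GiB
  buffer with SHA3-256 twice, once in a single gcry_md_write() call and
  once in 64 MiB chunks, and asserts that the two digests match.  On an
  unpatched libgcrypt the single large write hits the overflowing path
  and the assertion fails.

  /* sha3-bigwrite.c -- illustrative sketch, not the attached hashtest.c.
   * Hashes a 5 GiB buffer with SHA3-256 once in a single gcry_md_write()
   * call and once in 64 MiB chunks; on libgcrypt < 1.10.2 the digests
   * differ.  Build: gcc -O2 sha3-bigwrite.c -o sha3-bigwrite -lgcrypt
   * Needs about 5 GiB of free RAM; 64-bit only. */
  #include <assert.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <gcrypt.h>

  int main(void)
  {
      const size_t total = 5ULL * 1024 * 1024 * 1024;  /* 5 GiB, > 4 GiB */
      const size_t chunk = 64ULL * 1024 * 1024;        /* 64 MiB */
      unsigned char *buf = calloc(1, total);           /* zero-filled input */
      if (!buf) {
          fprintf(stderr, "cannot allocate 5 GiB\n");
          return 1;
      }

      if (!gcry_check_version(GCRYPT_VERSION)) {
          fprintf(stderr, "libgcrypt version mismatch\n");
          return 1;
      }
      gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);

      gcry_md_hd_t one, many;
      gcry_md_open(&one, GCRY_MD_SHA3_256, 0);
      gcry_md_open(&many, GCRY_MD_SHA3_256, 0);

      /* Whole buffer at once: exercises the > 4 GiB code path. */
      gcry_md_write(one, buf, total);

      /* Same data in small chunks: the known-good path. */
      for (size_t off = 0; off < total; off += chunk)
          gcry_md_write(many, buf + off, chunk);

      size_t dlen = gcry_md_get_algo_dlen(GCRY_MD_SHA3_256);
      const unsigned char *d1 = gcry_md_read(one, GCRY_MD_SHA3_256);
      const unsigned char *d2 = gcry_md_read(many, GCRY_MD_SHA3_256);

      /* Fails on an unpatched libgcrypt, passes with the fix applied. */
      assert(memcmp(d1, d2, dlen) == 0);
      puts("digests match");

      gcry_md_close(one);
      gcry_md_close(many);
      free(buf);
      return 0;
  }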

  [ Where problems could occur ]

  People relying on the broken hash might be surprised by the new fixed
  result.  The impact is hopefully low since SHA3 from libgcrypt is not
  too widely used, especially not with this input size.

  [ Other Info ]

  From upstream bug report:

  The SHA3 functions give wrong results for inputs larger than 4 GB
  because the (originally size_t) length argument is handled as an
  unsigned int in keccak_write, which leads to integer overflows.  This
  does not happen if the data is fed into md_write in smaller chunks.
  More information and reproducers are available from Clemens in the
  attached bug.
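
  As a hypothetical illustration of the failure mode (this is not
  libgcrypt's actual code): when a 64-bit size_t length passes through
  a 32-bit unsigned int parameter, only the low 32 bits survive, so a
  5 GiB write is silently treated as a 1 GiB write.

  #include <stdio.h>
  #include <stddef.h>

  /* Hypothetical internal helper that takes the length as unsigned int;
   * illustrative only, not libgcrypt source. */
  static void write_internal(unsigned int nbytes)
  {
      printf("internal layer sees %u bytes\n", nbytes);
  }

  int main(void)
  {
      size_t inlen = 5ULL * 1024 * 1024 * 1024;  /* 5 GiB = 0x140000000 */
      /* The implicit conversion keeps only the low 32 bits:
       * 0x40000000 bytes = 1 GiB. */
      write_internal(inlen);
      printf("caller passed %zu bytes\n", inlen);
      return 0;
  }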

  The fix that should solve the problem (use of size_t) is now
  available at GitLab:
  https://gitlab.com/redhat-crypto/libgcrypt/libgcrypt-mirror/-/merge_requests/6
  Comments are welcome.

  I considered updating some of the hash tests to capture this issue,
  but have not found a simple way to do that yet, so I will leave it to
  you to decide whether a regression test is needed here.

  Upstream Bug: https://dev.gnupg.org/T6217
  Upstream Fix: https://dev.gnupg.org/rC9c828129b2058c3f36e07634637929a54e8377ee

  [ WARNING ]

  !!! Warning !!!

  The hashtest.c reproducer allocates 5 GiB of RAM; do not run it on
  32-bit architectures.

  Do not run it if you do not have that much RAM free, as it will
  likely trigger the OOM killer and may take down your machine.

  !!! Warning !!!

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libgcrypt20/+bug/2044852/+subscriptions



