
ISIT-2023

Faculty and students presented their research work at the IEEE International Symposium on Information Theory (ISIT) 2023, held in Taipei, Taiwan, from 25 to 30 June:

  • t-PIR Schemes with Flexible Parameters via Star Products of Berman Codes – Dr. Prasad Krishnan and his students Srikar Kale (MS by Research) and Keshav Agarwal (4th-year B.Tech, ECE)

Research work as explained by the authors:

We present a new class of private information retrieval (PIR) schemes that keep the identity of the requested file private in the presence of at most t colluding servers, based on the recent framework developed for such t-PIR schemes using star products of transitive codes. These t-PIR schemes employ the class of Berman codes as the storage-retrieval code pairs. Berman codes, which are binary linear codes of length n^m for integers n ≥ 2 and m ≥ 1, were recently shown to achieve the capacity of the binary erasure channel. We provide a complete characterization of the star products of the Berman code pairs, enabling us to calculate the PIR rate of the star product-based schemes that employ these codes. The schemes we present offer flexibility in the number of servers, the PIR rate, the storage rate, and the collusion parameter t, owing to the numerous codes available in the class of Berman codes.
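The star (Schur) product underlying these schemes is a standard construction: for binary linear codes C1 and C2 of the same length, C1 ⋆ C2 is the linear span of all componentwise products of codewords from the two codes. The following is a minimal illustrative sketch of that operation for small binary codes (toy codes only, not the Berman codes of the paper; all function names here are our own):

```python
from itertools import product

def span_gf2(vectors, n):
    """Return all codewords in the GF(2)-linear span of the given
    length-n binary vectors (incremental Gaussian elimination)."""
    basis = {}  # pivot position -> basis vector
    for v in vectors:
        v = list(v)
        while any(v):
            p = next(i for i, x in enumerate(v) if x)  # leading 1
            if p in basis:
                v = [a ^ b for a, b in zip(v, basis[p])]  # reduce by pivot
            else:
                basis[p] = v
                break
    code = set()
    for coeffs in product([0, 1], repeat=len(basis)):
        w = [0] * n
        for c, b in zip(coeffs, list(basis.values())):
            if c:
                w = [a ^ bi for a, bi in zip(w, b)]
        code.add(tuple(w))
    return code

def star_product(C1, C2, n):
    """Star (Schur) product C1 * C2: the linear span of all
    componentwise products c1 * c2 with c1 in C1, c2 in C2."""
    prods = [tuple(a & b for a, b in zip(c1, c2)) for c1 in C1 for c2 in C2]
    return span_gf2(prods, n)

# Toy example: the [3,2] even-weight code starred with the [3,1]
# repetition code gives back the even-weight code itself.
C_even = {(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)}
C_rep = {(0, 0, 0), (1, 1, 1)}
print(sorted(star_product(C_even, C_rep, 3)))
```

In star-product PIR schemes, the rate of the scheme is governed by the dimension of such products of the storage and retrieval codes, which is why the paper's characterization of star products of Berman code pairs yields the PIR rate directly.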

ISIT is the premier international conference series dedicated to the advancement of information theory and related areas. It brings together an international community of researchers and practitioners each year to present and discuss new research results and perspectives on future developments relevant to all areas of information theory, including big data analytics, source and channel coding, communication theory and systems, cryptography and security, detection and estimation, emerging applications, networks, network coding, signal processing, and statistical/machine learning.

 

  • Towards Addressing GAN Training Instabilities: Dual Objective GANs with Tunable Parameters – Dr. Gowtham R Kurri, with Monica Welfert, Kyle Otstot, and Lalitha Sankar (Arizona State University)

 

Research work as explained by the authors:

In an effort to address the training instabilities of GANs, we introduce a class of dual-objective GANs with different value functions (objectives) for the generator (G) and discriminator (D). In particular, we model each objective using alpha-loss, a tunable classification loss, to obtain (alpha_D, alpha_G)-GANs, parameterized by (alpha_D, alpha_G). For a sufficiently large number of samples and sufficient capacities for G and D, we show that the resulting non-zero-sum game simplifies to minimizing an f-divergence under appropriate conditions on (alpha_D, alpha_G). In the finite-sample and finite-capacity setting, we define an estimation error that quantifies the gap in the generator's performance relative to the optimal setting with infinite samples, and we obtain upper bounds on this error, showing it to be order optimal under certain conditions. Finally, we highlight the value of tuning (alpha_D, alpha_G) in alleviating training instabilities for the synthetic 2D Gaussian mixture ring and the Stacked MNIST datasets.
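The tunable alpha-loss at the heart of this construction interpolates between familiar classification losses. The sketch below is a stand-alone illustration of the loss on the probability assigned to the true class (the function name and argument conventions are ours, following the alpha-loss definition from the authors' line of work):

```python
import math

def alpha_loss(p_true, alpha):
    """Alpha-loss of a classifier that assigns probability p_true
    (0 < p_true <= 1) to the correct label, tuned by alpha in (0, inf].
    alpha = 1 recovers log-loss; alpha -> inf gives the bounded
    linear loss 1 - p_true; alpha = 1/2 gives 1/p_true - 1."""
    if alpha == 1:
        return -math.log(p_true)
    if math.isinf(alpha):
        return 1.0 - p_true
    return (alpha / (alpha - 1.0)) * (1.0 - p_true ** ((alpha - 1.0) / alpha))

# Interpolation between log-loss (alpha = 1) and a bounded loss (alpha = inf):
for a in (0.5, 1, 2, float("inf")):
    print(a, alpha_loss(0.8, a))
```

Using one such tunable loss for D and another for G is what makes the game non-zero-sum, and tuning the pair (alpha_D, alpha_G) trades off gradient behavior against robustness during training.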

Dr. Gowtham R Kurri served as Session Chair for the Technical Session on Machine Learning 1 in this conference.

Hosted by the IEEE Information Theory Society, ISIT features contributed papers, the Shannon lecture and plenary talks, as well as invited and tutorial sessions. 

Conference page:  https://isit2023.org/

July 2023
