Prof. Shahram Latifi,
IEEE Fellow
University of Nevada,
USA
Biography:
Shahram Latifi received the Master of
Science and the PhD degrees both in
Electrical and Computer Engineering from
Louisiana State University, Baton Rouge, in
1986 and 1989, respectively. He is currently
a Professor of Electrical Engineering at the
University of Nevada, Las Vegas. Dr. Latifi
is the co-director of the Center for
Information Technology and Algorithms (CITA)
at UNLV. He has designed and taught
undergraduate and graduate courses in the
broad spectrum of Computer Science and
Engineering in the past four decades. He has
given keynotes and seminars on machine
learning/AI and IT-related topics all over
the world. He has authored over 300
technical articles in the areas of
networking, AI/ML, cybersecurity, image
processing, biometrics, fault tolerant
computing, parallel processing, and data
compression. His research has been funded by
NSF, NASA, DOE, DoD, Boeing, and Lockheed.
Dr. Latifi was an Associate Editor of the
IEEE Transactions on Computers (1999-2006),
an IEEE Distinguished Speaker (1997-2000),
Co-founder and Chair of the IEEE Int'l Conf.
on Information Technology (2000-2004) and
founder and Chair of the International Conf.
on Information Technology-New Generations
(2005-Present). Dr. Latifi is the recipient
of several research awards, the most recent
being the Barrick Distinguished Research
Award (2021). In December 2020, Dr. Latifi
was recognized as being among the top 2% of
researchers worldwide, according to the
Stanford top 2% list (publication data in
Scopus and Mendeley). He is an IEEE Fellow
(2002) and a Registered Professional
Engineer in the State of Nevada.
Speech Title: AI Advancements
and Challenges: Navigating the Future of
Responsible AI
Over the past two decades, AI technology has
advanced at an astonishing pace.
Breakthroughs such as Deep Learning,
Generative Adversarial Networks, Transfer
Learning, and Large Language Models have
accelerated this progress, enabling AI to
revolutionize various aspects of society. AI
has significantly enhanced the performance
of systems in fields like education,
healthcare, aerospace, manufacturing,
security, e-commerce, and art. However,
alongside these tremendous benefits come
major concerns about the potential threats
AI poses to humanity. How can we ensure our
training data is unbiased and well-balanced?
How can we guarantee that AI systems respect
individual privacy? And most importantly,
how can we ensure these systems remain
controllable and act responsibly?
In this talk, I will provide a brief
overview of AI, Machine Learning (ML), and
Deep Learning (DL). While there are
significant challenges in achieving
general-purpose AI (as opposed to Narrow
AI), there are even greater issues that must
be addressed to ensure AI is safe, fair, and
secure. I will also discuss recent efforts
in the United States and around the world to
build responsible AI.
Prof. Dr. Ho-Jin Choi
Korea Advanced Institute of Science & Technology (KAIST),
South Korea
Biography:
Prof. Dr. Ho-Jin Choi is a professor in the
School of Computing at Korea Advanced
Institute of Science and Technology (KAIST),
Daejeon, Korea. He received a BS in computer
engineering from Seoul National University
(SNU), Korea, an MSc in computing software
and systems design from Newcastle
University, UK, and a PhD in artificial
intelligence from Imperial College London,
UK. During the 1980s he worked for DACOM
Corp., Korea; in the late 1990s he joined
Korea Aerospace University; and he moved to
KAIST in 2009. In the early 2000s, he
visited Carnegie Mellon University (CMU),
USA, and served as adjunct faculty for the
Master of Software Engineering (MSE) program
operated jointly by CMU and KAIST for 10
years. In the 2010s he participated in
research at the Systems Biomedical
Informatics Research Center at the College
of Medicine, SNU, worked with Samsung
Electronics on big data
intelligence solutions, and with UAE’s
Khalifa University on intelligent
multi-sensor healthcare surveillance. He
also participated in a Korean national
project called Exobrain for natural language
question/answering. Since 2018, he has been
the director of the Smart Energy Artificial
Intelligence Research Center, and since 2020
the director of the Center for Artificial
Intelligence Research, both at KAIST. His
current research interests include natural
language processing, machine learning,
explainable AI, and smart energy.
Speech Title: DialogCC for Creating
High-Quality Multi-Modal Dialogue Datasets
As sharing images in instant messaging has
become common, active research has been
conducted on learning image-text multi-modal
dialogue models.
Training a well-generalized multi-modal
dialogue model remains challenging due to
the low quality and limited diversity of
images per dialogue in existing multi-modal
dialogue datasets. In this research, we
propose an automated pipeline to construct a
multi-modal dialogue dataset, ensuring both
dialogue quality and image diversity without
requiring any human effort. In order to
guarantee the coherence between images and
dialogue, we prompt GPT-4 to infer potential
image-sharing moments, e.g., utterance,
speaker, rationale, and image description.
Furthermore, we leverage CLIP similarity to
maintain consistency between the multiple
images aligned to each utterance. Using this
pipeline, we introduce DialogCC, a
high-quality and diverse multi-modal
dialogue dataset that surpasses existing
approaches in terms of quality and diversity
in human evaluation. Our experiments
highlight multi-modal dialogue models
trained using our dataset, and their
generalization performance on unseen
dialogue datasets.
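The CLIP-similarity consistency check described in the abstract can be sketched roughly as follows. This is an illustrative assumption, not the authors' actual implementation: the stand-in embeddings, the cosine-similarity filter, and the `threshold` value are all hypothetical; in practice the vectors would come from a CLIP text/image encoder.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def filter_images(utterance_emb, image_embs, threshold=0.5):
    """Return indices of candidate images consistent with the utterance.

    utterance_emb : CLIP-style text embedding of the image-sharing utterance.
    image_embs    : list of CLIP-style image embeddings for candidate images.
    threshold     : hypothetical similarity cutoff (an assumption here).
    """
    return [i for i, emb in enumerate(image_embs)
            if cosine_similarity(utterance_emb, emb) >= threshold]

# Toy example with 2-D stand-in embeddings:
utterance = [1.0, 0.0]
candidates = [[0.9, 0.1], [0.0, 1.0], [0.7, 0.7]]
kept = filter_images(utterance, candidates, threshold=0.5)  # → [0, 2]
```

Keeping only images above the similarity cutoff is one simple way to ensure that the multiple images aligned to an utterance remain mutually consistent with its content.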