program synthesis: Instead of writing a program directly, it is often easier to describe its intended behavior as a formal specification, a natural language description, or a set of input-output examples. Can we then translate these descriptions to
program implementations? My prior work addresses this question in the
context of synthesizing relational queries from input-output examples
by developing the example-guided synthesis paradigm. A good starting
point for this work is
my PhD thesis or EGS (PLDI 2021).
I am currently interested in extending example-guided synthesis to more expressive queries and to other language domains, as well as in the combination synthesis problem, where the goal is to fulfill more than one formal specification simultaneously.
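To make the problem concrete, here is a minimal sketch in Python of synthesis from input-output examples: given small example input relations and a desired output relation, search a hand-enumerated space of candidate queries for one that reproduces the output. The relations, the candidate queries, and the synthesize helper are all made up for illustration; this shows the problem setting, not the EGS algorithm itself.

# Toy setting: relations are sets of tuples, candidates are a few
# select-join style queries, and synthesis is a search for a candidate
# consistent with the given output example.

# Input example: who manages whom, and who is senior.
manages = {("ann", "bob"), ("bob", "carol")}
senior = {("ann",), ("bob",)}

# Output example: the relation the synthesized query should produce.
expected = {("ann", "bob")}

# Hypothetical candidate space (a real synthesizer enumerates far more).
candidates = {
    "manages(x, y)": lambda: manages,
    "manages(x, y), senior(x)": lambda: {(x, y) for (x, y) in manages if (x,) in senior},
    "manages(x, y), senior(y)": lambda: {(x, y) for (x, y) in manages if (y,) in senior},
}

def synthesize(candidates, expected):
    """Return the candidate queries consistent with the output example."""
    return [name for name, query in candidates.items() if query() == expected]

print(synthesize(candidates, expected))  # ['manages(x, y), senior(y)']

A real example-guided synthesizer searches a much larger query space and uses the examples themselves to prune and direct that search.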
reasoning in AI: AGI (Artificial General Intelligence)
requires robust reasoning capabilities and cannot solely rely on
language generation methods. While current models excel at pattern
recognition and text generation, they struggle with logical
consistency and factual accuracy over longer contexts. On the other
hand, symbolic reasoning excels at maintaining consistency and
following strict rules, but struggles with the flexibility,
creativity, and natural language understanding that neural networks
provide. To leverage the best of both approaches, this line of work
focuses on embedding formal reasoning tools within Large Language
Models (LLMs). This integration aims to combine the strengths of
data-driven language generation with logical inference, potentially
leading to AI systems that can produce text that is not only fluent
and contextually appropriate, but also logically sound and factually
accurate across extended reasoning chains.
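As a toy illustration of what embedding a formal reasoning tool in a generation pipeline can look like, the following sketch checks whether a set of claims extracted from generated text is logically consistent before the text is accepted. It assumes the z3-solver Python package, and the language-model step is mocked out with hand-written constraints; only the verification step is real.

from z3 import Bool, Implies, Not, Solver, unsat

def llm_extract_claims(text):
    """Stub standing in for an LLM that translates text into logical claims."""
    rains, wet = Bool("rains"), Bool("ground_wet")
    # Claims: "if it rains, the ground gets wet", "it rains",
    # and (inconsistently) "the ground is not wet".
    return [Implies(rains, wet), rains, Not(wet)]

def logically_consistent(claims):
    """Check whether the extracted claims can all hold at the same time."""
    solver = Solver()
    solver.add(*claims)
    return solver.check() != unsat

claims = llm_extract_claims("It rains, rain wets the ground, and the ground is dry.")
if not logically_consistent(claims):
    print("Inconsistent reasoning detected; ask the model to revise its answer.")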
reasoning about AI: While traditional AI development focuses
on functional correctness—ensuring that systems perform as
intended—this project extends to providing guarantees in areas such
as robustness, fairness, and data privacy, particularly in the context
of large language models (LLMs) and broader AI technologies. Robustness
guarantees that the system's output is reliable in the face of
adversarial or unexpected inputs, fairness ensures that the model does
not perpetuate or amplify biases, and data privacy safeguards sensitive
information and protects user data.
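As one concrete instance of the kind of property involved, the following sketch measures demographic parity, a simple fairness notion: the gap between the positive-decision rates of two groups. The predictions and group labels below are made up, and a genuine fairness guarantee would bound such a gap over all relevant inputs rather than measure it on one sample.

# Demographic parity on made-up data: predictions (1 = positive decision)
# and a binary group attribute for each individual.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

def positive_rate(preds, grps, group):
    """Fraction of individuals in `group` who receive a positive decision."""
    members = [p for p, g in zip(preds, grps) if g == group]
    return sum(members) / len(members)

gap = abs(positive_rate(predictions, groups, "a") - positive_rate(predictions, groups, "b"))
print(f"demographic parity gap: {gap:.2f}")  # |0.75 - 0.25| = 0.50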
aalok thakkar: research
I have open research positions at Ashoka University (especially for undergraduate and PhD students). Please send me an email if you are interested.
Ashoka University students can find more information about undergraduate projects, capstone projects and theses, and summer research here.
I am interested in formal methods and logic, and their applications to programming languages and artificial intelligence.