A sarcasm-aware emotion–cause pair extraction (ECPE) framework using DistilBERT and joint multi-task learning to identify emotions and their causes in conversational text, with roughly 40% fewer parameters than BERT-base.
Each input document flows through structured preprocessing, sarcasm detection, clause segmentation, contextual encoding, and joint prediction.
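The stages above can be sketched end to end. This is a minimal illustrative pipeline, not the framework's actual code: the clause segmenter here is a naive punctuation splitter, and the sarcasm check is a placeholder standing in for the trained sarcasm head.

```python
from dataclasses import dataclass
import re

@dataclass
class Clause:
    text: str
    sarcastic: bool = False

def segment_clauses(utterance: str) -> list:
    """Naive clause segmentation on punctuation boundaries
    (a stand-in for the framework's segmenter)."""
    parts = re.split(r"[,;.!?]+", utterance)
    return [Clause(p.strip()) for p in parts if p.strip()]

def detect_sarcasm(clause: Clause) -> Clause:
    """Placeholder sarcasm flag; the real system scores each
    clause with the trained sarcasm-detection module."""
    clause.sarcastic = "yeah right" in clause.text.lower()
    return clause

def preprocess(document: str) -> list:
    """Preprocessing slice of the pipeline: segment, then flag
    sarcasm per clause. Contextual encoding and joint prediction
    would follow in the full framework."""
    return [detect_sarcasm(c) for c in segment_clauses(document)]

clauses = preprocess("I missed the bus, yeah right that was fun!")
```

In the full system each `Clause` would then be encoded by DistilBERT before the joint emotion/cause heads run.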
Evaluated on a unified Master ECPE Corpus (DailyDialog + RECCON) with sarcasm-sensitive clause-level annotations.
| Metric | Proposed (DistilBERT + MTL) | Baseline (BERT-base) |
|---|---|---|
| ECPE F1-Score | 0.84 | 0.82 |
| Sarcasm Detection AUC | 0.9138 | — (not supported) |
| Model Parameters | ~66M | ~110M |
| Transformer Layers | 6 | 12 |
| Inference Speed | Faster | Slower |
| Sarcasm-Aware Training | Yes | No |
A shared DistilBERT encoder feeds two parallel classification heads for simultaneous emotion and cause prediction.
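The shared-encoder / parallel-heads idea can be illustrated numerically without loading the actual model. The sketch below uses NumPy stand-ins: random vectors play the role of DistilBERT's per-clause encodings (hidden size 768), and two weight matrices play the two classification heads; label counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Stand-in for the shared DistilBERT encoder output:
# one 768-d vector per clause (DistilBERT's hidden size).
hidden, n_clauses = 768, 4
shared = rng.standard_normal((n_clauses, hidden))

# Two parallel heads read the SAME shared representation.
W_emotion = rng.standard_normal((hidden, 7)) * 0.01  # e.g. 6 emotions + neutral (assumed)
W_cause = rng.standard_normal((hidden, 2)) * 0.01    # cause / not-cause

emotion_probs = softmax(shared @ W_emotion)  # per-clause emotion distribution
cause_probs = softmax(shared @ W_cause)      # per-clause cause probability
```

Because both heads backpropagate into the one encoder, the multi-task setup lets emotion and cause supervision regularize each other, which is the usual motivation for this architecture.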
A unified clause-level corpus combining DailyDialog and RECCON with sarcasm-sensitive annotations and distillation-enhanced supervision.
Turn-taking conversational dataset annotated with Ekman's six emotion labels plus a neutral class. Dialogues reflect realistic daily conversation, making the corpus well suited to training models on natural, informal language.
Richly annotated multimodal dataset of expressive emotional conversations. Provides fine-grained emotional labels and strong coverage of non-neutral emotional expressions.
Emotion–cause pairs extracted by the proposed model from test set sentences, with confidence scores.
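One simple way to produce scored pairs from the two heads is Cartesian pairing with the confidence taken as the product of the per-clause probabilities. This is one plausible scheme with made-up scores, not necessarily the model's actual pairing strategy.

```python
from itertools import product

# Hypothetical per-clause probabilities from the two heads (illustrative values).
emotion_scores = {0: 0.91, 2: 0.78}  # clause index -> P(clause is emotional)
cause_scores = {1: 0.85, 2: 0.60}    # clause index -> P(clause is a cause)

def extract_pairs(emotion_scores, cause_scores, threshold=0.5):
    """Pair every candidate emotion clause with every candidate cause
    clause; keep pairs whose joint confidence clears the threshold."""
    pairs = []
    for e, c in product(emotion_scores, cause_scores):
        conf = emotion_scores[e] * cause_scores[c]
        if conf >= threshold:
            pairs.append(((e, c), round(conf, 4)))
    return sorted(pairs, key=lambda p: -p[1])

pairs = extract_pairs(emotion_scores, cause_scores)  # [((emotion, cause), confidence), ...]
```

A real system would typically add a learned pair filter on top of the raw product, since independent head scores overestimate joint confidence.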
Department of Computer Science and Engineering, Ramco Institute of Technology, Rajapalayam, India.