Mert Inan
Northeastern University
Modeling Intensification for Sign Language Generation: A Computational Approach
M İnan, Y Zhong, S Hassan, L Quandt, M Alikhani
arXiv preprint arXiv:2203.09679, 2022
Including Facial Expressions in Contextual Embeddings for Sign Language Generation
C Viegas, M İnan, L Quandt, M Alikhani
arXiv preprint arXiv:2202.05383, 2022
Findings of the Second WMT Shared Task on Sign Language Translation (WMT-SLT23)
M Müller, M Alikhani, E Avramidis, R Bowden, A Braffort, N Cihan Camgöz, ...
Association for Computational Linguistics, 2023
COSMic: A Coherence-Aware Generation Metric for Image Descriptions
M İnan, P Sharma, B Khalid, R Soricut, M Stone, M Alikhani
arXiv preprint arXiv:2109.05281, 2021
Structurally-informed deconvolution of functional magnetic resonance imaging data
TAW Bolton, Y Farouj, M Inan, D Van De Ville
2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019 …, 2019
Zero-shot Cross-Linguistic Learning of Event Semantics
M Alikhani, T Kober, B Alhafni, Y Chen, M Inan, E Nielsen, S Raji, ...
arXiv preprint arXiv:2207.02356, 2022
Multimodal Embodied Plan Prediction Augmented with Synthetic Embodied Dialogue
A Padmakumar, M Inan, S Gella, PL Lange, D Hakkani-Tur
Proceedings of the 2023 Conference on Empirical Methods in Natural Language …, 2023
Learning to Generate Context-Sensitive Backchannel Smiles for Embodied AI Agents with Applications in Mental Health Dialogues
M Bilalpur, M Inan, D Zeinali, JF Cohn, M Alikhani
arXiv preprint arXiv:2402.08837, 2024
ISABEL: An Inclusive and Collaborative Task-Oriented Dialogue System
A Sicilia, Y Asano, K Atwell, Q Cheng, D Gupta, S Hassan, M Inan, ...
Generating Signed Language Instructions in Large-Scale Dialogue Systems
M Inan, K Atwell, A Sicilia, L Quandt, M Alikhani
Proceedings of the 2024 Conference of the North American Chapter of the …, 2024
Combining Discourse Coherence with Large Language Models for More Inclusive, Equitable, and Robust Task-Oriented Dialogue
K Atwell, M Inan, AB Sicilia, M Alikhani
Proceedings of the 2024 Joint International Conference on Computational …, 2024
Seeing Eye-to-Eye: Cross-Modal Coherence Relations Inform Eye-gaze Patterns During Comprehension & Production
M Inan, M Alikhani
Proceedings of the 2024 Joint International Conference on Computational …, 2024
Dialogue with Robots: Proposals for Broadening Participation and Research in the SLIVAR Community
C Kennington, M Alikhani, H Pon-Barry, K Atwell, Y Bisk, D Fried, ...
arXiv preprint arXiv:2404.01158, 2024
Proceedings of the 3rd Combined Workshop on Spatial Language Understanding and Grounded Communication for Robotics (SpLU-RoboNLP 2023)
A Padmakumar, M Inan, Y Fan, X Wang, M Alikhani
Proceedings of the 3rd Combined Workshop on Spatial Language Understanding …, 2023
Learning Multimodal Cues of Children’s Uncertainty
Q Cheng, M Inan, R Mbarki, G Grmek, T Choi, Y Sun, K Persaud, J Wang, ...
Proceedings of the 24th Annual Meeting of the Special Interest Group on …, 2023
Multimodal Contextualized Plan Prediction for Embodied Task Completion
M Inan, A Padmakumar, S Gella, P Lange, D Hakkani-Tur
arXiv preprint arXiv:2305.06485, 2023
Grounding Novel Utterances in Visual Dialogue
M Inan, M Alikhani
Learning cognitive and linguistic prosodic categories for automatic cross-lingual sign language understanding
M Inan, S Hassan, LC Quandt, M Alikhani
Proceedings of the Annual Meeting of the Cognitive Science Society 44 (44), 2022
Living Indicators of Sea Pollution: Mytilus galloprovincialis
M İnan
The Zonguldak mines after 1960 through the eyes of the workers [İşçilerin gözünden 1960 sonrası Zonguldak maden ocakları]
G Sayın, M İnan, İ Gökalp, O Gülhat, G Varlı
Bilkent University