My research interests lie broadly in Multimodal Learning, Visual Question Answering (VQA), Natural Language Processing (NLP), and Decentralized Learning.
2024
Visual Robustness Benchmark for Visual Question Answering (VQA)
Md Farhan Ishmam*, Ishmam Tashdeed*, Talukder Asir Saadat*, Md Hamjajul Ashmafee, Abu Raihan Mostofa Kamal, and Md Azam Hossain
@article{ishmam2024visual,
  title   = {Visual Robustness Benchmark for Visual Question Answering (VQA)},
  author  = {Ishmam, Md Farhan and Tashdeed, Ishmam and Saadat, Talukder Asir and Ashmafee, Md Hamjajul and Kamal, Abu Raihan Mostofa and Hossain, Md Azam},
  journal = {arXiv preprint arXiv:2407.03386},
  year    = {2024},
  url     = {https://arxiv.org/pdf/2407.03386},
}
Enhancing Zero-Shot Crypto Sentiment With Fine-Tuned Language Model and Prompt Engineering
Rahman S. M. Wahidur, Ishmam Tashdeed, Manjit Kaur, and Heung-No Lee
@article{rahman2024enhancing,
  title    = {Enhancing Zero-Shot Crypto Sentiment With Fine-Tuned Language Model and Prompt Engineering},
  author   = {Wahidur, Rahman S. M. and Tashdeed, Ishmam and Kaur, Manjit and Lee, Heung-No},
  journal  = {IEEE Access},
  year     = {2024},
  volume   = {12},
  pages    = {10146-10159},
  keywords = {Cryptocurrency;Social networking (online);Analytical models;Training;Context modeling;Sentiment analysis;Transformers;Zero-shot learning;Supervised learning;in-context learning;supervised fine-tuning;instruction tuned;prompt engineering},
  doi      = {10.1109/ACCESS.2024.3350638},
  url      = {https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10382518},
}