Vasileios Mezaris
Electrical and Computer Engineer, Ph.D.

Dr. Vasileios Mezaris is a Research Director (Senior Researcher Grade A) with the Information Technologies Institute / Centre for Research and Technology Hellas, Thessaloniki, Greece. He is the Head of the Intelligent Digital Transformation Laboratory, where he leads a group of researchers working on multimedia understanding and artificial intelligence; in particular, on image and video analysis and annotation, machine learning and deep learning for multimedia understanding and big data analytics, explainable AI and green AI, multimedia indexing and retrieval, and applications of multimedia understanding and artificial intelligence in specific domains (including TV broadcasting and news, education and culture, medical / ecological / business data analysis, and security applications).

Dr. Mezaris has co-authored more than 40 papers in refereed journals, 20 book chapters, 200 papers in international conferences, and 3 patents. He has edited two books and several proceedings volumes. He serves as Senior Area Editor for the IEEE Signal Processing Letters (2020-present) and as Editorial Board Member for the International Journal of Multimedia Information Retrieval (2024-present); he previously served as Associate Editor for the IEEE Signal Processing Letters (2016-2020) and the IEEE Transactions on Multimedia (2012-2015 and 2018-2022); and he regularly serves as a reviewer for many international journals and conferences. He has participated in many research projects, including as Coordinator of the EC H2020 projects InVID and MOVING. He serves as Chairman of the Scientific Council of the Information Technologies Institute (2019-present) and as Member / Representative of CERTH in the General Assembly of the Hellenic Foundation for Research and Innovation (HFRI), a national research-funding agency (Oct. 2023-present). He is a Senior Member of the IEEE.

NEWS
(Mar. 2024) M. Ntrougkas, N. Gkalelis, V. Mezaris, "T-TAME: Trainable Attention Mechanism for Explaining Convolutional Networks and Vision Transformers", arXiv:2403.04523, DOI:10.48550/arXiv.2403.04523.
(Mar. 2024) Paper submission deadline extended to April 3, 2024: 1st Int. Workshop on Video for Immersive Experiences (Video4IMX-2024) at ACM IMX 2024, June 2024. Accepted workshop papers will be published in ACM ICPS and will be available in the ACM Digital Library.
(Feb. 2024) Call for Papers, Special Session on Multimedia Indexing for XR (MmIXR) at CBMI 2024, Sept. 18-20 in Reykjavik, Iceland. Paper submission deadline: March 22, 2024, extended to April 12, 2024.
(Feb. 2024) Our participation in the NewsImages Task of the MediaEval 2023 evaluation benchmark won first place! Details of our approach in our paper and slides.
(Jan. 2024) Call for Papers, 1st Int. Workshop on Video for Immersive Experiences (Video4IMX-2024) at ACM IMX 2024, June 12-14 in Stockholm, Sweden. Accepted workshop papers will be published in ACM ICPS and will be available in the ACM Digital Library. Paper submission deadline: March 17, 2024.
(Sept. 2023) Organized a breakout session on "AI explainability for vision tasks", and also gave a presentation as an invited expert in a breakout session on "Human-Aligned Video AI", at the Theme Development Workshop "Trusted AI - The future of creating ethical and responsible AI systems" organized by the EU's AI NoEs. My presentation slides are available here (session 1) and here (session 12).
(Sept. 2023) The Verification Plugin by InVID & WeVerify, now further developed within our vera.ai project, has reached 100,000+ users in Chrome!
(Aug. 2023) Call for Papers, 30th Int. Conf. on Multimedia Modeling (MMM 2024), Jan.-Feb. 2024, Amsterdam, The Netherlands: https://mmm2024.org/submit.html. Regular and Special Session Paper submission deadline: Sept. 4, 2023.
(June 2023) The Proceedings of the 2023 ACM International Conference on Multimedia Retrieval (ICMR 2023), held in Thessaloniki, Greece, have been published in the ACM Digital Library: ICMR 2023 Proceedings. The conference program is posted on https://icmr2023.org/.
(Feb. 2023) Our work on "TAME: Attention Mechanism Based Feature Fusion for Generating Explanation Maps of Convolutional Neural Networks", presented at IEEE ISM 2022, was featured as the research of the month on RSIP Vision's Computer Vision News magazine. Read the full interview.
(Jan. 2023) Our participation in the NewsImages Task of the MediaEval 2022 evaluation benchmark won first place! Find out more about our approach in the paper and slides.
(Dec. 2022) Our IEEE ISM 2022 paper titled "TAME: Attention Mechanism Based Feature Fusion for Generating Explanation Maps of Convolutional Neural Networks" received the ISM 2022 Best Paper Award. The paper's presentation slides and software are available on slideshare and github, respectively.
(Dec. 2022) Call for Papers, ACM ICMR 2023, Thessaloniki, Greece: https://icmr2023.org/call-for-papers/. Paper submission deadline: Jan. 31, 2023.
(Nov. 2022) Talk on "Explaining the decisions of image/video classifiers" at the 1st Nice Workshop on Interpretability, Nice, France, Nov. 2022. slides
(Oct. 2022) Why did my video event recognition method miscategorize this video as "Making lemonade"? Find out why in "ViGAT: Bottom-up event recognition and explanation in video using factorized graph attention network", by N. Gkalelis, D. Daskalakis and V. Mezaris, IEEE Access, vol. 10, pp. 108797-108816, 2022. DOI:10.1109/ACCESS.2022.3213652. Software available at https://github.com/bmezaris/ViGAT.
(July 2022) Two new Horizon Europe research projects, on developing Artificial Intelligence (AI) and eXtended Reality (XR) technologies, have been approved and will kick off in Fall 2022.
(Feb. 2022) Call for Technical Demonstrations, ACM Int. Conf. on Multimedia Retrieval (ICMR), Newark, NJ, USA, June 27-30, 2022. Technical demo paper submission deadline: February 24, 2022, extended to March 20, 2022.
(Feb. 2022) Call for Tutorial Proposals, ACM Multimedia (ACMMM), Lisbon, Portugal, Oct. 10-14, 2022. Tutorial proposal submission deadline: March 1, 2022.
(Nov. 2021) Our survey paper, "Video Summarization Using Deep Neural Networks: A Survey", by E. Apostolidis, E. Adamantidou, A. Metsai, V. Mezaris, I. Patras, has been published in the Proceedings of the IEEE journal (Impact Factor: 10.96), vol. 109, no. 11, pp. 1838-1863, Nov. 2021. The published paper is available at DOI:10.1109/JPROC.2021.3117472 and the accepted version is also available at https://arxiv.org/abs/2101.06072.
(Sept. 2021) A new and exciting research project just started: the H2020 RIA "CRiTERIA: Comprehensive Data-driven Risk and Threat Assessment Methods for the Early and Reliable Identification, Validation and Analysis of migration-related Risks".
(Sept. 2021) The Verification Plugin by InVID & WeVerify has reached 50,000+ users in Chrome!
(May 2021) Call for Papers, Special Issue on "Data-driven Personalisation of Television Content", Multimedia Systems Journal (Springer). Paper submission deadline: September 15, 2021.
(Feb. 2021) Call for Papers, DataTV 2021: 2nd Int. Workshop on Data-driven Personalisation of Television at the ACM Int. Conf. on Interactive Media Experiences (IMX 2021), June 21-23, 2021 (to be held as a virtual event). Paper submission deadline: 29 March 2021.
(Feb. 2021) The Verification Plugin by InVID & WeVerify has reached 40,000+ users in Chrome!
(Jan. 2021) E. Apostolidis, E. Adamantidou, A. Metsai, V. Mezaris, I. Patras, "Video Summarization Using Deep Neural Networks: A Survey", arXiv:2101.06072, https://arxiv.org/abs/2101.06072.
(Jan. 2021) The proceedings of the MMM 2021 conference are now available as Springer LNCS vol. 12572 (MMM 2021 Proceedings Part I) and LNCS vol. 12573 (MMM 2021 Proceedings Part II).
(Dec. 2020) Presentation on "Misinformation on the internet: Video and AI", delivered at the "Age of misinformation: an interdisciplinary outlook on fake news" workshop/webinar, is available on Slideshare.
(Oct. 2020) Presentations on "Migration-Related Semantic Concepts for the Retrieval of Relevant Video Content" (delivered at INTAP 2020) and on GAN-based video summarization (delivered at the AI4Media Workshop on GANs for Media Content Generation) are available on Slideshare.
(Sept. 2020) Two new and exciting research projects just started: the H2020 RIA "AI4Media: A European Excellence Centre for Media, Society and Democracy" and the Greek national project "QuaLiSID: Quality of Life Support System for People with Intellectual Disability".
(Aug. 2020) The Verification Plugin by InVID & WeVerify has reached 30,000+ users in Chrome!
(July 2020) Tutorial on Video Summarization and Re-use Technologies and Tools delivered at the IEEE Int. Conf. on Multimedia and Expo (ICME), 6-10 July 2020. Slides are available for Part I: "Automatic video summarization".
(July 2020) Call for Papers, MMM 2021: 27th Int. Conf. on Multimedia Modeling, 25-27 January 2021, Prague, Czech Republic. Paper submission deadline (extended): 31 August 2020.
(June 2020) Invited talk "Video, AI and News: video analysis and verification technologies for supporting journalism" at the JOLT ETN virtual training event, June 2020.
(May 2020) Call for Papers, AI4TV 2020: 2nd Int. Workshop on AI for Smart TV Content Production, Access and Delivery @ ACM Multimedia 2020, 12-16 October 2020, Seattle, USA. Paper submission deadline: 30 July 2020.
(Apr. 2020) New on-line video summarization demo. Submit your videos and let our state-of-the-art Generative Adversarial Learning method generate summaries for use in various social media channels. Watch a 2-minute tutorial video, and try the demo with your own videos.
(Mar. 2020) New edition of our on-line video analysis demo (v5.0), working with a new concept detection method and supporting the YouTube-8M concepts, is released. Try the demo and learn more about its technology.
(Mar. 2020) The Verification Plugin by InVID & WeVerify (as of Nov. 2019 in version v0.72) has reached 20,000+ users in Chrome.
(Jan. 2020) Our MMM 2020 paper titled "Unsupervised Video Summarization via Attention-Driven Adversarial Learning" received the MMM2020 Best Paper Award. The paper's software and slides are available on github and slideshare, respectively.
(Jan. 2020) Software and slides for our MMM 2020 paper titled "Subclass deep neural networks: re-enabling neglected classes in deep network training for multimedia classification" are available on github and slideshare, respectively.
(Nov. 2019) The slides of our invited talk on “Video & AI: capabilities and limitations of AI in detecting video manipulations” at the Int. Conf. on "Disinformation in Cyberspace: Media literacy meets Artificial Intelligence", organized as part of the Media Literacy Week 2019 in Athens, Greece, on Nov. 15, 2019, are available on slideshare.
(Nov. 2019) Our presentation on "Implementing artificial intelligence strategies for content annotation and publication online" at the FIAT/IFTA 2019 World Conference is available on slideshare.
(Nov. 2019) The presentation of our paper "A Stepwise, Label-based Approach for Improving the Adversarial Training in Unsupervised Video Summarization" is available on slideshare.
(Nov. 2019) The proceedings of the 1st Int. Workshop on AI for Smart TV Content Production, Access and Delivery, AI4TV 2019 @ ACM Multimedia 2019, are available in the ACM Digital Library. The workshop summary paper is also available here.
(Sept. 2019) Our new book, "Video Verification in the Fake News Era", has just been published by Springer; check it out at DOI:10.1007/978-3-030-26752-0.
(Sept. 2019) "A new free platform can help us cut through today’s information overload": an article about the MOVING H2020 EU project and its results has been published on the CORDIS website. See https://cordis.europa.eu/project/rcn/199995/brief/en.
(Aug. 2019) 1st Int. Workshop on Data-driven Personalisation of Television (DataTV 2019), at the ACM Int. Conf. on Interactive Experiences for Television and Online Video (TVX 2019), June 5, 2019, Manchester, UK. The workshop's proceedings are available here.
(July 2019) New release (v4.0) of our video analysis service (updated analysis algorithms; simpler UI) is online. Watch a tutorial video about it, and then try the service yourself with your own videos.
(June 2019) "Is that news video fake?" Article about the InVID H2020 EU project and its results has been published on the CORDIS website. See https://cordis.europa.eu/project/rcn/199134/brief/en.
(June 2019) A new and exciting H2020 Research and Innovation Action, MIRROR, just started!
(May 2019) Int. Workshop on AI for Smart TV Content Production, Access and Delivery, AI4TV 2019 @ ACM Multimedia 2019, 21-25 October 2019, Nice, France. Call for papers is available here. Paper submissions due 8 July 2019.
(Mar. 2019) The InVID project was successfully completed, and the InVID Verification Plugin (as of Jan. 2019 in version v0.68) has exceeded 10,000 users and keeps growing.
(Jan. 2019) Our latest lecture video fragmentation method, [mmm19d], is now used in the popular VideoLectures.NET platform, allowing its users to access specific fragments of lectures.
(Jan. 2019) The proceedings of the MMM 2019 conference are now available as Springer LNCS vol. 11295 (MMM 2019 Proceedings Part I) and LNCS vol. 11296 (MMM 2019 Proceedings Part II).
(Dec. 2018) 25th Int. Conf. on Multimedia Modeling (MMM 2019), January 8-11, 2019, Thessaloniki, Greece: the final conference program is now available on the conference program page.
(Oct. 2018) The InVID Verification Plugin (as of Sept. 2018 in version v0.64) has exceeded 6,000 users (doubling its user-base in the last 6 months).
(Sept. 2018) The proceedings of IVMSP 2018 - IEEE 13th Image, Video, and Multidimensional Signal Processing Workshop - are available in IEEE Xplore.
(Apr. 2018) The InVID Verification Plugin for video verification in the browser (as of Jan. 2018 in version v0.60) has exceeded 3,000 users! Congratulations, InVID!
(Mar. 2018) Our new book, "Personal Multimedia Preservation: Remembering or Forgetting Images and Video" (Springer Series on Cultural Computing), edited by Vasileios Mezaris, Claudia Niederée, and Robert H. Logie, has been published.
(Feb. 2018) 25th International Conference on Multimedia Modeling (MMM 2019), January 8-11, 2019, Thessaloniki, Greece. Workshop/Special Session proposal deadline: May 15, 2018; paper submission deadline: July 15, 2018. MMM 2019 conference information and call for contributions.
(Jan. 2018) A new and exciting H2020 Research and Innovation Action, ReTV, just started!
(Jan. 2018) IEEE Image, Video, and Multidimensional Signal Processing Workshop (IVMSP 2018), June 10-12, 2018, Zagori, Aristi Village, Greece. Paper submission deadline (extended): February 19, 2018. IVMSP 2018 conference information and call for papers.
(Nov. 2017) Our paper on "Linear Maximum Margin Classifier for Learning from Uncertain Data" has been accepted for publication in the IEEE Transactions on Pattern Analysis and Machine Intelligence. More details and links in our publications webpage.
(Oct. 2017) New release of the InVID Verification Plugin, for video verification in the browser. Get it now from here!
(Oct. 2017) The program of MultiEdTech 2017: 1st Int. Workshop on Educational and Knowledge Technologies @ ACM Multimedia (October 23-27, 2017, Mountain View, CA, USA) is now available online.
(Oct. 2017) The program of MuVer 2017: 1st Int. Workshop on Multimedia Verification @ ACM Multimedia (October 23-27, 2017, Mountain View, CA, USA) is now available online.
(July 2017) InVID made an open beta release of its Verification Plugin, to help journalists save time and debunk fake video news more effectively. More details and the plugin download are available here.
(July 2017) Our Special Issue on "Deep Learning for Mobile Multimedia" has been published in the ACM Transactions on Multimedia Computing, Communication and Applications (TOMM): vol. 13, no. 3s, June 2017.
(June 2017) Best Poster Award at the ACM Int. Conf. on Multimedia Retrieval (ICMR 2017, Bucharest, Romania) for the paper "Concept Language Models and Event-based Concept Number Selection for Zero-example Event Detection", authored by D. Galanopoulos, F. Markatopoulou, V. Mezaris and I. Patras.
(Apr. 2017) New version v1.4.4 of CERTH's real-time video shot and scene segmentation software was publicly released in April. Check out the downloads page for details and links.
(Apr. 2017) New data release: we released concept detection scores (487 sports-related concepts and 345 TRECVID SIN concepts) for the MED16train dataset used in the TRECVID MED Task. See our downloads page for details and links.
(Mar. 2017) New data release: we released concept detection scores for the IACC.3 dataset used in the TRECVID AVS Task from 2016 and on (600 hr of internet archive videos). See our downloads page for details and links.
(Jan. 2017) A new and exciting H2020 Innovation Action, EMMA, just started!
(Oct. 2016) Our Special Issue on "Multimedia in Ecology" has been published in Multimedia Systems (Springer): vol. 22, no. 6, pp. 709-782, Nov. 2016.
(Oct. 2016) We successfully participated in the Ad-hoc Video Search (AVS) and Multimedia Event Detection (MED) tasks of TRECVID 2016. In AVS, our best fully-automatic run was ranked 2nd-best (MXinfAP=0.051, compared to 0.054 and 0.040 for the 1st and 3rd best-performing teams using fully-automatic systems). In MED we also achieved very good results, with our best run reaching MInfAP@200=0.475.
(Sept. 2016) Our Special Issue on "Fine-grained categorization in ecological multimedia" has been published in Pattern Recognition Letters (Elsevier): vol. 81, pp. 51-117, Oct. 2016.
(Feb. 2016) Accelerated Kernel Subclass Discriminant Analysis (AKSDA) and SVM combination: an efficient GPU-accelerated dimensionality reduction and classification method for very high-dimensional data. It achieves state-of-the-art classification results, consistently higher than kernel SVM approaches, at orders-of-magnitude shorter training times. More info and software download here.
(Jan. 2016) Two new and exciting H2020 projects, InVID and MOVING, just started or are about to start!
(Sept. 2015) New interactive on-line video analysis service: you upload videos via a web interface, and it performs shot/scene segmentation and visual concept detection (several times faster than real-time; uses our new concept detection engine). Results are displayed in an interactive user interface, which allows navigating through the video structure (shots, scenes), viewing the concept detection results for each shot, and searching by concepts within the video. Try this service now!
(Sept. 2015) Updated machine-to-machine video analysis service: a REST service receiving a video's URL, performing temporal segmentation to shots and scenes as well as visual concept detection, and returning the results in a machine-processable MPEG-7-compliant XML file. Updated in Sept. 2015 with our new concept detection engine. A short video-demo: http://youtu.be/FLTRZu-V97I. You can try this service (email us, mentioning your name and university / company affiliation, to receive a link and instructions).
(May 2015) Call for Papers, 2nd ACM International Workshop on Human-centered Event Understanding from Multimedia (HuEvent15) at the 23rd ACM Multimedia Conference (ACM MM 2015), Brisbane, Australia, 26-30 October 2015. Submission deadline: June 30, 2015.
(May 2015) The program and keynote of the Human Memory-Inspired Multimedia Organization and Preservation (HMMP15) Workshop @ IEEE ICME 2015, Torino, Italy, July 3, 2015, is announced here.
(Feb. 2015) Call for Papers, Human Memory-Inspired Multimedia Organization and Preservation (HMMP15) Workshop @ IEEE ICME 2015, Torino, Italy, June 29 - July 3, 2015. Submission deadline: March 30, 2015.
(Feb. 2015) Call for Papers, Special Issue on "Fine-grained Categorization in Ecological Multimedia", Pattern Recognition Letters, Elsevier. Submission deadline: March 15, 2015.
(Nov. 2014) A new, very fast machine learning method for big data problems was developed. This is a generic learning method, applicable to a wide range of problems that involve learning from big data. Tested in large-scale multimedia event detection problems, it produces better results than both Kernel- and Linear-SVMs, while its training is one or two orders of magnitude faster than that of Linear-SVMs. Check out our TRECVID presentation on this.
(Nov. 2014) Tutorials on "Video Hyperlinking" and related video analysis technologies were delivered at the IEEE Int. Conf. on Image Processing (ICIP'14), Paris, France, Oct. 2014, and at ACM Multimedia (MM'14), Orlando, FL, USA, Nov. 2014. The tutorials' slides are available in slideshare, in three parts:
- Part A: Motivation and Vision.
- Part B: Video Fragment Creation and Annotation for Hyperlinking.
- Part C: Insights into Hyperlinking Video Content.
(July 2014) A new dataset (CERTH Image Blur Dataset) was publicly released over the past couple of weeks. Check out the downloads page for details and links.
(May 2014) The Proceedings of the 1st International Workshop on Social Events in Web Multimedia @ ACM International Conference on Multimedia Retrieval (ICMR 2014) (Glasgow, UK, April 1-4, 2014) are available here.
(Apr. 2014) Call for Papers, Human-centered Event Understanding from Multimedia (HuEvent14) @ ACM Multimedia 2014, Orlando, FL, USA, November 3-7, 2014. Submission deadline: June 30, 2014.
(Apr. 2014) Call for Papers, 3rd ACM Intl. Regular & Data Challenge Workshop on Multimedia Analysis for Ecological Data (MAED 2014) @ ACM Multimedia 2014, Orlando, FL, USA, November 3-7, 2014. Submission deadline: June 30, 2014. The Challenge dataset is available.
(Apr. 2014) Call for Multimedia Grand Challenge Solutions: VideoLectures.NET Challenge (MediaMixer, transLectures): Temporal segmentation and annotation of lecture videos @ ACM Multimedia 2014, Orlando, FL, USA, November 3-7, 2014. Submission deadline: June 29, 2014. The Challenge dataset is already available (ask for access details).
(Apr. 2014) Call for Participation, Social Event Detection (SED) and Synchronization of Event Media (SEM) benchmarking tasks at MediaEval 2014. Early registration (by May 1st, 2014) is encouraged!
(Jan. 2014) Best Paper Award at the 20th International Conference on MultiMedia Modeling (MMM'14, Dublin, Ireland) for the paper "A Comparative Study on the Use of Multi-Label Classification Techniques for Concept-Based Video Indexing and Annotation", authored by Fotini Markatopoulou, Vasileios Mezaris and Ioannis Kompatsiaris.
(Jan. 2014) Tutorial on Re-using Media On The Web at the 23rd International World Wide Web Conference (WWW'14), Seoul, Korea, April 7-11, 2014.

© 2015-2024 Vasileios Mezaris