22nd International Conference on Informatics and Information Technologies – CIIT 2025


The International Conference on Informatics and Information Technologies is the 22nd of the series of conferences organized by the Faculty of Computer Science and Engineering (FCSE). As part of the conference, the National Competence Center of North Macedonia, in collaboration with the HE ERA Chair AutoLearn-SI, HE MSCA PF AutoLLMSelect, and with support from the Slovenian AI Factory, has organized and will host the following presentations on April 25, 2025, starting at 16:30:

Title: Leveraging Benchmarking Data for Automated Optimization
Speaker: Tome Eftimov
Affiliation: Jozef Stefan Institute, Ljubljana, Slovenia

At the start of 2022, the evolutionary computation community published a call for action highlighting significant issues with metaphor-based metaheuristics in black-box optimization (BBO): useless metaphors, limited novelty, and biased experimental validation. This talk presents recent benchmarking advances for robust and reliable results and meta-learning approaches for algorithm selection. We focus on two methods: i) selecting representative data instances to generalize study findings, and ii) using algorithm footprints to identify easy or challenging problem instances based on landscape characteristics. Ultimately, the goal is a paradigm shift toward reducing resource waste and duplicated efforts, accelerating progress, and enabling effective automated algorithm configuration and selection through transferable insights.
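The abstract's first method, selecting representative problem instances, can be sketched in miniature. The actual selection technique is not specified in the abstract; the sketch below uses simple greedy farthest-point sampling over landscape-feature vectors, with all feature data synthetically generated for illustration:

```python
import numpy as np

def select_representatives(features, k, seed=0):
    """Pick k instances whose feature vectors spread out over the
    landscape-feature space, via greedy farthest-point sampling.
    An illustrative stand-in, not the speaker's actual method."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    chosen = [int(rng.integers(n))]  # arbitrary first pick
    # Distance from every instance to the nearest chosen instance.
    dists = np.linalg.norm(features - features[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))  # farthest from the chosen set
        chosen.append(nxt)
        dists = np.minimum(
            dists, np.linalg.norm(features - features[nxt], axis=1)
        )
    return chosen

# Toy "landscape features" for 100 problem instances (5 features each).
feats = np.random.default_rng(42).normal(size=(100, 5))
reps = select_representatives(feats, k=10)
```

A selection like this keeps a benchmark study small while still covering diverse regions of the problem space, which is the generalization goal the talk describes.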

——————————————–

Title: Robust and Interpretable Large Language Model Ranking Based on User Preferences
Speaker: Ana Gjorgjevikj
Affiliation: Jozef Stefan Institute, Ljubljana, Slovenia

This talk presents transparent benchmarking scenarios for large language models (LLMs), enabling users to evaluate models based on their specific needs, such as effectiveness, hardware constraints, and application demands. Using a well-established multi-criteria decision method, we generate benchmarking insights that reflect user preferences, assuming initial steps like benchmark dataset selection and LLM portfolio definition are already completed. LLMs are assessed across selected datasets by balancing performance and resource usage, with user feedback incorporated into performance metrics when relevant. Through two experiments—one aggregating performance across datasets and another combining multiple metrics on a single dataset—we show how user priorities influence interpretable and robust LLM rankings. This approach strengthens the relevance of benchmarking results and supports seamless integration into benchmarking platforms.
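The abstract does not name the multi-criteria decision method used. As one illustrative possibility, the sketch below ranks a hypothetical three-model portfolio with TOPSIS, a well-known multi-criteria method; all model names, scores, and weights are invented for illustration:

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Minimal TOPSIS: rank alternatives (rows) over criteria (columns).
    benefit[j] is True if higher is better for criterion j.
    Returns a closeness score in [0, 1] per alternative (higher = better)."""
    norm = scores / np.linalg.norm(scores, axis=0)  # column-wise normalization
    v = norm * weights                              # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # best per criterion
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))  # worst per criterion
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)

# Hypothetical LLM portfolio: accuracy (higher better),
# latency in seconds and memory in GB (lower better).
models = ["model-a", "model-b", "model-c"]
scores = np.array([[0.82, 1.2, 16.0],
                   [0.78, 0.4,  8.0],
                   [0.85, 3.5, 40.0]])
weights = np.array([0.5, 0.3, 0.2])     # user preference over criteria
benefit = np.array([True, False, False])
c = topsis(scores, weights, benefit)
ranking = [models[i] for i in np.argsort(-c)]  # best first
```

Changing the weight vector shifts the ranking, which mirrors the talk's point that user priorities (effectiveness versus hardware constraints) should directly shape the benchmarking outcome.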

Link for remote participants: www.teams.microsoft.com


Date And Time

April 25, 2025, 16:30 to 17:30
