Advent
Calendar
Responsible assessment does not reject metrics; it places them within an ethical, transparent, and contextual framework. Qualitative judgement, diverse evidence, and careful indicator design and weighting work together to create more insightful, fairer, and more meaningful evaluations. The goal is not fewer metrics, but better, more comprehensive, and more responsible ones, supported by qualitative input and informed judgement.
A single language narrows who can produce, access, and apply knowledge. Multilingual science increases rigor, reach, and societal relevance by enabling diverse communities to contribute and use results.
Publication and citation indicators often under-represent non-English venues and locally relevant research. This compromises equal opportunities for researchers and institutions.
All evaluation is subjective: it relies on people reading proposals, asking questions, listening to answers, and drawing conclusions from the information they have. These conclusions are shaped by their knowledge, expertise, and competences, but also by their environment (e.g. blood sugar and mental state), conscious and unconscious biases, and many other factors. Instead of aiming for objective decision-making, we need to be comfortable with subjectivity and build evaluation processes that find the best research by taking advantage of it.
Rankings rely on selective indicators, subjective opinions, and arbitrary weights that privilege certain types of universities, especially older, wealthier, research-intensive ones located in English-speaking countries, while making the rest look less worthy.
Language isn’t a proxy for excellence. High-quality work can be published in any language, and all fields need multilingualism for communication within and/or beyond academia.
Students have always been able to choose universities using many other sources (e.g., programs, location, cost, and personal recommendations), long before rankings existed. Moreover, the kind of information rankings offer often misleads students in their decision making.
At universities where less emphasis is placed on scholarly publications, research results still find their way smoothly into scientific journals. Many researchers are driven by a passion to share their findings with colleagues and thus contribute to the advancement of scientific knowledge.
Citations can be influenced by popularity, field size, or trends. They do not necessarily reflect originality or societal impact. Combine metrics with expert judgment for fairness.
10
Evidence shows that diverse teams produce higher-impact science, and bias-free evaluation improves the fairness and quality of research.
11
Rankings often lead universities to chase indicators, manipulate data, misrepresent key statistics, and fabricate false information about their performance, rather than actually improving teaching, research conditions, or societal contributions.
12
Peer review and bibliometric indicators are not free from bias. Gender, career interruptions, and networking influence outcomes significantly.
13
Far from it! While disciplinary experts in specific problem areas should be part of a review panel, individuals experienced in TDR, multidisciplinary generalists, and even non-scientific stakeholders should also serve as key reviewers.
14
Ranking companies are profit-driven consultancies, whose interests lie in maximizing gain from turning university trust and data into business intelligence products and services. Also, they are notoriously non-transparent in the way they calculate scores and ranks.
15
Effective and fair research assessment requires leadership and coordination from research managers, administrators, and governance bodies. Their role in designing transparent processes, ensuring inclusivity, and aligning assessment with institutional values is essential for systemic change.
16
False! While scientific outputs remain important in TDR, other societal outputs, outcomes, and impacts—such as economic, ecological, and health-related—should also be considered.
17
While narratives require more reflection than ticking boxes, they provide richer, fairer insights into contributions that metrics alone miss. Investing time in qualitative assessment fosters transparency and recognizes diverse achievements.
18
AI tools will increasingly be needed to give us insight into the range, depth, and quality of the work being assessed. They may help us uncover embedded assumptions, biases, and design choices in science and academia that have excluded many while promoting others who propagate these prejudices. However, transparent methods, ethical safeguards, and human oversight remain our best protection against misrepresenting researchers’ contributions and disproportionately disadvantaging some communities. Responsible assessment requires openness, explainability, and continual auditing of automated methods.
19
Choi et al. 2025 (DOI: 10.1111/cobi.14391) show that, in biodiversity conservation, low-IF journals bring region-specific science that underpins actionable conservation laws.
20
To evaluate well, you need to navigate across bodies of knowledge, not master every detail of a single narrow topic.
21
Peer reviewers are experts who can add value by sharing their knowledge, including beyond the boundaries of their own discipline.
22
Null/negative findings, replication, and careful “what didn’t work” evidence reduce waste, prevent repetition, and strengthen the knowledge base. Assessment should reward robustness, transparency, and learning, not just “exciting” outcomes.
23
Excellence is contextual. Responsible assessment requires an open, inclusive conversation about what quality means for your mission, disciplines, and research cultures, and then evaluation that follows accordingly (not defaulting to inherited signals).
24
Sure, they help. But fundamental change happens through people. Assessment changes when people change it: leaders who align incentives, communities that learn from each other, and evaluators who apply judgement fairly and consistently. The most successful reforms are co-built with researchers and staff (across disciplines and career stages), supported over time, and embedded into everyday practice.
Students need rankings to choose where to study
False! See why
16
Institutions and research funders should focus primarily on scientific outputs when assessing transdisciplinary research (TDR)
False! See why
Responsible research assessment is only about limiting the use of metrics.
False! See why
21
To be a peer reviewer you must evaluate among your disciplinary colleagues
False! See why
Quantitative assessment is "objective" and "fact based", while qualitative assessment is "subjective" and "opinion based".
False! See why
Metrics are neutral.
False! See why
18
AI will provide the objective and fairer tools needed to reform research assessment globally
False! See why
English equals excellence.
False! See why
Science needs only one language
False! See why
If evaluations consider a broad range of outcomes and outputs, researchers might publish less (or not at all) and consequently research results will find their way to the academic community more slowly.
False! See why
20
To be a good evaluator you need to know everything about a very specific topic
False! See why
14
Ranking companies are independent evaluators dedicated to informing the public about university quality
False! See why
15
Research assessment is solely the responsibility of researchers and evaluators
False! See why
10
Inclusive practices "lower the bar"
False! See why
11
Rankings stimulate universities to improve
False! See why
Citation counts show the true value of research
False! See why
Global rankings are an objective measure of university quality
False! See why
12
Science is viewed as neutral and free of bias
False! See why
17
Narrative evaluations are too time-consuming to be practical
False! See why
13
Transdisciplinary research (TDR) can be fairly evaluated at the proposal stage by standard disciplinary review panels
False! See why
19
Articles in high-IF (impact factor) journals have more impact on the ground
False! See why
23
“Excellence” is obvious and universal, everyone knows it when they see it
False! See why
24
Research assessment can be improved through a new policy, a new model, or a new tool
False! See why
22
Only “positive” results are valuable
False! See why
Advent
YERUN Brussels
Created on November 24, 2025