| Name | Description | Size | Format |
|---|---|---|---|
| | | 433.47 KB | Adobe PDF |
Authors
Advisor(s)
Abstract(s)
Metrics are used every day to evaluate and compare quantities of very different natures. Whether in basic measures, such as distances expressed in meters and periods of time expressed in minutes, or in situations involving multiple variables, such as a financial institution's decision to offer credit to a customer, metrics are present in the most diverse evaluations and decision-making processes.
Science is no different. All scientific research is constantly evaluated, whether at the conception stage, during the search for funding, or in the dissemination of its results through publications in scientific journals. Researchers are also constantly evaluated, in grant applications, in competitive selections and in assessments for promotion.
The evaluation of science has focused on the publication of research results. Initially it was usual to take the simple number of a researcher's publications as a metric. Since the advent of the technology that made it possible to record and process citations in scientific articles, counting the citations attributed to each article has become the basis for a broad set of indicators used to assess the relevance of research, researchers, scientific journals, institutions and even countries.
Like the model of science itself, metrics for evaluating research based on counts of publications and citations have been criticized. Regarding publication counts, questions are raised by the emergence of "predatory" scientific journals and by practices such as slicing research results down to the smallest publishable unit. Regarding the use of citations, it is not clear that every reference to another research work is an endorsement of its relevance. Under the current model of paid access to scientific articles, there are also strong commercial interests in having a paper or a journal with a larger number of citations. In addition, there are reports of unethical practices aimed at inducing better citation-based measures, not only for researchers but also for scientific journals.
In contrast to the competitive pattern of research induced by the traditional model of science, the Open Science model values cooperation among researchers. The emphasis of research dissemination is therefore not concentrated on the final publication of results; it may also involve other stages, such as data sharing and the very preparation of the publications reporting those results. New metrics are thus needed to assess not only the reach of the public dissemination of research but also the research process itself. In this work we analyze why the traditional metrics for science are not suited to the Open Science model. We also present some alternative metrics, made feasible by current technologies, that have been proposed for this model, based on direct evaluation by society and on the repercussion of research work on social networks.
Description
Keywords
Bibliometrics; Open Science; Alternative metrics
Citation
Publisher
Associação Portuguesa de Documentação e Informação de Saúde