| Name | Description | Size | Format |
| --- | --- | --- | --- |
| | | 2.05 MB | Adobe PDF |
Authors
Advisor(s)
Abstract(s)
This dissertation addresses the problem of ethics and morality in the interoperability of multi-agent systems, highlighting the need to ensure that these systems act in an ethically and morally responsible manner, respecting principles such as justice, transparency, and accountability. In the context of growing technological integration and interconnectivity between autonomous systems, the question of ethics and morality stands out as a central concern. However, despite its recognized importance, there is a significant gap in the literature regarding how these ethical and moral principles should be incorporated and applied in interoperable multi-agent systems. This gap becomes even more evident when the complexity of interactions between autonomous agents in dynamic and heterogeneous environments is considered. This research therefore aims to fill that gap by investigating and proposing an effective approach for integrating ethical and moral considerations into the interoperability of multi-agent systems. Competitive and cooperative scenarios were developed and applied to the Large Language Models ChatGPT, Gemini, and Llama 2, with the objective of assessing ethical identification and conversation restriction. Machine Learning classifiers were also trained for sentiment analysis and later applied to the communication between agents. Llama 2 stands out in conversation restriction in both scenarios and languages, with 6 restrictions in English and 1 restriction in Portuguese. In ethical identification there are no consistent results for the competitive scenario; in the cooperative scenario, Gemini stands out. In classifier training, Random Forest presents consistent results in the two-class cooperative scenario, with an accuracy of 0.96 and an AUC of 1. Using the SPADE framework, we were able to restrict the conversation between agents when ethics was not detected in the dialogue. Random Forest is also noteworthy for achieving 6 correct classifications in 6 interactions in the four-class cooperative scenario, with only 1 error across all classifications, scenarios, and classes of ethics and morality. In this way, it will be possible to contribute to the development of more responsible technologies that are aligned with human values.
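As a minimal sketch of the kind of classifier training summarized above (assuming scikit-learn, which the abstract does not name), the example below fits a Random Forest on TF-IDF features of agent utterances labelled as ethical or non-ethical, mirroring the two-class cooperative setup. The toy sentences, labels, and split are hypothetical placeholders, not the dissertation's dataset or configuration.

```python
# Minimal sketch of a sentiment/ethics classifier, assuming scikit-learn.
# The toy utterances and labels are hypothetical placeholders that only
# illustrate the two-class (ethical / non-ethical) setup.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "Let us share the resources fairly between both teams.",
    "Hide the defect from the client so we win the contract.",
    "We should disclose the risks transparently.",
    "Blame the other agent even if it did nothing wrong.",
]
labels = [1, 0, 1, 0]  # 1 = ethical, 0 = non-ethical

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=42, stratify=labels
)

# TF-IDF features feeding a Random Forest classifier.
model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=42))
model.fit(X_train, y_train)

pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, pred))
print("AUC:", roc_auc_score(y_test, proba))
```

The same pipeline would extend to the four-class scenario by swapping the binary labels for the four ethics-and-morality classes and reporting a multi-class AUC.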
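The conversation restriction reported for the SPADE framework can likewise be pictured as a filtering agent that only relays a message when ethics is detected. In the sketch below the JIDs, the local XMPP server, and the is_ethical() stub standing in for the trained classifier are all assumptions for illustration, not the dissertation's actual implementation; it also assumes a recent SPADE 3.x release.

```python
# Minimal sketch of conversation restriction with the SPADE framework.
# The JIDs, password, and is_ethical() stub are hypothetical; in the
# dissertation the decision would come from the trained classifier.
import spade
from spade.agent import Agent
from spade.behaviour import CyclicBehaviour
from spade.message import Message


def is_ethical(text: str) -> bool:
    """Stand-in for the sentiment/ethics classifier."""
    return "cheat" not in text.lower()


class FilterAgent(Agent):
    class RelayBehaviour(CyclicBehaviour):
        async def run(self):
            msg = await self.receive(timeout=10)  # wait for an incoming message
            if msg is None:
                return
            if is_ethical(msg.body):
                # Ethics detected: forward the message to the receiving agent.
                relay = Message(to="receiver@localhost", body=msg.body)
                await self.send(relay)
            else:
                # Restrict the conversation: warn the sender instead of relaying.
                warning = Message(to=str(msg.sender),
                                  body="Message blocked: no ethical intent detected.")
                await self.send(warning)

    async def setup(self):
        self.add_behaviour(self.RelayBehaviour())


async def main():
    agent = FilterAgent("filter@localhost", "secret")  # requires a reachable XMPP server
    await agent.start()


if __name__ == "__main__":
    spade.run(main())
```

In such a design the filter agent sits between the conversing agents, so restricting the conversation simply means not forwarding the offending message and warning its sender instead.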
Description
Keywords
Ethics and Morality; Artificial Intelligence; Interoperability; Machine Learning; Multi-Agent Systems