
Failure Modes of Large Language Models
dc.contributor.advisor: Špelda, Petr
dc.creator: Milová, Soňa
dc.date.accessioned: 2023-07-24T22:55:05Z
dc.date.available: 2023-07-24T22:55:05Z
dc.date.issued: 2023
dc.identifier.uri: http://hdl.handle.net/20.500.11956/182874
dc.description.abstract [en_US]: The diploma thesis "Failure Modes of Large Language Models" addresses the failure modes of Large Language Models (LLMs) from an ethical, moral and security point of view. The empirical method is document analysis, which defines the body of existing studies and the process by which failure modes are selected from it and analysed further. The thesis looks closely at OpenAI's Generative Pre-trained Transformer 3 (GPT-3) and its improved successor, Instruct Generative Pre-trained Transformer (IGPT). It first investigates model bias, privacy violations and fake news as the main failure modes of GPT-3, and then uses the concept of technological determinism as an ideology to evaluate whether IGPT has been effectively designed to address these concerns. The core argument of the thesis is that the utopian and dystopian views of technological determinism need to be combined with the additional aspect of human control: LLMs need human involvement to help them better understand context, mitigate failure modes and stay grounded in reality. The contextualist view is therefore portrayed as the most accurate lens through which to look at LLMs, as it argues that they depend on the responsibilities,...
dc.language [cs_CZ]: English
dc.language.iso: en_US
dc.publisher [cs_CZ]: Univerzita Karlova, Fakulta sociálních věd
dc.subject [en_US]: Large Language Models
dc.subject [en_US]: Generative Pre-trained Transformer 3
dc.subject [en_US]: Instruct Generative Pre-trained Transformer
dc.subject [en_US]: Artificial Intelligence ethics
dc.subject [cs_CZ]: Large Language Models
dc.subject [cs_CZ]: Generative Pre-trained Transformer 3
dc.subject [cs_CZ]: Instruct Generative Pre-trained Transformer
dc.subject [cs_CZ]: Artificial Intelligence ethics
dc.title [en_US]: Failure Modes of Large Language Models
dc.type [cs_CZ]: diplomová práce (diploma thesis)
dcterms.created: 2023
dcterms.dateAccepted: 2023-06-22
dc.description.department [cs_CZ]: Katedra bezpečnostních studií
dc.description.department [en_US]: Department of Security Studies
dc.description.faculty [en_US]: Faculty of Social Sciences
dc.description.faculty [cs_CZ]: Fakulta sociálních věd
dc.identifier.repId: 254152
dc.title.translated [cs_CZ]: Režimy selhání velkých jazykových modelů
dc.contributor.referee: Střítecký, Vít
thesis.degree.name: Mgr.
thesis.degree.level [cs_CZ]: navazující magisterské (follow-up master's programme)
thesis.degree.discipline [cs_CZ]: Mezinárodní bezpečnostní studia
thesis.degree.discipline [en_US]: International Security Studies
thesis.degree.program [cs_CZ]: Politologie
thesis.degree.program [en_US]: Political Science
uk.thesis.type [cs_CZ]: diplomová práce (diploma thesis)
uk.taxonomy.organization-cs [cs_CZ]: Fakulta sociálních věd::Katedra bezpečnostních studií
uk.taxonomy.organization-en [en_US]: Faculty of Social Sciences::Department of Security Studies
uk.faculty-name.cs [cs_CZ]: Fakulta sociálních věd
uk.faculty-name.en [en_US]: Faculty of Social Sciences
uk.faculty-abbr.cs [cs_CZ]: FSV
uk.degree-discipline.cs [cs_CZ]: Mezinárodní bezpečnostní studia
uk.degree-discipline.en [en_US]: International Security Studies
uk.degree-program.cs [cs_CZ]: Politologie
uk.degree-program.en [en_US]: Political Science
thesis.grade.cs [cs_CZ]: Výborně
thesis.grade.en [en_US]: Excellent
uk.file-availability: V
uk.grantor [cs_CZ]: Univerzita Karlova, Fakulta sociálních věd, Katedra bezpečnostních studií
thesis.grade.code: A
uk.publication-place [cs_CZ]: Praha
uk.thesis.defenceStatus: O

