dc.contributor.advisor | Špelda, Petr | |
dc.creator | Milová, Soňa | |
dc.date.accessioned | 2023-07-24T22:55:05Z | |
dc.date.available | 2023-07-24T22:55:05Z | |
dc.date.issued | 2023 | |
dc.identifier.uri | http://hdl.handle.net/20.500.11956/182874 | |
dc.description.abstract | Failure Modes of Large Language Models Soňa Milová Abstract The diploma thesis "Failure Modes of Large Language Models" addresses the failure modes of Large Language Models (LLMs) from an ethical, moral, and security point of view. The method of empirical analysis is document analysis, which defines the existing body of study and the process by which failure modes are selected from it and analysed further. The thesis looks closely at OpenAI's Generative Pre-trained Transformer 3 (GPT-3) and its improved successor, the Instruct Generative Pre-trained Transformer (IGPT). It initially investigates model bias, privacy violations, and fake news as the main failure modes of GPT-3. It then uses the concept of technological determinism as an ideology to evaluate whether IGPT has been effectively designed to address all of the aforementioned concerns. The core argument of the thesis is that the utopian and dystopian views of technological determinism need to be combined with the additional aspect of human control. LLMs need human involvement to help machines better understand context, mitigate failure modes, and ground them in reality. The contextualist view is therefore portrayed as the most accurate lens through which to look at LLMs, as it argues they depend on the responsibilities,... | en_US |
dc.language | English | cs_CZ |
dc.language.iso | en_US | |
dc.publisher | Univerzita Karlova, Fakulta sociálních věd | cs_CZ |
dc.subject | Large Language Models | en_US |
dc.subject | Generative Pre-trained Transformer 3 | en_US |
dc.subject | Instruct Generative Pre-trained Transformer | en_US |
dc.subject | Artificial Intelligence ethics | en_US |
dc.subject | Large Language Models | cs_CZ |
dc.subject | Generative Pre-trained Transformer 3 | cs_CZ |
dc.subject | Instruct Generative Pre-trained Transformer | cs_CZ |
dc.subject | Artificial Intelligence ethics | cs_CZ |
dc.title | Failure Modes of Large Language Models | en_US |
dc.type | diplomová práce | cs_CZ |
dcterms.created | 2023 | |
dcterms.dateAccepted | 2023-06-22 | |
dc.description.department | Katedra bezpečnostních studií | cs_CZ |
dc.description.department | Department of Security Studies | en_US |
dc.description.faculty | Faculty of Social Sciences | en_US |
dc.description.faculty | Fakulta sociálních věd | cs_CZ |
dc.identifier.repId | 254152 | |
dc.title.translated | Režimy selhání velkých jazykových modelů | cs_CZ |
dc.contributor.referee | Střítecký, Vít | |
thesis.degree.name | Mgr. | |
thesis.degree.level | navazující magisterské | cs_CZ |
thesis.degree.discipline | Mezinárodní bezpečnostní studia | cs_CZ |
thesis.degree.discipline | International Security Studies | en_US |
thesis.degree.program | Politologie | cs_CZ |
thesis.degree.program | Political Science | en_US |
uk.thesis.type | diplomová práce | cs_CZ |
uk.taxonomy.organization-cs | Fakulta sociálních věd::Katedra bezpečnostních studií | cs_CZ |
uk.taxonomy.organization-en | Faculty of Social Sciences::Department of Security Studies | en_US |
uk.faculty-name.cs | Fakulta sociálních věd | cs_CZ |
uk.faculty-name.en | Faculty of Social Sciences | en_US |
uk.faculty-abbr.cs | FSV | cs_CZ |
uk.degree-discipline.cs | Mezinárodní bezpečnostní studia | cs_CZ |
uk.degree-discipline.en | International Security Studies | en_US |
uk.degree-program.cs | Politologie | cs_CZ |
uk.degree-program.en | Political Science | en_US |
thesis.grade.cs | Výborně | cs_CZ |
thesis.grade.en | Excellent | en_US |
uk.abstract.en | Failure Modes of Large Language Models Soňa Milová Abstract The diploma thesis "Failure Modes of Large Language Models" addresses the failure modes of Large Language Models (LLMs) from an ethical, moral, and security point of view. The method of empirical analysis is document analysis, which defines the existing body of study and the process by which failure modes are selected from it and analysed further. The thesis looks closely at OpenAI's Generative Pre-trained Transformer 3 (GPT-3) and its improved successor, the Instruct Generative Pre-trained Transformer (IGPT). It initially investigates model bias, privacy violations, and fake news as the main failure modes of GPT-3. It then uses the concept of technological determinism as an ideology to evaluate whether IGPT has been effectively designed to address all of the aforementioned concerns. The core argument of the thesis is that the utopian and dystopian views of technological determinism need to be combined with the additional aspect of human control. LLMs need human involvement to help machines better understand context, mitigate failure modes, and ground them in reality. The contextualist view is therefore portrayed as the most accurate lens through which to look at LLMs, as it argues they depend on the responsibilities,... | en_US |
uk.file-availability | V | |
uk.grantor | Univerzita Karlova, Fakulta sociálních věd, Katedra bezpečnostních studií | cs_CZ |
thesis.grade.code | A | |
uk.publication-place | Praha | cs_CZ |
uk.thesis.defenceStatus | O | |