Show simple item record

dc.contributor.author: Sarı, Onur
dc.contributor.author: Çelik, Şener
dc.date.accessioned: 2022-03-09T12:11:14Z
dc.date.available: 2022-03-09T12:11:14Z
dc.date.issued: 2021 [en_US]
dc.identifier.citation: Sarı, O., & Çelik, Ş. (2021). Legal evaluation of the attacks caused by artificial intelligence-based lethal weapon systems within the context of Rome statute. Computer Law & Security Review. https://doi.org/10.1016/j.clsr.2021.105564 [en_US]
dc.identifier.issn: 0267-3649
dc.identifier.uri: https://doi.org/10.1016/j.clsr.2021.105564
dc.identifier.uri: https://hdl.handle.net/20.500.12780/476
dc.description.abstract: Artificial intelligence (AI), at the level of development reached today, has become a scientific reality studied in the fields of law, political science, and other social sciences, besides computer and software engineering. AI systems that performed relatively simple tasks in the early stages of their development are expected to become fully or largely autonomous in the near future. Owing to this, AI, which encompasses the concepts of machine learning, deep learning, and autonomy, has begun to play an important role in the production and use of smart weapons. However, questions about AI-Based Lethal Weapon Systems (AILWS) and the attacks that such systems can carry out have not been fully answered from a legal standpoint. More particularly, who will be responsible for the actions an AILWS has committed remains a controversial issue. In this article, we discuss whether an AILWS can commit an offense in the context of the Rome Statute, examine the applicable law regarding the responsibility of AILWS, and assess whether these systems can be held responsible in the context of international law, the crime of aggression, and individual responsibility. Our finding is that international legal rules, including the Rome Statute, can be applied regarding responsibility for the act/crime of aggression caused by an AILWS. However, no matter how advanced the cognitive capacity of AI software, it will not be possible to resort to the personal responsibility of such a system, since it has no legal personality at all. In such a case, responsibility will remain with the actors who design, produce, and use the system. Last but not least, since no AILWS software today has specific codes of conduct enabling legal and ethical reasoning, the study concludes by recommending that states and non-governmental organizations, together with manufacturers, establish the necessary ethical rules, written into software programs, to prevent these systems from committing unlawful acts and to develop mechanisms that restrain AI from operating outside human control. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: ELSEVIER [en_US]
dc.relation.isversionof: https://doi.org/10.1016/j.clsr.2021.105564 [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: AI [en_US]
dc.subject: Autonomy [en_US]
dc.subject: Crime of aggression [en_US]
dc.subject: International law [en_US]
dc.subject: IP law [en_US]
dc.subject: IT law [en_US]
dc.subject: Rome statute [en_US]
dc.title: Legal evaluation of the attacks caused by artificial intelligence-based lethal weapon systems within the context of Rome statute [en_US]
dc.type: article [en_US]
dc.contributor.department: İstanbul Kent Üniversitesi, Fakülteler, İnsan ve Toplum Bilimleri Fakültesi, İşletme Bölümü [en_US]
dc.contributor.institutionauthor: Sarı, Onur
dc.identifier.volume: 42 [en_US]
dc.relation.journal: Computer Law & Security Review [en_US]
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member [en_US]

