Automated Justice: Social, Ethical and Legal Implications

Background

Over the past decades, autonomous systems (e.g. lethal autonomous weapons systems, systems for data mining and analysis, robotic surgical devices, algorithm-based analytic and predictive software) have gradually been introduced to replace humans in carrying out functions in a number of areas. These areas include the control and security domain (e.g. predictive policing (Beck & McCue 2009)), judicial decision-making (e.g. automated sentencing-support systems (Angwin, Larson, Mattu & Kirchner 2016)), and combat operations in armed conflicts carried out by autonomous weapons systems. These systems’ increasing ability to act on their own with limited human control raises many criminological, legal and ethical concerns (e.g. Caliskan, Bryson & Narayanan 2017). The overall objective of this research is to explore how the enforcement of law – criminal justice at the national level and the enforcement of international law at the international level – is changing with the ever-increasing use of autonomous systems. In doing so, we aim to determine how and why autonomous systems might be useful and beneficial to society on the one hand, and how and why they might pose a risk to human rights and other fundamental values of our societies on the other.

The research will focus on areas of international and domestic justice. First, autonomous systems – as an assemblage of artificial intelligence, big data, and algorithms – are being used to manipulate public opinion and the behaviour of populations in ways that may be detrimental to democratic societies. Real power is being transferred from the democratic polis to the digital corporation. Enormous amounts of personal data accumulated by digital companies have already been used to interfere with democratic processes. For instance, the power of Cambridge Analytica, a data company employed by Donald Trump in the 2016 American presidential elections, offers an insight into the political power algorithms might acquire in the future: “a Weaponized AI Propaganda Machine […] used to manipulate our opinions and behaviour to advance specific political agendas. [An] invisible machine that preys on the personalities of individual voters to create large shifts in public opinion” (B. Anderson & Horvath 2017). The company reportedly gathered 5,000 data points on every US citizen in order to psychologically profile them and deliver a highly personalised online advertising campaign. The campaign exploited voters’ characters, fears, and interests, and thus swung the election towards Trump. Despite convincing criticism that the company could not have had such an amount of detailed personal data (Blackie 2017), this may well be the future of algorithmic governance and politics. Cases of interference in the democratic process, however, can be found all over the world. A clear example of the power of telecommunications data comes from the judgment of the ECtHR in Roman Zakharov v. Russia, which revealed that law enforcement and intelligence agencies in Russia had direct access to all mobile phone data in the country. As the ECtHR decided in the case, this clearly endangers fundamental liberties. Moreover, the power to intercept, store, and mine such an amount of data on every individual – mobile phone penetration is extraordinary in surveillance capitalist societies, and the number of mobile phones far exceeds the number of inhabitants – can lead to a distortion of democratic processes, elections, and the system of checks and balances. With such data at its disposal, a government could identify and track any individual deemed interesting by the governing elite. It could conduct in-depth analysis of the public’s mood (“sentiment analysis”) and identify “hotspots”, i.e. opposition leaders and groups disseminating dissent.

The two examples show how the shift in data collection, analysis, and knowledge about society from the public sphere to the private sector can affect governance at the national level and corrupt the democratic architecture of a state. In other words, the substitution of human judgement with artificial intelligence (AI) tools is part of a fundamental shift in the art of governing society (Confessore and Hakim, 2017).
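
To make the “sentiment analysis” and “hotspot” identification mentioned above more concrete, the following is a minimal sketch of how a naive, lexicon-based tool might score a stream of messages and flag persistently critical authors. The lexicon, messages, author names, and threshold are hypothetical and are not drawn from any tool discussed in this project; real systems use far richer models and data.

```python
# Minimal sentiment-analysis sketch (hypothetical lexicon and messages):
# each message is scored by counting positive and negative cue words, and
# authors whose aggregate score is strongly negative are flagged as
# potential "hotspots" in the sense used above.
from collections import defaultdict

POSITIVE = {"support", "great", "progress", "agree"}
NEGATIVE = {"protest", "corrupt", "resign", "against"}

def score(message: str) -> int:
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

messages = [
    ("user_a", "We must protest against the corrupt officials"),
    ("user_a", "Everyone should resign"),
    ("user_b", "Great progress on the new policy, I agree"),
]

totals = defaultdict(int)
for author, text in messages:
    totals[author] += score(text)

# Flag authors with a strongly negative aggregate score.
hotspots = [author for author, total in totals.items() if total <= -2]
print(hotspots)  # ['user_a']
```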

Second, the use of autonomous systems (e.g. predictive and analytic computer programs) has led to significant changes in criminal justice systems. In many countries, legislators pursued two sets of objectives as they gradually reshaped their criminal justice systems over the past decades. On the one hand, legislators wanted to make criminal and related procedures quicker, shorter, cheaper, less complicated, and supposedly less burdensome for all stakeholders involved. On the other hand, the changes were not supposed to impair the quality of judicial decision-making, procedural safeguards, fair trial rights, and other rights of the defendant. To reconcile these two seemingly opposing aims, many governments resorted to technology. A number of computer-analytic solutions have found their way into criminal justice systems to assist police, prosecutors, judges and other decision-makers with the various kinds of judgments they have to make on a daily basis. This trend has been described as the automatization of justice.

In the United States (US), for example, the most recent, fourth generation of software (e.g. INSLAW, Salient Factor Score, Dangerous Behaviour Rating Scale, PSA, COMPAS, LSI-R) relies on machine learning algorithms that generate results based on vast quantities of data. Such software is usually used in one of the following decision-making phases: decisions on bail, decisions on sentencing, and decisions on parole. Case law has so far established that such tools neither violate the defendant’s right to due process nor amount to overt discrimination. In addition, many have argued that such software can evaluate and weigh relevant factors better than humans, for example criminal history, the likelihood of an affirmative response to probation or short-term imprisonment, and the character and attitudes indicating that a defendant is unlikely to commit another crime. The use of predictive and analytic computer programs in judicial decision-making is also evolving in Europe. A 2016 study showed that, using a machine learning method, the judicial decisions of the European Court of Human Rights can be predicted with 79% accuracy (Aletras, Tsarapatsanis, Preoţiuc-Pietro & Lampos 2016). With such accuracy suggesting that reliance on computer programs could enable judicial officials to work more effectively, it is only a matter of time before automatization finds its way into Slovenian criminal procedure.
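
As an illustration of the general kind of approach used in such prediction studies, the sketch below trains a simple text classifier on the textual parts of past judgments and applies it to a new case. The TF-IDF n-gram features, the linear SVM, and the toy judgment excerpts are assumptions chosen for illustration; they do not reproduce the cited study’s exact method or data, and a real experiment would require a large corpus and cross-validation.

```python
# Minimal sketch of predicting case outcomes from judgment text
# (assumed approach: TF-IDF n-gram features + linear SVM; toy data only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Hypothetical "facts" excerpts from past judgments and their outcomes
# (1 = violation of the Convention found, 0 = no violation).
train_texts = [
    "The applicant was held in pre-trial detention for four years without review.",
    "The applicant's correspondence with his lawyer was routinely intercepted.",
    "The applicant's trial was concluded within a reasonable time.",
    "The domestic courts examined the applicant's complaints in adversarial proceedings.",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    LinearSVC(),                          # linear classifier over those features
)
model.fit(train_texts, train_labels)

# Predict the outcome of a new, unseen case description.
new_case = ["The applicant was detained for years without any judicial review."]
print(model.predict(new_case))  # e.g. array([1])
```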

The debate surrounding the automatization of criminal judicial systems has explored both the benefits and the dangers of relying on algorithms in judicial decision-making. On the one hand, proponents of automatization argued that computer programs could help create fairer criminal judicial systems based on informed decisions devoid of bias and any kind of subjectivity. Such decisions, relying on a sound analysis of predictive factors, could be much more accurate than human decisions. Seen as more objective, algorithms could help regain the long-lost trust of the public in the fairness of criminal justice systems. Moreover, automatization may present an opportunity to purposefully reshape the penal system to reflect progressive values and support a more humane outlook. At the very least, such tools promise quicker and clearer solutions.

On the other hand, critics of automatization have pointed out the many dangers that emerge with the introduction of algorithm-based solutions. One reason for opposing the use of automated systems is that they operate in a way that undermines some of the key principles of criminal law and criminal procedure, in particular the principles of individual criminal culpability and the presumption of innocence. These principles require criminal justice authorities to gather facts and evidence in order to establish concrete and specific conclusions about a given criminal offence and a given (presumptive) offender. The process of fact- and evidence-gathering is thus focused, always seeking to avoid disproportionate intrusion into an individual’s integrity. In addition, criminal-law measures, which can only be applied ex post facto (post delictum), are considered a human-rights intrusion, justified only where an overriding societal interest can be demonstrated. The introduction of automated systems into criminal justice systems has undermined this traditional framework set within existing human rights law. The advent of technology-supported personal data processing gave rise to ubiquitous surveillance practices. Their application within the criminal justice system (e.g. communication surveillance, physical surveillance, new investigation techniques, biometric systems enabling real-time personal-data processing, etc.) is marked by a departure from the traditional post delictum approach. Such practices allow for a dispersed collection of personal data, resulting in large quantities of processed data (big data). The resulting practices within the criminal justice system rely on ante delictum investigations, generalised suspicion, profiling, and predictive policing.

In addition to undermining some of the key principles of criminal law and procedure, automated systems may create other problems. One of them is that building reliable algorithms is a tremendously complex undertaking that requires close cooperation between legal and computer science professionals. Due to the nature of machine learning, a large and varied set of training samples is needed, upon which the algorithm builds its model. As the complexity of a problem increases, the number of different factors (e.g. features, attributes) to be taken into account grows significantly, which, in turn, requires a far larger number of training samples. This means that both choosing an appropriate set of training samples and selecting the factors according to which the information is processed are of tremendous importance. Moreover, there are questions of validity and verification that seem to have been addressed poorly by contemporary users.
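
A small synthetic experiment can illustrate why the size and composition of the training set matter so much here. The data, the logistic regression model, and the feature counts below are assumptions chosen purely for illustration: when the number of candidate factors grows while the number of training samples stays small, held-out accuracy tends to deteriorate, which is the practical reason why feature selection and validation deserve scrutiny in risk-assessment tools.

```python
# Illustrative sketch (synthetic data, logistic regression): as the number
# of features grows while the training set stays small, cross-validated
# accuracy tends to drop, because most added features are noise.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 200  # a deliberately small "training set"

for n_features in (5, 50, 500):
    # Only the first three features carry signal; the rest are noise.
    X = rng.normal(size=(n_samples, n_features))
    y = (X[:, :3].sum(axis=1) + rng.normal(size=n_samples) > 0).astype(int)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{n_features:>4} features -> cross-validated accuracy {acc:.2f}")
```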

Third, automated and autonomous weapons systems have changed the way wars are fought. Over the past years, many states (e.g. the US, UK, Israel, Russia, and China) have included automated and autonomous weapons systems in their military arsenals. Although both automated and autonomous weapons systems are able to operate without a command from a human, the two types of weapons systems differ from each other. On the one hand, lethal automated, or semi-automated, weapons systems can carry out only repetitive and routine operations within the limits of pre-programmed parameters. On the other hand, lethal autonomous weapons systems are fully autonomous, that is, they are able to independently operate in, and adapt to, dynamic environments based on feedback information they receive from a variety of sensors. Such systems are capable of deciding on their own where to go and what to report because they can learn and adapt to new information obtained on the battlefield. Both automated and autonomous weapons systems, which can select and engage targets without intervention by human operators, have brought significant changes to the identity of those who use lethal force and to the decision-making process put in place before the release of lethal force.

Goals

Starting from the claim that artificial intelligence, big data, and algorithms may be used to corrupt democratic processes, the first part of our project will investigate what methods, tools and means are already being deployed, or may be deployed in the near future, to manipulate voters, public opinion and democratic processes. The Facebook experiment with massive-scale emotional contagion of its users (Kramer, Guillory, and Hancock, 2014) clearly demonstrated that powerful tools now exist for inducing or disrupting the spread of ideas. Tools used for “emotional contagion” may in the future be used to produce “political contagion” amongst the public at large. It is not clear whether countries are using such data to scan the population for such political ends, but they are clearly using social media sentiment-analysis tools to curb social unrest and public disorder. Secondly, existing research (e.g. Caplan & Reed, 2016) shows that we cannot trust platform owners, such as Facebook or Twitter, to make decisions that would serve the best interest of the collective. Platform owners may alter algorithms to fit their own ideological agenda, as the Facebook experiments with nudging users to the polls showed in 2014 (Zittrain, 2014). Thirdly, internet search rankings can influence more than click-rates. Studies in the United States, New Zealand, and India found that biased search rankings can shift the voting preferences of undecided voters by 20 percent – and up to 80 percent in some demographic groups – and that search engine manipulation often takes place without people knowing it is happening (Epstein & Robertson, 2014). This part will thus dig further into existing experiments in nudging consumers and/or voters in a specific direction: can democracy, the rule of law and the principle of division of powers survive big data and artificial intelligence?

Second, with regard to the automatization of criminal justice systems, we have identified, for the purpose of this research, two key problems. The first problem concerns the fact that the application of automated justice is in stark contrast with the current principles of criminal law and procedure. Contemporary criminal law and procedure rest, inter alia, on the principles of individual criminal culpability, the individualisation of criminal sanctions, the presumption of innocence and – in traditional European criminal justice systems – the fact-finding principle. Insofar as automated justice relies on mass collection and processing of personal and other data (e.g. communication traffic data retention, DNA and fingerprint data retention, IMSI catchers, sentencing guidelines), it leads to a dispersed, non-individualised fact-finding, to generalised suspicion and guilt-by-association, barring individualised fact-finding related to questions of guilt and criminal sanctions. The second problem that we would like to explore is whether algorithm-driven solutions routinely used in the US and some other common-law systems could be introduced in the EU/Slovenian and other civil-law jurisdictions. In the US, analytic software assists mostly in three types of decision-making: decisions on bail, decisions on sentencing, and decisions on parole. US courts heavily rely on such software as it is allegedly better at predicting future outcomes (e.g. reoffending) and assessing some crucial factors (e.g. dangerousness of a defendant or a convict). Despite the potential benefits that analytic software could bring to European and other civil-law jurisdictions, the proposals to introduce such tools have been received with great scepticism. In our research, we will focus on the legal problems stemming from the application of algorithmic solutions to criminal justice systems, particularly when they are transplanted into a civil-law jurisdiction such as the Slovenian one.

Third, with regard to the use of lethal automated/autonomous weapons systems in armed conflicts, we have identified the following problems. First, by carrying out critical functions in combat (e.g. identification and selection of targets, application of force to targets), such weapons systems have reduced the role of human operators. Allowing “killer robots” that lack human judgment and the ability to understand context to make life-and-death decisions crosses a major moral line. Second, these weapons systems have many limitations in making complex decisions (e.g. lack of understanding of the larger picture in an armed conflict, lack of common sense, inability to distinguish between legal and illegal orders) that make them, for now, incapable of satisfying the requirements of international humanitarian law. Third, the use of such weapons systems raises concerns about accountability for violations of human rights and international humanitarian law. Having no moral agency, autonomous weapons systems cannot be held responsible if they cause, for example, civilian casualties.

Literature

  • Abramowitz, M. J. (2017, December 11). Opinion | Stop the Manipulation of Democracy Online. The New York Times. From https://www.nytimes.com/2017/12/11/opinion/fake-news-russia-kenya.html.
  • Aletras, N., Tsarapatsanis, D., Preoţiuc-Pietro, D., & Lampos, V. (2016). “Predicting judicial decisions of the European Court of Human Rights: A Natural Language Processing perspective.” PeerJ Computer Science, 2.
  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks. ProPublica.
  • Arkin, R. (2009). Governing lethal behavior in autonomous robots. Boca Raton: CRC Press.
  • Asaro, P. (2012). “On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making.” International Review of the Red Cross, 94(886), 687-709.
  • Barocas, S. and Selbst, A. D. (2016). Big Data’s Disparate Impact. California Law Review, 104, 671–723.
  • Beard, J. M. (2013). “Autonomous weapons and human responsibilities.” Georgetown Journal of International Law, 45, 617-681.
  • Berk, R. A. and Bleich, J. (2013). Statistical Procedures for Forecasting Criminal Behavior. Criminology & Public Policy 12, 513–544.
  • Bond, R. M., Fariss, C. J., Jones, J. J., Kramer, A. D. I., Marlow, C., Settle, J. E., & Fowler, J. H. (2012). A 61-million-person experiment in social influence and political mobilization. Nature, 489(7415), 295.
  • Boyd, Danah. (2017, November 20). The Radicalization of Utopian Dreams. Data & Society: Points. From https://points.datasociety.net/the-radicalization-of-utopian-dreams-e1b785a0cb5d.
  • Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186.
  • Caplan, R., & Reed, L. (2016, May 16). Who Controls the Public Sphere in an Era of Algorithms: Case Studies. Data & Society.
  • Citron, D. K. (2008). “Technological Due Process.” Washington University Law Review, 85(6), 1249-1313.
  • Crespo A. (2016). Systemic Facts: Toward Institutional Awareness in Criminal Courts. Harvard Law Review, 129 (8), 2049-2117.
  • Denham, E. (2017, December 13). Update on ICO investigation into data analytics for political purposes. ICO Blog.
  • Epstein, Robert (2015, August 19). “How Google Could Rig the 2016 Election.” Politico.com.
  • Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY: St. Martin’s Press.
  • Ferguson, A. G. (2015). Big Data and Predictive Reasonable Suspicion. University of Pennsylvania Law Review, 163(2), 327–410.
  • Forbrig, J. (2017, March 8). Russian Hackers Can’t Beat German Democracy. Foreign Policy. From https://foreignpolicy.com/2017/08/03/russian-hackers-cant-beat-german-democracy-putin-merkel/.
  • Forelle, M., Howard, P., Monroy-Hernandez, A., and Savage, S. (2015). “Political bots and the Manipulation of Public Opinion in Venezuela.”
  • Goodman, B. and Flaxman, S. (2016). “European Union regulations on algorithmic decision-making and a ‘right to explanation’”. Paper presented at 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York.
  • Grimal, F. (2014). “Missile Defence Shields: Automated and Anticipatory Self-Defence?” Journal of Conflict and Security Law, 19(2), 317–339.
  • Harcourt, B. E. (2015). Risk as a proxy for race. Federal Sentencing Reporter, 27(4), 237–243.
  • Helbing, D., Frey, B. S., Gigerenzer, G., Hafen, E., Hagner, M., Hofstetter, Y., et al. (2017, February 25). Will democracy survive big data and artificial intelligence? Scientific American.
  • Hosenball, M. (2017, September 5). U.S. increasingly convinced that Russia hacked French election: sources. Reuters. From https://www.reuters.com/article/us-france-election-russia/u-s-increasingly-convinced-that-russia-hacked-french-election-sources-idUSKBN1852KO.
  • Human Rights Watch (HRW). (2012). Losing humanity: The case against killer robots. New York: HRW.
  • Kastan, B. (2013). “Autonomous Weapons Systems: A Coming Legal Singularity.” University of Illinois Journal of Law, Technology, and Policy, 45-82.
  • Kehl, D., Guo P., Kessler, S. (2017). “Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessment in Sentencing.” Responsive Communities Initiative, Berkman Klein Center for Internet & Society, Harvard Law School.
  • Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., and Mullainathan, S. (2017). Human Decisions and Machine Predictions (Working Paper No. 23180) (pp. 1–76). Cambridge, MA: National Bureau of Economic Research.
  • Krishnan, A. (2009). Killer robots: legality and ethicality of autonomous weapons. Farnham: Ashgate Publishing.
  • Levin, B. (2016). “Values and Assumptions in Criminal Adjudication,” Harvard Public Law Working Paper No. 16-41, Harvard Law Review Forum.
  • Mallet, S. (2015). “Judicial Discretion in Sentencing: A Justice System that is no longer just?” Victoria University of Wellington Law Review 46, 533-571.
  • Marchant, G. E., Allenby, B., Arkin, R. C., Barrett, E. T., Borenstein, J., Gaudet, L. M. and Silberman, J. (2011). “International governance of autonomous military robots.” The Columbia Science and Technology Law Review, 12, 272-315.
  • Marks, A., Bowling, B. and Keenan, C. (2017). “Automatic justice? Technology, Crime and Social Control,” in: R. Brownsword, E. Scotford and K. Yeung (eds): The Oxford Handbook of the Law and Regulation of Technology, Oxford: OUP.
  • McCulloch, J., & Wilson, D. (2016). Pre-crime: Pre-emption, precaution and the future. London: Routledge.
  • Molenaar et al. (2000). “Satellite-based vessel monitoring systems (VMSs) for fisheries management”, FAO Legal Papers #7, 1-45.
  • Morozov, E. (2013). To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems that Don’t Exist. London: Allen Lane.
  • Moses, L. B., & Chan, J. (2014). Using big data for legal and law enforcement decisions: Testing the new tools. University of New South Wales Law Journal, 37(2), 643–678.
  • Nathan, J. (2015). Risk and Needs Assessment in the Criminal Justice System. Washington D.C.: Congressional Research Services.
  • O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.
  • Plesničar, M. M., and Šugman Stubbs, K. (2018). “Subjectivity, algorithms and the courtroom.” In A. Završnik (Ed.), Big data, crime and social control (pp. 154–175). New York: Routledge.
  • Simmons, R. (2016). Quantifying Criminal Procedure: How to Unlock the Potential of Big Data in Our Criminal Justice System. Michigan State Law Review, 2016 (4), 947–1017.
  • Singer, P. W. (2009). Wired for war: The robotics revolution and conflict in the 21st century. New York: Penguin.
  • Sparrow, R. (2007). “Killer robots” Journal of Applied Philosophy, 24 (1), 62-77.
  • Starr, S. (2014). “Evidence-Based Sentencing and the Scientific Rationalization of Discrimination.” Stanford Law Review 66 (4), 803-872.
  • Steele, V. R., Claus, E. D., Aharoni, E., Vincent, G. M., Calhoun, V. D., and Kiehl, K. A. (2015). “Multimodal imaging measures predict rearrest.” Frontiers in Human Neuroscience, 9(425) 1–13.
  • Yang, C. S. (2013). “Free at last? Judicial Discretion and Racial Disparities in Federal Sentencing.” Coase-Sandor Institute for Law & Economics Working Paper No. 661, 2013.
  • Završnik, A., ed. (2018). Big data, crime and social control, New York: Routledge.
  • Zittrain, J. (2014, June 2). Facebook Could Decide an Election Without Anyone Ever Finding Out. The New Republic.

 

Scientific publications

ZAVRŠNIK, Aleš. Algorithmic Justice: Algorithms and Big Data in Criminal Justice Settings. European Journal of Criminology, published online 18 September 2019, https://doi.org/10.1177/1477370819876762

ZAVRŠNIK, Aleš (ed.). Big Data, Crime and Social Control, Routledge, 2018, https://www.routledge.com/Big-Data-Crime-and-Social-Control-1st-Edition/Zavrsnik/p/book/9781138227453

ZAVRŠNIK, Aleš, KRIŽNAR, Primož. Legal standards of location privacy in light of the mosaic theory. In: NEWELL, Bryce Clayton (ed.), TIMAN, Tjerk (ed.), KOOPS, Bert-Jaap (ed.). Surveillance, privacy and public space, (Routledge studies in surveillance, 2). Abingdon; New York: Routledge. 2018, pp. 199-220, https://www.routledge.com/Surveillance-Privacy-and-Public-Space-1st-Edition/Newell-Timan-Koops/p/book/9781138709966

ZAVRŠNIK, Aleš. Vednost in politika družbenega nadzorstva [Knowledge and the politics of social control]. Revija za kriminalistiko in kriminologijo, April-June 2019, vol. 70, no. 2, pp. 88-101. ISSN 0034-690X. https://www.policija.si/images/stories/Publikacije/RKK/PDF/2019/RKK_2-2019.pdf.

Short contributions

ZAVRŠNIK, Aleš. Algoritmično pravosodje [Algorithmic justice]. Pravna praksa, 7 June 2018, vol. 37, no. 22, p. 21

ZAVRŠNIK, Aleš. Empatična tehnologija in vladavina algoritmov [Empathic technology and the rule of algorithms]. Pravna praksa, 19 July 2018, vol. 37, no. 28/29, p. 28

ZAVRŠNIK, Aleš. Pravna subjektiviteta umetne inteligence [The legal personality of artificial intelligence]. Pravna praksa, 8 November 2018, vol. 37, no. 43, p. 20

ZAVRŠNIK, Aleš. Lažnivčeva dividenda in globinski ponaredki [The liar’s dividend and deepfakes]. Pravna praksa, 20 December 2018, vol. 37, no. 49/50, p. 25

Interviews

ZAVRŠNIK, Aleš (author, interviewee). Tehnologija ne bo rešila družbenih problemov [Technology will not solve social problems]. Ljubljana: Radiotelevizija Slovenija javni zavod, Val 202, 2019. Nedeljski gost. https://val202.rtvslo.si/2019/06/nedeljski-gost-172/.

MARTIČ, Zvezdan (interviewer), RIBIČIČ, Ciril (interviewee), BRATKO, Ivan (interviewee), ZAVRŠNIK, Aleš (interviewee). Nevarnost umetne inteligence [The danger of artificial intelligence]. Ljubljana: Radiotelevizija Slovenija javni zavod, 2018. Akcènt. https://4d.rtvslo.si/arhiv/akcent/174567457.

SALECL, Renata (interviewee), ZAVRŠNIK, Aleš (interviewee), VRABEC U., Helena (interviewee). Umetna inteligenca se že odloča namesto nas [Artificial intelligence is already deciding in our place]. Ljubljana: Radiotelevizija Slovenija javni zavod, Prvi, 2018. Intelekta. https://4d.rtvslo.si/arhiv/intelekta/174547384.

Lectures abroad

ZAVRŠNIK, Aleš. A.I. and criminal justice: paper presented at the Study Session on “Artificial Intelligence and Judicial Systems” at the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe (CoE), Strasbourg, 27 June 2018.

ZAVRŠNIK, Aleš. Algorithms and criminal justice: lecture at the Faculté de droit, de sciences politiques et de gestion, Université de Strasbourg, Strasbourg, 29 January 2019.

ZAVRŠNIK, Aleš. “Algorithmic justice” – the use of algorithms and machine learning in criminal justice systems: paper presented at Artificial Intelligence (AI) and Human Rights: Legal Challenges, ERA, Academy of European Law, Brussels, 15-16 April 2019.

ZAVRŠNIK, Aleš. Algorithmic justice: algorithms, big data, and machine learning in crime control: lecture at the Department of Criminal Law and Criminology (Katedra Prawa Karnego i Kryminologii), Lublin, 30 May 2019.

Lectures in Slovenia

ZAVRŠNIK, Aleš. “Automated Justice”: implications for human rights: paper presented at the conference Social harm in a digitalized global world: technologies of power and normalized practices of contemporary society, European Group for the Study of Deviance and Social Control conference, 22-24 August 2018, Ljubljana.

ZAVRŠNIK, Aleš. Umetna inteligenca – implikacije za človekove pravice [Artificial intelligence – implications for human rights]: lecture at the autumn school Pravo pred izzivi digitalne (r)evolucije, Faculty of Law, University of Ljubljana, 25-27 September 2019, Ljubljana.

ZAVRŠNIK, Aleš. Izbrane dileme “algoritmične” pravičnosti [Selected dilemmas of “algorithmic” justice]: paper presented at the Symposium on Legal and Social Philosophy 2018, Faculty of Law, Ljubljana, 5 October 2018.

ZAVRŠNIK, Aleš. Umetna inteligenca in kazenska odgovornost [Artificial intelligence and criminal liability]: paper presented at the 45th Days of Slovenian Lawyers, Portorož, GH Bernardin, 10-12 October 2019.

ZAVRŠNIK, Aleš. Pogled Sveta Evrope na uporabo umetne inteligence v kazenskem pravosodju [The Council of Europe’s view on the use of artificial intelligence in criminal justice]: paper presented at the 11th Conference on Criminal Law and Criminology, 4-5 December 2018, GH Bernardin, Portorož.

ZAVRŠNIK, Aleš. Avtomatizacija pravičnosti = Automating justice: paper presented at Festival Grounded “Avtomatizacija in oblast” [Automation and power], Ljubljana, 14-16 November 2019.

ZAVRŠNIK, Aleš. Avtonomna vozila in kazenska odgovornost [Autonomous vehicles and criminal liability]: paper presented at the 12th Conference on Criminal Law and Criminology, Portorož, Grand Hotel Bernardin, 3-4 December 2019.

ZAVRŠNIK, Aleš. Poglej naprej – sodstvo in avtomatizacija: trendi in pasti [Looking ahead – the judiciary and automation: trends and pitfalls]: paper presented at the 10th Conference on Good Practices in the Judiciary, organised by the Supreme Court of the Republic of Slovenia, 12 December 2019, Brdo pri Kranju Congress Centre.

Project team:

Badalič Vasja
Bizilj Barbara
Gorkič Primož
Hafner Miha
Križnar Primož
Mihelj Plesničar Mojca
Salecl Renata
Selinšek Liljana
Šarf Pika

Funders:
Slovenian Research Agency  

Project number: J5-9347

1 July 2018 – 30 June 2021
