Towards a non-anthropocentric philosophical basis for software design
Every revolution in science and technology is usually accompanied by a corresponding philosophical innovation. It is worth asking here, in the wake of Althusser: is there a philosophical revolution underway, particularly in the West, capable of addressing the social, ethical, epistemological and ontological dilemmas brought about by this scientific breakthrough? What critical and contributory role can philosophy play, particularly in cognitive science and artificial intelligence? The philosophical foundations that guide prevailing approaches to the role and design of software (including so-called intelligent artifacts) take as their parti pris the hypothesis that man is the only thinking and speaking entity, facing a world of mute things (Rees 2022), and that by virtue of this exceptionality man enjoys the privilege of making history, culture and politics. What is the task of philosophy in these “digital times,” in which artificial intelligence techniques enter our lives in so many ways? To think about software: to devise innovative philosophical bases that broaden our vision of the design of intelligent software artifacts and move beyond the myopia of “technique for technique’s sake”. It is urgent that we overcome the tacit, limiting beliefs that hinder the real advance of technology towards a just and sustainable world.
Reparation in artificial intelligence regulation: parameters for an effective and holistic attention to victims
The liability of agents involved in the development of artificial intelligence systems for damages caused by these systems is an undeniably relevant topic. The other side of this coin, however, which often goes unnoticed, is effective reparation for victims who have demonstrably been harmed by these technologies.
UN international guidelines, in place since 2005, point out that holistic reparation does not consist solely of monetary compensation but of a set of measures that aim to restore the victim in different spheres. In this sense, the objective of the present project is to lay the groundwork for a reparatory policy aimed specifically at the Brazilian context, one that takes its specificities in the field of AI into account.
To this end, the research will rely on analysis of national legislation on AI (currently in progress) and of international recommendations on the right to reparation, as well as on the identification and study of cases of damage caused by these technologies that have already been reported in the country. The results are expected to contribute to advancing the public debate on AI regulation, especially regarding attention to recognized victims of these systems.
A visual study of urban imaginaries generated by Artificial Intelligence from natural language: DALL‧E 2 and Midjourney
DALL‧E 2 and Midjourney are Artificial Intelligence (AI) programs able to generate images from natural language, that is, from ordinary human language. Given a textual description, these machine-learning systems process the input and cross-reference it against huge databases to give visual form to the words. These tools are already becoming popular among designers, architects, urban planners and artists for bringing new perspectives and opportunities to the creative process. Yet while such systems benefit the routine of creators, we must be aware of the structures and processes that underpin these platforms, for they carry within themselves biases that reflect the cruelest aspects of our society. This research aims to conduct a directed visual study of urban futures produced by these AI platforms in order to understand how they work, and to offer a theoretical critique of the images created. It seeks to understand how these AI-generated representations could affect the production of urban space, and how the reproduction of exclusionary and hegemonic urban imaginaries might be countered through a dialogue with such platforms.
Use of Artificial Intelligence in the Humanitarian Context: from Surveillance Humanitarianism to Technosolutionism and Technocolonialism
Humanitarian actions seek to meet the basic needs of human beings who face serious challenges around the globe owing to the social and human rights abyss produced by persistent, unresolved inequalities. To carry out these actions and make them more effective, technological tools have been employed and refined over time, artificial intelligence among them. Such use has had, and continues to have, an exponential reach and significance, forming part of an unprecedented revolution. Despite the perceived benefits, however, one cannot fail to reflect on the risks posed to the fundamental rights of the vulnerable assisted population, especially the improper use of their biometric data, which contributes to the maintenance of strong colonial relations of dependence.
New Writing Tools – Rethinking Algorithmic Activity through Semiotic Perspectives
João Furio Novaes
First presented in 2011 by the American author Eli Pariser, the concept of the filter bubble (developed to describe a scenario of informational insulation caused by the pervasive adoption of personalization algorithms across various types of websites) has, over the last decade, come to be treated as a given of cyberculture. Although it has since been thoroughly refuted by research from various fields of knowledge, the term persists in the vocabulary of journalists, researchers and network commentators as an established fact, part of a certain common sense about how communication is organized in the current (2022) composition of cyberspace. In surveying this literature, however, both the works apologetic toward the term and those that seek to dissolve it, the absence of a semiotic perspective on the subject becomes evident. To prevent descriptive misconceptions from accumulating and further distorting the observed phenomenon, this paper argues that semioticians should take up the study of cyberspace, considering that this eminently textual domain belongs to them.
Artificial Intelligence as a tool for accessibility and inclusion
This paper seeks to investigate the current possibilities of artificial intelligence in the design of interactions and interfaces for adaptations according to different human needs.
It is commonplace to say that our world is rapidly becoming more and more digital. There is scarcely any service or product left that does not depend on technology to some degree, and the number of people using or depending on technological systems has grown accordingly. To illustrate this scenario: over the past two decades, the proportion of people online in developing countries has increased by about 45% (UN, The Age of Digital Interdependence, 2018:11).
While technology, ever more ubiquitous and ingrained in our structures, streamlines, cheapens and simplifies processes and operations, it also brings complexities that cannot be ignored. One of these complexities concerns accessibility. Digital platforms expand access and democratize opportunities, but they can also replicate the same barriers found in other social spaces. These barriers may relate to gender, ability, ethnicity and other characteristics, and they end up excluding a whole diversity of people who do not fit the standards of what is considered “normality”.
However, what if we could use data and artificial intelligence to promote and expand inclusion and diversity? Is it possible to use artificial intelligence to produce adaptive digital interfaces that personalize interactions according to the different human needs of users? Linked to these questions, the work also seeks to investigate ethical aspects and guidelines concerning data care and privacy in the collection of user information, with a view to defining best practices.
Algorithmic coloniality, predictive models and attention economy in sociocultural content distribution platforms: Implications for the imaginary
Maria Aparecida Moura
The project aims to analyze the semiotic, structural and technological articulations of the processes of datafication, prediction and attention economy on platforms for the distribution of sociocultural content, in order to understand their repercussions for the shaping of the sociocultural imaginary and for algorithmic coloniality in national contexts. A triangulation of theories and methods is adopted as a complementary way of establishing the possible socio-technical articulations and guiding the construction of the conceptual model; it encompasses semiotic epistemology, social network analysis (SNA), content analysis, the social processes of information organization, and statistical methods.
Analysis of the consequences of bias in decision making within the urban space
Maria do Val da Fonseca
The discourse of Smart Cities promotes the implementation of monitoring technologies without the population being aware of the risks, the biases, and the commodification of its flows through urban space. Bias in decision making within the territory segregates the population, as can be observed in the predictive processes that artificial intelligence provides. Given this, and in view of the implementation of surveillance strategies by private actors in public spaces in Brazil, this research seeks to analyze the consequences of the use of artificial intelligence in the national scenario.
To err is human: The paradox of Artificial Intelligence
We believe this to be an important paradox. It is reasonable to believe in the infallibility of computational algorithms, especially those that would be achieved by a utopian AI, which may be designated “Artificial Super Intelligence” (ASI), or Strong AI: algorithms that would mimic the human being. Yet by becoming a “perfect imitation of the human being”, such an AI would also be subject to the characteristic that “to err is human”.
These designations, which may be markers of a singularity, should be clarified, as we intend to open an important set of analyses that start from the assumption that ASI will inevitably happen. We argue that such an event will have highly positive as well as potentially disastrous outcomes (something like an extermination event for the human race), depending on how humanity prepares to receive it.
Artificial Intelligence: villain or ally in collective activism aimed at empowerment, inclusion and training of women in technology?
Over the last ten years, women’s communities in technology have been created in Brazil with artificial intelligence as the main theme of the communications and interactions between leaders and members, conducted on some of the main digital platforms created by Big Tech.
These collectives have as their goals the inclusion, empowerment and training of women in AI-related professional areas. Despite their familiarity with algorithms and programming, it is not known to what extent this activism can proceed in an impartial and free way when mediated by these platforms, whose algorithms shape suggestions of tastes and consumption. Nor is it known what level of awareness exists, or what actions these collectives have developed, to prevent and deal with AI-enabled threats and violence in virtual environments such as the metaverse.
Drawing on questionnaires administered to two communities and an organizational and communicational evaluation of eight others, this paper aims to assess the opportunities and risks that the relationship with AI poses to the collective activism of women in technology.
Depression Algorithms: Artificial Intelligence, Mental Health and Society
The rise of neoliberal politics in the Brazilian state has embedded in its processes of agency a sense of exclusion, tied to new subjectivities, for individuals not contemplated by this movement. Information and communication technologies have emerged as an element enabling new strategies of power and sovereignty, in which “disposable lives” come into being. In this context, those who cannot achieve success and happiness from a mental health point of view are left with only themselves to blame, rather than being encouraged to question critically the organization of society or the conditions of exploitation and injustice in which they live. There are ethical issues to consider in the analysis of the apps and social-network videos addressing depression and anxiety to which individuals in psychological distress may resort. Both those who produce content on mental health and those who consume it are victims of an ideology that keeps them trapped in a feedback cycle, within an algorithmic logic that can be quite perverse, reinforcing the stigmas associated with mental disorders. This project aims to critically analyze the most-watched YouTube videos and the most-downloaded apps dealing with depression and anxiety, taking the concepts of biopolitics and necropolitics as its reference.
Peripheral anthropophagies: Appropriation of algorithms in subjective territorial contexts
Social networks are the new bureaucratic structures that mediate access, agendas, and ways of living. The large companies that monopolize attention, money, and power have spread through an environment in which mechanisms of regulation and accountability for their socioeconomic, cultural, and health effects on individuals have not kept pace. In the network’s early days, experimentation and access were more spontaneous, not yet conditioned by the economic models standardized by social networks, and the network carried the promise of being a great vehicle of communication and democratization of knowledge. Today that promise matters less than the consumerist doctrine (not only of surplus production, but of information and its elementary form, images) and ever more emphatic forms of surveillance, for the cell phone has become, for most people, a human extension, creating habits tied to Internet social networks 24 hours a day. Nowadays it is as if the user’s only possibility were to work, “formally” or informally, for the strengthening of these hegemonic multinationals, obeying the platforms’ operating parameters. This project, by contrast, highlights and characterizes popular modes of appropriating algorithmic configurations for the creation of alternative realities, and for the promotion and affirmation of cultural strength and territorial diversity, as a barrier against the spectacular artifices of massification and the erasure of cultural knowledge, of its specificities tied to the territories, and of the ancestral knowledge that nourishes beings not only physically but culturally.