
Artificial Intelligence as a Communication Phenomenon

Aug 5, 2024



Introduction 

In the digital age, public discourse is no longer confined to town halls and newspaper editorials. It has reemerged in online forums and on social media platforms, where information circulates with few limits. Artificial intelligence (AI) has had a significant impact on this discourse in a number of ways, and how audiences relate to the inclusion of AI depends on the agenda set by media outlets:

“However, the media do have a crucial role in setting the agenda for public discourse, and with the proliferation in personality-driven news, and a decline in newspaper readership (Hallin, 1992), the agenda is in danger of becoming increasingly trivial” (Bird, 2011, p. 50).

This being said, the agenda that frames AI's technological advances and possibilities shapes the degree of public concern. One of the most notable effects of AI on public discourse is its ability to amplify and accelerate the spread of information. This can cut both ways: it can facilitate the dissemination of valuable information and knowledge, but it can also enable the rapid spread of misinformation and propaganda. Social media platforms like Facebook, Twitter, and YouTube increasingly use AI algorithms to personalize content for users, which can lead to the formation of echo chambers and filter bubbles. These algorithms can also prioritize content that is more likely to generate engagement, which can result in the spread of sensationalized or misleading information. This heightened pull toward engagement is a phenomenon Fortunati elaborates on, finding a change in the presence of individuals in social space: in any given place, people are only “half-present” (2002, p. 159). The body is present, but the mind and what it focuses on can be redirected at any moment.
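To make the filter-bubble mechanism concrete, here is a deliberately naive sketch in Python. It is illustrative only, not any platform's actual algorithm: posts are scored purely by how often the user has clicked the same topic before, so topics the user already favors rise to the top and unfamiliar topics sink regardless of their value.

from collections import Counter

def rank_feed(posts, click_history):
    # posts: list of (post_id, topic) pairs; click_history: topics previously clicked
    topic_affinity = Counter(click_history)  # more past clicks on a topic -> higher score
    # Counter returns 0 for topics the user has never clicked, so they sink
    return sorted(posts, key=lambda p: topic_affinity[p[1]], reverse=True)

posts = [("a", "politics"), ("b", "science"), ("c", "politics"), ("d", "arts")]
history = ["politics", "politics", "science"]
print(rank_feed(posts, history))
# [('a', 'politics'), ('c', 'politics'), ('b', 'science'), ('d', 'arts')]

Each round of clicking on the top of the feed further skews click_history, so the narrowing is self-reinforcing.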

In addition to affecting the spread of information, AI has also had an impact on the way public discourse is moderated. Many online platforms use AI-powered tools to detect and remove content that violates their policies, such as hate speech, harassment, and extremist content. However, these tools are not always accurate and can result in the removal of content that does not actually violate the platform's policies. AI-powered chatbots have also been used to engage in public discourse, with mixed results. While these chatbots can provide a convenient and efficient way for individuals to access information and services, they can also be used to spread propaganda or manipulate public opinion.
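The inaccuracy of moderation tools noted above is partly a structural trade-off. The following minimal Python sketch uses made-up scores rather than any platform's real system: a classifier assigns each post a violation probability, and wherever the removal threshold is set, either legitimate posts are removed or genuine violations slip through.

def removal_outcomes(scored_posts, threshold):
    # scored_posts: list of (violation_score, truly_violating) pairs
    removed = [(s, v) for s, v in scored_posts if s >= threshold]
    false_removals = sum(1 for _, v in removed if not v)  # lawful posts taken down
    missed = sum(1 for s, v in scored_posts if v and s < threshold)  # violations kept up
    return len(removed), false_removals, missed

posts = [(0.95, True), (0.80, False), (0.60, True), (0.40, False), (0.20, True)]
for t in (0.9, 0.5, 0.3):
    print(f"threshold={t}: removed, false removals, missed = {removal_outcomes(posts, t)}")

Lowering the threshold from 0.9 to 0.3 catches one more genuine violation but takes down two legitimate posts; no setting eliminates both error types at once.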

The overall impact of AI on public discourse is complex and multifaceted. While AI has the potential to facilitate the dissemination of valuable information and improve the efficiency of discourse, it also presents significant risks, including the spread of misinformation and the potential for bias and censorship. Studying AI's introduction into public discourse therefore demands methods suited to that complexity:

“one of the tasks ahead will consist in conceptualizing a method which makes it possible to incorporate and preserve qualitative data through a process of quantification, enabling the researcher to discern the demographic patterning of viewing responses” (Schroder, 1987, p. 27, as cited in Morley, 1992, p. 28).

Turning this kind of audience data into a service such as AI raises questions about the possibilities and limitations of the service as a whole. As such, it is important for policymakers, tech companies, and civil society organizations to work together to ensure that AI is developed and utilized in a way that is equitable, ethical, and aligned with societal values. AI ethics and responsibility are critical components of the ongoing debate surrounding artificial intelligence. As AI technologies become increasingly sophisticated and integrated into our daily lives, it is essential to consider the ethical and moral implications of their use.

One of the key ethical issues surrounding AI is the potential for bias and discrimination. Machine learning algorithms are only as objective as the data they are trained on, which means that they can perpetuate existing biases and inequalities in society. This can manifest in a number of ways, such as facial recognition systems that are more accurate for white faces than for faces of color, or hiring algorithms that discriminate against certain groups of people. To tackle this problem, it is crucial to create AI systems that prioritize inclusivity and impartiality, while also taking proactive measures to detect and counteract bias in both data sets and decision-making procedures. 
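One common way to surface such bias is a simple outcome-rate comparison across groups. The Python sketch below uses fabricated numbers purely for illustration: it computes a demographic parity difference for a hypothetical hiring model, where a large gap between groups is a signal to audit the training data and decision process.

def positive_rate(decisions):
    # fraction of applicants receiving the favorable outcome
    return sum(decisions) / len(decisions)

# Hypothetical outcomes: 1 = advanced to interview, 0 = rejected,
# grouped by a protected attribute
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 6/8 = 75% advance
group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # 2/8 = 25% advance

parity_gap = positive_rate(group_a) - positive_rate(group_b)
print(f"demographic parity difference: {parity_gap:.2f}")  # 0.50

A gap near zero suggests similar treatment across groups; a gap of 0.50, as here, does not by itself prove discrimination, but it warrants human review of how the model was trained.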

Another important ethical issue surrounding AI is the potential for harm. AI systems can cause harm in a number of ways, such as through the spread of misinformation, the perpetuation of harmful stereotypes, or the use of AI in autonomous weapons systems. Sparks locates much of this potential for harm in the media themselves, focusing on what he calls a ‘legacy of fear’ (2020, p. 74). This fear, whether or not it is grounded in reality, can lead to negative attitudes toward the technology and create barriers to its development and adoption. The legacy of fear can also shape how AI is regulated and governed. Fear can prompt calls for stricter regulations and controls on AI development and deployment, which can slow progress and limit AI's potential benefits. On the other hand, fear can also result in a lack of oversight and accountability, as decision-makers may prioritize expediency over ensuring ethical and responsible AI practices.

Responsibility is another important aspect of the AI ethics debate. As AI becomes more integrated into our lives, it is essential to consider who is responsible for the decisions made by AI systems. The responsibility for AI systems can be attributed to multiple parties, including the developers, the operators, and the users of the system. Developers are responsible for ensuring that their AI systems are designed in an ethical and responsible manner and that they are programmed to operate within a framework that prioritizes fairness, transparency, and accountability. Operators are responsible for the ongoing monitoring and maintenance of AI systems to ensure that they continue to function as intended, and users are responsible for understanding the limitations of AI systems and for making informed decisions based on the results produced. The legal framework for determining responsibility for AI-related harm is still evolving, and there is much debate around how liability should be assigned. 


Proposal 

As artificial intelligence (AI) becomes increasingly pervasive in our daily lives, concerns about its ethical implications have grown. From biased algorithms to the potential for AI to replace human decision-making entirely, it is clear that the development and deployment of AI must be guided by a strong ethical framework. In this research proposal, we aim to address this critical issue by exploring the question: How can we develop guidelines for ethical AI development and deployment that are aligned with societal values and ethical principles? By investigating this question, we hope to contribute to the development of a more responsible and equitable approach to AI that serves the needs and interests of all members of society. 


Methods

As quoted by Bird, Ang claims that practitioners of such methodologies hold a "comfortable assumption that it is the reliability and accuracy of the methodologies being used that will ascertain the validity of the outcomes of the research, thereby reducing the researcher's responsibility to a technical matter" (Ang, 1996, p. 47, as cited in Bird, 2011). This being said, the methodologies employed here are tied directly to the hypotheses and therefore foreground the researcher's responsibility. This research will use a mixed-methods approach, including both qualitative and quantitative research methods. The qualitative component will involve a comprehensive literature review of existing research on AI ethics and responsibility, including case studies of ethical issues that have arisen in the development and deployment of AI systems, such as self-driving cars. The quantitative component will involve a survey of stakeholders involved in AI development and deployment, including policymakers, technology companies, and civil society organizations, to gather their perspectives on ethical concerns and best practices for ethical AI development and deployment.


Expected Outcomes

The expected outcomes of this research include a set of guidelines for ethical AI development and deployment that are aligned with societal values and ethical principles. These guidelines will be based on the findings from the literature review and survey data, and will be designed to be applicable to a wide range of AI systems and applications. Additionally, this research aims to contribute to the broader conversation around AI ethics and responsibility by highlighting the ethical challenges and potential solutions associated with the development and deployment of AI systems. 


Discussion

Self-driving cars are a technological innovation with the potential to transform transportation. However, this technology also raises significant ethical and social implications that require careful consideration. The artificial intelligence (AI) that makes self-driving cars possible could revolutionize the way travel is viewed.

One major ethical challenge of self-driving cars is how to program them to make decisions in complex situations. For instance, how should a self-driving car decide between swerving to avoid a pedestrian and potentially harming its passengers or staying on course and putting the pedestrian's life at risk? Such moral dilemmas underscore the need for ethical guidelines to be developed for the responsible development and deployment of self-driving cars.

Another ethical challenge is accountability. Who should be held responsible if a self-driving car causes an accident: the car manufacturer, the software developers, or the passengers? In 2018, a tragic accident involving an Uber self-driving car in Tempe, Arizona, brought the ethical implications of AI in self-driving cars into sharp relief. The accident resulted in the death of a pedestrian. The National Transportation Safety Board's (NTSB) investigation revealed that the self-driving car's software had detected the pedestrian about six seconds before the collision; however, the car's emergency braking system had been disabled, and the car took no action to avoid the collision (NTSB, 2019, p. v). This raised questions about the responsibility of those developing and testing self-driving software to ensure it is capable of detecting and responding to potential hazards. The safety driver, who was supposed to take over control of the car in an emergency, had been distracted at the time of the crash (NTSB, 2019, p. 63). This raised further questions about the responsibilities of safety drivers and their training in monitoring the car's behavior and taking over control when necessary. This accident is just one example of the ethical and social implications of self-driving cars. As AI technology continues to advance and self-driving cars become more prevalent on our roads, it is important to consider the potential risks and benefits and to establish clear guidelines for the responsible development and deployment of this technology.

Beyond ethical considerations, there are also social implications of self-driving cars that must be taken into account. For example, could the widespread adoption of self-driving cars lead to job loss for taxi and truck drivers? How will this technology affect urban planning and design? Will it contribute to greater social isolation? To fully understand the implications of AI technology in self-driving cars, a multidisciplinary approach is necessary that considers not only technical aspects but also the ethical, legal, and social implications. By working together, experts from various fields can ensure that self-driving cars are developed and deployed in a responsible and ethical manner, leading to a safer and more sustainable future.


Conclusion

This proposal aims to examine the complex intersection of AI technology, ethics, and responsibility, with the ultimate goal of developing a set of guidelines that can be used to promote ethical and responsible AI development and deployment. This is critical as AI is rapidly becoming a ubiquitous part of our lives, and has the potential to significantly impact society in both positive and negative ways. The guidelines developed through this research will address a range of issues, including bias and fairness in AI decision-making, transparency in AI algorithms, data privacy and security, and the responsibility of developers and deployers of AI systems. By addressing these issues, we can ensure that AI is developed and deployed in a way that is both ethical and responsible, and that takes into account the potential impact on individuals and society as a whole.

The proposed research will involve a combination of qualitative and quantitative research methods, including interviews with AI developers and users, analysis of AI decision-making processes, and surveys of public attitudes towards AI ethics and responsibility. The findings of this research will be used to inform the development of the guidelines, which will be disseminated to relevant stakeholders in industry, government, and academia. This research seeks to address the critical issue of AI ethics and responsibility by developing guidelines that promote ethical and responsible AI development and deployment. By doing so, we can ensure that AI serves as a force for good in shaping our world, rather than a source of harm and injustice.


References 

Berger, A. A. (2018). Media and Communication Research Methods (4th ed.). SAGE Publications, Inc.

Bird, S. E. (2011). The Audience in Everyday Life: Living in a Media World. Routledge.

Fortunati, L. (2002). The mobile phone: Towards new categories and social relations. Berg Publishers.

Morley, D. (1992). Television, Audiences and Cultural Studies. Routledge.

National Transportation Safety Board. (2019). Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, March 18, 2018 (Highway Accident Report NTSB/HAR-19/03). https://www.ntsb.gov/investigations/accidentreports/reports/har1903.pdf

Sparks, G. G. (2020). Media Effects Research: A Basic Overview (5th ed.). Wadsworth Publishing.

