“FAKE IT UNTIL YOU MAKE IT” #NOT
CoCoP: COMMUNICATING COHESION POLICY
Written for JESPIONNE
Olivia Wane Fitzgerald
When “The Terminator” was released on the big screen in 1984, no one could have imagined that it would ever be more than science fiction. Today, malicious machines are operating to cause
tremendous problems for people, governments, and even whole political systems. With the passage of time we have seen substantial technological developments, making fake news child's play for the malicious experts who aim to manipulate public opinion.
Artificial intelligence, through bots and other cyber malware, may be exploited by authoritarian regimes, terrorists, hackers, and other extremist groups. A recent report by 26 experts warns us of the dangers surrounding artificial intelligence, but I want to focus on the fake news with which we are bombarded every day! Cyber and hybrid war, the report suggests, will increase over the next ten years. I am quite surprised that people have created artificial intelligence to hurt rather than serve the common good.
Artificial intelligence may take a variety of forms, but fake news is there to create propaganda and lead to mis- or disinformation. Let me be driven by my conspiracy theories and refer, for example, to the election of Donald Trump in the US. The report suggests that today's tools for manipulating public opinion are incomparably more powerful than those of the past, while artificial intelligence breaches the personal data of ordinary citizens like you and me. Imagine being spied on 24/7. Oh, it sounds horrible.
With that said, it is no surprise that global firms and governments are required to take action and understand the danger we are in. The report acknowledges the importance of artificial intelligence, but every coin has two sides: whether the results are positive, negative, or even disastrous depends on who uses the technology. A 2016 survey showed that around five out of ten EU citizens follow the news on social media. So imagine what consequences misleading or false news could have.
Well, we are already experiencing some. The Future of Humanity Institute at Oxford University, along with other labs and foundations, predicts an increase in cyber attacks, spam emails, and fake news. New regulations, closer collaboration between policy-makers and bureaucrats, and the engagement of civil society are therefore necessary measures to put forward in light of the emerging hybrid war.
I place my emphasis on civil society, which I believe can contribute the most. Regarding disinformation, there are things for your “to do” list when following the news. Check the source, the author, and the date of the news release. Identify hyperlinks and the websites they lead you to. Do consider whether the page is a joke or a satire. If you have difficulty understanding the content or the purpose of an article or report, ask the experts (librarians, professors, etc.). On top of all this, always use your critical thinking: what is the text talking about, and what is it trying to say? This is a crucial process, so try to step out of your own shoes and put on your objective hat to assess the story you are told.
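The checklist above can be sketched as a small script. The field names and the simple scoring are my own illustration of the idea, not an established fact-checking standard:

```python
from dataclasses import dataclass

@dataclass
class Article:
    """Metadata a reader would collect before trusting a story."""
    source_known: bool      # is the outlet a recognized news organization?
    author_named: bool      # is a real author credited?
    recent_date: bool       # is the publication date current and relevant?
    links_check_out: bool   # do the hyperlinks lead to reputable sites?
    is_satire: bool         # is the site a satire or parody page?

def credibility_score(a: Article) -> int:
    """Count how many checks the story passes (0-5). Satire scores zero."""
    if a.is_satire:
        return 0
    checks = [a.source_known, a.author_named, a.recent_date, a.links_check_out]
    return 1 + sum(checks)  # 1 point for not being satire

# Example: a story with a named author and working links,
# but from an unknown source and with no date.
story = Article(source_known=False, author_named=True,
                recent_date=False, links_check_out=True, is_satire=False)
print(credibility_score(story))  # → 3
```

The point of the sketch is that each check is cheap on its own; it is skipping them all that makes readers vulnerable.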
If this sounds complicated, why not check this: https://www.factcheck.org/ask-factcheck/ask-us-a-question/ ? This platform gives our US friends the opportunity to contact this nonpartisan, nonprofit organization and ask for further information about a specific piece of content. Unfortunately, the EU has not yet created such a tool. So my EU friends can send an email with their proposals directly to DG COMM (firstname.lastname@example.org), or take advantage of the European Citizens’ Initiative to collect signatures, promote their proposal, and lead the discussions in the EU bubble. Don’t miss this!
By MILES BRUNDAGE for MUAI
Artificial intelligence (AI) and machine learning (ML) have progressed rapidly in recent years, and their development has enabled a wide range of beneficial applications. For example, AI is a critical component of widely used technologies such as automatic speech recognition, machine translation, spam filters, and search engines. Additional promising technologies currently being researched or undergoing small-scale pilots include driverless cars, digital assistants for nurses and doctors, and AI-enabled drones for expediting disaster relief operations. Even further in the future, advanced AI holds out the promise of reducing the need for unwanted labor, greatly expediting scientific research, and improving the quality of governance. We are excited about many of these developments, though we also urge attention to the ways in which AI can be used maliciously. We analyze such risks in detail so that they can be prevented or mitigated, not just for the value of preventing the associated harms, but also to prevent delays in the realization of the beneficial applications of AI. Artificial intelligence (AI) and machine learning (ML) are altering the landscape of security risks for citizens, organizations, and states. Malicious use of AI could threaten digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), and political security (e.g. through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns).
The malicious use of AI will impact how we construct and manage our digital infrastructure as well as how we design and distribute AI systems, and will likely require policy and other institutional responses. The question this report hopes to answer is: how can we forecast, prevent, and (when necessary) mitigate the harmful effects of malicious uses of AI? We convened a workshop at the University of Oxford on the topic in February 2017, bringing together experts on AI safety, drones, cybersecurity, lethal autonomous weapon systems, and counterterrorism. This document summarizes the findings of that workshop and our conclusions after subsequent research. For the purposes of this report, we only consider AI technologies that are currently available (at least as initial research and development demonstrations) or are plausible in the next five years, and focus in particular on technologies leveraging machine learning. We only consider scenarios where an individual or an organization deploys AI technology or compromises an AI system with an aim to undermine the security of another individual, organization, or collective. Our work fits into a larger body of work on the social implications of, and policy responses to, AI. There has thus far been more attention paid in this work to unintentional forms of AI misuse, such as algorithmic bias, versus the intentional undermining of individual or group security that we consider.
We exclude indirect threats to security from the current report, such as threats that could come from mass unemployment, or other second- or third-order effects from the deployment of AI technology in human society. We also exclude system-level threats that would come from the dynamic interaction between non-malicious actors, such as a “race to the bottom” on AI safety between competing groups seeking an advantage, or conflicts spiraling out of control due to the use of ever-faster autonomous weapons. Such threats are real, important, and urgent, and require further study, but are beyond the scope of this document. Though the threat of malicious use of AI has been highlighted in high-profile settings (e.g., in a Congressional hearing, a White House-organized workshop, and a Department of Homeland Security report),
and particular risk scenarios have been analyzed (e.g., the subversion of military lethal autonomous weapon systems), the intersection of AI and malicious intent writ large has not yet been analyzed comprehensively. Several literatures bear on the question of AI and security, including those on cybersecurity, drones, lethal autonomous weapons, “social media bots,” and terrorism. Another adjacent area of research is AI safety, the effort to ensure that AI systems reliably achieve the goals their designers and users intend without causing unintended harm. Whereas the AI safety literature focuses on unintentional harms related to AI, we focus on the intentional use of AI to achieve harmful outcomes (from the victim’s point of view). A recent report covers similar ground to our analysis, with a greater focus on the implications of AI for U.S. national security.
In the remainder of the report, we first provide a high-level view on the nature of AI and its security implications in the section General Framework for AI and Security, with subsections on Capabilities, Security-relevant Properties of AI, and General Implications for the Security Landscape; we then illustrate these characteristics of AI with Scenarios in which AI systems could be used maliciously; we next analyze how AI may play out in the domains of digital, physical, and political security; we propose Interventions to assess these risks better, protect victims from attacks, and prevent malicious actors from accessing and deploying dangerous AI capabilities; and we conduct a Strategic Analysis of the “equilibrium” of a world in the medium term (5+ years), after more sophisticated attacks and defenses have been implemented. Appendices A and B respectively discuss the workshop leading up to this report and describe areas for research that might yield additional useful interventions.
The field of AI aims at the automation of a broad range of tasks. Typical tasks studied by AI researchers include playing games, guiding vehicles, and classifying images. In principle, though, the set of tasks that could be transformed by AI is vast. At a minimum, any task that humans or non-human animals use their intelligence to perform could be a target for innovation. While the field of artificial intelligence dates back to the 1950s, several years of rapid progress and growth have recently invested it with greater and broader relevance. Researchers have achieved sudden performance gains at a number of their most commonly studied tasks.
NEW AMERICA PUBLICATIONS
July 1st, 2020