Recreating Cybercloud Safeguarding Today


Blog with us, and Navigate the Cyber Jungle with Confidence!

We are here for you; let us know what you think.

17.12.23

The risks of AI are real. What risks does AI pose?

AI risks can be mitigated by an international code of ethics - but there will always be exceptions.

A discussion with a colleague about the latest risks emerging from AI


"Yes," I answered a friend from work. I am familiar with the Netflix docuseries "Unknown" and its episode on AI, titled "Unknown: Killer Robots." It was released in July 2023 as part of a four-week event exploring various mysteries around the world.


The episode delves into the development of military AI-powered robots and the ethical concerns surrounding them. It follows soldiers and scientists involved in creating these "killer robots" while also featuring activists raising awareness about their potential dangers.


I have been familiar with the ethical issues surrounding AI in warfare, biology, health, and governance since 2011 (Killing Robots, 2011). You can read my post from that time. I am happy to share my knowledge and thoughts, or even engage in a conversation with you.


That is how we started a long conversation; let me share some parts of it with you.


cyberdog AI ethics- secprof


Ethics


The ethical concerns surrounding AI and its potential use in warfare are complex and multifaceted. Here are some points we decided anyone must consider, in this specific order:

1. Autonomous Weapon Systems (AWS) - The use of AI-powered robots capable of making their own decisions about targeting and engagement is a major ethical concern. The episode "Unknown: Killer Robots" highlights this issue, showing the potential for unintended consequences and loss of human control. International debate and treaties are currently underway to regulate or even ban AWS, but progress is slow.

2. Bias and Discrimination - AI algorithms can perpetuate existing biases and lead to discriminatory targeting or profiling. This is particularly concerning in the context of war, where decisions about life and death are being made. Ensuring fairness and accountability in AI development and deployment is crucial.

3. Accountability and Responsibility - When AI systems make mistakes, who is responsible? This question becomes even more complex when dealing with autonomous weapons. Holding developers, commanders, or even the AI itself accountable remains a significant challenge.

4. Dehumanization and Escalation - The use of AI robots in warfare could further dehumanize conflict and lead to increased violence and escalation. The psychological and ethical implications of relying on machines to kill require careful consideration.

5. Politicization and Manipulation - The potential for AI to be used for political ends, such as targeting specific individuals or manipulating public opinion, is another serious concern. Safeguards and oversight mechanisms are needed to ensure responsible use of AI technology.

Regarding the question that came up in the conversation: will we soon see armies purchasing AI robots not for rescue or intelligence gathering, but for autonomous killing in war, or even to kill politicians in a coup?

  • Timeline for first use - It is difficult to predict with certainty when AI robots will be used in warfare. However, it is likely that their use will be gradual and incremental, starting with tasks like surveillance and logistics before moving to more complex and controversial roles like targeting and engagement. The episode "Unknown: Killer Robots" depicts a potential future scenario where AI robots are already in use, but this is not necessarily a realistic timeline.

  • Use by/against politicians - While the use of AI robots against specific individuals is technically possible, it is unlikely to be the first or most common application. The military would likely prioritize using AI for tasks that are deemed strategically beneficial, such as targeting enemy forces or infrastructure. However, the potential for misuse and abuse cannot be entirely discounted.

  • AI is a powerful tool - It can be used or misused for evil acts. As we develop and deploy AI technologies, we must carefully consider the ethical implications and ensure that they are used responsibly and humanely. Open and informed public discussion is essential to shaping the future of AI and preventing its misuse.

By focusing on the ethical considerations, monitoring the potential risks, and debating them, we can build awareness so that AI is used to benefit humanity, not to harm it. Let's learn and debate a bit more.

Liability and accountability

"You raise some excellent and concerning points," my friend said, "about the potential misuse of AI, particularly in the context of warfare." The issues of liability, accountability, and responsibility are indeed crucial and complex, and they are not a distant threat but a near one…

So how can we approach liability and accountability, and who will be responsible?

  • States - In traditional warfare, governments and militaries are held accountable for their actions under international law and human rights conventions. However, with AI-powered weapons, the lines become blurred. Who is responsible if an autonomous drone makes a targeting mistake? The programmer, the commander who deployed it, or the AI itself? Without clear legal frameworks, attributing blame and seeking justice could be incredibly challenging.

  • Non-state - The rise of rogue actors and non-state groups capable of developing or acquiring AI weapons further complicates the issue. How do we hold them accountable if they operate outside the traditional legal system? This raises the specter of an unregulated arms race, with devastating consequences.

  • Corporates and organizations - Private companies developing AI technologies for military use also raise concerns. Should they be held liable for the misuse of their products? Can they be incentivized to develop and deploy AI responsibly? Striking a balance between innovation and ethical considerations is crucial.

Responsibility

Ultimately, the responsibility for preventing the misuse of AI lies with all of us. Governments must establish robust regulations and oversight mechanisms. Researchers and developers must prioritize ethical principles in their work. And individuals must remain informed and engaged in the debate about AI's role in society.

The urgency of the situation:

While predicting the exact timeline of AI misuse is impossible, several factors suggest it could be sooner rather than later:

  • Rapid advancements in AI technology - The pace of development in AI is astonishing. Capabilities that seemed futuristic just a few years ago are now within reach.

  • Accessibility of AI tools - The tools and knowledge needed to develop basic AI are becoming increasingly accessible, even for non-state actors.

  • The lack of international consensus - Despite ongoing discussions, there is no international consensus on regulating or banning autonomous weapons. This creates a dangerous vacuum that could be exploited.

Therefore, it is crucial to act with urgency and implement robust safeguards to prevent the misuse of AI before it's too late. We must not wait for a tragedy to occur before taking action.


cyborg AI ethics- secprof

Safeguarding AI misuse


The current landscape surrounding AI ethics is complex and somewhat daunting. While passionate groups are working against AI misuse, they face significant challenges against the rapid advancements and powerful interests driving AI development - and on top of that, they are often dismissed as fighting a losing battle!


The EU's AI Act is a major step in the right direction, but it's not a silver bullet.


Okay, let's try to break down this situation:


Positive signs


  • Growing public awareness - More people are becoming aware of the potential risks of AI and are demanding responsible development. This is crucial for putting pressure on governments and corporations to act.


  • International efforts - The EU's AI Act is not alone. Other countries and organizations are also developing regulations and ethical frameworks for AI. This shows a growing international consensus on the need for action.


  • Technological advancements in safety - Researchers are developing AI safety tools and techniques, such as explainable AI and adversarial training, to mitigate risks and prevent misuse.
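To make the "adversarial training" idea above a little more concrete, here is a minimal sketch of how an adversarial input is generated with the fast gradient sign method (FGSM), one of the simplest techniques in this area. The tiny linear classifier, its weights, and the input values below are entirely made up for illustration; real systems use large neural networks and frameworks, but the principle is the same.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method: nudge each feature of x by eps in
    the direction that increases the model's loss for true label y."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # Gradient of the log-loss w.r.t. input feature i is (p - y) * w_i
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

# Hypothetical toy linear classifier, for illustration only
w, b = [2.0, -1.0], 0.0
x, y = [0.5, 0.2], 1   # a benign input whose true label is 1

x_adv = fgsm_perturb(x, w, b, y, eps=0.3)

def score(v):
    return sigmoid(sum(wi * vi for wi, vi in zip(w, v)) + b)
# The perturbed input lowers the model's confidence in the true label.
# Adversarial training then adds (x_adv, y) back into the training set,
# teaching the model to resist exactly this kind of manipulation.
```

The point for safety work is that if attackers can craft such perturbations, defenders must train against them first.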


Challenges remain


  1. Lack of global consensus - Different countries have different priorities and approaches to AI regulation. This lack of unity creates loopholes and makes it harder to enforce standards.

  2. Powerful vested interests - Companies and governments with significant investments in AI may resist regulations that hinder their profits or technological ambitions.

  3. Rapid technological advancements - AI is developing quickly, making it difficult for regulations to keep pace and address the latest threats.

  4. The complexity of AI - AI systems are often complex and opaque, making it challenging to identify and address potential biases or vulnerabilities.


So, is the EU’s AI Act enough? 

No, but it's a significant step forward. 

What we can also do is: 


Support AI ethics organizations - Donate our time or resources to groups like the Future of Life Institute and the International Campaign to Stop Killer Robots.


Hold corporations accountable - Demand transparency and ethical practices from companies developing AI technologies, and urge our governments to regulate them, in every field.


Educate ourselves and others - Analyze and develop AI ethics, create a road map of the potential risks, and share your knowledge with others.


Advocate for responsible AI policies - Contact our elected politicians and officials and urge them to support legislation that promotes ethical AI development.


We need to remember that preventing an AI apocalypse will not be a one-time effort. It will require sustained pressure from individuals, organizations, and governments. By working together, we can ensure that AI is used for the good of humanity - not for scams, power grabs, and an era of apocalypse.


The future of AI is not yet predetermined. We have the power to shape it and make sure it benefits all of humanity.


Cyborg_health AI bot - secprof


What can we do for the benefit of health with AI?

Developing a code of ethics for AI in healthcare is crucial to ensure its responsible and beneficial use. Here are some suggestions for your field:

General Principles

  • Patient autonomy and informed consent - Patients must be informed about the use of AI in their diagnosis and treatment, and have the right to refuse or opt out.

  • Beneficence and non-maleficence - AI tools should be used to improve patient outcomes and avoid unnecessary harm.

  • Transparency and explainability - AI decision-making processes should be transparent and understandable to healthcare professionals and patients alike.

  • Fairness and non-discrimination - AI algorithms must be designed and trained to avoid bias and discrimination based on race, gender, socioeconomic status, or other factors.

  • Privacy and security - Patient data used in AI development and deployment must be protected with robust safeguards.

  • Accountability and responsibility - Developers, healthcare providers, and institutions must be accountable for the use and outcomes of AI in healthcare.
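One way the fairness principle above is checked in practice is with a simple audit: compare the rate of positive model outcomes (for example, "recommend treatment") across patient groups. Here is a minimal sketch of such a check; the group names, decision data, and alert threshold are all hypothetical, chosen only to show the mechanics.

```python
def positive_rate(decisions):
    """Fraction of decisions that were positive (1 = recommended)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = treatment recommended) per group
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6/8 = 75% positive
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 3/8 = 37.5% positive
}

gap = demographic_parity_gap(decisions)
FLAG_THRESHOLD = 0.1          # arbitrary audit threshold, for illustration
needs_review = gap > FLAG_THRESHOLD
```

A gap this large would not prove discrimination on its own, but it would flag the model for a human review of its training data and decision criteria.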

Specific to Medical AI Tools

  • Clear role definition - Define the intended role of the AI tool (diagnostic aid, decision support, etc.), and ensure it does not replace human judgment and expertise.

  • Validation and testing - AI tools must be rigorously tested and validated in clinical settings to ensure their accuracy, safety, and efficacy.

  • Human oversight and control - Human healthcare professionals should always have the final say in any decision made with the help of AI.

  • Continuous monitoring and improvement - AI models should be continuously monitored for bias, errors, and potential harm, and updated as needed.

  • Education and training - Healthcare professionals need to be educated on the use of AI tools, their limitations, and how to interpret their outputs.
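The "continuous monitoring" point above can be sketched very simply: track a deployed model's rolling accuracy and raise an alert when it drops too far below its validated baseline. The class, numbers, and threshold below are made up for illustration; real clinical monitoring is far more elaborate, but this is the core loop.

```python
from collections import deque

class DriftMonitor:
    """Alert when a model's rolling accuracy falls below its baseline."""

    def __init__(self, baseline_accuracy, tolerance, window=100):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)   # 1 = correct, 0 = wrong

    def record(self, prediction, ground_truth):
        self.outcomes.append(1 if prediction == ground_truth else 0)

    def alert(self):
        """True when rolling accuracy < baseline - tolerance."""
        if not self.outcomes:
            return False
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.baseline - self.tolerance

# Hypothetical deployment: validated at 95% accuracy, 5% tolerance
monitor = DriftMonitor(baseline_accuracy=0.95, tolerance=0.05, window=10)
for pred, truth in [(1, 1), (0, 0), (1, 0), (0, 1), (1, 0)]:
    monitor.record(pred, truth)
# Rolling accuracy is 2/5 = 0.4, well below 0.90, so an alert fires
# and a human review of the model would be triggered.
```

The key design choice is that the monitor only flags; the decision to retrain or withdraw the tool stays with human clinicians, consistent with the oversight principle above.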

Adoption and Implementation

  • Involve stakeholders - Include doctors, nurses, patients, ethicists, and AI developers in the process of developing the code of ethics.

  • Clear communication and education - Ensure all stakeholders understand the code of ethics and its implications for their work.

  • Incentivize compliance - Implement mechanisms to promote and reward the ethical use of AI in healthcare.

  • Regular review and updates - Regularly review and update the code of ethics to reflect evolving technologies and practices.

Remember, a code of ethics is just a framework

Its success depends on implementation, enforcement, and adaptation to evolving technologies and contexts. By working together and continuously improving, we can ensure that AI in healthcare benefits all patients and contributes to a more ethical and equitable healthcare system.


The end of the colleague discussion

Here our discussion ended, and I started to write this article. From here, it is in the hands of every reader. I have given you the knowledge; if you share these thoughts, help us make a change and share this post with your friends, today!