Recreating Cybercloud Safeguarding Today

Cyber Security Blog
Blog with us, and Navigate the Cyber Secrets with Confidence!

We are here for you, let us know what you think

13.10.25

Prompt Injection: a simple explanation for busy people

Prompt Injection: Plain-English guide 👇

A prompt injection is when someone sneaks instructions into text that an AI model reads - causing the model to ignore its original rules and do something it shouldn’t. Think of it like a cleverly worded detour sign that makes the AI takes a wrong turn.
 (NJP 2025)

What exactly is “prompt injection”?

Prompt injection is a tactic where attackers craft input (a message, a web page, a PDF or other documents, even hidden text) that overrides the AI’s intended behavior. The model then leaks data, executes unintended actions, or produces misleading output because it treats the injected text as higher-priority instructions. This can happen with direct prompts the user types or indirect prompts buried in external content the AI ingests.

Why should an organization care?

  • Data exposure: AI may reveal confidential info (PII, system prompts, credentials, source content). 

  • Unauthorized actions: If the AI can call tools/APIs, injected prompts may trigger emails, file operations, or risky workflow steps. 

  • Brand & compliance risk: Hallucinated or manipulated outputs can misinform customers, violate policies, or create audit findings. 

  • Supply-chain knock-on effects: Compromised plugins, connectors, or data sources can propagate malicious instructions into multiple apps.

What’s the risk to an individual user?

  • Privacy loss: Attackers can trick the model into recalling prior chat content or personal details the user provided. 

  • Fraud & social engineering: Poisoned outputs can steer users to phishing links or bad decisions that appear “AI-approved.” 

  • Reputation & errors: A junior analyst copying AI output into email or code can spread falsehoods or vulnerable snippets. 


What typically causes prompt injections?

  1. Trusting user text as instructions (no separation between “data” and “directives”).

  2. Indirect prompt sources like websites, PDFs, knowledge bases, and tickets that the AI reads automatically.

  3. Insufficient output handling (treating model text as safe to render, click, or execute). 

  4. Over-privileged tool access (the AI can perform powerful actions with little control). 

Fastest ways to reduce the risk (do these first)

For product owners / platform teams

  • Partition “instructions” from “data.” Use strict system prompts and message roles; never let external content change the AI’s core rules. 

  • Guard RAG & browsing.

    • Allow-list trusted domains and repositories.

    • Strip or neutralize markup, hidden text, and “system-like” phrases before retrieval.

    • Summarize sources rather than pasting raw content into the prompt. 

  • Validate model output before acting. Treat AI text as untrusted: sanitize, escape, and require human or policy checks before any action (click, execute, send, write to DB). 

  • Least privilege for tools/APIs. Scope tokens, rate-limit, add transaction guards (“are you sure?”), and require approvals for sensitive actions. 

  • Detection & monitoring. Log prompts/outputs, flag patterns (e.g., “ignore previous instructions”), and red-team with known injection strings during CI/CD. 

For security & governance

  • Adopt OWASP LLM Top 10 controls. Map your AI apps to LLM01 (Prompt Injection) and related risks (e.g., Sensitive Information Disclosure), then document mitigations. 

  • Policy & training. Publish short usage rules: do not paste secrets, verify links, and never execute code solely because the AI suggested it. 

For end users (fast hygiene wins)

  • Don’t paste sensitive data unless it’s explicitly approved.

  • Be skeptical of outputs that urge urgency, secrecy, or “ignore previous instructions.”

  • Confirm critical steps (money, credentials, production changes) with a second channel or a human. 


A simple mental model for juniors

  • Data is not instructions. Anything the AI reads might try to boss it around.

  • AI output is not truth. Treat it like a smart intern’s draft review before you act.

  • Power needs brakes. The more tools the AI can use, the more guardrails you must add. 


The Bottom line

Prompt injection is LLM risk No. 1 because it exploits the very thing that makes AI useful its responsiveness to natural language. Start by separating instructions from data, treating AI output as untrusted, locking down tool access, and adopting OWASP LLM Top 10 controls. These steps deliver the fastest, most meaningful drop in risk for both organizations and individual users. 



 - - - - - - - - - - - 

FAQ

  • Is this the same as “jailbreaking”?
    Related but different: jailbreaking tries to bypass safety rules via user prompts; prompt injection also includes hidden or indirect instructions from external content. 

  • Can prompt injections be invisible?
    Yes. They can be embedded in code comments, HTML, PDFs, or metadata that humans might not notice - but the model parses. 


Sources used:

  1. OWASP GenAI Security Project LLM01: Prompt Injection and LLM Top 10 (2023–2025). 
  2. Palo Alto Networks Cyberpedia: What Is a Prompt Injection Attack? and What Is AI Prompt Security?

4.10.25

Rise in AI Trends for Cyber Defense Services

The writer is a Cyber risk expert and researcher in Law and technology trends: NJ passi

The Digital Arms Race and the Need for Balance 👇

In the current digital era, where information systems are the lifeblood of businesses, governments, and critical infrastructure, cyber attackers (Black Hats) leverage artificial intelligence (AI) to enhance the efficiency of their attacks. AI-based tools enable them to identify code vulnerabilities, generate personalized attacks, and adapt strategies at an astonishing speed. However, the scalable counter-solution is the development of AI systems that empower human capabilities on the defense front: accurate vulnerability detection, high-quality fix suggestions, and acceleration of analysis processes in complex environments like Security Operations Centers (SOCs). This post, based on current trends and up-to-date research, examines how AI systems are becoming an essential tool for organizational cyber defenders. From organizational security teams to security researchers and maintainers of open-source software, as well as risk managers shaping long-term defense strategies, all require these capabilities. I will focus on the rationale visible today, with an emphasis on investments in development and their impact on the field, including changes in workforce structure in the industry, as seen in current trends.


👉 Directions of LLM Companies and Security Companies

Large AI companies (LLMs) like Anthropic are leading the shift to AI-based cyber defense, focusing on specific defensive tasks. In their latest article, Anthropic introduced Claude Sonnet 4.5, an AI model specializing in code vulnerability detection, fix creation, and network analysis, while avoiding any enhancements that favor offensive activities like writing malicious software. (https://red.anthropic.com/( The model achieves faster and more comprehensive results than humans; for example, it solved CTF (Capture-the-Flag) challenges in just 38 minutes, compared to an hour or more for human experts. The model detects new vulnerabilities in 33% of open-source code projects.

This is part of a broader trend where LLM companies are investing in defensive research to balance the advantage attackers gain from AI systems, as seen in disruptions created by Anthropic against cyber operations using AI for data fraud or espionage.

This trend is spreading to additional AI companies. For example, Google launched "A Summer of Security" in July 2025, an initiative including the Big Sleep agent for faster code vulnerability detection and the Google Unified Security platform that integrates data checking, threat intelligence, unified SOC, and AI-based automation. OpenAI, for its part, published a report in June 2025 on disruptions it created against malicious uses of its AI model, including collaboration with the U.S. Department of Defense to enhance AI capabilities in cyber defense. This defense emphasizes preventing AI exploitation by authoritarian regimes.

These companies are partners in the trend of focusing on defensive development, while integrating AI into existing tools to empower cyber defenders and information security.

At the same time, traditional security companies are integrating LLMs and AI into SOC management systems to achieve maximum control over incident analysis. For example, Palo Alto Networks completed the acquisition of IBM's QRadar SaaS assets in 2024, strengthening its Cortex XSIAM platform through integration of advanced SIEM capabilities.

This acquisition advanced SOC capabilities to address new issues like advanced AI threats and automation in threat detection, making Palo Alto a key player in the market. Not only due to internal AI development but also seamless integration with existing systems, enabling major wins already in 2025. Splunk, which currently dominates the SOC systems market as a leader in SIEM, emphasized in its State of Security 2025 report the need for a smarter SOC.

59% of organizations report that AI systems improve SOC efficiency. Along with automation of threat detection and reduction of alert fatigue states, while integrating platforms like Cisco Data Fabric. This, through machine learning integration for real-time identification of important security events. This trend is based on a practical need for AI systems that enable faster and more comprehensive analysis than a professional human and reduce incident response time by approximately 44%, in cases as examined in HackerOne cyber incidents.


👉 Use Cases - AI as a Human Empower

AI does not replace organizational cyber defenders but empowers them in specific tasks. Here are examples based on current implementations:

  1. Vulnerability Detection and Fixing in Code - In the DARPA AI Cyber Challenge, teams used LLM models like Claude to analyze millions of lines of code, identify new vulnerabilities, and create fixes, including those integrated into open-source software. AI scans code at a high scale, offers precise solutions, and reduces fix time from days to just a few hours.
  2. SOC Automation, Real-Time Threat Detection - CrowdStrike uses the Falcon AI platform to detect anomalous behaviors in endpoints, cloud access, and data, and responds automatically to threats. For example, it analyzes network traffic and dismantles malicious software, with a 76.5% success rate in Cybench challenges, double that of previous models. This allows SOC teams to focus on strategy instead of manual analysis.
  3. Organizational Risk Management, Vulnerability Exploitation Prediction - Microsoft Security Copilot uses AI to predict which vulnerabilities will be exploited based on trends and offers tailored fixes. For open-source maintainers, Darktrace provides behavioral analysis that detects vulnerabilities in WiFi and cloud systems, while providing repair recommendations.
  4. Incident Response - Triage Automation - SentinelOne integrates AI for zero-day detection and automatic response, including endpoint isolation. This reduces damage by 50% on average.

👉 Mapping of Leading Global Companies - Impact and Investments

Investments in AI for cyber defense surged in 2025, with a forecast of 5-7 trillion dollars in global economic trends. AI is becoming a leading investment target in security budgets. 74% of organizations report seeing positive impact from AI technologies in their organization (www.pwc.com). This highlights that AI systems are the top investment priority, also to address workforce shortages and increase operational efficiency. Below is a table mapping key global companies:


See table:

Company

AI Focus

Example of Impact

Investments/Trends 2025

Anthropic (Claude)

Vulnerability detection and code fixing

Partnership with CrowdStrike and HackerOne, 44% reduction in response time

Investment in defensive research, AI-based threat disruptions

Palo Alto Networks (Cortex XSIAM)

SOC automation and threat detection

QRadar SaaS acquisition, automatic alert enrichment, AI model protection

Dominance in AI-security market, 30% growth in AI investments, major SIEM wins

CrowdStrike (Falcon)

EDR and behavioral analysis

Cloud threat detection and AI workloads, 76.5% success in CTF

Native AI platform, investments in AI security on AWS

Darktrace

Behavioral analysis and prediction

Azure protection, anomaly detection in data

Leading predictive AI, partnerships with Microsoft

SentinelOne

Endpoint protection and automated response

Zero-day detection, cloud identity management

8 leading AI-security companies, growth in AI EDR

Microsoft (Security Copilot)

Risk prediction and fixing

Integration with Azure, vulnerability trend analysis

Top investment target, 55% IT efficiency improvement

Google (Big Sleep & Unified Security)

Vulnerability detection and unified SOC

"Summer of Security" initiative, AI automation

Investments in defensive AI, Growth Academy for expansion

Splunk

SIEM and smart SOC

State of Security 2025, tier-1 automation

SOC market dominance, 59% AI efficiency improvement

OpenAI

Disruption of malicious uses

June 2025 report, DoD collaboration

Focus on preventing AI threats, built-in security

Investment trends are emphasizing a shift to unified AI platforms, with a focus on SOC automation and protection of AI itself, as seen in organizations investing in customer support and IT efficiency improvements through the use of AI.


👉 Change Index - AI Efficiency vs. Humans and Workforce Structure Changes

As investments in AI development for cyber defense grow—with AI as the top budget priority—the change index becomes dramatic: 56% of organizations report improvement in threat prioritization capabilities, and 51% in enhanced SOC efficiency. AI systems are more efficient than humans in several metrics: analyzing massive data volumes in real-time (e.g., Google's Big Sleep detects vulnerabilities several times faster than a human expert), reducing human errors by 30-50%, and automating tier-1 tasks. The system responds to threats on a global scale without signs of fatigue in detection. However, AI requires human oversight for complex strategies. (https://mixmode.ai))

This change will profoundly impact the future workforce structure in cyber defense and information security departments. 52% of experts predict impact on entry-level hiring, with automation of basic tasks freeing analysts to focus on tier-2/3 (deep investigation and strategy) (www.isc2.org). Splunk reports that its SOC automated tier-1 without layoffs, but by reallocating workforce to higher-priority tasks – increasing efficiency by 43%. However, 46% of employees fear job loss, and 50% are concerned about AI accuracy risks. Organizations that invest more will see a shift to an "AI-savvy" workforce – experts combining AI with human judgment – which will reduce talent shortages by 30% and improve threat response by 55%.


👉 AI as a Partner in "Scalable Defense"

Building AI for cyber defense is not futurism; it is a current reality that balances the arms race. By empowering defenders through accurate vulnerability detection, SOC automation, and fix suggestions, we enable security teams, researchers, and risk managers to focus on implementing organizational defense strategy. The following investments in the field, around 5 trillion dollars and the expected impact, indicate acceleration in development in the field, but the emphasis must be on data-based implementations, as led by Anthropic, Google, Palo Alto, and Splunk. That is, not investing in unproven futuristic technologies, but focusing on practical applications based on real data: research, experiments, and measurable metrics.


Copyrights: isc2.org

21.8.25

Cyber ​​Threats to the Israeli Healthcare System - 2025 🚨: What Every Manager Must Know

 Cyber ​​Threats 

🚨 Healthcare System - 2025🚨

What Every Manager Must Know

Are you ready for the critical cyber challenges that await the Israeli healthcare system in 2025?

The Disturbing Reality: 24% Increase in Cyber ​​Incidents 📈

Fresh data from the Ministry of Public Security shows a 24% increase in reported cyber incidents in 2024, with the healthcare system at the forefront of the targets. This is not just another statistical report - this is a reality that directly affects the continuity of patient care.


7 Critical Cyber ​​Threats Threatening the Israeli Healthcare System:

🎯 1. State Cyber ​​War

  • APT Groups Targeting the Israeli Healthcare System Specifically
  • Paralyzing Hospital Networks in Times of Crisis
  • Damaging Emergency Coordination Systems


💰 2. Advanced Digital Ransomware

  • Moving from Encryption to Data Theft + Extortion
  • 238 Ransomware Threats in Global Healthcare Systems in 2024
  • Prolonged Disruption to Patient Services


🔓 3. Data Theft and Access Credentials

  • A Sharp Increase in Password Stealing Software
  • Breach of Health Insurance Systems
  • Damage to Sensitive Patient Information


📱 4. Attacks on Medical Devices

  • Exploiting Connected Medical Devices as an Entry Point
  • Vulnerabilities in Israeli PLC Systems (Unitronics)
  • Infiltration Through Devices with Weak Security


☁️ 5. Cloud Threats

Incorrect Configurations in Cloud EHR Systems

Weak Identity Controls

Exploiting Medical Information Sensitive


🤖 6. Artificial Intelligence Threats

Using AI for sophisticated phishing attacks

Poisoning medical models

Forgery of doctor and patient identities


🔗 7. Supply chain threats

Overreliance on external suppliers

One breach = multiple system shutdown

Lack of control over suppliers


 The new obligations: Amendment 13 to the Privacy Protection Law 📋

Starting August 14, 2025 - every healthcare organization must appoint a Data Protection Officer (DPO). Are you ready?


🎯 Why is this report different?

✅ Intelligence-based analysis from relevant public websites

✅ Focus on Israel - specific threats to the geopolitical situation

✅ Practical recommendations for reducing risks

✅ Alignment with Ministry of Health requirements

✅ Lessons from the CrowdStrike incident that disabled dozens of hospitals in Israel

 

💡 What will you find in the full report?

Detailed Threat Map for 2025

  • Defense Strategies Tailored to the Israeli Healthcare System
  • Intelligence Insights from the Field
  • Implementation Guide for Ministry of Health Requirements
  • Documented Source List for More Information

📥 Download the Full Risk Report Now

"Healthcare Cyber ​​Threats Matter 2025 - Israel Focus"

By Nir Jonathan Passi - Cyber ​​Due Diligence


📊 The full report includes:

  • In-depth analysis of each threat
  • Implementation recommendations specific to Israel
  • Practical tools for risk assessment
  • A guide to meeting regulatory requirements


⚡ In a world where one cyber threat can paralyze an entire hospital - knowledge is your best defense.


Don't wait for the next breach. Prepare now...

15.8.25

איומי סייבר על תעשיית הבריאות בישראל - 2025: מה שכל מנהל חייב לדעת

איומי הסייבר על מערכת הבריאות בישראל - 2025🚨: מה שכל מנהל חייב לדעת

האם אתם מוכנים לאתגרי הסייבר הקריטיים שמחכים למערכת הבריאות הישראלית בשנת 2025?

המציאות המטרידה: עלייה של 24% באירועי סייבר 📈

נתונים טריים מהמשרד לביטחון הפנים מראים עלייה של 24% באירועי סייבר שדווחו ב-2024, כאשר מערכת הבריאות נמצאת בחזית המטרות. זה לא עוד דוח סטטיסטי - זו מציאות שמשפיעה ישירות על המשכיות הטיפול בחולים.

7 איומי הסייבר הקריטיים שמאיימים על מערכת הבריאות הישראלית:

🎯 1. מלחמת סייבר ממלכתית

  • קבוצות APT מכוונות ספציפית למערכת הבריאות הישראלית
  • שיתוק רשתות בית חולים בזמני משבר
  • פגיעה במערכות תיאום חירום

💰 2. כופרות דיגיטליות מתקדמות

  • מעבר מהצפנה לגניבת מידע + סחיטה
  • 238 איומי כופרה במערכות בריאות גלובליות ב-2024
  • הפרעה ממושכת לשירותי חולים

🔓 3. גניבת מידע ואישורי גישה

  • עלייה חדה בתוכנות לגניבת סיסמאות
  • פריצה למערכות ביטוח בריאות
  • פגיעה במידע רגיש של מטופלים

📱 4. התקפות על מכשירים רפואיים

  • ניצול מכשירים רפואיים מחוברים כנקודת כניסה
  • פגיעות במערכות PLC ישראליות (Unitronics)
  • חדירה דרך מכשירים עם אבטחה חלשה

☁️ 5. איומי ענן

  • תצורות שגויות במערכות EHR ענניות
  • בקרות זהות חלשות
  • חשיפת מידע רפואי רגיש

🤖 6. איומי בינה מלאכותית

  • שימוש בAI להתקפות פישינג מתוחכמות
  • הרעלת מודלים רפואיים
  • זיוף זהות של רופאים ומטופלים

🔗 7. איומי שרשרת אספקה

  • תלות יתר בספקים חיצוניים
  • פריצה אחת = שיתוק מערכות מרובות
  • חוסר בקרה על ספקים

📋 החובות החדשות: תיקון 13 לחוק הגנת הפרטיות

החל מ-14 באוגוסט 2025 - כל ארגון בריאות חייב למנות קצין הגנת מידע (DPO). האם אתם מוכנים?

🎯 למה הדוח הזה שונה?

ניתוח מבוסס מודיעין מאתרים ציבוריים רלוונטיים
✅ התמקדות בישראל - איומים ספציפיים למצב הגיאו-פוליטי
✅ המלצות מעשיות להקטנת סיכונים
✅ יישור עם דרישות משרד הבריאות
✅ לקחים מאירוע CrowdStrike שהשבית עשרות בתי חולים בישראל

💡 מה תמצאו בדוח המלא?

  • מפת איומים מפורטת לשנת 2025
  • אסטרטגיות הגנה מותאמות למערכת הבריאות הישראלית
  • תובנות מודיעיניות מהשטח
  • מדריך יישום לדרישות משרד הבריאות
  • רשימת מקורות מתועדת למידע נוסף

📥 הורידו את הדוח המלא עכשיו

"Healthcare Cyber Threats Matter 2025 - Israel Focus"
מאת ניר יהונתן פסי - Cyber Due Diligence

📊 הדוח המלא כולל:

  • ניתוח מעמיק של כל איום
  • המלצות יישום ספציפיות לישראל
  • כלים מעשיים להערכת סיכונים
  • מדריך לעמידה בדרישות רגולטוריות

⚡ בעולם שבו איום סייבר אחד יכול לשתק בית חולים שלם - הידע הוא ההגנה הטובה ביותר שלכם.

אל תחכו לפריצה הבאה. התכוננו עכשיו...


How to build a cyber security controls methodology

 How do you build a 📌 security/controls methodology🔐 that works for any organization?

After years of working with complex cyber risk management methodologies, I decided to think outside the box 💡 and build something simpler - but no less effective.


📈 When I need to adapt a methodology to an organization, I usually start with a risk management framework (such as CIAAN), and instead of approaching it only through traditional risk management, I create a threat map and build an appropriate control structure. Here I developed a methodology based on the 12 Pillar’s, which I developed for the information protection architects of a large healthcare organization, and as I did, each organization can adapt to its unique needs. 👍🏻


Why 12? 🎓

The number 12 carries a meaning of completeness and order in many cultures 🖖🏻 - 12 tribes, 12 messengers, 12 months 📅, 12 hours 🕓, 12 zodiac signs 🏹. It represents a foundation for stability and integrity, exactly what we are looking for in information security. 📓


The 12 key pillars for reducing cyber risks:


📍 Authentication - Identifying and validating user identities

📍 Authorization - Defining permissions and approaches

📍 Encryption - Protecting information at rest and in motion

📍 Network Security - Protecting the communication infrastructure

📍 Endpoint Security - Secure devices and connections

📍 API Security - Protecting software interfaces

📍 SSDLC and container security - Security at the development level

📍 Vulnerability Management - Identifying and addressing weaknesses

📍 Supply Chain and Third-Party Controls - Protecting against suppliers

📍 Auditing and Compliance - Compliance with standards and regulations

📍 Incident Response - Preparedness to handle security incidents

📍 Disaster Recovery and BCP - Business Continuity


The advantage of this methodology:

✅ Simplicity - Easy to implement and understand

✅ Flexibility - Adaptable to any organization

✅ Comprehensive Coverage - Covers all aspects of security

✅ Practicality - Focuses on applicable controls

This methodology helps organizations build a customized security strategy without getting into the unnecessary tangle of complex frameworks.


🤏🏻 What do you think of this approach? 🤷🏻 How do you build the security methodology in your organization?


For a detailed and effective reading, go to the document: https://lnkd.in/dE-Bbkiv 

13.7.25

DPO Ready: הכנה מלאה לתפקיד DPO בעקבות תיקון 13 לחוק הגנת הפרטיות

חשיבות ממונה הגנת הפרטיות - תיקון 13

🛡️ חשיבות ממונה הגנת הפרטיות (DPO)

⏰ חובת מינוי ממונה הגנ' הפרטיות מ- 15 באוגוסט 2024 - תיקון 13

📋 מהו תיקון 13 לחוק הגנת הפרטיות?

תיקון 13 לחוק הגנת הפרטיות מחייב ארגונים למנות ממונה הגנת הפרטיות (DPO) על פי סעיף 17ב. הממונה ישמש כאיש קשר מרכזי בין הארגון לרשות להגנת הפרטיות, ויהיה אחראי על יישום מדיניות הגנת הפרטיות ועל חיזוק מנגנוני הפיקוח הפנימיים.

💼 תחומי אחריות הממונה

👨‍💼
ייעוץ שוטף להנהלה
הדרכה וייעוץ מתמשך לצוותי ההנהלה בנושאי הגנת פרטיות
🔍
מיפוי סיכונים
זיהוי וניתוח סיכונים פוטנציאליים לפרטיות במערכות הארגון
📊
הערכות השפעה על פרטיות
ביצוע DPIA (Data Protection Impact Assessment) לפרויקטים חדשים
🛡️
פיתוח נהלי אבטחת מידע
יצירת ויישום נהלים מתקדמים להגנה על מידע אישי
⚙️
Privacy by Design
הטמעת עקרונות Privacy by Design ו-Privacy by Default
🤝
קשר עם הרשות
תיווך בין הארגון לרשות להגנת הפרטיות

🎯 למי מיועד המסמך?

הנהלה בכירהמנהלי סיכוניםצוותי IT/OTגורמי פרטיות בארגוןמועמדים לתפקיד DPO

🎯 יתרונות הטמעת המסמך

💰
חיסכון זמן וכסף
📈
שיפור המוניטין
⚖️
הפחתת סיכוני אי-ציות
😊
שביעות רצון לקוחות

📢 הפצת המסמך - חיונית להצלחה!

הפצת המסמך בקרב כלל המחלקות והתהליכים הארגוניים תאפשר יישום אחיד של חובות המינוי והמשימות החדשות של הממונה. הדרכה מקיפה תבטיח הטמעת נהלי עבודה תפעוליים תואמי רגולציה ותחזק את אמון בעלי העניין.

🚀 התכוננו לעתיד מוגן יותר - התחילו עכשיו!

17.12.23

The risks of AI are real, what risks does AI possess

 AI Risks can be mitigated by an international code of ethics - but there will always be an exception 

Colleague's discussion about the late risks coming out of AI


Yes, I answered a friend from work. I am familiar with the Netflix docuseries "Unknown" and its episode on AI, titled "Unknown: Killer Robots." It was released in July 2023 as part of a four-week event exploring various mysteries around the world.


The episode delves into the development of military AI-powered robots and the ethical concerns surrounding them. It follows soldiers and scientists involved in creating these "killer robots" while also featuring activists raising awareness about their potential dangers.


The ethical issues surrounding AI in warfare, biology, health, and governance have been familiar to me since 2011 (Killing Robots 2011). You can read my post from that time. I’ll be okay with sharing my knowledge, and thoughts or even engaging in a conversation with you.


In that way we started a long conversation, let me share with you some parts.


cyberdog AI ethics- secprof


Ethics


The ethical concerns surrounding AI and its potential use in warfare are complex and multifaceted. Here are some points we have decided that anyone must consider, by this specific order:

1. Autonomous Weapon Systems (AWS) - The use of AI-powered robots capable of making their own decisions about targeting and engagement is a major ethical concern. The episode "Unknown: Killer Robots" highlights this issue, showing the potential for unintended consequences and loss of human control. International debate and treaties are currently underway to regulate or even ban AWS, but progress is slow.

2. Bias and Discrimination - AI algorithms can perpetuate existing biases and lead to discriminatory targeting or profiling. This is particularly concerning in the context of war, where decisions about life and death are being made. Ensuring fairness and accountability in AI development and deployment is crucial.

3. Accountability and Responsibility - When AI systems make mistakes, who is responsible? This question becomes even more complex when dealing with autonomous weapons. Holding developers, commanders, or even the AI itself accountable remains a significant challenge.

4. Dehumanization and Escalation - The use of AI robots in warfare could further dehumanize conflict and lead to increased violence and escalation. The psychological and ethical implications of relying on machines to kill require careful consideration.

5. Politicization and Manipulation - The potential for AI to be used for political ends, such as targeting specific individuals or manipulating public opinion, is another serious concern. Safeguards and oversight mechanisms are needed to ensure responsible use of AI technology.

Regarding The question that came up in the conversation, will we soon see armies purchasing AI robots not for rescue or intelligence gathering but for killing with autonomous intelligence to use in war or to kill politicians for a coup?

  • Timeline for first use - It is difficult to predict with certainty when AI robots will be used in warfare. However, it is likely that their use will be gradual and incremental, starting with tasks like surveillance and logistics before moving to more complex and controversial roles like targeting and engagement. The episode "Unknown: Killer Robots" depicts a potential future scenario where AI robots are already in use, but this is not necessarily a realistic timeline.

  • Use by /against politicians - While the use of AI robots against specific individuals is technically possible, it is unlikely to be the first or most common application. The military would likely prioritize using AI for tasks that are deemed strategically beneficial, such as targeting enemy forces or infrastructure. However, the potential for misuse and abuse cannot be entirely discounted.

  • AI is a powerful tool - that can be used or misused for evil acts. As we develop and deploy AI technologies, we must carefully consider the ethical implications and ensure that they are used responsibly and humanely. Open and informed public discussion is essential to shaping the future of AI and preventing its misuse.

By focusing on the ethical considerations, monitoring the potential risks, and debate on it, we will create awareness shortly where AI is used to benefit humanity, and not to harm it. Let’s learn or debate a bit more.

 liability and accountability

You raise some excellent and concerning points my friend said, about the potential misuse of AI, particularly in the context of warfare. The issues of liability, accountability, and responsibility are indeed crucial and complex, but not has a  distant threat, But more near one…

So how can we approach Liability and Accountability, and who will be responsible:

  • States - In traditional warfare, governments and militaries are held accountable for their actions under international law and human rights conventions. However, with AI-powered weapons, the lines become blurred. Who is responsible if an autonomous drone makes a targeting mistake? The programmer, the commander who deployed it, or the AI itself? Without clear legal frameworks, attributing blame and seeking justice could be incredibly challenging.

  • Non-state - The rise of rogue actors and non-state groups capable of developing or acquiring AI weapons further complicates the issue. How do we hold them accountable if they operate outside the traditional legal system? This raises the specter of an unregulated arms race, with devastating consequences.

  • Corporates and organizations - Private companies developing AI technologies for military use also raise concerns. Should they be held liable for the misuse of their products? Can they be incentivized to develop and deploy AI responsibly? Striking a balance between innovation and ethical considerations is crucial.

Responsibility

Ultimately, the responsibility for preventing the misuse of AI lies with all of us. Governments must establish robust regulations and oversight mechanisms. Researchers and developers must prioritize ethical principles in their work. And individuals must remain informed and engaged in the debate about AI's role in society.

The urgency of the situation:

While predicting the exact timeline of AI misuse is impossible, several factors suggest it could be sooner rather than later, and they are:

  • Rapid advancements in AI technology, The pace of development in AI is astonishing. Capabilities that seemed futuristic just a few years ago are now within reach.

  • Accessibility of AI tools, The tools and knowledge needed to develop basic AI are becoming increasingly accessible, even for non-state actors.

  • The lack of international consensus, Despite ongoing discussions, there is no international consensus on regulating or banning autonomous weapons. This creates a dangerous vacuum that could be exploited.

Therefore, it is crucial to act with urgency and implement robust safeguards to prevent the misuse of AI before it's too late. We must not wait for a tragedy to occur before taking action.

Resources for further information and action:

cyborg AI ethics- secprof

Safeguarding AI misuse


The current landscape surrounding AI ethics is complex and somewhat daunting. While passionate groups are working against AI misuse, they face significant challenges against the rapid advancements and powerful interests driving AI development. and also they get the label of losers!


The EU's AI Act is a major step in the right direction, but it's not a silver bullet.


Okay let's try to  break-down this situation:


Positive signs


  • Growing public awareness, more people are becoming aware of the potential risks of AI and demanding responsible development. This is crucial for putting pressure on governments and corporations to act.


  • International efforts, the EU's AI Act is not alone. Other countries and organizations are also developing regulations and ethical frameworks for AI. This shows a growing international consensus on the need for action.


  • Technological advancements in safety, researchers are developing AI safety tools and techniques, such as explainable AI and adversarial training, to mitigate risks and prevent misuse.


Challenges remain


  1. Lack of global consensus, different countries have different priorities and approaches to AI regulation. This lack of unity creates loopholes and makes it harder to enforce standards.

  2. Powerful vested interests, companies and governments with significant investments in AI may resist regulations that hinder their profits or technological ambitions.

  3. Rapid technological advancements, AI is developing quickly, making it difficult for regulations to keep pace and address the latest threats.

  4. The complexity of AI, the AI systems are often complex and opaque, making it challenging to identify and address potential biases or vulnerabilities.


So, is the EU’s AI Act enough? 

No, but it's a significant step forward. 

What we can also do is: 


Support AI ethics organizations - Donate our time or resources to groups like the Future of Life Institute and the International Campaign to Stop Killer Robots.


Hold corporations accountable -  Demand transparency and ethical practices from our governments to regulate it, over companies developing AI technologies, in every field.


Educate ourselves and others - analyze and develop AI ethics create a road map for the potential risks, and share your knowledge with others.


Advocate for responsible AI policies - Contact our elected politicians and officials and urge them to support legislation that promotes ethical AI development.


We need to remember, that preventing an AI apocalypse won't be a one-time effort. It will require sustained pressure from individuals, organizations, and governments. By working together, we can ensure that AI is used for the good of humanity and not for scamming, gaining more power, and creating an era of apocalypse.


The future of AI is not predetermined yet. We have the power to shape it and make sure it benefits all of humanity.


Cyborg_health AI bot - secprof


What can we do for the benefit of health with AI?

Developing a code of ethics for AI in healthcare is crucial to ensure its responsible and beneficial use. Here are some suggestions for your field:

General Principles

  • Patient autonomy and informed consent, and Patients must be informed about the use of AI in their diagnosis and treatment, and have the right to refuse or opt out.

  • Beneficence and non-maleficence, the AI tools should be used to improve patient outcomes and avoid unnecessary harm.

  • Transparency and exploitability, the AI decision-making processes should be transparent and understandable to healthcare professionals and patients alike.

  • Fairness and non-discrimination, the AI algorithms must be designed and trained to avoid bias and discrimination based on race, gender, socioeconomic status, or other factors.

  • Privacy and security, the patient data used in AI development and deployment must be protected with robust safeguards.

  • Accountability and responsibility, Developers, healthcare providers, and institutions must be accountable for the use and outcomes of AI in healthcare.

Specific to Medical AI Tools

  • Clear role definition, define the intended role of the AI tool (diagnostic aid, decision support, etc.),the and ensure it does not replace human judgment and expertise.

  • Validation and testing, the AI tools must be rigorously tested and validated in clinical settings to ensure their accuracy, safety, and efficacy.

  • Human oversight and control, human healthcare professionals should always have the final say in any decision made with the help of AI.

  • Continuous monitoring and improvement AI models should be continuously monitored for bias, errors, and potential harm, and updated as needed.

  • Education and training Healthcare professionals need to be educated on the use of AI tools, their limitations, and how to interpret their outputs.

Adoption and Implementation

  • Involving stakeholders in a process of developing the code of ethics, including doctors, nurses, patients, ethicists, and AI developers.

  • Clear communication and education ensure all stakeholders understand the code of ethics and its implications for their work.

  • Incentivize compliance and implement mechanisms to promote and reward ethical use of AI in healthcare.

  • Regular review and updates, regularly review and update the code of ethics to reflect evolving technologies and practices.

Remember, a code of ethics is just a framework

The success depends on its implementation, enforcement, and adaptation to evolving technologies and contexts. Working together and continuously improving, may ensure that AI in healthcare benefits all patients and contributes to a more ethical and equitable healthcare system.

Additional Resources:

The end of a colleague discussion

Here we finished our discussion, and I started to write this article/post. From here it is in the hands of any reader. I gave you the knowledge, if it is something you share in your thoughts, help us to make a change and share this post with other friends, today!